Improving Neural Radiance Fields for More Efficient, Tailored, View-Synthesis

dc.contributor.author: Nair, Saeejith Muralidharan
dc.date.accessioned: 2024-09-17T14:32:02Z
dc.date.available: 2024-09-17T14:32:02Z
dc.date.issued: 2024-09-17
dc.date.submitted: 2024-09-04
dc.description.abstract: Neural radiance fields (NeRFs) have revolutionized novel view synthesis, enabling high-quality 3D scene reconstruction from sparse 2D images. However, their computational intensity often hinders real-time applications and deployment on resource-constrained devices. Traditional NeRF models can require days of training for a single scene and demand significant computational resources for rendering, with some implementations necessitating over 150 million network evaluations per rendered image. While various approaches have been proposed to improve NeRF efficiency, they often employ fixed network architectures that may not be optimal for all scenes. This research introduces NAS-NeRF, a new approach that employs generative neural architecture search (NAS) to discover compact, scene-specialized NeRF architectures. NAS, a technique for automatically designing neural network architectures, is investigated as a means of optimizing NeRFs by tailoring each architecture to the specific complexity of an individual scene. NAS-NeRF reformulates the NeRF architecture into configurable field cells, enabling efficient exploration of the architecture space while maintaining compatibility with various NeRF variants. Our method incorporates a scene-specific optimization strategy that uses the unique characteristics of each 3D environment to guide the architecture search. We also introduce a quality-constrained generation approach that allows target performance metrics to be specified within the search process. Experiments on the Blender synthetic dataset demonstrate the effectiveness of NAS-NeRF in generating a family of architectures tailored to different efficiency-quality trade-offs. Our most efficient models (NAS-NeRF XXS) achieve up to a 23× reduction in parameters and 22× fewer FLOPs compared to the baseline NeRF, with only a 5.3% average drop in structural similarity (SSIM). Meanwhile, our high-quality models (NAS-NeRF S) match or exceed baseline performance while reducing parameters by 2-4× and offering up to 1.93× faster inference. These results suggest that high-quality novel view synthesis can be achieved with more compact models, particularly when architectures are tailored to specific scenes. NAS-NeRF contributes to ongoing research into efficient 3D scene representation, helping enable applications in resource-constrained environments and real-time scenarios.
dc.identifier.uri: https://hdl.handle.net/10012/21012
dc.language.iso: en
dc.pending: false
dc.publisher: University of Waterloo (en)
dc.subject: neural radiance field
dc.subject: neural architecture search
dc.title: Improving Neural Radiance Fields for More Efficient, Tailored, View-Synthesis
dc.type: Master Thesis
uws-etd.degree: Master of Applied Science
uws-etd.degree.department: Systems Design Engineering
uws-etd.degree.discipline: Systems Design Engineering
uws-etd.degree.grantor: University of Waterloo (en)
uws-etd.embargo.terms: 0
uws.contributor.advisor: Wong, Alexander
uws.contributor.advisor: Shafiee, Mohammad Javad
uws.contributor.affiliation1: Faculty of Engineering
uws.peerReviewStatus: Unreviewed (en)
uws.published.city: Waterloo (en)
uws.published.country: Canada (en)
uws.published.province: Ontario (en)
uws.scholarLevel: Graduate (en)
uws.typeOfResource: Text (en)
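
The abstract above describes reformulating the NeRF network into configurable "field cells" whose size is chosen per scene under quality constraints. The thesis itself is the authoritative source for how this is done; the minimal PyTorch sketch below only illustrates what a depth/width-configurable NeRF cell and a naive parameter-budget filter over candidate cells could look like. All names here (FieldCell, num_layers, hidden_dim, the budget value) are assumptions made for this sketch and do not come from the thesis, and the brute-force filter merely stands in for the generative, quality-constrained NAS the work actually uses.

# Illustrative sketch only: a NeRF-style MLP whose depth and width are
# search-space choices, as a stand-in for the thesis's configurable field
# cells. Class and parameter names are invented for this example.
import itertools
import torch
import torch.nn as nn


class FieldCell(nn.Module):
    """A small NeRF-style MLP parameterized by depth and width."""

    def __init__(self, num_layers: int, hidden_dim: int, num_freqs: int = 10):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 3 + 3 * 2 * num_freqs  # xyz plus sinusoidal positional encoding
        dims = [in_dim] + [hidden_dim] * num_layers
        self.layers = nn.ModuleList(
            [nn.Linear(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])]
        )
        self.out = nn.Linear(hidden_dim, 4)  # RGB + density

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # Standard NeRF positional encoding of 3D points.
        freqs = 2.0 ** torch.arange(self.num_freqs, device=x.device)
        angles = x[..., None] * freqs                      # (..., 3, num_freqs)
        enc = torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(-2)
        return torch.cat([x, enc], dim=-1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.encode(x)
        for layer in self.layers:
            h = torch.relu(layer(h))
        return self.out(h)


if __name__ == "__main__":
    # Toy "search": enumerate (depth, width) candidates and keep those under
    # an assumed parameter budget. The actual method is a generative,
    # quality-constrained search, not this exhaustive filter.
    budget = 50_000
    candidates = [
        FieldCell(num_layers=d, hidden_dim=w)
        for d, w in itertools.product([2, 4, 8], [32, 64, 128])
    ]
    feasible = [
        c for c in candidates
        if sum(p.numel() for p in c.parameters()) <= budget
    ]
    x = torch.rand(1024, 3)  # sample 3D points in a scene
    print(len(feasible), feasible[0](x).shape)  # -> e.g. torch.Size([1024, 4])

In a per-scene search, each candidate would be scored on rendered quality for that scene (e.g. SSIM against held-out views) rather than on parameter count alone, which is what lets the resulting family of models trade quality against efficiency.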

Files

Original bundle

Name: Nair_Saeejith.pdf
Size: 6.55 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 6.4 KB
Format: Item-specific license agreed upon to submission