Correlation-Aware Rendering: Improving Sampling and Denoising for Realistic Image Synthesis


Advisor

Hachisuka, Toshiya


Publisher

University of Waterloo

Abstract

In realistic image synthesis, Monte Carlo integration is the foundation of most rendering algorithms, but it inevitably introduces noise. To reduce such noise, advanced sampling strategies, such as Markov chain Monte Carlo (MCMC) and resampled importance sampling (RIS), as well as modern denoising techniques, have been proposed. However, these methods often introduce correlations that can manifest as new artifacts. This thesis investigates three distinct research directions, spanning from mitigating correlation to actively exploiting it.

The first direction tackles correlation in MCMC methods. Traditional MCMC often suffers from low acceptance rates, producing visually “spiky” noise. We propose combining MCMC with path guiding techniques to improve acceptance probabilities, thereby reducing correlation artifacts and improving image quality.

The second direction addresses correlation artifacts in the widely used Reservoir-based Spatiotemporal Importance Resampling (ReSTIR) algorithm. While ReSTIR achieves efficient sampling by reusing samples across pixels and frames, this reuse can lead to blotchy artifacts, as many pixels may end up sharing only a few important samples. Observing parallels between ReSTIR and MCMC, we introduce a new spatiotemporal MCMC framework that replaces reservoir resampling. Applied to both direct illumination and path tracing, our approach significantly reduces correlation artifacts while retaining efficiency.

The final direction shifts from reducing correlation to exploiting it. We present a generalized combination framework that leverages spatial, temporal, and multiscale correlations to reduce error. This method enables robust cross-domain fusion, effectively suppressing systematic artifacts and improving temporal coherence, which is particularly crucial in animation. Through extensive experiments, we demonstrate that our framework enhances temporal stability, visual appearance, and residual error reduction across diverse rendering scenarios.
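To make the resampling idea behind ReSTIR concrete, the following is a minimal, self-contained Python sketch of streaming weighted reservoir resampling (RIS with a size-1 reservoir). The toy target function, the uniform proposal, and all names here are illustrative assumptions, not the thesis's implementation:

```python
import random

class Reservoir:
    """Streaming weighted reservoir of size 1, as used in RIS/ReSTIR-style resampling."""
    def __init__(self):
        self.sample = None   # currently selected candidate
        self.w_sum = 0.0     # running sum of resampling weights
        self.m = 0           # number of candidates seen

    def update(self, x, w):
        """Consider candidate x with resampling weight w = target(x) / proposal_pdf(x)."""
        self.m += 1
        self.w_sum += w
        # Keep x with probability proportional to its weight among all seen so far.
        if self.w_sum > 0 and random.random() < w / self.w_sum:
            self.sample = x

def target(x):
    # Toy unnormalized target density: peaks at x = 0.7 (illustrative only).
    return max(0.0, 1.0 - abs(x - 0.7))

def resample(n_candidates):
    """Draw candidates uniformly on [0, 1), resample one toward the target."""
    r = Reservoir()
    for _ in range(n_candidates):
        x = random.random()
        w = target(x) / 1.0  # proposal pdf is 1 on [0, 1)
        r.update(x, w)
    # Unbiased contribution weight for the selected sample (ReSTIR's W).
    W = (r.w_sum / r.m) / target(r.sample) if r.sample is not None else 0.0
    return r.sample, W
```

The correlation issue the abstract describes arises when many pixels reuse one another's reservoirs: if a few candidates dominate the weights, neighboring reservoirs converge to the same samples, producing blotchy artifacts.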
