Constrained Diffusion: Applications to Image Generation, Manifold Learning, and Motion Planning

dc.contributor.advisor: Yu, Yaoliang
dc.contributor.author: Szabados, Spencer
dc.date.accessioned: 2025-01-14T21:00:52Z
dc.date.available: 2025-01-14T21:00:52Z
dc.date.issued: 2025-01-14
dc.date.submitted: 2025-01-08
dc.description.abstract: This thesis delves into the theoretical foundations, extensions, and applications of diffusion modelling in generative tasks. Diffusion models have garnered significant attention due to their stability during training and superior performance compared to competing methods. To make this work approachable for those not already familiar with diffusion, we begin by developing diffusion models from the ground up, starting with continuous diffusion processes and later deriving popular discrete diffusion models via discretization, providing insights into their mechanics. Motivated by work in the physical sciences, where datasets reside on curved surfaces, we describe extensions to Riemannian manifolds by redefining Brownian motion in these domains and formulating stochastic differential equations that describe continuous diffusion processes on manifolds. In much the same vein, as many real-world datasets are constrained within specific boundaries, we explore reflected diffusion processes. These processes are confined to a bounded region without absorption at the boundary, ensuring that generated data remains within the desired support. At the end of each of these chapters, we address the numerous practical challenges of training neural diffusion models on these different processes and develop several techniques that improve the training stability of such models. Further, we investigate structure-preserving diffusion models that respect inherent symmetries present in data, such as rotational invariance in imaging applications. We provide a complete characterization of the drift and diffusion terms required to ensure that the diffusion process, and hence the diffusion model, accurately preserves affine group invariances present in the target distribution. Three core techniques for achieving such group invariance are discussed, each evaluated on a set of datasets focused on medical imaging applications. In closing out this section, we discuss in detail extensions of this work to reflected diffusion processes and Riemannian manifolds. Finally, we highlight some proof-of-concept work on applying reflected diffusion models to the domain of robotic motion planning. Focusing on generating collision-free paths for robot navigation and multi-segment robotic arms, we demonstrate how diffusion models can address the complexities inherent in planning under motion constraints. This application showcases the practical utility of the extended diffusion modelling framework in solving real-world problems.
dc.identifier.uri: https://hdl.handle.net/10012/21365
dc.language.iso: en
dc.pending: false
dc.publisher: University of Waterloo
dc.subject: Machine Learning
dc.subject: Diffusion
dc.subject: Motion Planning
dc.subject: Image Generation
dc.subject: Differential Geometry
dc.title: Constrained Diffusion: Applications to Image Generation, Manifold Learning, and Motion Planning
dc.type: Master Thesis
uws-etd.degree: Master of Mathematics
uws-etd.degree.department: David R. Cheriton School of Computer Science
uws-etd.degree.discipline: Computer Science
uws-etd.degree.grantor: University of Waterloo
uws-etd.embargo.terms: 0
uws.contributor.advisor: Yu, Yaoliang
uws.contributor.affiliation1: Faculty of Mathematics
uws.peerReviewStatus: Unreviewed
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.scholarLevel: Graduate
uws.typeOfResource: Text
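
The abstract above describes reflected diffusion processes that keep samples inside a bounded support by reflecting, rather than absorbing, trajectories at the boundary. As a minimal illustrative sketch (not taken from the thesis), the snippet below simulates such a process with an Euler-Maruyama step followed by a mirror reflection into a box; the function names, the box constraint, and the mean-reverting drift are assumptions made purely for this example.

```python
import numpy as np

def reflect_into_box(x, lo, hi):
    """Fold a point back into the box [lo, hi] by mirror reflection at the walls."""
    width = hi - lo
    # Map onto a period of length 2*width, then mirror the upper half back down.
    y = np.mod(x - lo, 2 * width)
    y = np.where(y > width, 2 * width - y, y)
    return lo + y

def reflected_euler_maruyama(x0, drift, sigma, lo, hi, n_steps=1000, dt=1e-3, rng=None):
    """Simulate dX_t = drift(X_t) dt + sigma dW_t, reflected at the box boundary."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + drift(x) * dt + sigma * np.sqrt(dt) * noise
        x = reflect_into_box(x, lo, hi)  # reflection keeps all mass inside the support
    return x

# Example: a mean-reverting forward noising process confined to the unit box [0, 1]^2.
if __name__ == "__main__":
    sample = reflected_euler_maruyama(
        x0=np.array([0.2, 0.8]),
        drift=lambda x: -x,   # hypothetical VP-style drift, chosen only for illustration
        sigma=1.0, lo=0.0, hi=1.0,
    )
    print(sample)  # every coordinate remains within [0, 1]
```

The same update pattern generalizes to other bounded supports by replacing the box reflection with an appropriate projection or reflection operator for that domain.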

Files

Original bundle

Name: szabados_spencer.pdf
Size: 15.66 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 6.4 KB
Format: Item-specific license agreed upon to submission