Theses
Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/6
The theses in UWSpace are publicly accessible unless restricted due to publication or patent pending.
This collection includes a subset of theses submitted by graduates of the University of Waterloo as a partial requirement of a degree program at the Master's or PhD level. It includes all electronically submitted theses. (Electronic submission was optional from 1996 through 2006 and became the default submission format in October 2006.)
This collection also includes a subset of UW theses that were scanned through the Theses Canada program. (The subset includes UW PhD theses from 1998 to 2002.)
Recent Submissions
Item: Facing the Flood: Amphibious Architecture for Flood Resilience in Peguis, Manitoba (University of Waterloo, 2025-04-17) Holder, Alexa Megan

Floods are Canada's most common and costly natural hazard, and flood risks are increasing due to climate change. When floods affect housing, they not only inflict economic and property damage but also displace residents from their homes and erode people's sense of safety and community. Conventionally, settlements built in flood-prone regions are protected by flood control infrastructure such as levees, dikes, dams, and diversion channels. This infrastructure cannot easily or quickly respond to changing flood conditions and sometimes transfers flood risk rather than mitigating it equitably. We see this in Manitoba's Interlake Region, where First Nations communities bear a disproportionate burden from water diversion that protects large urban centers. Where conventional solutions have failed and new tools are urgently needed, amphibious construction can provide an option for mitigation. Amphibious structures sit on dry land when water levels are normal, like an ordinary building; during a flood, a buoyancy system allows the structure to float. Vertical guidance, often posts, holds the building in place laterally while it floats, and when flood waters recede the building returns to its original position undamaged. Inspired by community members who wish to stay on their land despite flood risks, this thesis proposes amphibious architecture for a site in the First Nations community of Peguis, Manitoba. Relocated to flood-prone land after a fraudulent land transfer in 1907, the community experiences chronic flooding. It has faced several floods in the last two decades and had its worst flood on record in 2022, underscoring the urgency of providing solutions for residents. This thesis examines this history and a specific site in Peguis, identifying key considerations for implementing amphibious architecture there.
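The buoyancy principle described above can be checked with simple Archimedes arithmetic; a minimal sketch (all masses and platform dimensions are hypothetical illustrations, not values from the thesis):

```python
# Archimedes check for an amphibious foundation (illustrative values only).
WATER_DENSITY = 1000.0  # kg/m^3, fresh water


def draft_when_floating(building_mass_kg: float, pontoon_area_m2: float) -> float:
    """Depth (m) the buoyant platform sinks before the displaced water balances the load."""
    return building_mass_kg / (WATER_DENSITY * pontoon_area_m2)


# A hypothetical 60-tonne house on a 10 m x 12 m buoyant platform:
draft = draft_when_floating(60_000, 10 * 12)
print(round(draft, 2))  # prints 0.5 (metres of draft while floating)
```

The vertical guidance posts carry no vertical load in this idealization; they only resist lateral drift while the platform rises and falls with the water.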
Then, it assesses existing amphibious architecture precedents, looking at how these projects address common challenges. Drawing insights from this analysis, the thesis proposes a prototype design for the site in Peguis.

Item: FPGA-Accelerated Deep Learning for Denoising Low-Dose PET Scans (University of Waterloo, 2025-04-17) Dao, Eric-Khang Tan

Positron Emission Tomography (PET) is an essential imaging technique used in clinical settings for diagnosing conditions such as cancer and neurological disorders; however, its dependence on radiopharmaceuticals poses radiation exposure risks. Lowering the administered dose improves patient safety but yields imagery with a reduced Signal-to-Noise Ratio (SNR), impacting diagnostic accuracy. The trade-off between minimizing radiation exposure and maintaining image quality remains a key challenge in PET imaging. Recently, deep learning-based denoising techniques, such as the Denoising Convolutional Neural Network (DnCNN), have proven effective in restoring noisy images to standard quality, but traditional implementations relying on CPUs and GPUs are often constrained by high power consumption and hardware overhead, limiting their feasibility in edge-compute applications. To address these challenges, this thesis explores FPGA-based acceleration for PET image denoising. A dataset is constructed using PET scans from 10 Alzheimer's disease patients from the ADNI database, with only 0.5% of the original radiotracer dose used. A software-based implementation is developed using a proposed U-Net-like architecture, then ported to an FPGA using OpenVINO and Intel's FPGA AI Suite for hardware emulation.
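The SNR degradation that denoising aims to reverse is straightforward to quantify; a small numpy sketch on synthetic data (not ADNI scans) comparing a low-dose-style noisy image with a hypothetically denoised one:

```python
import numpy as np


def snr_db(reference: np.ndarray, noisy: np.ndarray) -> float:
    """Signal-to-noise ratio in decibels: signal power over error power."""
    noise = noisy - reference
    return 10.0 * np.log10(np.sum(reference**2) / np.sum(noise**2))


rng = np.random.default_rng(0)
clean = np.ones((64, 64))                                # stand-in for a full-dose image
low_dose = clean + rng.normal(0.0, 0.5, clean.shape)     # heavy noise, low-dose style
denoised = clean + rng.normal(0.0, 0.1, clean.shape)     # residual noise after denoising
assert snr_db(clean, denoised) > snr_db(clean, low_dose)  # denoising recovers SNR
```

A denoiser is judged by exactly this kind of gap: how much of the lost SNR it restores relative to a full-dose reference.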
Experimental results show the FPGA implementation offers a 77% improvement in performance-per-watt over the GPU-based solution and a 2x reduction in latency over the CPU-based solution.

Item: Queer Arrival: Uncovering the Spatial Narratives of QTPOC Newcomers in Toronto (University of Waterloo, 2025-04-17) Liao, Simon

Toronto's urban landscape is continuously shaped by immigrant, queer, and marginalized communities. Historically, immigrants have established ethnic "Arrival Cities" to foster mutual support, and queer communities have carved out spaces like the Church-Wellesley Village to cultivate safety, belonging, and visibility. Positioned at the intersection of marginalized identities, Queer and Trans People of Colour (QTPOC) newcomers are also actively contributing to the evolution of the urban landscape, giving rise to a new spatial typology: the "Queer Arrival City". Existing research on Arrival Cities and queer enclaves remains constrained within narrow conceptual boundaries, overlooking the broader spectrum of urban arrival. Arrival Cities are typically examined through an ethnic minority lens, focusing on neighbourhood dynamics, while queer enclaves are studied predominantly from a white, middle-class gay male perspective. These approaches neglect the intersectionality of race, gender, sexuality, and class in the production of diasporic spaces, leaving QTPOC newcomers underrepresented in both academic and public spheres. This thesis addresses these gaps by uncovering the spatial narratives of Toronto's QTPOC newcomers in constructing their "Queer Arrival City". It specifically examines how QTPOC newcomers navigate Toronto's built environment and the role of the Church-Wellesley Village in their migration. Furthermore, it explores the design of a public space that materializes QTPOC newcomers' spatial narratives as a place of belonging and visibility.
This research employs a Queer of Colour Methodology (QOCM) integrated with Participatory Action Research (PAR) to foreground intersectionality and actively engage QTPOC newcomers in both the research and design process. Drawing on qualitative and quantitative data from three phases of community engagement (surveys, interviews, and focus groups), this thesis introduces the novel "Queer Arrival" framework, encompassing both infrastructural and individualized spatial typologies, while articulating a collective "Queer Diaspora Spatial Consciousness" in inhabiting public space. The research culminates in a design proposal shaped by the active contribution and lived experiences of QTPOC newcomers. Ultimately, by positioning QTPOC newcomers as the primary holders of knowledge production, this thesis fosters an inclusive, community-driven research environment and design process, while prioritizing QTPOC newcomers' empowerment and agency in shaping their future built environment.

Item: Grafting of Starch Nanoparticles with Polymers (University of Waterloo, 2025-04-17) Fernandez, Joanne

As a biocompatible and biodegradable polysaccharide, starch has sparked significant interest for various industrial applications, but its poor mechanical properties limit its uses without chemical or physical modification. The work reported herein concerns the development of synthetic techniques to modify starch by graft polymerization via cerium (IV) activation. Starch nanoparticles (SNPs) were modified with acrylic acid (AA) in water under acidic conditions via activation with cerium (IV) in combination with potassium persulfate (KPS). The reactions were conducted with either the as-supplied SNPs containing glyoxal, or after purification (without glyoxal), for different target molar substitution (MS) values. A novel purification protocol using methanol extraction and centrifugation was implemented to purify the samples.
This method proved selective in isolating the poly(acrylic acid) (PAA) homopolymer contaminant from the starch-g-PAA copolymer, and more reliable than the gravimetric analysis methods reported in the literature. The starch-g-PAA copolymers were characterized by dynamic light scattering (DLS), and degradation of the starch substrate allowed determination of the molar mass of the PAA side chains via gel permeation chromatography (GPC) analysis. In the presence of aldehydes the rate of polymerization of AA increased significantly (by >37%), and the highest grafting efficiencies were obtained with glyoxal and butyraldehyde. The combination of cerium (IV) with glyoxal and KPS resulted in the highest polymerization rate and grafting efficiency, and increasing the glyoxal concentration also increased the rate of monomer conversion and the grafting efficiency. The increased rate of polymerization provided further insight into the grafting mechanism, as it was discovered that esterification reactions between starch and PAA also contributed significantly to the grafting process, particularly at longer reaction times. In the presence of aldehydes, the production of large amounts of PAA homopolymer resulted in esterification dominating the grafting process. Model reactions involving direct coupling of linear PAA samples with starch were also investigated. All the reactions were characterized by high coupling efficiencies for a target MS = 3, and higher molar mass PAA samples (30 and 250 kDa) coupled faster than a lower molar mass sample (1.8 kDa), as expected in terms of reaction probabilities. The importance of esterification was also confirmed with model reactions using 2-hydroxyethyl acrylate, a monomer without a free carboxylic acid functional group, which yielded notably lower grafting efficiencies.
Overall, the grafting mechanism for starch and acrylic acid promoted by cerium (IV) therefore appears more complex than previously described, particularly in the presence of aldehydes: the high overall grafting efficiencies observed result from two distinct reactions occurring concurrently, namely grafting via cerium (IV) activation and esterification of free PAA homopolymer. The additional insight gained into these reactions was made possible by the newly developed purification protocol, used in combination with NMR spectroscopy analysis, which provided detailed composition data for the different sample fractions and a better understanding of the grafting mechanism. Furthermore, preliminary results were obtained for starch modified with acrylonitrile and cerium (IV) in water under acidic conditions. Extraction of the polyacrylonitrile (PAN) homopolymer component was more difficult due to its solubility characteristics, but mixtures of dimethylacetamide with water (up to 10% by volume) provided consistent results. High grafting efficiencies (>67%) were obtained for the starch-g-PAN copolymers, and the products were characterized by Fourier transform infrared spectroscopy, DLS, GPC, and atomic force microscopy. Hydrolysis of the starch substrate yielded hollow PAN shells or spheres, depending on the MS level of the copolymer, with potential applications in nanoencapsulation.

Item: Language Model Inference on FPGA with Integer-only Operations (University of Waterloo, 2025-04-17) Bekmyrza, Marat

Large Language Models (LLMs) currently dominate the field of Artificial Intelligence (AI) applications, but their integration into edge computing remains limited by computational complexity and power consumption. This thesis addresses this challenge by investigating the integer-only acceleration of transformer models on FPGAs, focusing on the BERT architecture.
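Integer-only acceleration rests on mapping floating-point tensors to fixed-point integers; a generic symmetric INT8 quantization sketch (a standard textbook scheme, not the thesis's actual pipeline):

```python
import numpy as np


def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization: x is approximated by scale * q,
    with q an integer in [-127, 127]."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale


def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale


rng = np.random.default_rng(1)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, s = quantize_int8(w)
err = np.max(np.abs(dequantize(q, s) - w))
assert err <= s / 2 + 1e-6  # worst-case rounding error is half a quantization step
```

Once weights and activations are in this form, matrix products run entirely in integer arithmetic; the harder part, which the thesis targets, is giving GELU, Softmax, and Layer Normalization equivalent integer-only formulations.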
We demonstrate that by removing floating-point operations from the inference pipeline, especially from non-linear functions like GELU, Softmax, and Layer Normalization, we can improve performance without sacrificing accuracy. Our pipelined, batched architecture processes multiple sequences in parallel and makes efficient use of FPGA resources. We achieve a 2.6x throughput improvement over single-sequence inference and at least a 10x speedup over offloading to the CPU. Experimental results show that our implementation has accuracy comparable to the floating-point models on the GLUE benchmark tasks with INT8 quantization. These findings reveal that integer-only transformer inference on FPGAs is a feasible way to run complex language models on resource-limited edge devices, with the potential for new privacy-conscious, low-latency AI applications.

Item: NaviX: A Native Vector Index Design for Graph DBMSs With Robust Predicate-Agnostic Search Performance (University of Waterloo, 2025-04-17) Sehgal, Gaurav

There is an increasing demand for extending existing DBMSs with vector indices so they become unified systems that can support modern predictive applications, which require joint querying of vector embeddings along with the structured properties and connections of objects. We present NaviX, a native vector index for graph DBMSs (GDBMSs), which has two main design goals. First, we aim to implement a disk-based vector index that leverages the core storage and query processing capabilities of the underlying GDBMS. To this end, NaviX is a hierarchical navigable small world (HNSW) index, which is itself a graph-based structure. Second, we aim to evaluate predicate-agnostic vector search queries, where the k nearest neighbors (kNNs) of a query vector v_Q are searched across an arbitrary subset S of vectors specified by an ad-hoc selection sub-query Q_S.
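The predicate-agnostic contract can be illustrated with a brute-force sketch: the selection sub-query runs first and fully determines the candidate set, and the kNN search only ever looks inside it (illustrative only; NaviX traverses an HNSW graph rather than scanning):

```python
import heapq


def prefiltered_knn(vectors, query, selected_ids, k):
    """Exact k nearest neighbours of `query`, restricted to `selected_ids`.

    A real system would navigate a proximity graph instead of scanning, but the
    prefiltering contract is identical: S is materialized before the search."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return heapq.nsmallest(k, selected_ids, key=lambda i: dist2(vectors[i], query))


vecs = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (5.0, 5.0), 3: (0.5, 0.1)}
# The selection sub-query chose ids {1, 2, 3}; id 0 is excluded even though it
# is the globally closest vector to the query.
print(prefiltered_knn(vecs, (0.0, 0.0), [1, 2, 3], k=2))  # prints [3, 1]
```

The hard cases are exactly the ones the thesis studies: when S is very small, or correlates poorly with the query's neighbourhood, a graph traversal wastes work on unselected vectors, which motivates adapting the search heuristic to local selectivity.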
We adopt a prefiltering-based approach that evaluates Q_S first and passes the full information about S to the kNN search operator. We study how to design a prefiltering-based search algorithm that is robust under different selectivities as well as correlations of S with v_Q. We propose an adaptive algorithm that uses the local selectivity of each vector in the HNSW graph to pick a suitable heuristic at each iteration of the kNN search. We demonstrate NaviX's robustness and efficiency through extensive experiments against both existing prefiltering- and postfiltering-based baselines, including specialized vector databases as well as DBMSs.

Item: Inclusivity in Communication: Exploring Social Robots for Cultural Integration and Stuttering Therapy (University of Waterloo, 2025-04-16) Avijeet, Priyank

Communication is fundamental to how humans exchange ideas, interact, and collaborate, which makes it important to explore human-robot interactions through the lens of effective communication. This work explores inclusivity along two key dimensions: cultural integration and speech impairments. Human-robot interaction (HRI) research has increasingly recognised the influence of culture on human-robot interactions, highlighting both the opportunities it brings and the need for careful consideration given the evolving nature of cultural dynamics. In an online study with 103 participants, we explored how preferences for cross-cultural greetings performed by a humanoid robot change based on a restaurant theme. We examined factors influencing these preferences by analyzing two groups who experienced different ethnic greetings, studying how ethnicity, percentage of life lived in Western countries, personality characteristics, and the implementation of cultural aspects influenced the likability of a robot's greeting gestures.
In the context of speech impairments, the use of social robots by Speech-Language Pathologists (SLPs) in clinical stuttering therapy for preschoolers remains a relatively unexplored area. To help HRI researchers explore this field, we identified potential applications of social robots for stuttering therapy and discussed key considerations for these interactions by conducting interviews and shadowing sessions with two SLPs. The results suggest practical applications, along with interaction and communication design considerations, for integrating social robots into speech therapy sessions for preschoolers who stutter. Key applications for social robots include engaging children through various games and providing a consistent speech model, among others. Essential properties of interaction design for sessions include being interactive, structured, and supervised by SLPs. Additionally, the robot's communication could involve slow, relaxed, personalised speech and other techniques to support children's speech goals. Following recommendations from a handbook on stuttering and from SLPs, we explored an application area focused on the slow and relaxed speech intended for use by parents and caregivers of children who stutter. As a first step and proof of concept, I carried out an HRI study with university students. I designed two sessions with four and five tasks respectively, collecting natural speech samples from 63 participants across different verbal tasks. Throughout the interactions, the voice system (two conditions: the robot speaking, or the same voice coming from a laptop) maintained slow speech. I report which kinds of tasks had more impact on participants' speech and the differences in participants' speech metrics across the two sessions. Participants with different self-reported English-language factors also showed varied patterns in adapting to the voice system's slow speech, showcasing the need for customised implementations in the future.
This work contributes to HRI by highlighting the challenges of developing culturally adaptive robots and by providing design considerations for using social robots in stuttering therapy for preschoolers. By studying speech entrainment through a voice system exhibiting slow speech, we explored how different verbal tasks influence entrainment levels, with potential future applications in training parents and caregivers of children who stutter.

Item: Applications of Lévy Semistationary Processes to Storable Commodities (University of Waterloo, 2025-04-16) Lacoste-Bouchet, Simon

Volatility Modulated Lévy-driven Volterra (VMLV) processes were applied by Barndorff-Nielsen, Benth and Veraart (2013) to construct a new framework for modelling spot prices of non-storable commodities, namely energy. In this thesis, we extend this framework to storable commodities by showing that successful classical models belong to the framework, albeit under some parameter restrictions (a result which, to our knowledge, is new). Additionally, we propose a new model for spot prices of storable commodities built on VMLV processes and their important subclass of so-called Lévy semistationary (LSS) processes. The main feature of the framework exploited in the model proposed in this thesis is the memory of VMLV processes, which is used judiciously to account for cumulative changes in inventory over time and the corresponding expected changes in prices and volatility. To the best of our knowledge, this is the first study to use LSS processes to investigate pricing in storable (as opposed to non-storable) commodity markets and to account for the impact of inventory on pricing.
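The memory property mentioned above comes from the moving-average form of these processes: in the Brownian special case, X_t = integral of g(t - s) dW_s over the past, so a kernel g with slow decay lets old shocks (e.g. past inventory changes) keep influencing today's price. A discretized sketch with a gamma kernel (kernel and parameters chosen purely for illustration):

```python
import numpy as np


def simulate_lss(n_steps, dt, lam=1.5, nu=0.6, seed=0):
    """Euler-type simulation of X_t = sum_j g(t - s_j) dW_j with gamma kernel
    g(x) = x**(nu - 1) * exp(-lam * x), truncating the infinite history at t = 0."""
    rng = np.random.default_rng(seed)
    dW = rng.normal(0.0, np.sqrt(dt), n_steps)          # Brownian increments
    lags = np.arange(1, n_steps + 1) * dt               # strictly positive lags
    g = lags ** (nu - 1) * np.exp(-lam * lags)          # kernel on the time grid
    # X at step i is the kernel-weighted sum of all past increments: the memory.
    return np.array([np.dot(g[: i + 1][::-1], dW[: i + 1]) for i in range(n_steps)])


x = simulate_lss(500, 0.01)
assert x.shape == (500,) and np.all(np.isfinite(x))
```

Lowering the decay rate lam lengthens the memory, which is the kind of dial a storable-commodity model can use to encode how slowly inventory effects fade.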
To complement the theoretical development of the new model, we also provide a companion set of calibration and empirical analyses to shed light on the new model's performance compared to previously established models in the literature.

Item: Towards Decision Support and Automation for Safety Critical Ultrasonic Nondestructive Evaluation Data Analysis (University of Waterloo, 2025-04-16) Torenvliet, Nicholas

A set of machine learning techniques is proposed to provide decision support and automation for the analysis of data taken during ultrasonic non-destructive evaluation of Canada Deuterium Uranium reactor pressure tubes. Data analysis is carried out primarily to identify and characterize the geometry of flaws or defects on the pressure tube inner diameter surface. A baseline approach utilizing a variational auto-encoder ranks data by likelihood and performs analysis using Nominal Profiling (NPROF), a novel technique that characterizes the very likely nominal component of the dataset and determines variance from it. While effective, the baseline method has limitations, including sensitivity to outliers, limited explainability, and the absence of a strong fault diagnosis and error remediation mechanism. To address these shortcomings, Diffusion Partition Consensus (DiffPaC), a novel method integrating conditional score-based diffusion with Savitzky-Golay filters, is proposed. The approach includes a mechanism for outlier removal during training that reliably improves model performance. It also features strong explainability and, with a human in the loop, mechanisms for fault diagnosis and error correction. These features advance applicability in safety-critical contexts such as nuclear nondestructive evaluation.
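The Savitzky-Golay filter named above smooths a trace by least-squares fitting a low-order polynomial in a sliding window and taking the fit's value at the window centre; a minimal numpy version (window and order chosen for illustration, not taken from the thesis):

```python
import numpy as np


def savgol(y, window=7, order=2):
    """Savitzky-Golay smoothing: per-window polynomial fit, evaluated at the
    window centre. Edge samples are left unsmoothed for simplicity."""
    assert window % 2 == 1 and order < window
    half = window // 2
    x = np.arange(-half, half + 1)
    out = y.astype(float).copy()
    for i in range(half, len(y) - half):
        coeffs = np.polyfit(x, y[i - half : i + half + 1], order)
        out[i] = np.polyval(coeffs, 0)  # fitted value at the centre of the window
    return out


t = np.linspace(0, 1, 50)
noisy = t**2 + np.random.default_rng(2).normal(0, 0.05, t.size)
smooth = savgol(noisy)
# An order-2 fit reproduces a quadratic exactly, so only the noise is attenuated.
assert np.mean((smooth - t**2) ** 2) < np.mean((noisy - t**2) ** 2)
```

Unlike a plain moving average, the polynomial fit preserves local peaks and curvature, which matters when the smoothed trace feeds geometric estimates of flaw dimensions.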
The methods are integrated and scaled to provide: (a) a principled probabilistic performance model, (b) enhanced explainability through interpretable outputs, (c) fault diagnosis and error correction with a human in the loop, (d) independence from dataset curation and out-of-distribution generalization, and (e) strong preliminary results that meet the accuracy requirements on dimensional estimates specified by the regulator [cog2008inspection]. Though not directly comparable, the integrated set of methods makes many qualitative improvements on prior work, which is largely based on discriminative methods or heuristics and whose results rely on data annotation, pre-processing, parameter selection, and out-of-distribution generalization. In this regard, the integrated set of fully learned, data-driven methods may be considered state of the art for this niche context. The probabilistic model, and corroborating results, imply a principled basis underlying model behaviors and provide a means to interface with regulatory bodies seeking justification for the use of novel methods in safety-critical contexts. The process is largely autonomous but may include a human in the loop for fail-safe analysis. The integrated methods make a significant step forward in applying machine learning in this safety-critical context.
They also provide a state-of-the-art proof of concept, or minimum viable product, upon which a new and fully refactored process for utility owner-operators may be developed.

Item: Cloud-Connected Model Predictive Control for Autonomous Mobile Robots in the Presence of Network Latency and Message Losses (University of Waterloo, 2025-04-16) Thakur, Prajwal

Autonomous driving systems have become increasingly popular due to their potential to reduce traffic accidents compared to human-operated vehicles, as well as their significant potential to optimize traffic flow and reduce pollution. However, there remain cases in which autonomous driving system malfunctions lead to accidents. These failures often arise from the partial observability of the environment and the inherent limitations of onboard sensor systems. To address these shortcomings, Cloud-Connected Autonomous Systems (CCAS) have emerged as a promising alternative, leveraging cloud-based computation and multiple distributed sensors to build a comprehensive understanding of the environment (including the ego vehicle) and to make collective decisions for all controlled agents in the cloud. This approach improves decision-making and reduces the need to install costly onboard hardware on every vehicle by centrally sharing sensors and computational resources. Despite its advantages, introducing cloud connectivity into autonomous systems presents significant challenges, particularly network latency and message loss. Such latencies can negatively impact vehicle control and safety, especially when they exceed the control sampling interval, leading to unstable maneuvers or potential collisions. This work introduces a cloud controller designed to compensate for the effects of latencies longer than a single control sampling interval. A Robot Operating System (ROS)-based simulation environment was developed for rapid algorithm prototyping and seamless integration with a real testbed.
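A common way to compensate for latency longer than one control interval is to forward-simulate the vehicle model over the measured delay before optimizing, so the controller plans from the state the robot will be in when the command actually arrives. A kinematic-bicycle sketch of that prediction step (parameters and the compensation scheme itself are generic illustrations, not the thesis's controller):

```python
import math


def predict_state(x, y, theta, v, steer, wheelbase, delay, dt=0.01):
    """Propagate an Ackermann (kinematic bicycle) model over the network delay."""
    steps = int(delay / dt)
    for _ in range(steps):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += v * math.tan(steer) / wheelbase * dt
    return x, y, theta


# With 150 ms of round-trip latency, plan from the predicted pose, not the
# last measured one; driving straight at 1 m/s the robot advances 0.15 m.
x, y, theta = predict_state(0.0, 0.0, 0.0, v=1.0, steer=0.0, wheelbase=0.6, delay=0.15)
print(round(x, 3))  # prints 0.15
```

In a full cloud-MPC loop this prediction would use the buffered, already-sent commands over the delay horizon rather than a single held input, but the principle is the same.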
The proposed solution was validated through both simulations and real-world tests involving an autonomous hospital bed with Ackermann steering in an indoor scenario. The experimental outcomes highlight the method's effectiveness and practical potential.

Item: Enhancing Large Language Model Fine-Tuning for Classification Using Conditional Mutual Information (University of Waterloo, 2025-04-16) Sivakaran, Thanushon

Large language models (LLMs) have achieved impressive advancements in recent years, showcasing their versatility and effectiveness in tasks such as natural language understanding, generation, and translation. Despite these advancements, the full potential of information theory (IT) to further enhance the development of LLMs has yet to be fully explored. This thesis aims to bridge this gap by introducing the information-theoretic concept of Conditional Mutual Information (CMI) and applying it to the fine-tuning of LLMs for classification tasks. We explore the promise of CMI in two primary ways: minimizing CMI to optimize a model's standalone performance, and maximizing CMI to improve knowledge distillation (KD) and create more capable student models. To implement CMI in LLM fine-tuning, we adapt the recently proposed CMI-constrained deep learning framework, initially developed for image classification, with the modifications necessary for LLMs. Our experiments apply CMI to LLM fine-tuning and knowledge distillation on the GLUE benchmark, a widely used suite of classification tasks for evaluating language models. By minimizing CMI during fine-tuning, we achieve superior performance on 6 out of 8 GLUE classification tasks compared to the baseline BERT model. Furthermore, we explore the use of CMI to maximize information transfer during KD, where a smaller "student" model is trained to mimic the behavior of a larger, more powerful "teacher" model.
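For discrete variables, the conditional mutual information I(X;Y|Z) underlying both objectives expands to a sum over the joint distribution, I(X;Y|Z) = sum over (x,y,z) of p(x,y,z) log[ p(z) p(x,y,z) / (p(x,z) p(y,z)) ]. A small pure-Python evaluator on toy distributions (illustrative of the quantity only, unrelated to the thesis's models):

```python
import math
from collections import defaultdict


def cmi(joint):
    """I(X;Y|Z) = sum_{x,y,z} p(x,y,z) * log( p(z)p(x,y,z) / (p(x,z)p(y,z)) )."""
    pz, pxz, pyz = defaultdict(float), defaultdict(float), defaultdict(float)
    for (x, y, z), p in joint.items():
        pz[z] += p
        pxz[(x, z)] += p
        pyz[(y, z)] += p
    return sum(
        p * math.log(pz[z] * p / (pxz[(x, z)] * pyz[(y, z)]))
        for (x, y, z), p in joint.items()
        if p > 0
    )


# X independent of Y given Z: CMI is zero.
indep = {(x, y, 0): 0.25 for x in (0, 1) for y in (0, 1)}
assert abs(cmi(indep)) < 1e-12
# X identical to Y given Z: CMI equals H(X|Z) = log 2 nats.
dep = {(0, 0, 0): 0.5, (1, 1, 0): 0.5}
assert abs(cmi(dep) - math.log(2)) < 1e-12
```

In the fine-tuning setting, X, Y, and Z would be empirical quantities estimated from model outputs rather than an explicit table, but the quantity being minimized or maximized is this one.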
By maximizing the teacher's CMI, we ensure that richer semantic information is passed to the student, improving performance. Our results show that maximizing CMI during KD leads to substantial improvements on 6 out of 8 GLUE classification tasks compared to DistilBERT, a popular distilled version of BERT.

Item: A Theoretical and Empirical Investigation into Payments for Watershed Ecosystem Services (University of Waterloo, 2025-04-16) Mir, Khusro

This thesis contains three research chapters on the economics of Payments for Watershed Services (PWS). While each chapter covers a different aspect of PWS schemes, all three provide insights into the management of uncertainty in ecosystem services. The first chapter serves as an introduction to the problem and the main research question addressed by each paper. Forested watersheds provide a variety of ecosystem services. Their economic valuation has increased significantly over the past decades, but the literature is fragmented and heterogeneous, and little has been done to systematically analyse estimated values. This study presents a global analysis of the economic valuation of forested watershed services. We address two methodological issues in the literature: the impact of ecosystem service classification on value estimates, and sensitivity to scale, where scale is measured as the forested watershed area size rather than the common practice of measuring overall area size including other land cover and use. For classification, we compare the detailed Common International Classification of Ecosystem Services with simpler, informal classifications in the literature based, for example, on the Millennium Ecosystem Assessment. We show that both the explanatory and predictive power of the estimated meta-regression models increase as we include more detail about the valued ecosystem services and use more accurate estimates of the forested area size.
Findings are, where possible, cross-validated against the existing forest hydrology literature. The study highlights the economic significance of maintaining forest cover in watershed areas and emphasises the role economics can play in determining high-value uses for ecosystem services. The second chapter utilises the 2015 Survey of Drinking Water Plants, employing spatial regression models and mediation analysis to examine the relationships between land use, raw water quality, and treatment costs. The study reveals a significant and economically substantial impact of forest cover on reducing treatment costs, primarily by lowering turbidity levels. The results indicate that converting agricultural land to forest within a 5 km buffer zone around a treatment facility can generate savings of $18.92 CAD per hectare per year, while the savings relative to urban cover can reach up to $21.37 CAD per hectare per year. These savings diminish with distance from the facility, with lower per-hectare savings observed at the 10 km buffer and sub-sub-drainage-basin scales. The study also accounts for spatial autocorrelation and the effects of wildfires, which are shown to produce substantial increases in turbidity and hence in treatment costs, and the spatial error and lag models used highlight the importance of considering spatial dependencies in ecosystem service valuations. This research underscores the economic value of forest conservation and management in supporting water treatment, provides valuable insights for policymakers and stakeholders in water resources management, and represents a significant step forward in understanding the interplay between land use, water quality, and treatment costs in Canada.
The main objective of the final study is to develop a novel modelling framework in the context of Payments for Ecosystem Services (PES) to mimic and simulate the behaviour of agents (e.g., landowners) providing ecosystem services and of a principal (e.g., a government or municipality) buying them under uncertainty. Uncertainty is defined as the case where the principal and agents lack precise knowledge of the parameters that govern ES output but know the range these parameters belong to. We compare contracts under two different decision-making paradigms, namely standard and robust optimisation. With robust optimisation, an uncertainty-averse principal designs a contract to maximise their worst-case outcome and obtain a performance guarantee, i.e., a minimum acceptable performance. The results show that with standard, input-based contracts, the only way for a principal to achieve this guarantee is to invest conservatively in ES; in this setting, the result holds under both adverse selection and moral hazard. However, when input is unobservable, using output-based payments, the principal can achieve this performance guarantee only by sharing some of the value of ES output with the landowner.

Item: Heat Production and Transfer in Earth's Continental Crust (University of Waterloo, 2025-04-15) Kinney, Carson

Planetary differentiation through tectonism reflects heat production and transfer, reducing internal energy and shaping planetary interiors. Geologic heat originates from two primary sources: primordial heat from planetary formation and the radiogenic decay of isotopes such as U, Th, and K. Shearing can generate heat on local scales, but heat transfer predominantly occurs through conduction, where energy flows from hotter to cooler regions via atomic vibrations; in tectonically active areas, advection of magma and convection in fluids are more efficient mechanisms.
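Radiogenic heat production from U, Th, and K concentrations is conventionally estimated with a Rybach-type relation; a sketch using the commonly quoted coefficients (treat the constants and the example concentrations as assumptions for illustration, not values from the thesis):

```python
def radiogenic_heat(rho_kg_m3, c_u_ppm, c_th_ppm, c_k_wtpct):
    """Heat production A in microwatts per cubic metre from rock density and
    U (ppm), Th (ppm), and K (wt%) concentrations, using Rybach-style
    coefficients (assumed here, not derived in this document)."""
    return 1e-5 * rho_kg_m3 * (9.52 * c_u_ppm + 2.56 * c_th_ppm + 3.48 * c_k_wtpct)


# Illustrative upper-continental-crust values: rho ~ 2700 kg/m^3,
# U ~ 2.7 ppm, Th ~ 10.5 ppm, K ~ 2.3 wt%.
a = radiogenic_heat(2700, 2.7, 10.5, 2.3)
assert 1.0 < a < 2.5  # upper crust is usually quoted near 1-2 uW/m^3
```

The relation makes the partitioning arguments in this abstract concrete: wherever U, Th, and K (hosted largely in accessory minerals) end up during melting is where the crust's long-term heat production ends up.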
Local heat transfer often relies on one dominant process, while large-scale systems involve a mix of conduction, convection, and advection. The causes and proportions of each heat source, and the mantle and crustal radiogenic contributions, remain challenging to quantify. Understanding the redistribution of heat-producing (i.e., radioactive) elements during metamorphism and crustal differentiation is achieved by combining natural observations with trace element and accessory mineral modelling. Six potential end-members were considered for the protolith of mid-crustal tonalite-trondhjemite-granodiorite packages—often considered the product of lower-crustal melting. Model results suggest heat-producing elements partition subequally between solid and melt at typical pressure-temperature conditions for crustal differentiation. Minerals like apatite, feldspar, amphibole, and epidote are primary repositories for radioactive elements, with their stability in pressure-temperature space governing heat removal from the lower crust. Observations from the Archean Kapuskasing uplift reveal a similar partitioning pattern during mafic rock melting, supporting the notion that radiogenic heat equally influences mantle and crustal processes. Examining the crustal heat-production record provides insights into continental growth, thickness, and preservation. A database of crustal rocks, including trace element compositions and crystallization ages, reveals trends in heat production over time with implications for basalt formation and crustal evolution. While Archean mantle melting produced less heat-producing basalt than today due to higher degrees of partial melting, the total crustal heat-production rate has remained relatively constant. However, modern crust exhibits greater variability, suggesting that recent enrichment in heat-producing elements contrasts with a more homogenized Archean crust. 
Crustal growth pulses marked by mafic-to-felsic transitions imply a cyclic nature of crustal differentiation, with crustal thickness remaining stable due to self-organizing thermal processes. Trace element substitution in autocrystic zircon is a valuable tool for reconstructing deep geologic processes, but some influences on these proxies remain underexplored. Elevated titanium rims on volcanic zircon, often attributed to magma recharge, could also result from adiabatic ascent. Modelling shows that decompression melting and system expansion during ascent drive cooling, while subvolcanic boiling induces crystallization and latent heat release. Zircon growth during ascent can record these processes, and high-titanium rims may form in a single magma pulse without recharge, emphasizing the importance of multiple geochemical tools to interpret magma evolution. Ultra-high temperature (UHT) metamorphism represents the thermal extreme of crustal processes, yet its mechanisms and energy sources remain debated; common explanations include mafic underplating or mantle upwelling. The Frontenac Terrane in southeastern Ontario records UHT conditions during the Mesoproterozoic. Back-arc sedimentation between 1390–1200 Ma preceded Shawinigan (~1180–1160 Ma) and Ottawan (~1060 Ma) orogenic events. Granitic and minor mafic intrusions during Shawinigan times triggered regionally advective UHT metamorphism preserved through subsequent reheating events. The Frontenac Terrane provides critical insights into the Grenville Province assembly, with felsic intrusions providing a plausible mechanism for UHT conditions during orogenesis. Heat production and transfer fundamentally shape Earth's structure and behaviour. Radiogenic heat from U, Th, and K and primordial heat drive crustal and mantle dynamics. Although incompatible with most rock-forming minerals, heat-producing elements are preferentially incorporated into accessory minerals like apatite and zircon, depleting their source rocks. 
Increased melting reduces system-wide heat production as these elements concentrate in the melt. Advective heating is crucial for crustal growth, enabling magmas to ascend nearly adiabatically and providing a heat source for crustal reworking. These processes dictate metallogenic fluid generation, crustal heterogeneity, and long-term crustal stabilization.

Item The Weight of Leaving: Gender, Identity Shifts, and Health Challenges in the Post-Competition Lives of Women Bodybuilders (University of Waterloo, 2025-04-15) Matharu, Amarjit

Drawing from my own experience as a bodybuilder in the bikini division, the purpose of this research is to explore the retirement transition for women bodybuilders. More specifically, this study explores how women bodybuilders are influenced by gendered ideologies and how these ideologies affect their transition experience in relation to identity, health, and well-being. There is limited literature exploring bodybuilding retirement, and no literature exploring women’s experiences during this transition. Guided by a sport feminism theoretical orientation, I conducted reflexive dyadic interviews with 15 women who retired, or were thinking of retiring, from bodybuilding. I used narrative inquiry to guide the methodological process and observed online spaces including Instagram and Reddit. To present the findings, I constructed five composite characters representing the diverse experiences of my participants and crafted a series of narratives authored by these characters, including my own narration to provide context throughout the series of stories. The narratives are organized into two parts. Part one highlights the impact of retirement on the women’s physical health, mental health, and well-being, while part two reveals the influence of gender ideologies on the retirement experience regarding gendered expectations both inside and outside of the industry. 
I conclude that gender plays a significant role in the retirement transition, as gender ideologies are deeply ingrained within the industry and society, leading to a dynamic experience, unique to women, that impacts identity, health, and well-being during the transition. This research, the first to address women’s bodybuilding retirement in the academic literature, sheds light on an understudied topic and offers insights that may resonate with and validate women who share this experience.

Item Dynamics of Golf Discs using Trajectory Experiments for Parameter Identification and Model Validation (University of Waterloo, 2025-04-15) Turner, Adam

The trajectories of flying discs are heavily affected by their aerodynamics and can vary greatly. The growing sport of disc golf takes advantage of these variations, offering seemingly endless disc designs to use in a round. Despite the increasing popularity of disc golf, most manufacturers lack a scientific approach to disc design and instead use subjective assessments and inconsistent disc rating systems to characterize disc performance. This leads to more guesswork for players. This thesis addresses this issue by presenting a physics-based disc trajectory model optimized using experimental trajectory data, and by exploring the possibility of a standardized disc rating system. A novel stereo-camera-based methodology was developed to capture three-dimensional initial conditions and trajectories of disc golf throws. This data was used to identify the aerodynamic coefficients of physics-based models. These models included six aerodynamic coefficients that depended on five independent variables. Disc wobble was included as a variable affecting the aerodynamic coefficients for the first time. Its effect on model performance was compared to simpler models, which excluded it. 
The models used various coefficient estimation methods for parameter identification, including polynomial functions and a recently proposed deep-learning approach. The deep-learning approach modelled some relationships with a neural network, which had the benefit of allowing the model to form the most appropriate relationships without relying on functional approximations. Polynomial functions were also used to augment a model that used coefficients previously determined from computational fluid dynamics. These approaches were validated using experimental trajectory data. The model using a mix of computational fluid dynamics data and polynomial functions showed significant improvement over the baseline computational fluid dynamics model. The complete polynomial approaches resulted in the best performing models and showed good agreement with the validation data. The neural network approaches mostly performed well, but could not beat the pure polynomial approaches. The incorporation of disc wobble as a variable affecting the aerodynamic coefficients showed a negligible improvement over the models that disregarded it. Further model improvement is unlikely without first addressing measurement errors in data collection, particularly pertaining to disc attitude, which is the disc plane's orientation relative to the global coordinate system. The possibility of a trajectory-based test standard for discs was also explored, highlighting the need to carefully choose standardized initial conditions to evaluate disc trajectories with a wide range of flight characteristics. Possible approaches for quantifying flight numbers were also discussed. Considerations for disc mass, initial spin ratio, and air density were also highlighted as these factors were shown to affect disc flight and can have implications for a testing standard. 
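A heavily simplified two-dimensional point-mass flight model gives the flavour of the physics-based trajectory models described above. The constant lift and drag coefficients and disc parameters below are assumed placeholders; the thesis's models instead vary six aerodynamic coefficients with five flight variables:

```python
import math

# Assumed constant aerodynamic coefficients and disc parameters (illustrative only)
CL, CD = 0.15, 0.08        # lift and drag coefficients
RHO, AREA = 1.225, 0.038   # air density (kg/m^3), disc planform area (m^2)
MASS, G = 0.175, 9.81      # disc mass (kg), gravity (m/s^2)

def carry_distance(v0, angle_deg, dt=0.001, max_steps=200_000):
    """2D point-mass flight: drag opposes velocity, lift is perpendicular to it."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = 0.0
    for _ in range(max_steps):
        v = math.hypot(vx, vy)
        qA = 0.5 * RHO * AREA * v * v            # dynamic pressure times area
        drag, lift = CD * qA, CL * qA
        ax = (-drag * vx - lift * vy) / (MASS * v)
        ay = (-drag * vy + lift * vx) / (MASS * v) - G
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        if y < 0.0:                               # landed
            return x
    return x

print(f"carry at 20 m/s, 10 deg launch: {carry_distance(20.0, 10.0):.1f} m")
```

Even this toy model shows why initial conditions dominate the flight and why a standardized rating test would need to fix launch speed, angle, and air density.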
This research contributes to the growing body of work surrounding disc golf by proposing a capture method for three-dimensional disc golf trajectories and validated physics-based disc trajectory models, and by exploring a standardized disc rating system. This work contributes to the understanding of disc behaviour for manufacturers and players alike, and propels disc golf towards a more scientifically informed future.

Item Towards SLAM-Centric Inspection of Infrastructure (University of Waterloo, 2025-04-15) Charron, Nicholas

The inspection and maintenance of civil infrastructure are essential for ensuring public safety, minimizing economic losses, and extending the lifespan of critical assets such as bridges and parking garages. Traditional inspection methods rely heavily on manual visual assessments, which are often subjective, labor-intensive, and inconsistent. These limitations have driven the development of robotic-aided inspection techniques that leverage mobile robotics, sensor fusion, computer vision, and machine learning to enhance inspection efficiency and accuracy. Despite advancements in robotic-aided inspection, existing works often focus on isolated components of the inspection process—such as improving data collection or automating defect detection—without providing a complete end-to-end solution. Many approaches utilize robotics to capture 2D images for inspection, but these lack spatial context, making it difficult to accurately locate, quantify, and track defects over multiple inspections. Other works extend this by detecting defects within images; however, without a robust 3D representation, defects cannot be precisely geolocated or measured in real-world dimensions, limiting their utility for long-term monitoring. 
While some studies explore 3D mapping for inspection, the majority rely on image-only Structure-from-Motion, which is known to be unreliable for generating dense and accurate maps, or are restricted to mapping along 2D surfaces, thereby failing to capture the full complexity of infrastructure assets. This thesis introduces a novel SLAM (Simultaneous Localization and Mapping)-centric framework for robotic infrastructure inspection, addressing these limitations by integrating lidar, cameras, and inertial measurement units (IMUs) into a mobile robotic platform. This system enables precise and repeatable localization, 3D mapping, and automated inspection of infrastructure assets. Three key challenges that hinder the development of a practical SLAM-centric inspection system are identified and addressed in this work. The first challenge pertains to the design and implementation of SLAM-centric robotic systems. This thesis demonstrates how sensor selection and configuration can be optimized to simultaneously support both high-accuracy SLAM and high-quality inspection data collection. Additionally, it establishes a robotic platform-agnostic design, allowing for flexibility across different infrastructure inspection applications. The second challenge involves precise and reliable calibration of camera-lidar systems, particularly when sensors have non-overlapping fields of view, as is the case with the proposed inspection systems. To address this, a novel target-based extrinsic calibration technique is developed, leveraging a motion capture system to achieve high-precision calibration across both sensing modalities. This ensures accurate sensor fusion, yielding geometrically consistent inspection outputs. The third challenge is the development of a complete end-to-end inspection methodology. This research implements state-of-the-art online camera-lidar-IMU SLAM, with an added offline refinement process and a decoupled mapping framework. 
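At the core of target-based extrinsic calibration of the kind discussed above sits a rigid-transform estimation step: given corresponding 3D points expressed in two sensor frames, recover the rotation and translation between them. The sketch below is the generic Kabsch/Umeyama solution to that subproblem, not the thesis's motion-capture pipeline; the synthetic pose and point set are assumptions for a round-trip check:

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares R, t such that B ~ R @ A + t (Kabsch, no scale).

    A, B: (3, N) arrays of corresponding points in two sensor frames.
    """
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    H = (A - ca) @ (B - cb).T                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Round-trip check with a synthetic extrinsic (hypothetical camera-to-lidar pose)
rng = np.random.default_rng(1)
pts = rng.uniform(-2.0, 2.0, (3, 30))               # e.g. target corner points
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[0.1], [-0.2], [0.05]])
R_est, t_est = rigid_transform(pts, R_true @ pts + t_true)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

With noise-free correspondences the recovery is exact to machine precision; the practical difficulty, which motivates the motion-capture approach, is obtaining accurate correspondences when the sensors never see the same target simultaneously.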
This approach enables the generation of high-quality 3D maps that are specifically tailored for infrastructure inspection by prioritizing accuracy, density, and low noise in the map. Machine learning-based defect detection is then integrated into the pipeline, coupled with a novel 3D map labeling method that transfers visual and defect information onto the 3D inspection map. Finally, an automated defect quantification and tracking system is introduced, allowing for defects to be monitored across multiple inspection cycles—completing the full end-to-end inspection workflow. The proposed SLAM-centric inspection system is validated through extensive real-world experiments on infrastructure assets, including a bridge and a parking garage. Results demonstrate that the system generates highly accurate, repeatable, and metrically consistent inspection data, significantly improving upon traditional manual inspection methods. By enabling automated defect detection, precise localization, and long-term defect tracking within a robust 3D mapping framework, this research represents a paradigm shift in infrastructure assessment—transitioning from qualitative visual inspections to scalable, data-driven, and quantitative condition monitoring. Ultimately, this thesis advances the field of robotic infrastructure inspection by presenting a comprehensive SLAM-centric framework that integrates state-of-the-art sensing, calibration, and mapping techniques. The findings have broad implications for the future of automated infrastructure management, providing a foundation for intelligent inspection systems that can enhance the efficiency, reliability, and safety of civil infrastructure maintenance worldwide.

Item Trendy Dorms for Grown-Ups? 
The Creation of The Corporate Co-Living Submarket and The Production of Youthified Space (University of Waterloo, 2025-04-15) Ignatovich, Eleonora

Corporate co-living has emerged as a fast-growing, investor-driven housing model marketed primarily to young professionals. While often promoted as an affordable and flexible alternative for urban renters, corporate co-living reflects deeper shifts in housing provision, including the commodification of domestic life, the reduction of private space, and the bundling of services to enhance asset value. This dissertation examines the rise of corporate co-living as a spatial, economic, and social phenomenon, situating it within broader transformations in housing markets, urban governance, and young adult residential trajectories. Focusing on London, UK - a city with a dynamic co-living sector and a large population of young knowledge-economy workers - the study investigates how corporate co-living is structured, marketed, and distributed, and what it reveals about contemporary urban change. It concentrates on a specific variant: for-profit, amenitized, and digitally mediated shared housing developed and managed by private firms. The findings are presented across three article-based chapters. The first chapter conceptualizes corporate co-living as a platform-integrated, service-oriented housing product and introduces a revised framework for understanding this model. The second chapter examines how co-living providers use service bundling and brand narratives to enhance tenant mobility and scale their operations across urban and financial geographies. The third chapter maps the spatial distribution of corporate co-living in London, revealing its tendency to cluster in regeneration zones and transit-accessible outer boroughs - patterns that may contribute to increasingly age-selective residential landscapes. 
By analyzing the emergence of corporate co-living at the intersection of housing financialization, digital platform logic, and evolving lifestyles among young people, this dissertation contributes to critical urban scholarship on the reconfiguration of rental housing and the socio-spatial impacts of new residential formats.

Item Advances in the Analysis of Irregular Longitudinal Data Using Inverse Intensity Weighting (University of Waterloo, 2025-04-14) Tompkins, Grace

The analysis of irregular longitudinal data can be complicated by the fact that the times at which individuals are observed in the data are related to the longitudinal outcome. For example, this can occur when patients are more likely to visit a clinician when their symptoms are worse. In such settings, the observation process is referred to as informative, and any analysis that ignores the observation process can be biased. Inverse intensity weighting (IIW) is a method that has been developed to handle specific cases of informative observation processes. IIW weights observations by the inverse probability of being observed at any given time, and creates a pseudopopulation in which the observation process is subsequently ignorable. IIW can also be easily combined with inverse probability of treatment weighting (IPTW) to handle non-ignorable treatment assignment processes. While IIW is relatively intuitive and easy to implement compared to other existing methods, there are few peer-reviewed papers examining IIW and its underlying assumptions. In this thesis, we begin by evaluating a flexible weighting method (FIPTIW), which combines IIW and IPTW through multiplication to handle informative observation processes and non-randomized treatment assignment processes. We show that the FIPTIW weighting method is sensitive to violations of the noninformative censoring assumption and show that a previously proposed extension fails under such violations. 
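The core idea of IIW described above (weight each observation by the inverse of its probability of being observed) can be illustrated with a one-time-point simulation, where the intensity weight reduces to an inverse observation-probability weight. The outcome distribution and observation model below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Hypothetical latent outcome for every subject at one time point
y = rng.normal(0.0, 1.0, n)

# Informative observation process: worse outcomes are seen more often
p_obs = 1.0 / (1.0 + np.exp(-(-1.0 + 1.5 * y)))
seen = rng.random(n) < p_obs

naive = y[seen].mean()                        # biased: oversamples high y
weights = 1.0 / p_obs[seen]                   # inverse observation probability
iiw = np.average(y[seen], weights=weights)    # approximately unbiased

print(f"true mean 0.0 | naive {naive:.3f} | IIW-weighted {iiw:.3f}")
```

The naive mean of the observed values drifts well away from the true population mean, while the weighted estimate recovers it; in practice the observation probabilities are not known and must themselves be estimated from an intensity model, which is where the missing-covariate issues studied in this thesis arise.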
We also show that variables confounding the observation and outcome processes should always be included in the observation intensity model. Finally, we show scenarios where weight trimming should and should not be used, and highlight sensitivities of the FIPTIW method to extreme weights. We also include an application of the methodology to a real data set to examine the impacts of household water sources on malaria diagnoses of children in Uganda. Next, we investigate the impact of missing data on the estimation of IIW weights, and evaluate the performance of existing missing data methods through empirical simulation. We show that there is no "one-size-fits-all" approach to handling missing data in the IIW model, and that the results are highly dependent on the type of covariates that are missing in the observation times model. We then apply the missing data methods to a real data set to estimate the association between sex assigned at birth and malaria diagnoses in children living in Uganda. Finally, we provide an in-depth evaluation of the assumptions made about IIW across various peer-reviewed papers published in the literature. For each set of assumptions, we construct directed acyclic graphs (DAGs) to visualize the assumptions made about the observation and censoring processes, which we use to highlight inconsistencies and potential ambiguity among the assumptions presented in existing works involving IIW. 
We also discuss when causal estimates of the marginal outcome model can be obtained, and propose a general set of assumptions for IIW.

Item Heartworks: Feminist Encounters with the Gendered Selves of Young Divorcées (University of Waterloo, 2025-04-14) Valtchanov, Bronwen L.

Compelled by personal connections to women in my life experiencing the gendered complexities of divorce, this study explores how young, divorced women (in their 20s and 30s), without children, were influenced by different gendered ideologies—including femininity, coupledom, and pronatalism—along with other social, cultural, and relational contexts and pressures, all of which can be variously experienced, reproduced, and resisted within leisure. Aligning feminist theory with narrative inquiry, I conducted one-to-one interviews and group interviews with 12 young, divorced women. I represented the findings using Creative Analytic Practice through a variety of literary forms, including monologues, social media posts, and researcher field notes. The findings elucidate women’s experiences within a framework I conceptualize as the Heartworks, which details the heart-work of women’s divorce processes and the feminist research praxis it fosters. Collectively, the findings highlight the challenges women faced as they navigated the “shattering” of their married selves and engaged in “re-creating” distinct post-divorce selves against the sociocultural backdrop of gendered ideologies. This research expands current conceptualizations of identity, grief, transition, and transformation. It also adds complexity to our thinking about women’s relationships as a shifting cultural nexus where leisure contexts both confine and expand notions of femininity and love. 
As a feminist social justice project, this research exposes the marginalization and stigmatization faced by young, divorced women and shares new understandings of their complex, lived experiences, including possibilities for resisting and re-creating limiting narratives of women’s divorce through counter-narratives of (re)claimed agency, solidarity, and empowerment.

Item "I Could be Suborned with a Sardine": A Material Culture Study of Teresa of Ávila's Letters (University of Waterloo, 2025-04-14) Zehr, Rachel

Early modern Carmelite reformer and mystic Teresa of Ávila (1515-1582) maintained an extensive correspondence, which has received less scholarly attention than her major autobiographical and mystical works. Her letters reached a broad audience, from her brother and later her nephew in colonial Peru, to her advisor Jerónimo Gracián, to the prioresses of the convents she founded across Spain, to noble patrons and friends, and even to Philip II, King of Spain. Her preoccupation with material objects spanned this diverse range. This thesis applies a material culture study to the objects mentioned in Teresa’s letters, offering new insights into her experience of the fragility of human embodiment. It analyzes Teresa’s experience with medical remedies, religious textiles, food, and possessions, illuminating both Teresa’s anxiety about the vulnerability of the fragile human body and her respect and care for the condition of human embodiment. This attention to Teresa’s concern with embodied materiality nuances our understanding of how early modern nuns navigated the spiritual and material demands placed on them by monastic poverty.