 

Theses

Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/6

The theses in UWSpace are publicly accessible unless restricted due to publication or patent pending.

This collection includes a subset of theses submitted by graduates of the University of Waterloo as a partial requirement of a degree program at the Master's or PhD level. It includes all electronically submitted theses. (Electronic submission was optional from 1996 through 2006 and became the default submission format in October 2006.)

This collection also includes a subset of UW theses that were scanned through the Theses Canada program. (The subset includes UW PhD theses from 1998 to 2002.)


Recent Submissions

Now showing 1 - 20 of 16738
  • Item
    Learning the Quantum, Scrambling the Universe
    (University of Waterloo, 2025-07-10) Liu, Shuwei
    This thesis explores how quantum information behaves in extreme physical settings, from black hole interiors to noisy quantum devices. First, we derive a thermodynamic relation linking gravitational shockwaves to microscopic deformations of the black hole horizon, illuminating the connection between quantum chaos and horizon area deformation. Next, we explore the black hole information problem through the lens of holography, demonstrating how scrambling and recoverability emerge from gravitational backreaction in shockwave geometries. Finally, we shift to quantum technologies, introducing noise-strength-adapted (NSA) quantum error-correcting codes discovered via hybrid machine learning. These non-stabilizer codes outperform conventional designs under amplitude damping and generalize to larger systems. Together, these works reveal how quantum information unifies seemingly disparate domains, offering both conceptual insights into spacetime and practical tools for building resilient quantum systems.
  • Item
    Bending the curve of biodiversity loss: Identifying barriers and opportunities to accelerate endangered species recovery in Canada
    (University of Waterloo, 2025-07-09) Kraus, Daniel
The decline of wild species represents one of the most urgent crises of our time, with significant ecological, cultural, and economic implications. Understanding the barriers and opportunities to accelerate wildlife recovery is essential to inform effective conservation planning, policymaking, and action, and ultimately to halt and reverse the loss of nature. Research for this thesis was guided by three interconnected objectives: 1) identify the patterns and processes of wildlife extinction and recovery in Canada, with a detailed examination of nationally endemic species, 2) compare and examine the effectiveness of national approaches to endangered species assessment, listing and recovery, thereby identifying bridges and barriers to recovery, and 3) develop and advance new approaches to planning and implementation that will accelerate endangered species recovery in Canada. These objectives are intended to provide novel contributions that fill key knowledge gaps to support the practice of endangered species conservation. This research describes over 200 species ‘missing’ from Canada since European settlement, revealing significantly more extinctions and extirpations than reported under the Species at Risk Act. These losses are concentrated in Ontario, BC, and Quebec, with unsustainable harvesting historically driving extinctions, and habitat degradation emerging as the dominant contemporary threat. In contrast, the research also identifies 49 species with genuine improvements in conservation status, as well as over 50 species that began to recover before formal national assessments began. Key drivers of recovery include harvest management and pollution abatement, with more contemporary recoveries resulting from translocations, stewardship, and protected areas. 
The research also highlights that most improvements in the conservation status of species at risk are the result of discovering new populations and cautions against misclassifying these as conservation successes. This research also provides the first comprehensive inventory of Canada’s 308 nationally endemic species, approximately 90% of which are of global conservation concern. The analysis identifies 27 spatial concentrations of endemic species, many of which are associated with glacial refugia, islands, coasts, and unique habitats. Despite their significance, nationally endemic species have not been prioritized in national conservation efforts, but their conservation will play an essential role in Canada’s contribution to preventing global extinctions. Drawing on comparisons with the US and Australia, the thesis identifies systemic barriers to endangered species recovery and offers ten strategic "bridges" to overcome them. These include ecosystem-based recovery, community co-governance, linking wildlife recovery to ecosystem services, and improving public narratives around wildlife loss and recovery. Insights from a survey of 136 Canadian recovery planning practitioners further highlighted that effective implementation of SARA remains elusive, with respondents emphasizing the need for improved consultations, co-production with Indigenous communities, streamlined processes, and knowledge sharing. The thesis concludes by proposing pathways to reduce extinction risks and accelerate recoveries that are based on the relationships between processes, places and peoples. These include approaches to increase proactive conservation, support community-based recovery planning and action, and improve knowledge mobilization. These recommendations aim to strengthen Canada’s capacity to meet its national and global biodiversity commitments and bend the curve of biodiversity loss.
  • Item
    On the distributions of prime divisor counting functions
    (University of Waterloo, 2025-07-09) Das, Sourabhashis
    Let k and n be natural numbers. Let ω(n) denote the number of distinct prime factors of n, Ω(n) denote the total number of prime factors of n counted with multiplicity, and ω_k(n) denote the number of distinct prime factors of n that occur with multiplicity exactly k. Let h ≥ 2 be a natural number. We say that n is h-free if every prime factor of n has multiplicity less than h, and h-full if all prime factors of n have multiplicity at least h. In 1917, Hardy and Ramanujan proved that both ω(n) and Ω(n) have normal order log log n over the natural numbers. In this thesis, using a new counting argument, we establish the first and second moments of all these arithmetic functions over the sets of h-free and h-full numbers. We show that the normal order of ω(n) is log log n for both h-free and h-full numbers. For Ω(n), the normal order is log log n over h-free numbers and h log log n over h-full numbers. We also show that ω_1(n) has normal order log log n over h-free numbers, and ω_h(n) has normal order log log n over h-full numbers. Moreover, we prove that the functions ω_k(n) with 1 < k < h do not have a normal order over h-free numbers, and that the functions ω_k(n) with k > h do not have a normal order over h-full numbers. In their seminal work, Erdős and Kac showed that ω(n) is normally distributed over the natural numbers. Later, Liu extended this result by proving a subset generalization of the Erdős–Kac theorem. In this thesis, we leverage Liu’s framework to establish the Erdős–Kac theorem for both h-free and h-full numbers. Additionally, we show that ω_1(n) satisfies the Erdős–Kac theorem over h-free numbers, while ω_h(n) satisfies it over h-full numbers.
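    The counting functions defined above are easy to make concrete. A minimal sketch of the definitions (function names are my own, not notation from the thesis):

    ```python
    from collections import Counter

    def prime_factorization(n):
        """Return a Counter mapping each prime factor of n to its multiplicity."""
        factors = Counter()
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors[d] += 1
                n //= d
            d += 1
        if n > 1:
            factors[n] += 1
        return factors

    def omega(n):        # ω(n): number of distinct prime factors of n
        return len(prime_factorization(n))

    def big_omega(n):    # Ω(n): prime factors of n counted with multiplicity
        return sum(prime_factorization(n).values())

    def omega_k(n, k):   # ω_k(n): distinct primes dividing n with multiplicity exactly k
        return sum(1 for m in prime_factorization(n).values() if m == k)

    def is_h_free(n, h):  # every prime factor of n has multiplicity < h
        return all(m < h for m in prime_factorization(n).values())

    def is_h_full(n, h):  # every prime factor of n has multiplicity >= h
        return all(m >= h for m in prime_factorization(n).values())

    # 360 = 2^3 · 3^2 · 5
    print(omega(360), big_omega(360), omega_k(360, 2))  # → 3 6 1
    print(is_h_free(12, 3), is_h_full(72, 2))           # → True True
    ```

    For example, 360 = 2³·3²·5 has three distinct prime factors, six counted with multiplicity, and exactly one prime (namely 3) occurring with multiplicity 2.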
  • Item
    Development of Ecohydrological Processes on a Partially Removed Well Pad Undergoing Restoration to a Peatland on the Western Boreal Plain, Alberta, Canada
    (University of Waterloo, 2025-07-09) McKinnon, Murdoch
    Peatlands on the Western Boreal Plain have been disturbed at a landscape scale by industrial developments including those associated with the oil and gas industry. Among these disturbances are in-situ well pads, which are constructed to provide a stable base for oil and gas drilling and extraction infrastructure. In the province of Alberta, Canada, well pads must legally be returned to a state of ‘equivalent land capability’ after decommissioning. For well pads constructed in peatlands, equivalent land capability has recently been defined as including the reestablishment of a self-sustaining and peat accumulating vegetation community. One method proposed to reintroduce peatland vegetation (including peatland mosses) onto decommissioned well pads involves the partial removal of the mineral fill used to construct a well pad. Termed the ‘Partial Removal Technique,’ this approach aligns the reprofiled surface elevation of a pad with that of the water table in the surrounding peatland. Peatland vegetation propagules are then introduced onto the residual mineral substrate using a modified version of the established Moss Layer Transfer Technique. However, considerable uncertainty has remained surrounding the efficacy of the technique as a form of peatland restoration, as it had not yet been applied at the scale of a full-size well pad. Accordingly, a five-year ecohydrological study was undertaken following the first full-scale implementation of the Partial Removal Technique on a well pad. The subject well pad was located in a fen complex on the Western Boreal Plain near the town of Slave Lake, Alberta, Canada. A series of field studies were undertaken to assess the extent to which the residual mineral substrate would support environmental conditions requisite for the initiation and establishment of a peatland vegetation community. 
Specific objectives addressed included characterization of the hydrophysical properties of the residual mineral fill and their effect on hydrological connectivity with an adjacent fen, and assessment of whether hydrological connectivity was sufficient to maintain a near-surface water table and optimal moisture availability to mosses across the entire site. The role of additional water balance terms in supporting near-surface water tables and water availability was also assessed, including quantification of snowmelt, vertical groundwater exchange, and evapotranspiration. Additionally, monitoring of the development of biogeochemical processes in the first five years post-partial removal was undertaken, including quantification of the rates of nutrient cycling and supply. The effects of microtopography and application of straw mulch and rock phosphate fertilizer on moisture and nutrient dynamics were also assessed. Results indicate that hydrological connectivity between the residual well pad and the adjacent fen was limited by the low hydraulic conductivity of the mineral fill and the compacted peat underlying it. Combined with rapid drainage from the mineral fill into the underlying peat following rainfall, this resulted in the water table being poorly regulated across just over half of the pad’s surface area. The deeper water tables observed in those areas were associated with non-optimal moisture availability to mosses (i.e., exceedance of literature desiccation thresholds), particularly in the late growing season when rainfall inputs were infrequent. Combined with high rates of water loss through evapotranspiration, it appears that much of the pad’s surface area is likely to be favourable for the establishment of only those mosses with a high desiccation tolerance. The establishment of a vegetation community characteristic of swamps may thus occur over the long term in areas that are hydrologically disconnected from the fen. 
Nonetheless, hydrological connectivity with the adjacent fen was sufficient to maintain a water table within 6 cm of the surface in areas located within approximately 20 to 30 metres of the upgradient pad edges. This water table depth was associated with optimal water supply at the surface for moss survival and growth. As such, the establishment of a peatland true moss community is likely to be supported across just under half of the pad’s surface area. Snowmelt may also have provided a large source of water in the early season, although additional study is required to determine the extent to which snowmelt may be lost from the pad as overland flow. Surface runoff from an upland feature constructed out of the excess mineral fill produced during the partial removal process did not constitute an appreciable source of water to the pad. Nutrient cycling and availability demonstrated limited spatial variability across the residual well pad. Owing to the high cation content of the calcareous residual mineral fill, cation supply rates were sufficiently high to further increase the likelihood of peatland true moss establishment in areas with optimal substrate moisture availability. However, low rates of nitrogen production and a low ratio of nitrogen to phosphorus supply rates indicate that productivity of the vegetation community on the residual pad may be nitrogen limited. This may change over time, as a layer of organic litter was observed to accumulate on the surface of the residual well pad during the study. This is likely to result in increased rates of decomposition, and thus also of nutrient mineralization over time. Combined, the results of this thesis indicate that there is a need to increase horizontal hydrological connectivity with adjacent peatlands in future implementations of the Partial Removal Technique. 
This may improve the availability of moisture across a greater proportion of the surface area of residual well pads, while also ensuring the long-term development of anaerobic biogeochemical processes. Additional work is also required to reduce water losses in the form of both vertical drainage from residual mineral substrates and evapotranspiration from the surfaces of residual well pads. Overall, the Partial Removal Technique appears to have promise as a strategy to create favourable environmental conditions for the initiation and establishment of peatland mosses on decommissioned well pads.
  • Item
    Detection of Biological Tissue Anomalies Using Low-Frequency Electromagnetic Fields
    (University of Waterloo, 2025-07-09) Akbari Chelaresi, Hamid
    This PhD thesis presents a novel biomedical imaging modality—proposed and developed for the first time—for the detection of breast cancer using low-frequency electromagnetic (EM) fields. The core principle stems from the fact that the penetration depth of EM waves into biological tissues is inversely proportional to their operating frequency. Unlike conventional high-frequency imaging techniques, this approach leverages sub-GHz frequencies (hundreds of MHz), which offer significantly deeper tissue penetration, making them particularly suitable for imaging dense breast tissues (BI-RADS categories C and D), where conventional X-ray mammography fails. Operating at low frequencies introduces critical challenges in designing radio-frequency (RF) components that are compact, human-compatible, and suitable for clinical deployment. To address this, a novel low-frequency metasurface-based film antenna—conceptually analogous to traditional X-ray films—has been developed. This metasurface sensor effectively captures scattered EM fields after interaction with biological tissues, enabling high-fidelity imaging while operating within a non-ionizing and biologically safe frequency range. The proposed system is cost-effective and portable, with strong potential for widespread deployment in low-resource settings where access to magnetic resonance imaging (MRI) is limited. Unlike MRI, which is expensive and not readily available, or ultrasound, which is prone to operator-dependent errors, this technique enables consistent and repeatable screening. Also, this work investigates the impact of various EM sources on image resolution and contrast. It is shown that magnetically enhanced sources significantly improve field-tissue interaction, thereby increasing sensitivity to early-stage tumorous anomalies. 
Advanced post-processing algorithms, including differentiation techniques and both supervised and unsupervised machine learning models, were implemented to enhance image quality and minimize diagnostic errors, further improving the system’s diagnostic performance. The methodology has been rigorously validated through both numerical simulations and experimental studies. Multiple iterations of the transmitter antennas and metasurface sensors have been developed, optimized, and evaluated throughout the course of the research. The final system demonstrates high accuracy in detecting early-stage abnormalities. Moreover, this thesis introduces a new low-frequency tomography method, also for the first time, that reconstructs images of internal tissues by modeling the X-ray-like behavior of localized, electrically small transmitters and receivers. A novel mathematical framework has been proposed and implemented using Radon transform techniques, enabling accurate spatial reconstruction of the object under test.
  • Item
    Comparison of Flow Path Mapping Between Unreal Engine and ArcGIS: The Potential Role for Game Engines in GIScience
    (University of Waterloo, 2025-07-09) Fang, Amerald
Advances in the videogame industry, particularly game engines, offer promising, unconventional tools for processing spatial data and representing complex geographical processes through integrated physics. This thesis explores the potential of using Unreal Engine (UE) as a multi-disciplinary platform for combining simulation models from the field of Computational Fluid Dynamics (CFD) with GIS. We present a case study implementing a fluid simulation workflow using Smoothed Particle Hydrodynamics (SPH) and quantitatively compare its results to a conventional flow path mapping method (D8). A multi-spatial resolution raster comparison revealed that the UE model produced flow paths with a similar length to traditional methods, but with fine-scale disagreements on where flow occurs. The vector path analysis found that the UE model produced more but shorter paths than the D8 method. The comparison highlights the viability of game engines for dynamic simulation and suggests extensions to broader geocomputation applications such as erosion modelling. Moreover, this research demonstrates how leveraging game engine capabilities can contribute to a more integrative evolution of GIScience.
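    For readers unfamiliar with the D8 baseline compared against here: it routes each cell's flow to the steepest-descent neighbour among the eight surrounding cells. A minimal sketch of that rule (the toy grid and function names are illustrative assumptions, not the thesis's implementation):

    ```python
    import math

    def d8_flow_direction(dem):
        """For each interior cell of a DEM grid, return the (dr, dc) offset of the
        steepest-descent neighbour among the 8 surrounding cells (None for pits)."""
        rows, cols = len(dem), len(dem[0])
        neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                      (0, 1), (1, -1), (1, 0), (1, 1)]
        directions = {}
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                best, best_slope = None, 0.0
                for dr, dc in neighbours:
                    dist = math.hypot(dr, dc)  # diagonal neighbours are farther away
                    slope = (dem[r][c] - dem[r + dr][c + dc]) / dist
                    if slope > best_slope:     # keep the steepest downhill drop
                        best, best_slope = (dr, dc), slope
                directions[(r, c)] = best      # None means a pit: no downhill neighbour
        return directions

    # A tiny DEM sloping down toward the bottom-right corner.
    dem = [[9, 8, 7],
           [8, 6, 4],
           [7, 4, 1]]
    print(d8_flow_direction(dem)[(1, 1)])  # → (1, 1): flow heads to the lowest corner
    ```

    The single-neighbour routing is exactly why D8 and a particle-based SPH simulation can agree on path length yet disagree at fine scales: D8 cannot split flow between two nearly equal descent directions.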
  • Item
    Dance/Movement Therapy for Dementia Caregiver Resilience: A Mixed-Methods Study
    (University of Waterloo, 2025-07-08) Champagne, Eden
As Canada’s population continues to age, more individuals will be living with neurodegenerative conditions such as dementia and caring for loved ones with these conditions. The Government of Canada estimates that in 2022-2023, over 400,000 people were living with diagnosed dementia, and close to 99,000 were newly diagnosed that year (StatsCan, 2025). Most individuals living with dementia are taken care of by a family member (romantic partner/spouse or adult child). However, individuals who step in to take on this role often become burdened and distressed due to the grief associated with the losses their loved one is going through (relational, physical, cognitive) and the compounded strains of caregiving. Once dementia emerges and continues to progress, the negative impact on caregivers’ health and well-being is greater than that on other caregiving groups (Kim & Schulz, 2008). Thus, it is imperative to explore how caregiver well-being can be maintained despite the ongoing losses associated with their loved one’s condition. The majority of caregiver support programs that have been developed and evaluated are based on stress-process models, aiming to mitigate impacts of illness-related stress through learning communication skills, coping skills, and information about dementia (Schulz et al., 2020). While some of these programs have shown reductions in caregiver depression, most have minimal effect sizes (Schulz et al., 2020) and focus on reducing dysfunction (e.g., burden), rather than promoting holistic resilience factors (Palacio et al., 2020). Resilience literature suggests that taking a strengths-based approach to caregiver support may offer meaningful pathways to caregiver well-being, by promoting malleable factors such as positive affect, self-efficacy, and ways of coping (Palacio et al., 2020). 
Despite evidence of how the creative-arts therapies (CATs) such as dance/movement therapy (DMT) can promote positive aspects of well-being such as mood and coping in other populations, they remain underexplored for caregivers in their own right (Irons et al., 2020). CAT programs that have been explored often include the caregiver as a co-facilitator of the activity, alongside their loved one with dementia, and thus they may experience burden rather than respite (Irons et al., 2020). Importantly, proposed therapeutic mechanisms of DMT inherently correspond to resilience factors for caregivers (Champagne, 2024), providing a rationale for how DMT may help caregivers to focus on their own needs and build resources. However, little research, if any, has designed DMT programs for caregivers or evaluated their benefits. The purpose of this exploratory research was to design, facilitate, and evaluate the impact of a 6-session, theory-driven DMT program on resilience for dementia caregivers and to understand their experiences of this program. The objectives and activities in the DMT sessions were informed by resilience theories and previous work on DMT for resilience in other populations. A pretest-posttest convergent mixed-methods design was used. Outcome measures included caregiving burden, resilience, and psychological flourishing. Weekly quantitative measures of active creativity and DMT therapeutic factors were also distributed to consider therapeutic mechanisms of the program. Qualitative data was captured through post-session journal entries and semi-structured debrief interviews. Online survey data was collected at two time points from 10 dementia caregivers (before and after the DMT program). Repeated-measures t-tests were used to examine the changes in caregiver burden and well-being from before to after the DMT program. Results indicated that caregiver burden was significantly reduced from baseline to follow-up, as expected. 
However, increases in benefit finding, resilience, and psychological flourishing were not statistically significant. Pearson correlations of key study variables indicated that higher resilience immediately following DMT was significantly associated with lower caregiver burden at follow-up and with higher resilience at follow-up. Additionally, experiencing more DMT therapeutic factors was negatively associated with burden at follow-up, and positively associated with resilience and psychological flourishing at follow-up, with a medium effect size, but these correlations did not reach statistical significance. Thematic findings from qualitative interviews and post-session journals revealed that the DMT program offered participants experiences of holistic engagement, liberation, and meaningful connection with others, which led to benefits of enhanced coping. Participants described their caregiving experiences as exhausting and overwhelming. They reported feeling constrained and that it was hard to find time for self-care. Participants contrasted their experiences in DMT with preexisting caregiver programs and emphasized how creative movement elicited benefits such as feeling “lighter” and empowered, gaining an attitude of acceptance, and emotional regulation. These findings suggest that DMT programs should continue to be designed and offered on a continual basis for dementia caregivers, given the unique ways in which movement provided a needed “release” and “liberation” that promoted experiences of emotional expression and improved coping. Participants suggested that future iterations of the program should have more sessions, longer sessions to enable deeper processing and debriefing, and more social time. Together, the quantitative and qualitative findings suggest preliminary evidence of the potential for DMT to foster resilience factors and benefit caregiver well-being and coping. 
Participants in the present study emphasized their own surprise at how useful the modality of DMT was for their needs and urged for more DMT programs to be accessible to them.
  • Item
    Algorithmic Tools for Network Analysis
    (University of Waterloo, 2025-07-08) Chen, Jingbang
Network analysis is a crucial technique used in various fields such as computer science, telecommunications, transportation, social sciences, and biology. Its applications include optimizing network performance, understanding social and organizational structures, and detecting fraud or misinformation. In this thesis, we propose algorithmic results on several aspects of network analysis. The Abelian sandpile model is recognized as the first discovered dynamical system exhibiting self-organized criticality. We present algorithms that compute the final state of the sandpile instance on various classes of graphs, solving the 'sandpile prediction' problem on: (1) general graphs, with further analyses on regular graphs, expander graphs, and hypercubes; and (2) trees and paths, surpassing previous methods in time complexity. To analyze the structure and dynamics of networks, counting motifs is one of the most popular methods, as motifs are considered the basic building blocks of a network. In this thesis, we introduce several tools developed for counting motifs on bipartite networks. Despite its importance, counting (p,q)-bicliques is very challenging because the number of bicliques grows exponentially with p and q. We present a new sampling-based method that produces a high-accuracy approximate count of (p,q)-bicliques, with provable error guarantees and unbiasedness. In another line of work, we consider temporal bipartite graphs, whose edges carry timestamps. To capture the dynamic nature of relationships, we count butterflies ((2,2)-bicliques) in temporal bipartite graphs within specified time windows, a task called the historical butterfly counting problem. We present a hardness result relating memory usage and query time for this problem and a new index algorithm that surpasses the hardness bound when applied to power-law graphs, with outstanding empirical performance. Lastly, we discuss tools that find polarized communities in networks. 
A classical model applied to networks to deal with polarization is the signed graph, which has positive and negative edges between vertices. A signed graph is balanced if it can be decomposed into two disjoint vertex sets such that positive edges are between vertices in the same set while negative edges are between vertices from different sets. This notion of balance is strict in that no edge may violate the condition, which seldom holds in reality. To address this, we propose a new model for identifying balanced subgraphs with tolerance in signed graphs and a new heuristic algorithm that computes maximal balanced subgraphs under the new tolerance model.
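    As context for the sandpile prediction problem mentioned above: a vertex holding at least as many chips as its degree topples, sending one chip to each neighbour, and the abelian property guarantees a unique stable state regardless of toppling order. A naive simulator sketch (this is just the definition, not the faster algorithms the thesis contributes; the dict encoding and sink convention are my assumptions):

    ```python
    from collections import deque

    def stabilize_sandpile(chips, graph):
        """Topple vertices until every non-sink vertex holds fewer chips than its
        degree; the abelian property makes the final state independent of order.

        chips: dict vertex -> chip count; graph: dict vertex -> list of neighbours.
        The designated 'sink' vertex absorbs chips and never topples.
        """
        chips = dict(chips)
        unstable = deque(v for v in graph if v != 'sink'
                         and chips.get(v, 0) >= len(graph[v]))
        while unstable:
            v = unstable.popleft()
            deg = len(graph[v])
            if chips.get(v, 0) < deg:
                continue                   # already stabilized by earlier topples
            topples = chips[v] // deg      # batch all of v's topples at once
            chips[v] -= topples * deg
            for u in graph[v]:
                chips[u] = chips.get(u, 0) + topples
                if u != 'sink' and chips[u] >= len(graph[u]):
                    unstable.append(u)
        return chips

    # Two vertices, each linked to the other and to a sink.
    graph = {'a': ['b', 'sink'], 'b': ['a', 'sink'], 'sink': []}
    final = stabilize_sandpile({'a': 3, 'b': 0}, graph)
    print(final)  # → {'a': 1, 'b': 1, 'sink': 1}
    ```

    The point of the prediction problem is that this naive loop can be very slow (topplings can cascade), which is why dedicated algorithms for trees, paths, and other graph classes matter.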
  • Item
    Investigation of Neck Posture and Muscle Activity on Cervical Spine Impact Kinematics Using a Finite Element Human Body Model
    (University of Waterloo, 2025-07-08) Correia, Matheus Augusto
Whiplash-associated disorders (WAD) encompass a broad range of symptoms affecting the neck such as pain and stiffness, reported in up to half of motor vehicle collisions. WAD are typically associated with, but not limited to, low-severity rear impacts. The high incidence of WAD and high socioeconomic cost have led to significant, but still inconclusive, efforts to better understand the associated causal injury mechanisms. Neck posture and muscle behaviour are known factors that contribute to neck injuries during low-severity vehicle impacts. Quantifying the effects of such parameters at the tissue level is challenging in experimental studies but may be informed by computational human body models (HBMs). However, three limitations in neck models have been identified: (1) neck muscle controllers were often tuned to a narrow set of specific load cases, (2) neck models were unable to predict the S-shape (upper cervical spine flexion) magnitude observed in experiments during rear impacts, and (3) tissue-level injury thresholds for the neck remain elusive. To address these challenges, three studies were defined for this thesis using a contemporary head and neck finite element model from an average-stature male HBM (Global Human Body Models Consortium (GHBMC)) with the aim of enhancing and evaluating the tissue-level response associated with WAD injury risk following rear impact. In the first study, a new closed-loop controller with a single set of parameters for neck muscle activation based on known reflex mechanisms was implemented in the GHBMC model. The updated model was assessed over a range of impact conditions. The closed-loop controller had an average cross-correlation to the experimental data of 0.699 for 14 load cases, including frontal, rear and lateral impacts, within 2% to 9% of previous calibrated open-loop approaches. 
In the second study, a novel methodology was developed to integrate pre-tension in the neck muscles based on experimental cadaveric and volunteer data and assessed in rear impact scenarios. Only the model with pre-tension achieved flexion of the upper cervical spine at the same magnitude as reported in impact tests with volunteers. Pre-tension increased the muscle tissue strain relative to cases with no pre-tension, and, in some cases, led to potentially injurious-level strains, reinforcing that the initial muscle strain is essential for evaluating WAD injury risk. In the third study, the methods from the first and second studies (closed-loop muscle activation controller and muscle pre-tension) were combined to assess possible WAD injury mechanisms based on tissue-level analysis of stresses and strains in 4g to 10g rear impacts. The existing injury metrics and tissue-level muscle strains identified that hyperextension was the main injurious phase in low-severity rear impacts. In addition, muscle pre-tension and activation changed the distribution of muscle strains, better representing the injury regions reported in the literature. New model developments and knowledge obtained from the three studies completed in this research can be generalizable to other HBMs and can be applied to evaluate the efficacy of vehicle safety systems, ultimately reducing injury risk and diminishing societal costs related to low-severity neck injury in the future. Further, the enhanced neck model developed in this work has identified possible areas of experimental interest for future neck injury research.
  • Item
    Digital Agent-Based Resource Management for Short Video Streaming in Multicast Networks
    (University of Waterloo, 2025-07-07) Huang, Xinyu
As fifth-generation (5G) networks approach maturity and widespread deployment, both industry and academia are turning their attention to sixth-generation (6G) networks. It is anticipated that 6G networks will support an unprecedented diversity of services with heterogeneous user requirements, accelerating the shift from service-oriented to experience-centric resource management. Among these services, short video streaming now accounts for a major share of users’ daily mobile traffic due to its highly engaging content, but this also leads to a substantial traffic increase, especially in densely populated areas. Considering the popularity-based and user similarity-driven recommendation principles in short video platforms, multicast transmission over the air can effectively relieve traffic pressure by delivering the same video data to a group of users with similar characteristics and locations. Quality of experience (QoE), as a subjective performance metric in experience-centric resource management, reflects the user satisfaction level with multicast short video streaming and usually consists of rebuffer time, video quality, and video quality variation. To achieve experience-centric resource management, the digital agent (DA), as a cutting-edge technology in 6G networks, offers advanced status emulation, data analytics, and decision-making capabilities: DAs can perceive network dynamics, abstract hidden behavior patterns or QoE models, and solve complex optimization problems. The central issue is maximizing user QoE in multicast short video streaming under limited radio and computing resources within dynamic network environments. 
However, the main technical challenges are: (1) how DAs abstract user swipe behavior patterns for large-timescale resource reservation to enhance resource utilization and improve long-term user QoE; (2) how DAs characterize multicast buffer dynamics for real-time resource allocation to alleviate buffer length overestimation and improve real-time user QoE; (3) how to adaptively select appropriate DA models to assist resource management and timely update them to further improve user QoE. In this thesis, we develop an efficient DA-based resource management framework to enhance user QoE for multicast short video streaming, including swipe behavior-aware resource reservation, multicast buffer-aware resource allocation, and network dynamics-aware DA management. First, we propose a DA-based resource reservation scheme by considering dynamic user swipe behaviors to enhance resource utilization and large-timescale user QoE. Particularly, user DAs are constructed for individual users, which store users’ historical data for updating multicast groups and abstracting useful information. The swipe probability distributions and recommended video lists are abstracted from user DAs to predict bandwidth and computing resource demands. Parameterized sigmoid functions are leveraged to characterize multicast groups’ user QoE. A joint non-convex bandwidth and computing resource reservation problem is formulated and transformed into a convex piecewise problem by utilizing a tangent function to approximately substitute the concave part. A low-complexity scheduling algorithm is developed to find the optimal resource reservation decisions. Simulation results based on the real-world dataset demonstrate that the proposed scheme outperforms benchmark schemes in terms of user QoE and resource utilization. Second, we propose a DA-based resource allocation scheme by considering multicast buffer dynamics to enhance real-time user QoE. 
Specifically, user statuses emulated by DAs are utilized to estimate the transmission capabilities and watching probability distributions of sub-multicast groups for adaptive segment buffering. The sub-multicast groups’ buffers are aligned with the virtual buffers managed by DAs for fine-grained buffer updates. A multicast QoE model consisting of multicast rebuffer time, video quality, and quality variation is developed by considering the mutual influence of segment buffering among sub-multicast groups. A joint optimization problem of segment version selection and slot division is formulated to maximize user QoE. To efficiently solve the problem, a data-model-driven algorithm is proposed by integrating a convex optimization method and a deep reinforcement learning (DRL) algorithm. Simulation results based on the real-world dataset demonstrate that the proposed DA-based resource allocation scheme outperforms benchmark schemes in terms of user QoE improvement. Third, we develop an adaptive DA-based resource management scheme to enhance long-term user QoE. Particularly, DAs consist of user status data and data-based models, which can update multicast groups and abstract user swipe features. An adaptive DA management mechanism for DA data processing model selection and update is developed to adapt to user status dynamics. A fine-grained QoE model is established by considering the impact of resource constraints and DA model accuracy. A joint optimization problem of bandwidth and computing resource management is formulated to maximize long-term user QoE. To efficiently solve this problem, a diffusion-based DRL algorithm is proposed, which utilizes the denoising technique to improve the action exploration capabilities of DRL. 
Simulation results based on a real-world dataset demonstrate that the proposed adaptive DA-based resource management scheme outperforms benchmark schemes in terms of user QoE, with improvements of 18.4% and 20.5% under low and high user dynamics, respectively. In summary, we have investigated DA-based radio and computing resource management from the perspectives of large-timescale resource reservation, real-time resource allocation, and adaptive DA management. The proposed approaches and theoretical results provide valuable insights and practical guidelines for experience-centric resource management in future 6G networks.
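The QoE decomposition described above (rebuffer time, video quality, and quality variation) can be illustrated with a generic linear session-scoring model. The additive form and the weights below are hypothetical stand-ins for illustration, not the parameterized sigmoid or multicast QoE models developed in the thesis:

```python
# Illustrative session-level QoE model in the spirit of the abstract's
# decomposition: reward total video quality, penalize stall time and
# segment-to-segment quality variation. Weights are hypothetical.
def qoe(qualities, rebuffer_s, w_q=1.0, w_r=4.0, w_v=1.0):
    """Score one playback session.

    qualities  -- per-segment quality levels (e.g. bitrate utilities)
    rebuffer_s -- total stall time in seconds
    """
    quality = sum(qualities)
    variation = sum(abs(a - b) for a, b in zip(qualities, qualities[1:]))
    return w_q * quality - w_r * rebuffer_s - w_v * variation

# Smooth playback beats an oscillating session with the same total quality.
smooth = qoe([3, 3, 3, 3], rebuffer_s=0.0)  # 12.0
jumpy = qoe([1, 5, 1, 5], rebuffer_s=0.0)   # 0.0
```

The same trade-off structure underlies most streaming QoE metrics; resource managers then allocate bandwidth and computing to maximize such a score across users.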
  • Item
    Statistical Inference in ROC Curve Analysis
    (University of Waterloo, 2025-07-07) Hu, Dingding
    The receiver operating characteristic (ROC) curve is a powerful statistical tool to evaluate the diagnostic abilities of a binary classifier for varied discrimination thresholds. It has been widely applied in various scientific areas. This thesis considers three inference problems in the ROC curve analysis. In Chapter 1, we introduce the basic concept of the ROC curve, along with some of its summary indices. We then provide an overview of the research problems and outline the structure of the subsequent chapters. Chapter 2 focuses on improving the ROC curve analysis with a single biomarker by incorporating the assumption that higher biomarker values indicate greater disease severity or likelihood. We interpret “greater severity” as a higher probability of disease, which corresponds to the likelihood ratio ordering between diseased and healthy individuals. Under this assumption, we propose a Bernstein polynomial-based method to model and estimate the biomarker distributions using the maximum empirical likelihood framework. From the estimated distributions, we derive the ROC curve and its summary indices. We establish the asymptotic consistency of our estimators and validate their performance through extensive simulations and compare them with existing methods. A real-data example is used to demonstrate the practical applicability of our approach. Chapter 3 considers the ROC curve analysis for medical data with non-ignorable missingness in the disease status. In the framework of the logistic regression models for both the disease status and the verification status, we first establish the identifiability of model parameters, and then propose a likelihood method to estimate the model parameters, the ROC curve, and the area under the ROC curve (AUC) for the biomarker. The asymptotic distributions of these estimators are established. 
Via extensive simulation studies, we compare our method with competing methods in point estimation and assess the accuracy of confidence interval estimation under various scenarios. To illustrate the application of the proposed method to practical data, we apply it to the Alzheimer's disease dataset from the National Alzheimer's Coordinating Center. Chapter 4 explores the combination of multiple biomarkers when disease status is determined by an imperfect reference standard, which can lead to misclassification. Previous methods for combining multiple biomarkers typically assume that all disease statuses are determined by a gold standard test, limiting their ability to accurately estimate the ROC curve and AUC in the presence of misclassification. We propose modeling the distributions of biomarkers from truly healthy and diseased individuals using a semiparametric density ratio model. Additionally, we adopt two assumptions from the literature: (1) the biomarkers are conditionally independent of the classification of the imperfect reference standard given the true disease status, and (2) the classification accuracy of the imperfect reference standard is known. Using this framework, we establish the identifiability of model parameters and propose a maximum empirical likelihood method to estimate the ROC curve and AUC for the optimal combination of biomarkers. An Expectation-Maximization algorithm is developed for numerical calculation. Additionally, we propose a bootstrap method to construct the confidence interval for the AUC and the confidence band for the ROC curve. Extensive simulations are conducted to evaluate the robustness of our method with respect to label misclassification. Finally, we demonstrate the effectiveness of our method in a real-data application.
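As background for the AUC estimators discussed above, the empirical AUC can be computed as the Mann–Whitney statistic: the probability that a randomly chosen diseased subject's biomarker exceeds a healthy subject's, with ties counting one half. This is the standard nonparametric estimator, not the Bernstein-polynomial, empirical-likelihood, or missing-data methods proposed in the thesis; the sample scores are made up for illustration:

```python
# Empirical AUC via the Mann–Whitney statistic over all healthy/diseased pairs.
def empirical_auc(healthy, diseased):
    pairs = [(x, y) for x in healthy for y in diseased]
    wins = sum(1.0 if y > x else 0.5 if y == x else 0.0 for x, y in pairs)
    return wins / len(pairs)

# Hypothetical biomarker scores; one tie at 0.8 contributes 0.5.
auc = empirical_auc(healthy=[0.1, 0.4, 0.35, 0.8],
                    diseased=[0.8, 0.65, 0.9])  # 10.5 / 12 = 0.875
```

An AUC of 0.5 corresponds to a useless classifier, 1.0 to perfect separation; the likelihood ratio ordering assumed in Chapter 2 restricts how such scores may be distributed across groups.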
  • Item
    Quantum Monte Carlo Simulations of Rydberg Atom Arrays
    (University of Waterloo, 2025-07-07) Merali, Ejaaz
Rydberg atom arrays form a promising platform for quantum computation. Through their strong, long-range interactions, they are able to encode various difficult combinatorial problems, as well as to host a plethora of intriguing physical phenomena. In this thesis, we develop and apply a Stochastic Series Expansion Quantum Monte Carlo method to simulate Rydberg systems at zero temperature and above. We then apply this simulation method alongside variational models to verify the correctness of both approaches. The data produced from the simulations is also used to train neural network wavefunctions, which we find can effectively capture some of the physics of the Rydberg atom array on a square lattice.
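As a loose illustration of Monte Carlo sampling on a Rydberg-like chain, the sketch below runs a classical Metropolis walk over occupations n_i ∈ {0,1} with a detuning term favouring excitation and a nearest-neighbour blockade penalty discouraging adjacent excitations. This is ordinary finite-temperature classical Monte Carlo, not the Stochastic Series Expansion method of the thesis, and all parameter values are illustrative:

```python
import math
import random

def metropolis_chain(L=16, delta=1.0, V=4.0, beta=2.0, sweeps=2000, seed=0):
    """Classical Metropolis sampling of occupations on an open chain."""
    rng = random.Random(seed)
    n = [0] * L

    def local_energy(i):
        # Energy terms touching site i: detuning favours n_i = 1,
        # the blockade term V penalizes adjacent excitations.
        left = n[i - 1] if i > 0 else 0
        right = n[i + 1] if i < L - 1 else 0
        return -delta * n[i] + V * n[i] * (left + right)

    for _ in range(sweeps * L):
        i = rng.randrange(L)
        e_old = local_energy(i)
        n[i] ^= 1  # propose flipping the occupation at site i
        d_e = local_energy(i) - e_old
        if d_e > 0 and rng.random() >= math.exp(-beta * d_e):
            n[i] ^= 1  # reject: restore the old configuration
    return n

config = metropolis_chain()
density = sum(config) / len(config)
```

With a strong blockade the sampler favours configurations without adjacent excitations, a classical caricature of the ordered phases studied on the square lattice.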
  • Item
    Exploring the Dominant Negative Potential of 𝘊𝘖𝘟11 Mutants in 𝘚𝘢𝘤𝘤𝘩𝘢𝘳𝘰𝘮𝘺𝘤𝘦𝘴 𝘤𝘦𝘳𝘦𝘷𝘪𝘴𝘪𝘢𝘦
    (University of Waterloo, 2025-07-07) Coletta, Genna
    Cytochrome 𝘤 oxidase (COX) is the terminal enzyme of the electron transport chain and plays a crucial role in cellular respiration. As a multisubunit enzyme, COX consists of catalytic core subunits encoded by the mitochondrial genome and requires the coordinated action of more than 30 nuclear-encoded assembly factors for proper biogenesis and function. Human COX deficiencies have been associated with mutations in both nuclear and mitochondrial genes and are thus characterized by immense genetic heterogeneity and a vast spectrum of clinical phenotypes. 𝘊𝘖𝘟11 is a nuclear-encoded copper chaperone that is required for COX assembly. Beyond this well-characterized role, COX11 has an additional, uncharacterized role in cellular redox homeostasis. Caron-Godon et al. reported a patient with compound heterozygous mutations in 𝘊𝘖𝘟11, whose homologous mutations were studied in haploid yeast. When grown on non-fermentable carbon sources, one of the mutant alleles, P238T, demonstrated robust growth, indicative of respiratory competence, in contrast to the truncation mutants, Y250* and R254*, which exhibited a complete and partial respiratory deficiency, respectively. Given that COX deficiencies are inherited in an autosomal recessive manner, this finding adds an unexpected complexity to the patient’s phenotype, which may suggest the possibility of a hypomorphic or dominant negative allele. To better recapitulate the patient’s genotype, I employed a pseudodiploid system in 𝘚𝘢𝘤𝘤𝘩𝘢𝘳𝘰𝘮𝘺𝘤𝘦𝘴 𝘤𝘦𝘳𝘦𝘷𝘪𝘴𝘪𝘢𝘦, which involves the stable co-expression of two mutant 𝘊𝘖𝘟11 alleles in a single haploid background. Functional assays, including growth on non-fermentable carbon and COX enzymatic activity, as well as immunoblotting of core COX subunits, demonstrated that the pseudodiploid double mutants, representative of the patient, supported robust respiration and maintained COX assembly at levels comparable to those of wild-type. 
In contrast, evaluation of oxidative stress markers revealed defects in cellular redox balance. Double mutant strains, particularly P238T/R254*, exhibited significantly elevated superoxide dismutase activity, a pronounced decrease in mitochondrial aconitase activity, and increased sensitivity to hydrogen peroxide. These data indicate that the redox equilibrium is compromised even when cytochrome 𝘤 oxidase function is preserved.
  • Item
    Causal Inference in the Presence of Heterogeneous Treatment Effects
    (University of Waterloo, 2025-07-07) Liang, Wei
Causal inference has been widely accepted as a statistical tool in various areas for demystifying causality from data. Treatment effect heterogeneity, a common issue in causal inference, refers to variation in the causal effect of a treatment across different subgroups or individuals within a population. This thesis explores three topics in causal inference in the presence of heterogeneous treatment effects, aiming to provide insights into this critical issue. Chapter 2 introduces basic notation, frameworks, models, and parameters in causal inference, serving as preliminary material for the three topics studied in Chapters 3 - 5, with a focus on the Rubin causal model. In Chapter 3, we discuss the first topic: causal inference with survey data. In the presence of heterogeneous treatment effects, a causal conclusion based on sample data may not generalize to a broader population if selection bias exists. We propose estimators for population average treatment effects by incorporating survey weights into the propensity score weighting approach to simultaneously mitigate confounding bias and selection bias. A robust sandwich variance estimator is developed to permit valid statistical inference for the population-level causal parameters under a proposed "two-phase randomization model" framework. The proposed estimators and associated inferential procedure are shown to be robust against model misspecifications. We further extend our results to observational non-probability survey samples and demonstrate how to combine auxiliary population information from multiple external reference probability samples for more reliable estimation. We illustrate our proposed methods through Monte Carlo simulation studies and the analysis of a real-world survey dataset. Chapter 4 explores the second topic: estimation of the treatment harm rate (THR), the proportion of individuals in a population who are negatively affected by a treatment. 
The THR is a measure of treatment risk and reveals the treatment effect heterogeneity within a subpopulation. However, the measure is generally non-identifiable even when the treatments are randomly assigned, and existing works focus primarily on the estimation of the THR under either untestable identification assumptions or ambiguous model assumptions. We develop a class of partitioning-based bounds for the THR with data from randomized controlled trials with two distinct features: our proposed bounds effectively use available auxiliary covariate information, and they can be consistently estimated without relying on any untestable or ambiguous model assumptions. Our methods are motivated by a key observation that the sharp bounds of the THR can be attained under a partition of the covariate space with at most four cells. Probabilistic classification algorithms are employed to estimate nuisance parameters to realize the partitioning. The resulting interval estimators of the THR are model-assisted in the sense that they are highly efficient when the underlying models are well fitted, while their validity relies solely on the randomization of the trials. Finite sample performances of our proposed interval estimators along with a conservatively extended confidence interval for the THR are evaluated through Monte Carlo simulation studies. An application of the proposed methods to the ACTG 175 data is presented. A Python package named partbte for the partitioning-based algorithm has been developed and is available at https://github.com/w62liang/partition-te. Chapter 5 investigates the third topic: causal mediation analysis in randomized controlled trials with noncompliance. The average causal mediation effect (ACME) and the natural direct effect (NDE) are two parameters of primary interest in causal mediation analysis. 
However, the two causal parameters are not identifiable in randomized controlled trials in the presence of mediator-outcome confounding and assignment-treatment noncompliance. In such scenarios, we explore partial identification of parameters and derive nonparametric bounds on the ACME and the NDE when the treatment assignment serves as an instrumental variable. The nonparametric sharp bounds for the local causal parameters defined on the subpopulation of treatment-assignment compliers are also provided. We demonstrate the practical application of the proposed bounds through an empirical analysis of a large-scale randomized online advertising dataset. The thesis concludes in Chapter 6 with a brief summary and discussions of future work. Technical details, including the proofs of key propositions and theorems as well as additional simulation results, are provided at the end of each chapter.
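For intuition about why the THR is only partially identified, the covariate-free Fréchet bounds can be computed directly from the two arms' marginal success rates; these are the classical bounds that the thesis's partitioning-based, covariate-assisted bounds sharpen. The function and its inputs are illustrative, not the partbte package's API:

```python
# Fréchet bounds on the treatment harm rate THR = P(Y(0)=1, Y(1)=0) for a
# binary outcome where Y = 1 is "good": only the marginals P(Y(0)=1) and
# P(Y(1)=0) are identified in a randomized trial, so THR is bounded by the
# classical joint-probability bounds.
def thr_bounds(p_treated, p_control):
    lower = max(0.0, p_control - p_treated)       # max(0, P(A) + P(B) - 1)
    upper = min(p_control, 1.0 - p_treated)       # min(P(A), P(B))
    return lower, upper

# Hypothetical arm-wise success rates: 70% under treatment, 50% under control.
lo, hi = thr_bounds(p_treated=0.7, p_control=0.5)  # bounds (0.0, 0.3)
```

Even though the treatment helps on average here, up to 30% of the population could still be harmed, which is why covariate-based refinements of these bounds are valuable.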
  • Item
    The System is Broken
    (University of Waterloo, 2025-07-03) Jeethan, Breanne
The System is Broken is an exhibition that visually reflects my experiences as a healthcare worker in an emergency department. The works represent abstract scenes of the clinical workspace. They are composed of monoprints, digital prints, UV ink paintings, sculptural etched glass, and lightboxes. Drawing on internal angiogram brain scans and dressing materials collected from the clinical setting, the work responds to the fast-paced, stressful environment of the hospital that is rife with trauma, high emotions, and anguish. Working between the emergency room and the studio, I balance my life between the two workplaces as fuel to create work. The series speaks both to my continuous navigation of in-betweenness and to the overarching hierarchical nature of the medical system. By manipulating and distorting found imagery created by various medical technologies, abnormalities in the imagery are created to signal the bureaucratic structures and power imbalance of the healthcare system.
  • Item
    Characterizing cofree representations of SLn x SLm
    (University of Waterloo, 2025-07-03) Kitt, Nicole
The study, and in particular classification, of cofree representations has been an active area of research for over 70 years. The Chevalley–Shephard–Todd Theorem provides a beautiful intrinsic characterization of cofree representations of finite groups. Specifically, this theorem says that a representation V of a finite group G is cofree if and only if G is generated by pseudoreflections. Up until the late 1900s, with the exception of finite groups, all of the existing classifications of cofree representations of a particular group consist of an explicit list, as opposed to an intrinsic group-theoretic characterization. However, in 2019, Edidin, Satriano, and Whitehead formulated a conjecture which intrinsically characterizes stable irreducible cofree representations of connected reductive groups and verified their conjecture for simple Lie groups. The conjecture states that for a stable irreducible representation V of a connected reductive group G, V is cofree if and only if V is pure. In comparison to the classifications consisting of a list of cofree representations, this conjecture can be viewed as an analogue of the Chevalley–Shephard–Todd Theorem for actions of connected reductive groups. The aim of this thesis is to further expand upon the techniques formulated by Edidin, Satriano, and Whitehead as a means to work towards the verification of the conjecture for all connected semisimple Lie groups. The main result of this thesis is the verification of the conjecture for stable irreducible representations V ⊗ W of SLn x SLm satisfying dim V >= n^2 and dim W >= m^2. As the main group under study in this thesis is SLn x SLm, in Chapter 2 we provide a thorough analysis of the structure of irreducible representations of SLn from the viewpoint of their one-to-one correspondence with irreducible representations of the Lie algebra Lie(SLn). 
The last section of Chapter 2 describes the general theory of irreducible representations of complex semisimple Lie algebras, with SLn as a toy example. In Chapter 3, we provide a brief introduction to Geometric Invariant Theory (GIT) and present the main results of the theory. We then discuss the history of GIT and the known characterization results for properties of representations that arise from GIT. In particular, we introduce cofree representations and the current classification results for cofree representations of certain classes of groups. We finish Chapter 3 by introducing pure representations and the conjecture formulated by Edidin, Satriano, and Whitehead. In Chapter 4, we verify that for all stable irreducible representations V ⊗ W of SLn x SLm satisfying dim V >= n^2 and dim W >= m^2, V ⊗ W is cofree if and only if V ⊗ W is pure. This involves proving an upper bound on the dimension of pure representations of G_1 x G_2, with G_i connected reductive Lie groups. We also introduce two methods that can be used to show that a given representation is not pure. The last section in Chapter 4 discusses the difficulties and obstacles when trying to verify the conjecture for the remaining cases, namely when dim V < n^2 or dim W < m^2.
  • Item
    The failure of Gladue: A critical examination of Indigenous Peoples Courts in Ontario
    (University of Waterloo, 2025-07-03) Jones, Ellora
In 2001, the first Gladue court opened as an intensive effort to implement the Supreme Court of Canada’s 1999 Gladue decision. This decision calls for a substantive equality approach to sentencing Indigenous people that considers the role of colonialism in their overrepresentation in the legal system. However, despite Gladue and the introduction of these Indigenous Peoples’ Courts, the overincarceration of Indigenous people has continued to increase steadily over the last 25 years. Notably, these rising incarceration rates have a gendered dimension, as the incarceration of Indigenous women is growing at triple the rate of Indigenous men. Indigenous women now account for over half of admissions to federal women’s correctional facilities. While Gladue courts have existed for more than two decades, research on how they operate remains limited, with no studies examining the role of gender within them. This dissertation addresses this gap through a critical, intersectional feminist examination of how Gladue courts operate and are understood by their court teams. The analysis draws on courtroom observation, 20 semi-structured interviews with individuals who work in the Gladue courts, and six qualitative surveys completed by judges. The data demonstrates that the criminal legal system cannot mitigate the harm of colonialism despite its efforts. Three main findings emerge within this research. Firstly, Gladue courts have been unsuccessful at reducing the overrepresentation of Indigenous women in the Canadian criminal legal system due to their failure to consider the multiple ways in which intersecting structural oppressions render women uniquely vulnerable to criminalization. Secondly, Gladue courts fail to achieve a restorative and decolonial approach because they remain embedded in the broader criminal legal system, which is rooted in colonial logics. 
The transformative potential of the courts is in constant tension with the broader goals and logics of punishment, marginalization, and social control. Finally, individual discretion and close working relationships among court team members are central to achieving Gladue courts’ substantive equality objectives; this often takes place informally and off the record through actions such as withdrawing charges. While the courts are unable to accomplish their mandate due to the inherent limitations of the criminal legal system, this failure is aggravated by the inclusion of court team members who oppose the substantive equality logics of the court and by the abandonment of foundational features of specialized courts, such as assigned court teams. This dissertation highlights several considerations for developing a decolonial approach to addressing the overincarceration of Indigenous people, including community-based Indigenous-led responses. This work contributes to ongoing sociolegal conversations about the ability of, as well as the contradictions associated with, using the criminal legal system and courts to mitigate social injustices and oppression in society. Furthermore, it contributes to broader theoretical debates on the effects of problem-solving courts on the lives of criminalized individuals located at the intersections of multiple forms of structural oppression.
  • Item
    High-Performance Coordination with Weaker Protocols: From Shared Registers to Data Feed Processing in Blockchains
    (University of Waterloo, 2025-07-03) Tan, Hao
Blockchains are decentralized ledger technologies that provide secure, transparent, and tamper-resistant transaction records. Permissioned blockchains restrict participation to authorized entities and employ Byzantine Fault Tolerance (BFT) protocols for consensus. As blockchain systems evolve, the need for exchanging data and assets across heterogeneous networks has grown. To support such functionality, platforms increasingly integrate with external systems, enabling information flow between decentralized and conventional infrastructures. Blockchain oracles play a key role in this integration by injecting real-world data—such as asset prices or event outcomes—into smart contracts through structured oracle data feeds. Conventional blockchain systems typically employ BFT consensus for oracle data integration, which introduces substantial latency and computational overhead. Although consensus protocols offer robust consistency guarantees, their computational and communication costs constrain system responsiveness and scalability in blockchain environments. To address these challenges, this thesis investigates weaker coordination protocols as lightweight alternatives that relax strict consistency requirements while preserving sufficient reliability for practical deployment. This thesis evaluates the effectiveness of these protocols in enhancing the performance of read-write operations—a common access pattern in oracle data feed processing—across three distinct projects. First, this thesis assesses the overhead of consensus in key-value processing by comparing consensus protocols with shared register protocols. The comparison reveals two previously overlooked limitations that arise under workload-driven access patterns. 
Second, it introduces a shared register protocol tailored for wide-area networks, which employs loosely synchronized clocks and asymmetric quorum techniques to reduce coordination latency across heterogeneous client loads. Third, it applies weaker coordination techniques to oracle data feed processing—a fundamental component of many smart contract applications. By decoupling data feed processing from BFT consensus using a probabilistic quorum protocol and a censorship-resistant broadcast primitive, the proposed design improves both throughput and data freshness for on-chain transactions dependent on timely off-chain inputs. Collectively, these contributions demonstrate how workload-specific weaker protocols can enable more efficient distributed systems. By reducing coordination overhead without sacrificing practical correctness, the proposed approaches support the development of scalable, responsive, and application-aware decentralized infrastructures.
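The majority-quorum idea behind shared register protocols can be sketched with a toy single-writer ABD-style register: because any two majorities intersect, a read quorum always contains the freshest write, and a write-back keeps later reads consistent. The in-process simulation below is illustrative only; it omits RPC, concurrency, and failure handling, and it is not the wide-area protocol proposed in the thesis:

```python
class Replica:
    """One storage node holding a timestamped value."""
    def __init__(self):
        self.ts, self.value = 0, None
    def store(self, ts, value):
        if ts > self.ts:  # keep only the freshest write
            self.ts, self.value = ts, value
    def load(self):
        return self.ts, self.value

class Register:
    """Single-writer register over n replicas: any two majorities intersect,
    so a read quorum always sees the latest completed write."""
    def __init__(self, n=5):
        self.replicas = [Replica() for _ in range(n)]
        self.quorum = n // 2 + 1
        self.ts = 0
    def write(self, value):
        self.ts += 1
        for r in self.replicas[: self.quorum]:  # any majority suffices
            r.store(self.ts, value)
    def read(self):
        replies = [r.load() for r in self.replicas[-self.quorum:]]
        ts, value = max(replies, key=lambda p: p[0])  # freshest reply
        for r in self.replicas:  # write-back so later reads also see it
            r.store(ts, value)
        return value

reg = Register()
reg.write("a")
reg.write("b")
latest = reg.read()  # "b", even though the read quorum differs from the write quorum
```

Avoiding a full consensus round for each operation is what makes register protocols attractive for read-write workloads such as oracle data feeds.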
  • Item
    Using a Capability Sensitive Design Approach to Support Newcomers Well-being
    (University of Waterloo, 2025-07-03) Bin Hannan, Nabil
Newcomers transitioning to a new country face many challenges, and their well-being is affected by the difficulty of navigating an unfamiliar environment on their own. This thesis explores how Capability Sensitive Design (CSD) can be operationalized to guide the end-to-end design and evaluation of technologies that support the well-being of newcomers during life transitions. While the CSD framework has recently been investigated in Human Computer Interaction (HCI) for its ethical focus on supporting what individuals have reason to value, there remains a gap in how it can be translated into concrete, scalable technology design processes. To address this, we present a multi-stage methodology that includes formative interviews, co-design sessions, prototype development, and a longitudinal field study to evaluate the application prototype. We begin by mapping the lived experiences of newcomers using a capability-oriented interview protocol and a capability board to surface valued goals and challenges. This informed a co-design process using modified capability cards, where both newcomers and organizational stakeholders ideated design features aligned with the ten central capabilities. Drawing on these insights, we developed the Newcomer App—a multilingual mobile platform offering four core features: goal-oriented planning, capability-aligned suggestions, resource search and browsing, and reflective tracking. We evaluated this platform in an eight-week field study that included in-app activity logging and post-study interviews. Our findings show that newcomers were able to identify capability-aligned goals which they found helpful, translate them into intentional plans, and reflect on both their achievements and the conversion factors that influenced outcomes. 
Importantly, we observed how CSD-informed features fostered self-discovery, increased agency, and facilitated social contribution, particularly in the capabilities of social connection, emotional well-being, and community participation. The study also highlighted the importance of contextual and social barriers in determining whether users could turn suggestions into meaningful actions. This thesis contributes an operational model for applying CSD across the full design lifecycle, offering insights for researchers and practitioners. By translating ethical commitments into deployable technologies, our work extends prior research in HCI and design for social justice, demonstrating how technologies can support equitable pathways toward well-being for marginalized groups, such as newcomers navigating complex transitions.
  • Item
    Analytic Property Testing: Directed Isoperimetry and Monotonicity
    (University of Waterloo, 2025-07-03) Ferreira Pinto Junior, Renato
Property testing is a computational paradigm that aims to design algorithms for extremely fast decision-making about massive inputs. Property testing has been studied for nearly three decades, primarily with a focus on testing properties about discrete objects such as Boolean functions and graphs. In this thesis, we study property testing for inherently continuous objects, namely functions with real-valued domain and range---which we call the analytic setting. We study the central problem of monotonicity testing in this setting, where the input is a continuous function f : [0,1]^d → R and the algorithm must decide whether f is monotone with respect to its input coordinates, or far from monotone in the appropriate sense (namely with respect to the L^p distance). The central theme of this thesis is a connection between monotonicity testing and directed isoperimetric inequalities, which are analogues of classical isoperimetric inequalities that have been shown to be intimately related to monotonicity testing in discrete settings. Indeed, many algorithmic advances in monotonicity testing over the last decade have been obtained via new directed isoperimetric inequalities. We show that the connection between directed isoperimetry and monotonicity also holds in the analytic setting, and indeed reveals new relationships between property testing and areas of mathematics such as partial differential equations and optimal transport theory. The main results in this thesis are the directed Poincaré inequality dist^mono_2(f)^2 ≲ E[|∇^- f|^2] for Lipschitz functions f : [0,1]^d → R defined over the solid unit cube, where dist^mono_2(f) denotes the L^2 distance to monotonicity of f and the "directed gradient" operator ∇^- f measures the local violations of monotonicity of f; and a monotonicity tester for this setting with query complexity Õ(√d). 
We obtain the directed Poincaré inequality by studying a new partial differential equation called the directed heat equation. In our study of monotonicity testing and its connection to directed isoperimetry, we also systematize classical and directed isoperimetric inequalities in continuous and discrete settings; obtain a variety of upper and lower bounds for monotonicity testing of Lipschitz functions in other settings of interest such as the one-dimensional line and the hypergrid; and develop directed Poincaré inequalities for directed graphs by studying a dynamical process called the directed heat flow via directed analogues of classical spectral theory.
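The testing model discussed above can be illustrated with a naive sampler that looks for a violated comparable pair x ≤ y with f(x) > f(y). This sketch conveys only the problem setup: it gives a one-sided certificate of non-monotonicity when it finds one, but has none of the guarantees of the Õ(√d) tester developed in the thesis, and all names and parameters are illustrative:

```python
import random

def violates_monotonicity(f, d, trials=500, seed=1):
    """Sample comparable pairs x <= y in [0,1]^d; return True on a witnessed violation."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = [rng.random() for _ in range(d)]
        y = [xi + rng.random() * (1.0 - xi) for xi in x]  # y_i >= x_i coordinate-wise
        if f(x) > f(y) + 1e-12:
            return True  # certificate that f is not monotone
    return False

def increasing(v):
    return sum(v)       # monotone: never triggers a violation

def decreasing(v):
    return -sum(v)      # anti-monotone: a violation is found almost surely
```

Finding a violated pair is easy for functions far from monotone; the hard part, which the directed isoperimetric inequalities address, is bounding how many queries are needed when violations are sparse.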