Theses

Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/6

The theses in UWSpace are publicly accessible unless restricted due to publication or patent pending.

This collection includes a subset of theses submitted by graduates of the University of Waterloo as a partial requirement of a degree program at the Master's or PhD level. It includes all electronically submitted theses. (Electronic submission was optional from 1996 through 2006. Electronic submission became the default submission format in October 2006.)

This collection also includes a subset of UW theses that were scanned through the Theses Canada program. (The subset includes UW PhD theses from 1998 - 2002.)

Recent Submissions

Now showing 1 - 20 of 16696
  • Item
    Game Plan for a Warmer World: Assessing the Climate Change Readiness of National-Level Canadian Sport Organizations
    (University of Waterloo, 2025-06-17) Simoes, Kyara
    Climate change is increasingly affecting sports, with warming temperatures and extreme weather events disrupting training and competition schedules, heightening health risks for athletes, coaches, and spectators (e.g., heat-related illnesses), as well as damaging sports infrastructure (e.g., flooded fields). At the same time, many sports and sports tourism are carbon intensive, prompting growing commitments to reduce emissions in line with the Paris Agreement. This study applies a structured content analysis, guided by an adapted climate policy integration (CPI) framework, to assess the climate change readiness of national-level Canadian sport organizations (n=86), including Sport Canada, Multisport Service Organizations (MSOs), and National Sport Organizations (NSOs). The integration of climate or environmental considerations into sport governance is critical for supporting the sector’s transition to low-carbon and climate-resilient operations. However, an analysis of official documents and websites found that the climate responses of national-level Canadian sport organizations are fragmented and insufficient, with 29.1% of organizations referencing climate change or environmental sustainability across any communication platform, 19.8% disclosing mitigation or adaptation initiatives, and only 3.5% showing alignment with international climate policy, such as the UN Sport for Climate Action Framework. It is argued that sport organizations must embed climate objectives into strategic planning, strengthen alignment with national climate policy, and build capacity for implementation. This transition should be supported by federal leadership, access to guidance and sector-specific resources, as well as international climate frameworks and best practices in sport sustainability.
  • Item
    Fast, Private and Fair Federated Learning
    (University of Waterloo, 2025-06-16) Malekmohammadi, Saber
    Remarkable progress in machine learning (ML) technologies has occurred during the past decade. Federated learning (FL) is a decentralized ML setting in which clients collaborate to train a model under the orchestration of a central server. Clients can be, for instance, mobile devices (in cross-device FL) or organizations (in cross-silo FL). In FL, the training data of clients remain decentralized, mitigating many of the systemic privacy risks and costs of centralized ML. This thesis investigates three main sub-problems of FL: (1) convergence analysis of FL algorithms; (2) providing formal data privacy guarantees to clients through differentially private FL (DPFL); and (3) performance fairness for clients in DPFL systems under structured data heterogeneity. Over the past years, the FL community has produced a plethora of new algorithms, yet a thorough comparison of these algorithms and the theory behind them is lacking, largely because our understanding of that theory is fragmented. This is the main focus of the first chapter, where we show that many of the existing FL algorithms can be understood through the lens of operator splitting theory. This unification allows us to compare different algorithms easily, to refine previous convergence results, and to propose new algorithmic variants. Our new theoretical findings reveal the remarkable role played by the step size in FL algorithms. Furthermore, the unification allows us to propose a simple and economical way to accelerate the convergence of different algorithms, which is indeed vital, as communication between the server and clients is often a significant overhead in FL systems. 
Although FL provides clients with some level of informal data privacy by operating on their decentralized data, the orchestrating server or other malicious third parties can still attack the data privacy of participating clients. Consequently, FL has been augmented with differential privacy (DP) in order to provide rigorous data privacy guarantees to clients. In DPFL, which is the focus of the second chapter, there is often heterogeneity in clients' privacy requirements, depending on their privacy policies. This heterogeneity, as well as heterogeneity in clients' batch and/or dataset sizes, leads to variation in the DP noise level across clients' model updates. Hence, straightforward aggregation strategies, e.g., assigning clients aggregation weights proportional to their privacy parameters, lead to low utility for the system. We propose a DPFL algorithm that efficiently estimates the true noise level in clients' model updates and uses an aggregation strategy that reduces the noise level in the aggregated model update. Our proposed method improves utility and convergence speed, while being robust against clients that may maliciously send falsified privacy parameters to the server to attack the system's utility. Finally, in the third chapter, we investigate the intersection of FL, DP, and performance fairness under structured data heterogeneity across clients. This type of data heterogeneity is often addressed by clustering clients (a.k.a. clustered FL). However, existing clustered FL methods remain sensitive and prone to errors, further exacerbated by the DP noise in the system. In the third chapter, we propose a robust algorithm for differentially private clustered FL. To this end, we propose using large batch sizes in the first round and clustering clients based on both their model updates and their training loss values. 
Furthermore, for clustering clients' model updates at the end of the first round, our proposed approach addresses the server's uncertainty by employing Gaussian Mixture Models (GMMs) to reduce the impact of DP and stochastic noise and avoid potential clustering errors. This idea is especially effective in privacy-sensitive scenarios with more DP noise and yields high accuracy in clustering clients.
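As a hedged illustration of the noise-aware aggregation idea described in this abstract, the sketch below weights each client's update inversely to its estimated noise variance before averaging, so noisier (more private) updates contribute less. The function name, the noise model, and the toy numbers are illustrative assumptions, not the thesis's actual algorithm.

```python
# Illustrative inverse-variance weighted aggregation of noisy client
# updates; clients with larger estimated DP noise get smaller weights.
def aggregate(updates, noise_vars):
    """updates: list of parameter vectors (lists of floats);
    noise_vars: estimated DP noise variance per client."""
    weights = [1.0 / v for v in noise_vars]
    total = sum(weights)
    weights = [w / total for w in weights]
    dim = len(updates[0])
    # Weighted coordinate-wise average of the client updates.
    return [sum(w * u[i] for w, u in zip(weights, updates))
            for i in range(dim)]

# Example: two clients, the second with four times the noise variance,
# so the aggregate leans toward the first client's update.
agg = aggregate([[1.0, 2.0], [3.0, 6.0]], [1.0, 4.0])
```

With noise variances 1.0 and 4.0 the normalized weights are 0.8 and 0.2, so the aggregate is pulled toward the less noisy update.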
  • Item
    Evaluating Municipal Climate Action: An Analysis of Performance Measurement Models, Practices, and Indicators
    (University of Waterloo, 2025-06-16) Feor, Leah
    This thesis provides interconnected contributions to theory and practice in climate governance, specifically on municipal performance measurement practices. Key concepts, including the evaluation and control step of the strategic management process, the performance measurement process, and the indicator framework and selection process, frame this research. This thesis contains five chapters. The first and last chapters serve as the introduction and conclusion, respectively. The second, third, and fourth chapters are standalone papers. The first paper of this thesis explored the current state of social impact measurement (SIM) by examining common practices that are used to measure the post-intervention social impact of programs and projects. Using a systematic literature review, this study analyzed a decade's worth of global academic literature on SIM. Through deductive and inductive manual coding of articles in NVivo, this study identified key themes and strategies for improving measurement practices. Findings from this paper suggest strategies for improved measurement, such as stakeholder engagement throughout the measurement process, utilizing existing operational data, enhancing measurement capacity, and using a combination of qualitative and quantitative data. This study contributes to the SIM field by offering an in-depth understanding of common measurement models and providing clear recommendations for practitioners to improve SIM. The second paper of this thesis used a contingency theory lens to investigate the climate-related performance measurement practices of 31 Canadian municipalities, with a focus on the influence of population size. Using a case study approach, data were gathered through interviews and document analysis. Data were analyzed through both deductive and inductive coding in NVivo 14. 
Results indicate that municipalities with larger population sizes prioritize more themes for measurement, employ a broader set of criteria for indicator selection, and report more frequently. Population size does not seem to influence stakeholder involvement in indicator selection or data analysis strategies. By applying contingency theory, Chapter 3 examined a situational approach versus the idea of a ‘one-size-fits-all’ solution for local climate-related performance measurement. The final paper explored the climate mitigation indicators currently used in practice and identified those most suitable for measuring local climate-related performance. A document analysis was conducted to identify the climate-related indicators in use by 21 Canadian municipalities, which were categorized and analyzed according to the logic model framework and GHG emissions activity sectors. An indicator evaluation matrix was employed to propose a parsimonious set of 19 new climate mitigation indicators, with the Delphi technique used to achieve consensus among experts. This study found that while a range of indicators exists across the logic model, their distribution is uneven. The analysis also revealed the emergence of nature-based indicators for local climate mitigation performance measurement. Overall, this thesis showcases and defines models and frameworks that municipalities can use to better track their climate performance, while also contributing to the broader academic discourse on measurement practices in the public sector. The findings from this thesis outline streamlined approaches to performance measurement, providing clear pathways for municipalities that are looking to more effectively track progress towards common climate goals.
  • Item
    Impacts of temperature variation on duckweed population growth and distribution in a changing climate
    (University of Waterloo, 2025-06-16) Andrade-Pereira, Debora
    Understanding the impacts of climate change on aquatic plants involves examining how temperature fluctuation patterns influence their temperature-dependent vital rates and distribution. Duckweeds, small aquatic plants with both economic significance and ecological concern, can pose challenges due to overgrowth and the spread of invasive species. The impact of climate-induced temperature changes on aquatic plants remains poorly understood, as many studies use constant conditions that do not account for natural variability in temperature. Research focused on increased average temperatures has shown general ectotherm responses tied to geographic location, such as enhanced growth in temperate regions. However, when temperature fluctuations are considered, responses differ from those under constant conditions due to nonlinear and asymmetrical thermal performance. Increased autocorrelation, with prolonged sequences of unusually high or low temperatures, can affect population growth rates, while nighttime warming alters diel temperature variation and potentially influences time-sensitive processes like photosynthesis and respiration. This thesis investigates the thermal performance and distribution of duckweed species under varying temperature regimes associated with climate change, incorporating both controlled experiments and predictive modeling. The second chapter uses a Maxent species distribution model to predict the potential range expansion of Landoltia punctata (dotted duckweed), an invasive, herbicide-resistant species. Habitat suitability is modeled under current and future climate scenarios, using satellite-derived water temperature data and constraining model features to match the shape of thermal performance curves obtained from laboratory experiments. Results indicate high suitability for this species in Western Europe and Southern Canada, with the Great Lakes region becoming increasingly suitable in the future due to climate warming. 
These projections underscore the importance of climate-informed management strategies to mitigate the ecological impact of invasive species. The third chapter investigates how diel temperature variability and climate change affect the reproduction of Lemna minor (common duckweed) during spring and summer. Experimental results highlight the importance of temperature variance as opposed to the timing of warming. While increased mean spring temperatures enhance duckweed performance, reduced temperature variance during high summer temperatures in regions such as Canada helps mitigate the negative impacts of otherwise excessively hot fluctuating conditions. These findings emphasize the varying effects of climate change on duckweed's thermal performance across different seasons. The fourth chapter examines the effects of temperature autocorrelation on both common and dotted duckweed reproduction and survival. Experiments show that strongly autocorrelated sequences result in mortality due to heat stress when hot temperature sequences begin with elevated heat. In contrast, autocorrelation has limited impacts under cooler average conditions, likely due to slower physiological responses. These findings align with broader predictions of increased extinction risks for ectotherms under persistent and extreme temperature patterns caused by climate change. This work is a step towards a more realistic understanding of aquatic plant responses to climate change by considering thermal performance responses, diverse temperature fluctuation patterns, and water temperatures. Our results can be used in population dynamics models to make more realistic predictions of climate change responses. The experimental and modeling findings in this thesis advance our understanding of aquatic plant responses to climate change and support the development of informed strategies to manage their ecological impacts and sustainable production in a warming world.
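The abstract's point that fluctuating temperatures produce different responses than constant ones, because thermal performance curves are nonlinear and asymmetric, can be sketched numerically. The Briere-1 curve form and every parameter below are illustrative assumptions, not the fitted curves from the thesis.

```python
import math

# Illustrative Briere-1 thermal performance curve (TPC): performance is
# zero outside (T0, Tmax) and rises asymmetrically toward a hot-edge peak.
def briere(T, a=2e-4, T0=5.0, Tmax=35.0):
    """Hypothetical TPC; parameters are not from the thesis."""
    if T <= T0 or T >= Tmax:
        return 0.0
    return a * T * (T - T0) * math.sqrt(Tmax - T)

mean_T = 28.0
performance_constant = briere(mean_T)
# A diel fluctuation of +/- 6 C around the same mean temperature:
performance_fluct = (briere(mean_T - 6) + briere(mean_T + 6)) / 2.0
# Near the curve's hot edge the curve is locally concave, so fluctuation
# lowers average performance relative to the constant regime (Jensen's
# inequality), echoing the abstract's result that reduced summer variance
# can mitigate heat stress.
```

This is only a toy demonstration of why variance, not just the mean, matters for temperature-dependent vital rates.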
  • Item
    Kinetic Energy Spectra, Backscatter, and Subgrid Parameterization Analysis in Radiative-Convective Equilibrium
    (University of Waterloo, 2025-06-16) Lai, Kwan Tsaan
    This thesis explores how energy is distributed and transferred across scales in convection-permitting radiative-convective equilibrium (RCE) simulations and how these processes can be more accurately represented in numerical models through improved subgrid parameterizations. Aggregation steepens the horizontal kinetic energy spectra by enhancing the large-scale energy, resulting in horizontal kinetic energy spectra in both the upper troposphere and lower stratosphere that are close to the mesoscale -5/3 spectrum. In the upper troposphere, spectral energy budget analysis indicates that this is the result of the balance between buoyancy flux and vertical energy flux, rather than a classic direct energy cascade. In the lower stratosphere, there is inverse energy transfer, which may be explained by wave-mean-flow interaction. Subfilter energy transfer analysis is performed on an idealized RCE simulation by filtering a 1-km high-resolution simulation to a horizontal scale of 4 km. The net subfilter energy transfer rate is dissipative in the upper troposphere and backscattering in the lower stratosphere, which is consistent with the direction of energy transfer in the nonlinear transfer energy flux. The stochastic backscatter TKE scheme, a stochastic backscatter-allowing subgrid turbulence scheme created by adding a zero-mean stochastic forcing to the eddy viscosity of the TKE scheme, is proposed and tested on idealized RCE simulations. The stochastic backscatter TKE scheme improves the subgrid local energy transfer when compared to a common stochastic backscatter scheme and the standard TKE scheme without backscatter. Although backscatter is still weaker than dissipation in the lower stratosphere in the stochastic backscatter TKE simulations, the kinetic energy spectra are closer to the -5/3 spectrum than in the standard TKE simulation. 
This study advances our understanding of the interscale distribution and transfer of energy in RCE, and the introduction of the stochastic backscatter TKE scheme provides a more realistic representation of dissipation and backscatter by matching the distribution of subfilter energy transfer rate from a high-resolution simulation.
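The comparison against the mesoscale -5/3 spectrum mentioned in this abstract amounts to fitting a slope in log-log space. The sketch below does this for a synthetic power-law spectrum; the synthetic data stand in for spectra diagnosed from RCE model output and are not the thesis's results.

```python
import math

# Least-squares slope of log E(k) vs log k; a spectrum following the
# mesoscale power law should yield a slope near -5/3.
def loglog_slope(ks, Es):
    xs = [math.log(k) for k in ks]
    ys = [math.log(E) for E in Es]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic, idealized mesoscale spectrum E(k) = k^(-5/3) over a decade
# of dyadic wavenumbers (illustrative stand-in for model output).
ks = [2 ** i for i in range(1, 11)]
Es = [k ** (-5.0 / 3.0) for k in ks]
slope = loglog_slope(ks, Es)   # recovers -5/3 for the ideal spectrum
```

In practice the fit would be restricted to the wavenumber band where the mesoscale scaling is expected to hold.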
  • Item
    Radar Near-Field Sensing For Biomedical Applications
    (University of Waterloo, 2025-06-16) Bagheri, Omid
    Biomedical sensing technologies are essential for real-time health monitoring and disease management. Among their applications, blood glucose monitoring is of particular importance due to the global prevalence of diabetes that demands early detection and continuous management. Although clinically approved invasive methods exist, they are often inconvenient and unsuitable for continuous monitoring. Despite extensive research, non-invasive glucose monitoring, whether through wearables or smartwatches, remains an unsolved challenge, with no commercially or clinically validated solutions available. Radar-based biomedical sensing offers a promising non-invasive, continuous monitoring approach with tissue penetration capabilities. However, challenges such as suboptimal antenna design, near-field limitations, air-skin impedance mismatch, and poor depth resolution persist. A key objective in improving radar sensing performance is to maximize the transmitted power density from the radar's transmit (TX) antenna into the target medium while simultaneously enhancing the reflected power received by the receive (RX) antenna. This dual enhancement significantly improves the radar’s signal-to-noise ratio (SNR). Integrating advanced lenses and metasurfaces addresses these limitations, enabling efficient, practical deployment without major redesigns. This research introduces the development and implementation of novel radar-based methodologies tailored for biomedical sensing applications. Through innovative system design, advanced signal processing, and rigorous experimental validation, the proposed solutions address key challenges in on-body sensing. The first contribution focuses on advanced lens designs, such as dielectric rod arrays and modified gradient-index (GRIN) Luneburg lenses, aimed at enhancing radar-based external health monitoring at 10 GHz. 
The second contribution advances metasurface technologies for internal biomarker monitoring, enabling compact, skin-contact wearable systems with enhanced sensitivity and spatial resolution, with a specific emphasis on non-invasive blood glucose detection. Operating in the 58–63 GHz millimeter-wave band, the proposed metasurface-enhanced radar system integrates the BGT60TR13C sensor from Infineon Technologies with a planar, phase-synthesized metasurface for near-field focusing within the skin dermis layer, achieving over an 11-fold improvement in SNR by enhancing both transmitted and reflected power. Four progressively advanced objectives are presented, each building upon the previous stage: a single-band, single-focus metasurface, a preliminary design serving as proof of concept; metasurface-enhanced multi-radar fusion for distributed sensing; a dual-band, dual-focus metasurface for depth-selective monitoring; and a multi-band, multi-focus non-interleaved metasurface for combined spatial and depth resolution. These innovations have the potential to revolutionize non-invasive, continuous health monitoring.
  • Item
    Reinforcement Learning for Scheduling Processes Under Uncertainty in Chemical Engineering Facilities
    (University of Waterloo, 2025-06-16) Rangel-Martinez, Daniel
    Optimal scheduling of chemical systems has gained interest as it provides economic advantages. The development of methodologies for approaching this problem has expanded considerably, bringing multiple options to the process optimization field. Real-world applications deal with uncertainty at multiple stages of the process, from price fluctuations to processing times and material quality. Preventive and reactive scheduling techniques, which adapt to uncertainty realizations, were developed to approach the scheduling optimization problem under uncertainty. Recently, the use of Deep Neural Networks (DNNs) combined with Reinforcement Learning (RL) algorithms became an option for generating policies for decision-making processes. In the context of scheduling, a policy can be beneficial as it produces the schedule online, responding to the needs of the process and the realization of uncertainties in real time. In this thesis, a set of methodologies for approaching the scheduling problem under uncertainty for batch systems with Deep Reinforcement Learning (DRL) methods is presented. The state-of-the-art methods available in this area are improved in this work through the development of different techniques that study the translation of the scheduling problem into the framework of Reinforcement Learning. Contrary to multiple approaches in the literature where the process environment is assumed to be a Markov Decision Process (MDP), the methods presented in this work assume partial observability of the process. This setting, called a Partially Observable Markov Decision Process (POMDP), is useful for handling uncertainty in the process as well as for capturing the evolution of the process over time. To support this assumption for the scheduling process, Recurrent Neural Networks (RNNs) are implemented in order to analyze their performance on the scheduling optimization problem. 
To the author's knowledge, these networks have not been used before for this purpose; their implementation in the literature focuses on other features of this type of network, for instance, their capacity to handle inputs of various lengths. This new perspective is compared with implementations framed as MDPs, showing the advantage of approaching the scheduling problem in this way. The decision space of the scheduling problem is usually discretized into a set of actions that can be taken in the online scheduling process with DRL. In this work we propose the use of hybrid agents that can make simultaneous decisions in the process at every decision step. This expands the applicability of the scheduling agent to more realistic scenarios where more than one decision is required. Moreover, this method extends the nature of the decisions into the continuous space by using DRL algorithms that can work in this domain. This method also allows the integration of the scheduling tasks with other levels of the hierarchical manufacturing system, e.g., planning or control tasks. To the author's knowledge, attempts to extend the number of decisions from the agent to these other levels are not reported in the literature. The use of DNNs for modeling policies in the scheduling process brings the issue of the “black box” model, in which the model does not provide any description of its heuristics for humans to understand the logic behind its decisions. This becomes an obstacle when ensuring that the setting of hyperparameters aligns with the current decisions of the agent. In other words, the agent does not provide any insight into its decisions, and the only way to check this is through the agent's final results. To provide the agent with interpretability, attention mechanisms are implemented in this work. 
They produce an attention matrix at every decision step in the process, which makes it possible to gather information on the logic behind the decisions. Attention mechanisms have been used in the literature due to their outstanding capacity for building correlations between the elements of the process. To the author's knowledge, the use of this interpretability for inspection and correction of hyperparameters in the scheduling problem is not reported in the literature. The DRL algorithms used in the presented methods are: a) a variation of Deep Q-Learning and b) Proximal Policy Optimization (PPO). Deep Q-Learning is a well-known DRL method that specializes in problems with a discrete action space; PPO, on the other hand, is applicable to both discrete and continuous action spaces. These methods are relatively simple to implement and showed good performance in the work presented in this thesis. The implementations of the methods developed were tested on job shops, flow shops, and State Task Networks, with objective functions related to makespan reduction and product maximization. Results showed that agents trained with the presented methods can generate schedules that account for the uncertain parameters of the system, making this online scheduling agent attractive for industrial-scale applications. The agent's capacity to react is demonstrated by the experiments performed in each study. Moreover, the results were compared with different benchmarks, including alternative DNNs and optimization solvers. The implementation of DRL methods for the scheduling problem under uncertainty demonstrated that this alternative has potential as a reactive online scheduler that can provide reliable responses in short turnaround times. The set of methods presented in this thesis illustrates the advantages and limitations of incorporating machine learning into decision-making in the context of chemical engineering.
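The "scheduling as sequential decisions under uncertainty" framing used throughout this abstract can be illustrated with a deliberately tiny example. The thesis uses deep RL (a Deep Q-Learning variant and PPO) on job shops and State Task Networks; the sketch below is only a toy tabular Q-learning dispatcher on a single machine with stochastic processing times, with all jobs, rewards, and hyperparameters invented for illustration.

```python
import random

# Toy reactive dispatching: at each decision step the agent picks which
# remaining job to run; durations are uncertain and only realized after
# the action, so the learned policy must cope with that uncertainty.
random.seed(0)
MEAN = {"A": 2.0, "B": 5.0}          # hypothetical expected durations

def sample_duration(job):
    return MEAN[job] + random.uniform(-1.0, 1.0)  # uncertainty realization

Q = {}
alpha, gamma, eps = 0.1, 1.0, 0.1

def q(state, a):
    return Q.get((state, a), 0.0)

for _ in range(5000):
    remaining, t = frozenset(MEAN), 0.0
    while remaining:
        acts = sorted(remaining)
        a = (random.choice(acts) if random.random() < eps
             else max(acts, key=lambda x: q(remaining, x)))
        d = sample_duration(a)
        t += d
        reward = -t                   # penalize each job's completion time
        nxt = remaining - {a}
        best_next = max((q(nxt, b) for b in nxt), default=0.0)
        Q[(remaining, a)] = q(remaining, a) + alpha * (
            reward + gamma * best_next - q(remaining, a))
        remaining = nxt

start = frozenset(MEAN)
# The learned policy dispatches the shorter-expected job first, i.e. it
# rediscovers the shortest-processing-time rule for total flow time.
```

The point is only the framing: state, action, uncertainty realization, reward; the thesis replaces the table with DNN/RNN policies and richer action spaces.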
  • Item
    The Use of Physical Exercise in the Hippocratic Corpus
    (University of Waterloo, 2025-06-16) Mifsud, Joshua
    This thesis sets out to prove that the Hippocratic Corpus uses exercise prescriptions for the treatment of various pathologies, showing concordance with modern EIM (‘exercise is medicine’) principles. I provide an analysis of both the Hippocratic views on exercise and the prescriptions themselves from the following texts: Regimen, Internal Affections, and Regimen in Acute Diseases. The fundamental purpose of this research project is to study the origins of exercise therapy as a medical practice in the ancient world.
  • Item
    Human-centric Path Planning and Motion Behaviour Analysis in Hazardous Environments
    (University of Waterloo, 2025-06-13) Gao, Noreen
    Hazardous work environments, such as nuclear facilities and construction sites, present critical safety challenges that demand more attention. While nuclear operations expose workers to imperceptible radiation risks, construction activities involve physically demanding tasks that contribute to ergonomic injuries such as musculoskeletal disorders (MSDs). Persistent risks in hazardous environments require more effective solutions that address their unique occupational challenges through advanced technologies. This research explores the potential of augmented reality (AR) and virtual reality (VR) for improving safety through human-centric design, focusing on path planning in radiation environments and human motion analysis in masonry as potential use cases. The research addresses practicality issues during implementation and assesses the feasibility of the developed solutions. The first study develops an AR-based path planning system for radiation environments. Existing path planning algorithms prioritize only exposure minimization, producing routes poorly suited for human navigation, such as zigzagging paths with too many turns or paths that are unnecessarily long simply to follow the lowest doses. To overcome this, I propose a two-stage human-centric path planning framework. First, an A*-based algorithm is enhanced with a novel multi-objective cost function to generate candidate paths that balance cumulative radiation exposure, travel distance, and the number of turns. Second, a parameter sweep procedure is introduced to select Pareto-optimal solutions. Unlike traditional methods that prioritize either radiation reduction or path length, this framework offers users a variety of path options tailored to their specific needs. By favouring fewer turns for easier navigation, this approach is also more intuitive than traditional robot-centric path planning methods, offering greater flexibility and safety for workers in real-world applications. 
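A multi-objective path cost of the kind this abstract describes, combining radiation dose, distance, and a turn penalty, can be sketched with a uniform-cost grid search. The dose map, weights, and use of Dijkstra instead of the thesis's A*-plus-parameter-sweep framework are all illustrative assumptions.

```python
import heapq
import itertools

# Hypothetical dose-rate grid; the middle column is a high-dose region.
DOSE = [
    [1, 9, 1, 1],
    [1, 9, 1, 1],
    [1, 1, 1, 1],
]
W_DOSE, W_DIST, W_TURN = 1.0, 0.1, 0.5   # illustrative objective weights
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def plan(start, goal):
    """Uniform-cost search over (cell, heading) states so that changing
    direction incurs an explicit turn penalty."""
    counter = itertools.count()          # tie-breaker for the heap
    pq = [(0.0, next(counter), start, None, [start])]
    seen = set()
    while pq:
        cost, _, cell, heading, path = heapq.heappop(pq)
        if cell == goal:
            return cost, path
        if (cell, heading) in seen:
            continue
        seen.add((cell, heading))
        r, c = cell
        for dr, dc in MOVES:
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(DOSE) and 0 <= nc < len(DOSE[0]):
                step = (W_DOSE * DOSE[nr][nc] + W_DIST
                        + (W_TURN if heading not in (None, (dr, dc)) else 0.0))
                heapq.heappush(pq, (cost + step, next(counter), (nr, nc),
                                    (dr, dc), path + [(nr, nc)]))
    return float("inf"), []

cost, path = plan((0, 0), (0, 2))
# With these weights the planner detours around the high-dose column
# rather than crossing it, trading extra distance and turns for dose.
```

Sweeping the three weights and keeping the non-dominated (dose, distance, turns) outcomes would approximate the Pareto-selection stage the abstract describes.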
The second study evaluates VR's efficacy for training in masonry work to reduce ergonomic risks, such as MSDs, when lifting heavy blocks. While VR training has proven effective in various fields, its effectiveness in teaching proper ergonomic posture and reducing injury risks has not been thoroughly explored. This study conducted experiments to compare real lifts (lifting physical blocks in a real-world setting) and VR lifts (lifting virtual weightless blocks in a VR-simulated environment), assessing motion behaviour in both contexts. In both settings, participants wore a motion-capture suit to record motion data while performing the same block-lifting tasks. The collected data were processed for analysis using the Rapid Upper Limb Assessment (a standard test for ergonomic risk), followed by a detailed analysis of the scores for body sections, including the upper arm, lower arm, neck, and trunk. Experimental results demonstrate a statistically significant difference in motion behaviour between VR and real-life tasks, particularly in the trunk and neck. We conclude that VR training developments for the trades must recognize this limitation.
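The RULA-style scoring this abstract relies on maps joint angles into ordinal posture brackets before comparison. The sketch below does this for trunk flexion only; the threshold bands follow commonly cited RULA trunk ranges but should be treated as approximate here, and the angle samples are hypothetical, not the study's data.

```python
# Approximate RULA-style trunk posture bracket (illustrative thresholds,
# not the full RULA worksheet used in the study).
def trunk_score(angle_deg):
    if angle_deg <= 0:
        return 1          # upright or extended
    if angle_deg <= 20:
        return 2
    if angle_deg <= 60:
        return 3
    return 4

# Hypothetical trunk flexion angles (degrees) per lift:
real_lifts = [35, 42, 55, 61, 48]   # deeper flexion with heavy blocks
vr_lifts = [15, 22, 18, 25, 19]     # shallower flexion with weightless blocks

real_mean = sum(trunk_score(a) for a in real_lifts) / len(real_lifts)
vr_mean = sum(trunk_score(a) for a in vr_lifts) / len(vr_lifts)
# A higher mean posture score for real lifts would indicate greater
# ergonomic risk than the VR condition captures, which is the kind of
# gap the study's statistical comparison detected.
```

The study itself compared full RULA scores across body sections with statistical testing; this toy only shows the angle-to-bracket step.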
  • Item
    Deliberative Machines: From Reflective Dialogue to Fair Consensus with Language Models and Social Choice
    (University of Waterloo, 2025-06-13) Blair, Carter
    This thesis investigates the bidirectional relationship between artificial intelligence (AI), particularly large language models (LLMs), and social choice theory. Firstly, it explores how principles from social choice can address challenges in AI alignment, specifically the problem of aggregating diverse human preferences fairly when guiding AI behavior (SC → AI). Standard alignment methods often obscure value conflicts through implicit aggregation. Secondly, it examines how AI techniques can enhance collective decision-making processes traditionally studied in social choice (AI → SC), offering new ways to elicit and synthesize the complex, nuanced, and verbal preferences that conventional mechanisms struggle to handle. To address these issues, this work presents computational methods operating at the interface of AI and social choice. First, it introduces Interactive-Reflective Dialogue Alignment (IRDA), a system using LLMs to guide users through reflective dialogues for preference elicitation. This process helps users construct and articulate their values concerning AI behavior, resulting in individualized reward models that capture preference diversity with improved accuracy and sample efficiency compared to non-reflective baselines, especially when values are heterogeneous. Second, the thesis proposes a framework for generating fair consensus statements from multiple viewpoints by modeling text generation as a token-level Markov Decision Process (MDP). Within this MDP, agent preferences are represented by policies derived from their opinions. We develop mechanisms grounded in social choice: a stochastic policy maximizing proportional fairness (Nash Welfare) to achieve ex-ante fairness guarantees (1-core membership) for distributions over statements, and deterministic search algorithms (finite lookahead, beam search) maximizing egalitarian welfare for generating single statements. 
Experiments demonstrate that these search methods produce consensus statements with better worst-case agent alignment (lower Egalitarian Perplexity) than baseline approaches. Together, these contributions offer principled methods for eliciting diverse, reflective preferences and synthesizing them into collective outputs fairly. The research provides tools and insights for developing AI systems and AI-assisted processes that are more sensitive to value pluralism.
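The social-choice selection rules described above can be illustrated with a toy step. The sketch below is not the thesis's token-level MDP over LLM policies; the candidate statements (`s1`-`s3`), the agents, and the utility numbers are all hypothetical, standing in for alignment scores derived from agent opinions. It compares an egalitarian (max-min) pick against a Nash-welfare (product-of-utilities) pick.

```python
# Toy sketch of two social-choice selection rules: egalitarian welfare
# and Nash welfare. Statements, agents, and utilities are hypothetical
# illustrations, not data or code from the thesis.

import math

# utilities[a][s]: how well statement s aligns with agent a's opinion
# (e.g., something like an inverse perplexity under the agent's policy).
utilities = {
    "alice": {"s1": 0.9, "s2": 0.4, "s3": 0.6},
    "bob":   {"s1": 0.2, "s2": 0.5, "s3": 0.7},
    "carol": {"s1": 0.3, "s2": 0.8, "s3": 0.6},
}
statements = ["s1", "s2", "s3"]

def egalitarian_pick(utils, cands):
    """Pick the statement maximizing the worst-off agent's utility."""
    return max(cands, key=lambda s: min(u[s] for u in utils.values()))

def nash_welfare(utils, s):
    """Product of agent utilities (computed in log space)."""
    return math.exp(sum(math.log(u[s]) for u in utils.values()))

best_egal = egalitarian_pick(utilities, statements)
best_nash = max(statements, key=lambda s: nash_welfare(utilities, s))
print(best_egal, best_nash)
```

On this toy instance both rules agree on `s3`: it has the best worst-case utility (0.6) and the largest utility product, while `s1` and `s2` each leave one agent far behind.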
  • Item
    Tacit Inefficiencies and Barriers in Continuous Integration
    (University of Waterloo, 2025-06-13) Weeraddana, Nimmi Rashinika
    Continuous Integration (CI) is the heartbeat of a software project. CI enables team members to validate each change set through an automated cycle (i.e., a CI build) that compiles and tests the project's source code. Although adoption of CI improves team productivity and software quality, these benefits come at a cost. As projects evolve, the complexity of CI pipelines tends to increase, introducing potential inefficiencies (i.e., prolonged build durations and frequent build restarts) and barriers (e.g., the specialized expertise required to maintain CI artifacts). Such inefficiencies and barriers waste resources that enable CI. While inefficiencies and barriers in CI are often explicit, where project teams are cognizant of them, there also exist tacitly accrued inefficiencies and barriers that are not immediately apparent to project teams. In this thesis, we use historical data from a large collection of software projects to perform three empirical studies, focusing on tacit inefficiencies and barriers in CI. We first present an empirical study that focuses on tacit inefficiencies in the environment (e.g., CircleCI) where CI builds are executed. We observe that (1) CI builds can unexpectedly time out due to issues in the environment, such as network problems and resource constraints, and (2) the history of previous CI build outcomes and anticipation of clusters of consecutive timeouts can provide useful indications to project teams to proactively allocate resources and take preventive measures. Next, we present an empirical study that investigates tacit inefficiencies in CI that stem from dependencies in projects (e.g., npm dependencies). More specifically, CI builds triggered from change sets that update versions of unused dependencies are entirely wasteful because such change sets do not impact the project source code. 
We find that (1) a substantial amount of CI build time is spent on these wasteful builds, (2) bots that automatically manage dependency updates in projects (e.g., Dependabot) need to consider whether a dependency is used before triggering a build, and (3) to detect and omit such wasteful builds, project teams may adopt our automated approach, Dep-sCImitar, to cut down on this waste. We then present an empirical study that investigates tacit barriers that are related to the composition of the teams responsible for creating and maintaining CI pipelines, i.e., the DevOps contributors. In particular, we examine the diversity and inclusion of these contributors—a factor that plays a crucial role in CI by influencing collaboration and the overall efficiency of CI pipelines. Our findings show that (1) the perceived ethnic diversity of DevOps contributors is significantly low compared to other contributors, with a similar pattern observed for perceived gender diversity, and (2) the lack of diversity is amplified when considering the intersection of minority ethnicities and genders, calling for enhanced awareness of the lack of diversity among DevOps contributors.
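The core idea behind flagging wasteful dependency-update builds can be sketched in a few lines. This is not the actual Dep-sCImitar implementation; the `package.json` contents, the file names, and the simple regex heuristic for spotting `require`/`import` statements are all hypothetical.

```python
# Hypothetical sketch: a CI build triggered by a dependency bump is
# wasteful if no bumped package is actually referenced in the sources.
# Not the Dep-sCImitar implementation; data and heuristic are made up.

import json
import re

package_json = json.loads(
    '{"dependencies": {"left-pad": "1.3.0", "lodash": "4.17.21"}}')
updated_deps = {"left-pad"}          # packages bumped by the change set
source_files = {
    "index.js": "const _ = require('lodash');\nmodule.exports = _.chunk;",
}

def used_packages(files):
    """Collect package names referenced via require()/import in sources."""
    pattern = re.compile(r"(?:require\(|from\s+)['\"]([^'\"]+)['\"]")
    return {m for text in files.values() for m in pattern.findall(text)}

def build_is_wasteful(updated, files):
    """A build is wasteful if no updated dependency is actually used."""
    return updated.isdisjoint(used_packages(files))

print(build_is_wasteful(updated_deps, source_files))
```

Here the bump touches only `left-pad`, which no source file imports, so the build would be flagged as skippable; bumping `lodash` instead would not be.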
  • Item
    Understanding experiences of women’s empowerment through WASH/cash transfers toward post-COVID-19 recovery in Ghana
    (University of Waterloo, 2025-06-13) Jebuni, Julius
    Keywords: cash transfers; water, sanitation, and hygiene (WASH); social norms; women’s empowerment; water security; COVID-19
  • Item
    Analytical Seismic Performance Assessment of Braced Timber Frames with Shape Memory Alloy Fasteners
    (University of Waterloo, 2025-06-13) Fierro Orellana, Javier Sebastian
    The increasing adoption of mass timber construction requires a thorough understanding of its structural performance. Braced timber frames (BTFs) commonly serve as lateral load-resisting systems in mass timber buildings; however, further insight into their seismic performance, particularly regarding ductility and energy dissipation, remains essential. Shape memory alloy (SMA) dowels at BTF connections provide potential advantages due to their superelasticity, enabling large displacements with minimal permanent deformation. This analytical research investigates the seismic performance of BTFs with SMA dowel-type connections compared to traditional steel connections. The methodology involves developing a numerical framework in OpenSees to capture the nonlinear behaviour of BTFs with steel and SMA dowel-type connections. The framework involves calibrating uniaxial material models at the connection level based on experimental hysteresis results. Brace-level models incorporate asymmetrical deformation at each brace end, documented in the literature. Frame-level models facilitate pushover and time history analyses. This approach bridges the gap between experimental observations in connections and structural system applications, specifically evaluating how connection-level self-centering translates into system-level performance in moderate seismic zones in Canada. Using this numerical framework, the self-centering capacity of SMA dowel-type connections is assessed at the system level. The seismic response of SMA-connected frames is compared to traditional steel-connected frames using BTF building prototypes. Key findings show that SMA connections exhibit superior self-centering, substantially reducing residual drift compared to steel connections, thus enhancing seismic resilience. Although SMA-connected frames experienced higher peak drifts, their minimal permanent deformations significantly reduced post-earthquake repair needs. 
Numerical results indicated that SMA connections increased peak interstorey drift by 8-32% but significantly reduced residual drift by approximately 90% in both prototypes. Peak floor accelerations decreased by 3-13% with SMA connections. These findings confirm that SMA dowels effectively improve post-earthquake performance without compromising structural strength. This will guide future code developments and research directions for BTF systems incorporating SMA dowel-type connections.
  • Item
    Location for Profit Maximization: Applications to On-Demand Warehousing and Pilotless Air Cargo Transportation
    (University of Waterloo, 2025-06-13) Chang, Hsiu-Chuan
    Modern logistics systems face increasing complexity due to the rapid growth of e-commerce and emerging technologies, particularly in shared warehousing platforms and air freight delivery. Accordingly, advanced approaches are needed to effectively address large-scale network design problems that prioritize profit-oriented objectives. This dissertation develops mathematical models and effective solution methodologies to optimize profitability in on-demand warehousing and pilotless air cargo transportation networks. In particular, the thesis introduces decomposition methods and heuristic algorithms to solve facility location problems arising in these applications with a focus on profit-maximization. First, a Lagrangian relaxation framework combined with a local search heuristic is used to solve a multi-period profit-maximizing facility location model in on-demand warehousing, accounting for price-sensitive demand and dynamic allocation. Next, for pilotless air cargo transportation, two network design models are developed and solved using a branch-and-price and genetic algorithm enhanced with score-based labeling and a hybrid heuristic to optimize business profit, hub placement, and airplane routing. The findings provide practical insights for enhancing profitability and operational responsiveness in both on-demand warehousing and pilotless air cargo logistics.
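The Lagrangian relaxation idea can be sketched on a tiny profit-maximizing facility location instance. The numbers, the single-period setting, and the constant-step subgradient update below are illustrative simplifications; the thesis works with multi-period models, price-sensitive demand, and a local search heuristic on top of the relaxation.

```python
# Hypothetical toy instance: profit[i][j] for serving customer i from
# facility j, fixed opening cost fixed[j]. We relax the constraint
# "each customer served at most once" with multipliers lam[i] and run
# a plain constant-step subgradient method on the dual bound.

profit = [[6.0, 2.0], [1.0, 5.0], [4.0, 4.0]]   # customers x facilities
fixed = [3.0, 3.0]
lam = [0.0] * len(profit)
step = 0.5

def solve_relaxation(lam):
    """For fixed multipliers the relaxed problem decomposes by facility."""
    open_fac, assign = [], [[0] * len(fixed) for _ in profit]
    for j, f in enumerate(fixed):
        gains = [max(0.0, profit[i][j] - lam[i]) for i in range(len(profit))]
        if sum(gains) - f > 0:                   # worth opening facility j
            open_fac.append(j)
            for i, g in enumerate(gains):
                if g > 0:
                    assign[i][j] = 1
    bound = sum(lam) + sum(
        sum(max(0.0, profit[i][j] - lam[i]) for i in range(len(profit)))
        - fixed[j] for j in open_fac)
    return bound, assign

for _ in range(50):
    bound, assign = solve_relaxation(lam)
    # subgradient of the relaxed constraint sum_j x_ij <= 1
    g = [1 - sum(row) for row in assign]
    lam = [max(0.0, l - step * gi) for l, gi in zip(lam, g)]

print(round(bound, 2))
```

The best integer solution here earns a profit of 9 (open both facilities, serve each customer from its better one); the dual bound settles just above that, illustrating the upper bound a relaxation provides to guide a heuristic.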
  • Item
    Enhancing YOLO through Multi-Task Learning: Joint Detection, Reconstruction, and Classification of Distorted Text Images
    (University of Waterloo, 2025-06-12) Shaji, Reshma
    Robust recognition of alphanumeric text mounted on vehicle surfaces may present significant challenges. These real-world challenges include conditions such as motion blur, out-of-focus imagery, variation in illumination, and compression artifacts. Existing automatic license plate recognition (ALPR) pipelines usually separate detection, enhancement, and recognition into distinct stages, either relying on explicit deblurring networks or extensive augmentation for generalization, each incurring latency, error propagation, or a performance ceiling on severely degraded inputs. This study introduces YOLO CRNet, a unified end-to-end multi-task framework built upon the YOLO object detector, designed to simultaneously localize characters, enhance text regions, and perform optical character recognition (OCR) within a single network. We integrate two specialized heads into the YOLO backbone: a reconstruction head that restores degraded text regions, and a classification head that directly recognizes alphanumeric characters. Shared feature representations are extracted from multiple depths of the core YOLO network for synergistic learning across complementary tasks. To inform feature selection for the classifier head, we extract per‑character embeddings from five different layer combinations of the YOLO network (ranging from early backbone to deep neck layers) and visualize class separability via t‑SNE. This analysis reveals that Configuration A, which combines early backbone layers (1, 2, 4) with neck layers (10, 13, 16), yields the most distinct clusters for the alphanumeric character classes. The YOLO CRNet classifier head trained on Configuration A achieves 95.2% accuracy and a 94.97% F1‑score on a held‑out set of 10,100 sharp character crops, outperforming alternative layer configurations by up to 18%. 
Extensive experiments on blurred text datasets demonstrate that the combined reconstruction-then-classification configuration of YOLO CRNet significantly outperforms both the baseline YOLO detector and the YOLO CRNet classification head alone. In particular, this configuration achieves a 23.5 percentage-point improvement in classification accuracy (from 44.5% to 68.0%) and a 15.5-point gain in F1-score (from 0.550 to 0.705). By integrating detection, enhancement, and recognition into a single network guided by t‑SNE based feature selection, YOLO CRNet reduces latency, mitigates error propagation, and explicitly handles image distortions. This work lays a foundation for real‑time, robust vehicle text detection and illustrates the power of multi‑task learning and data‑driven feature analysis in fine‑grained text recognition tasks.
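The layer-combination comparison above can be illustrated with a stand-in separability score. The real analysis visualized YOLO embeddings with t-SNE; the sketch below instead scores tiny synthetic 2-D embeddings with a between-class/within-class distance ratio, and every coordinate and configuration name is made up.

```python
# Hypothetical sketch of ranking layer configurations by how well their
# character embeddings separate into classes. Real work used t-SNE on
# YOLO features; here we use a simple between/within distance ratio on
# synthetic 2-D points. All numbers are illustrative.

import math

def centroid(points):
    n = len(points)
    return tuple(sum(p[d] for p in points) / n for d in range(2))

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def separability(classes):
    """Mean inter-centroid distance over mean within-class spread."""
    cents = {k: centroid(v) for k, v in classes.items()}
    keys = list(cents)
    between = [dist(cents[a], cents[b])
               for i, a in enumerate(keys) for b in keys[i + 1:]]
    within = [dist(p, cents[k]) for k, v in classes.items() for p in v]
    return (sum(between) / len(between)) / (sum(within) / len(within))

# Embeddings for characters "A" and "B" under two layer configurations:
# config_a forms tight, far-apart clusters; config_b's classes overlap.
config_a = {"A": [(0, 0), (0.2, 0.1)], "B": [(5, 5), (5.1, 4.9)]}
config_b = {"A": [(0, 0), (2.0, 2.0)], "B": [(1, 1), (3.0, 3.0)]}

print(separability(config_a) > separability(config_b))
```

A higher ratio means tighter clusters that sit farther apart, which is the property the t-SNE plots were inspected for when choosing the classifier's feature layers.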
  • Item
    Closest Point Geometry Processing: Extensions and Applications of the Closest Point Method for Geometric Problems in Computer Graphics
    (University of Waterloo, 2025-06-12) King, Nathan
    This thesis develops theoretical aspects and numerical methods for solving partial differential equations (PDEs) posed on any object for which closest point queries can be evaluated. In geometry processing (and computer graphics in general), objects are represented on a computer in many different ways. Requiring only closest point queries allows the methods we develop to be used with nearly any representation. Objects can be manifold or nonmanifold, open or closed, orientable or not, and of any codimension or even mixed codimension. Our work focuses on solving PDE on manifolds using the closest point method (CPM), although some nonmanifold examples are also included. We develop fundamental extensions of CPM to enable its use for the first time with many applications in geometry processing. Two major impediments stood in our way: the complexity of manifolds commonly found in geometry processing and the inability to impose interior boundary conditions (IBCs) with CPM. We first develop a runtime and memory-efficient implementation of the grid-based CPM that allows the treatment of highly complex manifolds (involving tens of millions of degrees of freedom) and avoids the need for GPU or distributed memory hardware. We develop a linear system solver that can improve both memory and runtime efficiency by up to 2x and 41x, respectively. We further improve runtime by up to 17x with a novel spatial adaptivity framework. We then develop a general framework for IBC enforcement that also only requires closest point queries, which finally allows for many geometry processing applications to be performed with CPM. We implicitly partition the embedding space across (extended) interior boundaries. CPM's finite difference and interpolation stencils are adapted to respect this partition while preserving up to second-order accuracy. We show that our IBC treatment provides superior accuracy and handles more general BCs than the only existing method. 
We deviate from the common grid-based CPM and further develop a discretization-free CPM by extending a Monte Carlo method to surface PDE. This enables CPM to enjoy common benefits of Monte Carlo methods, e.g., localized solutions, which are useful for view-dependent applications. Finally, we introduce an algorithm to compute geodesic paths that does not even require a manifold PDE; only heat flow on a 1D line and closest point queries are required. Our method is more general, robust, and always faster (up to 1000x) than the state-of-the-art for general representations. Our method can be up to 100,000x faster (with high-resolution meshes) or slower (with low-resolution meshes) than the state-of-the-art in terms of runtime. Convergence studies on example PDEs with analytical solutions are given throughout. We further demonstrate the effectiveness of our work for applications from geometry processing, including diffusion curves, vector field design, geodesic distance and paths, harmonic maps, and reaction-diffusion textures.
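The key identity that CPM exploits can be checked numerically in a few lines: extending a surface function off the surface so that it is constant along normals (v(x) = u(cp(x))) lets the ordinary Cartesian Laplacian stand in for the surface (Laplace-Beltrami) Laplacian on the surface. The sketch below verifies this for u(theta) = cos(theta) on the unit circle, whose surface Laplacian is -cos(theta); the test point and step size are arbitrary choices, not values from the thesis.

```python
# Minimal numerical check of the closest point method's core identity:
# for v(x) = u(cp(x)) (constant along normals), the Cartesian Laplacian
# of v, evaluated on the curve, equals the surface Laplacian of u.
# Curve: unit circle. u(theta) = cos(theta), surface Laplacian -cos(theta).

import math

def cp_extension(x, y):
    """Closest point extension of u(theta)=cos(theta) off the circle."""
    theta = math.atan2(y, x)          # angle of the closest circle point
    return math.cos(theta)

def cartesian_laplacian(f, x, y, h=1e-3):
    """Standard 5-point finite-difference Laplacian in the plane."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4 * f(x, y)) / (h * h)

theta0 = 0.7                          # an arbitrary point on the circle
x0, y0 = math.cos(theta0), math.sin(theta0)
approx = cartesian_laplacian(cp_extension, x0, y0)
exact = -math.cos(theta0)             # Laplace-Beltrami of cos on circle
print(abs(approx - exact) < 1e-3)
```

Because the extension agrees with the surface Laplacian on the curve, a PDE solver can work entirely on a Cartesian grid and only needs closest point queries, which is what makes the method representation-agnostic.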
  • Item
    Dynamic Modelling and Simulation of Major League Baseball Pitching Biomechanics to Reduce Ulnar Collateral Ligament Injury and Improve Performance
    (University of Waterloo, 2025-06-11) Attias, Cedric
    Baseball teams of all levels rely on their pitchers to achieve the ultimate goal: win games. This fact is exacerbated in professional baseball, where pitchers are considered the most important members of the team and are compensated as such. Because of this, many efforts are made to maximize the performance of pitchers, without compromising their health. Doing so is no easy task and is often considered a balancing act, where teams want to get the most out of their pitchers, while making sure they are available long-term. Historically, scientific work in baseball pitching typically constitutes the study of kinematics and kinetics of pitching motions, to extract relevant joint angles, forces, and moments. However, most efforts in this space do not consider elite calibre athletes. In high-level organizations, most research and development work is conducted internally, by sports science and analytics departments, which are often kept confidential, to not give opponents the same competitive advantage that the analyses might produce. With the modelling, simulation, and computing tools available today, an opportunity exists to conduct a comprehensive analysis of elite baseball pitching biomechanics, that can inform the management and training of these athletes, without the need for physical experimentation that may impart additional strain on the pitcher. In collaboration with a confidential industry partner (a high-level baseball organization), practical strategies to reduce injury risk and improve performance for their pitchers are deduced from in-game kinematic data, shared and anonymized by the researcher partners. This work also provides expertise to the participating team in modelling and simulation, which can contribute to their on-field success and future scientific work. This is done by employing a series of simulation strategies, the first being an inverse dynamic approach. 
The shared kinematic data is initially used to track the pitching motion in an inverse kinematic analysis, which provides estimates of joint angles over time and an animation of the motion. The same data is then used to calibrate the dimensions and joint kinematics of the athletes for the development of a full-body, skeletal model. An inverse dynamic analysis was then performed to estimate the net joint moments responsible for this motion, while also considering the model’s reliability at estimating joint angles and net joint dynamics. Particular attention was paid to the throwing arm, namely the shoulder and elbow joints, to understand the etiology of ulnar collateral ligament (UCL) injuries in pitchers. Next, a predictive forward dynamic simulation was developed. This approach was generated by solving an optimal control problem for the muscle forces and activations that maximize ball speed during a pitch, with constraints on the model's physiology, joint kinematics, and joint kinetics. First, a comprehensive musculoskeletal (MSK) model that is tuned and scaled to represent a baseball pitcher’s dimensions and force-generating capabilities was developed. Using this model, a series of “what-if” simulations were conducted to explore the influence of variable pitching mechanics on UCL loads, while continuing to prioritize the identification of factors that have the most impact on elbow valgus torques, without compromising performance. Specifically, the predictive simulations were used to identify the pitching motion that minimized UCL loading while maintaining pitching speed. The cost function used involved minimizing actuation effort while achieving a competitive pitch speed. Constraints were imposed on joint angles and speeds, muscle activations, and ball speed to reflect realistic bounds for elite athletes. 
The final cost function of the optimal control problem rewarded reaching a desired pitch speed threshold, achieving a lead foot speed of zero at ball release (to maintain balance and prevent slipping), and minimizing the actuation of the shoulder rotation actuator (shoulder external rotation), which tends to load the UCL. Results demonstrated how variations in pitching mechanics alter UCL loads. For instance, faster pitches exhibited greater contralateral trunk tilt and higher arm slot, elevating UCL loads, while slower pitches demonstrated greater ipsilateral trunk tilt with a lower arm slot (sidearm), reducing UCL loads. Throughout all of the simulations, care was taken to evaluate the robustness of the model through validation using a series of case studies and comparisons to literature. Using these validated models, findings relating to the variability in pitching strategies were identified. Each pitcher and simulation analyzed employed unique biomechanics to achieve the defined costs, while all producing competitively viable throwing speeds. While performance was prioritized, each set of throwing mechanics posed different levels of threat to the athlete's UCL, indicating that these highly individualized motions can create inconsistent loading patterns within subjects. These findings can be used to develop personalized pitching and conditioning programs aimed at maximizing throwing velocity while reducing injury risk, without the need for strenuous physical testing.
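The trade-off at the heart of the predictive simulations, meeting a performance target with the least actuation effort, can be sketched on a drastically reduced model. The sketch below is a hypothetical 1-DOF arm driven by a single bounded torque; the inertia, arm length, target speed, and grid search are all made-up stand-ins for the full-body musculoskeletal optimal control problem.

```python
# Toy "what-if" sketch: a 1-DOF arm accelerated by a constant shoulder
# torque, choosing the smallest torque (a proxy for actuation effort)
# that still reaches a target ball release speed. All parameters are
# hypothetical simplifications of the full MSK optimal control setup.

I = 0.35          # arm moment of inertia, kg*m^2 (hypothetical)
L = 0.75          # arm length, m (hypothetical)
T_MAX = 120.0     # torque bound, N*m
DT, STEPS = 0.001, 150
TARGET = 35.0     # required ball release speed, m/s

def release_speed(torque):
    """Integrate omega_dot = torque / I; ball speed = omega * L."""
    omega = 0.0
    for _ in range(STEPS):
        omega += (torque / I) * DT
    return omega * L

def min_effort_torque():
    """Smallest constant torque (0.5 N*m grid) meeting the target."""
    t = 0.0
    while t <= T_MAX:
        if release_speed(t) >= TARGET:
            return t
        t += 0.5
    return None

best = min_effort_torque()
print(best is not None and release_speed(best) >= TARGET)
```

The real problem replaces the single torque with time-varying muscle activations, the grid search with an optimal control solver, and adds the balance and joint-limit constraints described above, but the objective structure (hit the speed target, spend minimal effort) is the same.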
  • Item
    A Biophysical Study on the Effects of Bacterial Infection and Neuroprotective Molecules in Relation to Alzheimer’s Disease
    (University of Waterloo, 2025-06-10) Filice, Carina Teresa
    Alzheimer’s disease (AD) is a prominent health concern among the aging population. This disease impairs neuronal cells, in part, through the accumulation of amyloid-β(1-42) peptide (Aβ1-42) and its various toxic mechanisms. Despite the advent of anti-amyloid drugs, which remain controversial, the efficacy of newly developed treatments remains limited. Investigations into novel treatment strategies highlight the potential protective abilities of natural products; one of which is melatonin, a hormone produced by the pineal gland. Due to its inherent lipophilicity and interaction with membranes, melatonin is investigated in this work as a novel membrane-protection strategy. The difficulty in discovering effective anti-amyloid drug targets may be related to Aβ1-42’s physiological role as an antimicrobial peptide. Several hypotheses suggest microbial infections as a causative risk factor of AD through the stimulation of Aβ1-42 and neuroinflammation. This work specifically focuses on the contributions of bacterial functional amyloids, known as ‘curli fibers’, to Aβ1-42 processes. The basis for this investigation into curli fiber-amyloid interactions lies in multiple established instances of cross-seeding with other amyloidogenic peptides. Recently published studies demonstrate this interaction, but many questions remain as to its nature and its effect on other AD processes. In this work, we intend 1) to elucidate the molecular mechanism of melatonin membrane protection against amyloid toxicity and 2) to investigate the interaction of infection-establishing bacterial curli fibers with Aβ1-42 and the effect of these complexes on AD-associated mechanisms. To explore these mechanisms, we used multiple methods in biophysics, molecular biology, and computational chemistry to provide an interdisciplinary perspective. 
Previously published lipid models mimicking various AD-afflicted neuronal membranes were pre-treated with melatonin prior to Aβ1-42 and resulting damage was assessed via atomic force microscopy (AFM) imaging and black lipid membrane (BLM) electrophysiology. In this work we demonstrated that melatonin’s ability to inhibit peptide binding and promote membrane repair after Aβ1-42 exposure depends on its fluidic nature working in combination with the lipid composition of target membranes. Similarly, high-speed AFM, AFM, and BLM studies evaluated the differences in antimicrobial and toxic mechanisms of Aβ1-42 in comparison to a known antimicrobial peptide. These evaluations revealed that anionic bacterial membranes repel Aβ1-42, indicating that its antimicrobial activity is likely not mediated by the same non-specific membrane binding as its toxicity. Next, curli-amyloid interactions and the identification of participating aggregation states were evaluated through molecular dynamics simulations, transmission electron microscopy, and AFM, as well as a novel biomolecular condensate assay. We confirmed not only that curli fibers and Aβ1-42 interact and form peptide complexes, but also that this interaction is solely carried out by early aggregation species such as monomers and oligomers. Additionally, the effects of curli fibers on Aβ1-42 toxicity were first modelled by MD simulations and experimentally confirmed through measurements of damage on simple eukaryotic-based models by AFM and BLM, cell viability assays of murine microglial cell cultures, and immunogenicity evaluations using enzyme-linked immunosorbent assays. We also established that these curli-amyloid complexes reduce toxicity directly related to membrane perforation mechanisms but still negatively affect cell viability, presumably due to a heightened immunogenicity of the peptide complexes. Our findings lead us to propose a novel membrane-centric neuroprotection strategy against Aβ1-42 toxicity. 
This proposed mechanism is intended to expand our knowledge of melatonin’s effect on membranes in AD and could inform therapeutic development. Furthermore, our investigations into curli-amyloid interactions and their effect on AD processes highlight the important role of Aβ1-42 in physiology and how this can relate to AD onset. These findings can reinforce the current research paradigm shift to microbial infections, the gut-brain axis, and the role of microbial products as potent initiators of AD onset pathways. Therefore, this entire body of work aims to develop knowledge of important AD mechanisms to guide new research, diagnostics, and treatment avenues.
  • Item
    Using Artificial Intelligence for Some Activity Recognition and Anomaly Identification Using a Multi-Sensor Based Smart Home System
    (University of Waterloo, 2025-06-09) Saragadam, Ashish
    Ambient Assisted Living (AAL) research frequently contends with limitations including reliance on supervised data, lack of personalization and interpretability, and evaluations in artificial laboratory settings. This study aimed to address these gaps by developing an unsupervised, personalized, and interpretable AAL system using low-cost sensors for long-term, real-world activity recognition and behavioural anomaly detection. A multi-modal sensor network (including contact, vibration, outlet, and air quality sensors) was deployed in a single participant’s apartment for over 90 days. Primarily unsupervised machine learning techniques, augmented with interpretability methods (SHAP), were used to identify key activities (cooking, couch-sitting, showering) and detect personalized behavioural deviations. A minimally supervised approach using humidity data was also developed to detect showering accurately, addressing the shortcomings of the unsupervised showering model. More importantly, the system effectively identified interpretable anomalies, demonstrating its capability to learn the individual’s normal behaviour in the home and to flag significant deviations from the participant’s established routines. The model’s interpretability also allowed the participant to understand why each anomaly occurred. This study confirms the feasibility of leveraging unsupervised, interpretable methods with affordable sensors for personalized, ecologically valid AAL, significantly reducing labelling dependence and enhancing system trustworthiness for scalable, unobtrusive health monitoring.
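The notion of a personalized behavioural baseline can be illustrated with a minimal detector. The thesis used richer unsupervised models with SHAP explanations; the sketch below is only a z-score against a learned routine, and the daily activity counts are invented for illustration.

```python
# Minimal sketch of personalized anomaly detection: learn a baseline
# from a resident's daily activity counts, then flag days that deviate
# strongly. A stand-in for the thesis's unsupervised pipeline; the data
# and the z-score rule are illustrative only.

import math

# Daily counts of a key activity (e.g., cooking events) over "training" days.
baseline_days = [3, 4, 3, 5, 4, 3, 4, 4, 3, 5]

mean = sum(baseline_days) / len(baseline_days)
var = sum((x - mean) ** 2 for x in baseline_days) / len(baseline_days)
std = math.sqrt(var)

def is_anomaly(count, threshold=3.0):
    """Flag a day whose count deviates > threshold std devs from routine."""
    z = (count - mean) / std
    return abs(z) > threshold, round(z, 2)

# A typical day (4 cooking events) vs. a day with no cooking at all.
print(is_anomaly(4)[0], is_anomaly(0)[0])
```

Returning the z-score alongside the flag is a crude analogue of interpretability: the resident can see not just that a day was unusual but by how much it departed from their own routine.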
  • Item
    Language Design for Scalable, Extensible Proof Engineering
    (University of Waterloo, 2025-06-09) Ebresafe, Oghenevwogaga
    Certified compilers are complex software systems. Like other large systems, they demand modular, extensible designs. While there has been progress in extensible metatheory mechanization, scaling extensibility and reuse to meet the demands of full compiler verification remains a major challenge. We respond to this challenge by introducing novel expressive power to a proof language. Our language design equips the Rocq prover with an extensibility mechanism inspired by the object-oriented ideas of late binding, mixin composition, and family polymorphism. We implement our design as a plugin for Rocq, called Rocqet. We identify strategies for using Rocqet’s new expressive power to modularize the monolithic design of large certified developments as complex as the CompCert compiler. The payoff is a high degree of modularity and reuse in the formalization of intermediate languages, ISAs, compiler transformations, and compiler extensions, with the ability to compose these reusable components — certified compilers à la carte. We report significantly improved proof-compilation performance compared to earlier work on extensible metatheory mechanization. We also report good performance of the extracted compiler.