Theses
Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/6
The theses in UWSpace are publicly accessible unless restricted due to publication or patent pending.
This collection includes a subset of theses submitted by graduates of the University of Waterloo as a partial requirement of a degree program at the Master's or PhD level. It includes all electronically submitted theses. (Electronic submission was optional from 1996 through 2006 and became the default submission format in October 2006.)
This collection also includes a subset of UW theses that were scanned through the Theses Canada program. (The subset includes UW PhD theses from 1998 to 2002.)
Recent Submissions
Item: SWE-bench-secret: Automating AI Agent Evaluation for Software Engineering Tasks (University of Waterloo, 2025-01-21). Kio, Godsfavour.

The rise of large language models (LLMs) has sparked significant interest in their application to software engineering tasks. However, as new and more capable LLMs emerge, existing evaluation benchmarks (such as HumanEval and MBPP) are no longer sufficient for gauging their potential. While benchmarks like SWE-bench and SWE-bench-java provide a foundation for evaluating these models on real-world challenges, publicly available datasets face potential contamination risks, compromising their reliability for assessing generalization. To address these limitations, we introduce SWE-bench-secret, a private dataset carefully selected to evaluate AI agents on software engineering tasks spanning multiple years, including some originating after the models' training data cutoff. Derived from three popular GitHub repositories, it comprises 457 task instances designed to mirror SWE-bench's structure while maintaining strict data secrecy. Evaluations on a lightweight subset, called SWE-Secret-Lite, reveal significant performance gaps between public and private datasets, highlighting the increased difficulty models face when dealing with tasks that extend beyond familiar patterns found in publicly available data. Additionally, we provide a secure mechanism that allows researchers to submit their agents for evaluation without exposing the dataset. Our findings emphasize the need for improved logical reasoning and adaptability in AI agents, particularly when confronted with tasks that lie outside well-known public training data distributions. By introducing a contamination-free evaluation framework and a novel secret benchmark, this work strengthens the foundation for advancing benchmarking methodologies and promoting the development of more versatile, context-aware AI agents.

Item: For the Common Man - Patrick J. Buchanan and Kevin P. Phillips: Populist Conservatives during the Ascendancy of the American Right (University of Waterloo, 2025-01-21). Norton, Chase.

This biographical history explores the careers of New Right conservative populists Patrick Buchanan and Kevin Phillips, tracing their shared political evolution with, and eventual divergence from, mainstream Republicanism. Placing their conservative populism in the wider context of America's transition from liberalism to neoliberalism between the late 1960s and the early 1990s, this project shows how both men expressed a politics that articulated the rise of the American right and its divisions by the election of Bill Clinton. Drawing on a wide range of published works from their literary careers, it complicates the often-monolithic picture of conservatism's rise in American politics. Starting with Phillips' influential The Emerging Republican Majority (1969), both men envisioned a coming conservative electoral majority to usurp the dominance of Democratic liberalism. During the 1970s, both thought such a majority was jeopardized by the aftermath of the Watergate scandal, the resulting ideological confusion of the decade, and what they perceived as the liberal bias of the nation's media. Unsatisfied with the expressions of American conservatism in the 1980s, Phillips became a harsh critic of Ronald Reagan's economic policies while Buchanan felt that Republicans had not moved right enough.
By contextualizing America's shifts in the post-World War II era within the country's longer populist history, Phillips pivoted his image of the elite from a broad collection of liberal interests to Reagan's business Republicans. Buchanan only emboldened his attacks on liberal elites and the moderate Republicans he thought threatened the white working class. In highlighting their trajectories, this project places populism as a political force that both bound together, and revealed contradictions in, the conservative movement that gained power in the 1980s. Broadly, their divergence from mainstream Republicanism represented the breaking of Reagan's coalition of economic and social conservatives.

Item: Impacts of Alternative Policing on Officer Health: The Community Engagement Unit (CEU) (University of Waterloo, 2025-01-21). Fenton, Arden.

Created by the Waterloo Regional Police Service (Ontario, Canada), the Community Engagement Unit (CEU) links high-needs community members with specially trained police officers and with appropriate social services. Whether living with homelessness, poor mental health, domestic violence, or substance use, and often with several of these challenges simultaneously, high-needs community members often require greater police attention. Indeed, CEU officers act as a "911 call diversion centre" by redirecting high-needs individuals away from traditional law enforcement approaches and towards relevant social service agencies. However, due to their unique work environment, CEU officers likely experience additional or unique stressors that may impact their health. As a result, when developing alternative policing units like the CEU, recognizing the potential stressors of such work and their impact on the police service is essential. This mixed-methods study uses semi-structured interviews and surveys to assess the work environment of CEU officers relative to traditional police officers, to improve our understanding of stress-related health risk factors within alternative policing units. Five CEU officers were recruited via email to participate in a 1-hour semi-structured interview. Thematic analysis of the interviews developed the CEU 'phenotype' by assessing the unique work environment, roles, and responsibilities of CEU officers, who were asked to reflect on these aspects of their work compared to previous units (primarily patrol). The experience of stress in the CEU is more prolonged and chronic, given the continuous nature of interactions with high-needs individuals, as opposed to the fast-paced, case-to-case stress of patrol units. This 'slow-burn' stress, combined with the frustration of navigating under-resourced community social support systems and moral distress when cases lack resolution, is likely to present unique challenges to the officers' health and well-being. Surveys collected standard demographic information and included validated scales to measure self-reported health, health-risk behaviours, given and received social support, and varying levels and types of police-specific stressors using the operational and organizational police stress questionnaires (PSQ-Op, PSQ-Org). Seven CEU officers (14.0%) and 43 traditional (TRAD) officers (86.0%) were recruited via email to participate in the survey. CEU officers reported lower stressor levels for fatigue (-42.5%), shiftwork (-57.6%), working alone at night (-58.3%) and internal investigations (-63.3%) compared to TRAD officers. This study reveals distinct differences between CEU and TRAD officers.
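The percentage gaps quoted above are relative differences between group means on individual PSQ stressor items. As a minimal illustration of that arithmetic (the ratings below are invented placeholders, not study data):

    # Relative difference between CEU and TRAD mean stressor ratings on
    # selected PSQ items. All numbers here are hypothetical placeholders.
    trad_means = {"fatigue": 4.0, "shiftwork": 3.3, "working alone at night": 2.4}
    ceu_means  = {"fatigue": 2.3, "shiftwork": 1.4, "working alone at night": 1.0}

    for item, trad in trad_means.items():
        ceu = ceu_means[item]
        pct = 100.0 * (ceu - trad) / trad  # negative => lower stressor level in CEU
        print(f"{item}: {pct:+.1f}%")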
CEU officers report lower fatigue and less shiftwork but experience prolonged "slow-burn" stress due to ongoing interactions with high-needs individuals and frustrations with under-resourced support systems. These findings emphasize the need for tailored support systems in alternative policing units like the CEU. Improved mental health resources, decompression opportunities, and clear mandates can help reduce the health risks associated with their unique stressors. This study adds to the research on alternative policing, highlighting the importance of understanding occupational health in non-traditional police roles.

Item: Uncovering the understudied role of microtopography and ground cover on evapotranspiration partitioning in high-elevation wetlands in the Canadian Rocky Mountains (University of Waterloo, 2025-01-21). Wang, Yi.

A warming climate is projected to alter hydrological regimes in high mountain regions, including the Canadian Rocky Mountains. Rivers originating from the eastern slopes of the Canadian Rocky Mountains provide up to 90% of streamflow to downstream users in the Saskatchewan River Basin and have shown significant declines in summer discharge. In mid to late summer, as streamflow gradually decreases while water demand for agriculture, industry, and domestic use remains high, this reduction imposes considerable stress on downstream water use. Wetlands buffer excess water during floods and alleviate water shortages during droughts, making them crucial for sustainable water resource management. With glacial recession, wetlands may become more widespread; however, their hydrological roles are uncertain. Evapotranspiration (ET) represents the total water loss from the wetland surface to the atmosphere through both evaporation and transpiration and is typically the largest component of the wetland hydrological cycle. Accurately quantifying ET is essential for understanding ecosystem water use patterns, estimating water budgets, and designing sustainable water resource management strategies. The environmental controls of ET in high-elevation wetlands are currently not well understood, and the effects of land surface features such as microtopography, bryophytes, and litter cover, commonly found in these ecosystems, on ground surface evaporation (a component of ET) remain insufficiently explored. This thesis investigated these impacts across three typical high-elevation wetland types on the eastern slopes of the Canadian Rocky Mountains: a montane fen (Sibbald), a sub-alpine marsh (Burstall) and a sub-alpine wet meadow (Bonsai). Controlled evaporation experiments were conducted in the field over various soil substrates and ground cover types, with and without an overstory willow canopy, during July and August 2021. The results showed that microforms and the overstory willow canopy did not exhibit statistically significant direct impacts on ground surface evaporation. Instead, their influences were indirect, affecting soil temperature profiles and below-canopy microclimates, which in turn influenced ground surface evaporation. Conversely, ground covers, including litter and bryophytes, significantly impacted ground surface evaporation, with effects varying by site and involving complex interactions with evaporative demand and water availability.
In high-elevation ecosystems, the lack of adequate measurements of energy and water balance components, primarily due to high instrumentation costs and accessibility challenges, hampers understanding of hydrological processes. While modelling approaches are convenient and enable the exploration of temporal variability in energy and water fluxes, they require accurate parameterization, and data for model validation are often unavailable. It remains uncertain whether and how microtopography and ground covers should be integrated into modelling frameworks to improve the representation of water and energy fluxes in high-elevation wetland ecosystems. This thesis found that the effects of microtopography on ground surface evaporation were satisfactorily modelled by the Penman–Monteith model and by a more complex bryophyte layer model in the Atmosphere-Plant Exchange Simulator (APES), based on soil temperature measured in these microtopographical features. The effects of ground covers and overstory canopy on ground surface evaporation were successfully modelled within the framework of soil evaporative efficiency (SEE), defined as the ratio of actual to potential ground surface evaporation. Given that high-elevation ecosystems generally lack sufficient ground measurements for model calibration and validation, this thesis established a simple approach for modelling the SEE of vegetated surfaces over a range of soil substrates and ground cover types, based on the mass fractions of bryophytes and litter. Additionally, a correction method was developed to account for the effects of an overstory willow canopy on SEE. This novel approach is less parameter-intensive than conventional methods and potentially widely applicable beyond the three study sites. Both the field experiments and the proposed SEE model formulation illustrate that ground covers significantly influence ground surface evaporation, a key component of ET. This highlights the importance of further studying the effects of ground covers on both ET and its partitioning into evaporation and transpiration at the ecosystem scale. Since field measurements alone are insufficient to fully capture the daily dynamics of ET partitioning, modelling techniques were applied to achieve this goal. To quantify the relative contributions of ground surface evaporation (E) and vascular plant transpiration (T) to wetland ET at a daily scale, part of the newly developed SEE approach, which estimates the surface resistance of bryophyte- and litter-covered wetland ground surfaces, was incorporated into the widely used Shuttleworth–Wallace (S-W) model. Based on this model, the relative contributions of E and T fluctuated daily in response to meteorological conditions and soil moisture content during the study period. The average daily T/ET ratios for Sibbald, Burstall, and Bonsai were 0.86, 0.46, and 0.61, respectively. When excluding rainy days, the average T/ET ratios were 0.86 for Sibbald, 0.50 for Burstall, and 0.60 for Bonsai. The modified S-W model performed with satisfactory accuracy when compared to field measurements. Assuming that the modified S-W model accurately captures the fundamental mechanisms driving wetland ET partitioning, it was employed to simulate daily ET partitioning under varying ground cover fractions to investigate the effects of ground covers on ET and T/ET.
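Both quantities used above are simple ratios: SEE is defined in the abstract as actual over potential ground surface evaporation, and the partitioning ratio T/ET divides transpiration by total evapotranspiration. A minimal sketch, with made-up daily fluxes rather than the thesis's measurements:

    # SEE = actual / potential ground surface evaporation;
    # T/ET = vascular transpiration / total evapotranspiration (E + T).
    # Daily flux values below are illustrative placeholders (mm/day).
    E_actual, E_potential = 1.2, 3.0          # ground surface evaporation
    T = 2.6                                   # vascular plant transpiration

    see = E_actual / E_potential              # soil evaporative efficiency
    et = E_actual + T                         # total evapotranspiration
    t_over_et = T / et                        # partitioning ratio

    print(f"SEE = {see:.2f}, T/ET = {t_over_et:.2f}")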
The results indicated that bryophytes and litter had a minor impact on the magnitude of T/ET, though their influence may be more pronounced in wetlands with an elevated water table. A literature synthesis further revealed that wetland ET partitioning is strongly controlled by leaf area index (LAI) and moderated by water table depth. However, due to the absence of surface runoff measurements and continuous groundwater level monitoring, the water balance could not be estimated for each study site. Future research should explore potential interaction effects of water table level with microforms and ground covers, as well as possible seasonal variations in their effects on wetland ET. Overall, this research reduced uncertainties associated with land surface heterogeneity in estimating ET and its partitioning in high-elevation wetlands, offered new perspectives for developing ET models, and facilitated the evaluation of ecosystem services and hydrological processes in these ecosystems.

Item: LiDAR-Driven Calibration of Microscopic Traffic Simulation for Balancing Operational Efficiency and Prediction of Traffic Conflicts (University of Waterloo, 2025-01-21). Farag, Natalie.

Microscopic traffic simulation is a proactive tool for road safety assessment, offering an alternative to traditional crash data analysis. Microsimulation models, such as PTV VISSIM, replicate traffic scenarios and conflicts under various conditions, thereby aiding the assessment of driving behavior and traffic management strategies. When integrated with tools like the Surrogate Safety Assessment Model (SSAM), these models estimate potential conflicts. Research often focuses on calibrating these models against traffic operation metrics, such as speed and travel time, while neglecting safety performance parameters. This thesis investigates the effects of calibrating microsimulation models for both operational metrics, including travel time and speed, and safety metrics, including traffic conflicts and the Post Encroachment Time (PET) distribution, using LiDAR sensor data. The calibration process involves three phases: performance-only calibration, combined performance and safety calibration, and safety-only calibration. The results show that incorporating safety-focused parameters enhances the model's ability to replicate observed conflict patterns. The study highlights the trade-offs between operational efficiency and safety, with adjustments to parameters like standstill distance improving safety outcomes without significantly compromising operational metrics. Furthermore, there is a substantial difference in the calibrated minimum distance headway for the safety model, underscoring the trade-off between operational efficiency and safety. While the operational calibration focuses on optimizing flow, the safety calibration prioritizes realistic conflict simulation, even at the cost of reduced flow efficiency. The research emphasizes the importance of accurately simulating real-world driver behavior through adjustments to parameters like the probability and duration of temporary lack of attention.

Item: Implementation and Comparative Analysis of Open 5G Standalone Testbeds: A Systematic Approach (University of Waterloo, 2025-01-21). Amini, Maryam.

Open-source software and commercial off-the-shelf hardware are finally paving their way into the 5G world, resulting in a proliferation of experimental 5G testbeds.
Surprisingly, very few studies have been published on the comparative analysis of testbeds with different hardware and software elements. This dissertation is a comprehensive study of the implementation of experimental 5G testbeds and the challenges associated with them. We first introduce a precise nomenclature to characterize a 5G-standalone single-cell testbed based on its constituent elements and main configuration parameters. We then build 36 distinct such testbeds and systematically analyze and compare their performance, with an emphasis on element interoperability as well as the number and type of User Equipment (UE), to address the following questions: 1) How is the performance (in terms of bit rate and latency) impacted by different elements? 2) How does the number of UEs affect these results? 3) What is the impact of the user(s)' location(s) on the performance? 4) What is the impact of the UE type on these results? 5) How far does each testbed provide coverage? 6) What is the impact of the available computational resources on the performance of each open-source software stack? Finally, to illustrate the practical applications of such open experimental testbeds, we present a case study focused on user scheduling. Historically, most research on user scheduling has been conducted using simulations, with a strong emphasis on downlink scheduling. In contrast, our study is fully experimental and targets enhancements to the uplink scheduler within the open-source Radio Access Network platform, srsRAN. We aim to move beyond simulation-based evaluations and explore how improvements in the uplink scheduler translate to real-world performance, specifically by measuring their impact on the user experience in a live experimental testbed.

Item: Enzyme-mediated controlled release of DNA polyplexes from gelatin methacrylate hydrogel for nonviral gene delivery (University of Waterloo, 2025-01-20). Kunihiro, Joshua.

Corneal diseases such as recurrent corneal erosion and dry eye disease are common conditions that can significantly interfere with quality of life. The lack of effective treatments for these diseases necessitates the development of novel treatment methods. Skin wounds are another disease area that presents opportunities for improving clinical solutions. Gene delivery has attracted interest for both corneal disease treatment and skin wound healing applications. Matrix metalloproteinase 9 (MMP9) enzymes are highly upregulated both in corneal diseases and in chronic skin wounds. Gelatin methacrylate (GelMA) is a biocompatible hydrogel that is degraded in the presence of MMP9. It was thus hypothesized that GelMA could serve as an enzyme-responsive controlled release scaffold for polyplexes. The objective of this project was to demonstrate the potential of GelMA hydrogel as an enzyme-responsive controlled release scaffold for chitosan-graft-polyethyleneimine (CS-PEI) polyplexes as a non-viral gene delivery platform to treat corneal diseases or skin wounds. Key criteria for a controlled-release system are tunable release kinetics and maintained bioactivity of the therapeutic molecule after loading and release. The aims of this work were to characterize the release kinetics of the polyplexes and to assess the bioactivity of polyplexes released from the GelMA hydrogel system. Methods were developed to quantitate the concentration of released CS-PEI polyplexes in solution, and the release profile of polyplexes from GelMA at different MMP9 concentrations was measured.
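A release profile of the kind described here is typically reported as cumulative percent release over time for each enzyme concentration. The sketch below is purely schematic; the time points, MMP9 levels, and release values are invented placeholders, not the measured GelMA data:

    # Cumulative release (%) of CS-PEI polyplexes from GelMA over 5 days,
    # tabulated per MMP9 concentration. All values are synthetic placeholders.
    days = [1, 2, 3, 4, 5]
    release = {          # MMP9 concentration -> cumulative % released
        "0 ng/mL":   [2, 4, 5, 6, 7],
        "50 ng/mL":  [10, 22, 35, 46, 55],
        "200 ng/mL": [25, 48, 67, 81, 90],
    }

    for conc, curve in release.items():
        # Day-over-day increments show whether release stays enzyme-responsive.
        increments = [b - a for a, b in zip(curve, curve[1:])]
        print(conc, "cumulative:", curve, "daily increments:", increments)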
The release was found to be sustained over a 5-day period in the presence of physiologically relevant MMP9 concentrations. The in vitro transfection efficiency of released CS-PEI polyplexes was also explored to demonstrate the bioactivity of the polyplexes. The released polyplexes successfully transfected COS-7 cells in an enzyme-responsive and dose-dependent manner.

Item: Investigation and Enhancement of Zn-Ce Redox Flow Battery Performance Through Experimental and Modeling Studies (University of Waterloo, 2025-01-20). Yu, Hao.

The transformation from energy based on fossil fuels to energy based on sustainable sources such as wind, solar and hydroelectric power is crucial to reduce air and water pollution and carbon emissions. However, electricity production from these sustainable sources is typically intermittent and can perturb the stability of the existing power grid. Redox flow batteries (RFBs) have emerged as promising devices for grid-scale energy storage to stabilize power systems and improve their efficiency. Among the different types of RFBs, zinc- and cerium-based RFBs are promising for large-scale applications that require high output power density due to their low cost and high cell voltage. Motivated by this potential, this work focuses on improving the performance of Zn-Ce RFBs through both experimental and modeling studies. Many of the findings and general ideas for RFB performance improvement are also applicable to other RFB systems and to commercial-scale RFBs in real-life scenarios. In this work, the effect of different positive supporting electrolytes on the performance of a bench-scale Zn-Ce RFB has been studied. The effectiveness of mixed methanesulfonic/sulfuric acid, mixed methanesulfonic/nitric acid and pure methanesulfonic acid has been assessed and compared. The Ce(III)/Ce(IV) reaction exhibits faster kinetics and the battery exhibits higher coulombic efficiency in the mixed 2 mol/L MSA-0.5 mol/L H2SO4 electrolyte than in the commonly used 4 mol/L MSA electrolyte, due to lower H+ crossover and higher Ce(IV) solubility. The fade in coulombic efficiency in the mixed MSA-H2SO4 electrolyte is 0.55% per cycle over 40 charge-discharge cycles, compared with 1.26% in the case of 4 mol/L MSA. Furthermore, the positive electrode reaction is no longer the limiting half-cell reaction even at the end of long-term battery charge-discharge operation. The effect of ion crossover on overall Zn-Ce RFB performance has also been investigated through measurement of the Zn(II), Ce(III), Ce(IV) and H+ concentrations on both sides of a Nafion 117 membrane during charge-discharge cycles. As much as 36% of the initial Zn(II) ions transfer from the negative to the positive electrolyte, and 42.5% of the H+ in the positive electrolyte crosses over to the negative side after 30 charge-discharge cycles. Both of these phenomena contribute to the steady fade in battery performance over the course of operation. Based on these findings, experiments aimed at reducing the concentration gradient driving crossover, by intentionally adding different amounts of Zn(II) to the positive electrolyte at the outset of operation, have been conducted. This approach has been shown to reduce the crossover of Zn(II) from the negative side to the positive side, improve both the coulombic and voltage efficiencies and reduce the decay of battery performance.
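The coulombic efficiency and per-cycle fade figures quoted above follow from standard cycle-level ratios (coulombic efficiency is discharge capacity over charge capacity). A minimal sketch with fabricated cycle data, not the reported measurements:

    # Coulombic efficiency (CE) = discharge capacity / charge capacity per cycle;
    # the fade rate is the average per-cycle drop in CE. Synthetic data only.
    charge_ah    = [1.00, 1.00, 1.00, 1.00, 1.00]   # charge capacity, Ah
    discharge_ah = [0.80, 0.79, 0.79, 0.78, 0.77]   # discharge capacity, Ah

    ce = [100.0 * d / c for d, c in zip(discharge_ah, charge_ah)]
    fade_per_cycle = (ce[0] - ce[-1]) / (len(ce) - 1)

    print("CE per cycle (%):", [round(x, 1) for x in ce])
    print(f"average fade: {fade_per_cycle:.2f} % per cycle")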
Since ion crossover is very commonly observed, this strategy of improving overall battery performance and reducing ion crossover by minimizing the concentration gradient is not only applicable to similar lab-scale RFB research but also beneficial for real-life RFB applications. Since the positive electrode reaction becomes the limiting half-cell reaction during the course of battery operation, two strategies have been investigated to regenerate the positive electrolyte by converting the accumulated Ce(IV) ions back to Ce(III) ions. The first strategy, which utilizes RuO2 as a catalyst for Ce(IV) reduction, improves the voltage efficiency from 71.1% to 77.8% over 16 cycles but reduces the coulombic efficiency from 74.1% to 57.8% due to leakage of the RuO2 catalyst through the porous filter into the positive electrolyte. The method utilizing H2O2 to regenerate the positive electrolyte improves the average coulombic efficiency from 63.7% to 68.3% and the average voltage efficiency from 56.8% to 76.1% over 30 cycles. Similar improvements in battery performance and cycle life can also be expected if these electrolyte regeneration methods are applied at commercial scale. Furthermore, implementing these regeneration methods should reduce overall operating costs, since it will reduce the frequency with which electrolytes have to be replaced. Finally, a transient 2-D model of the Zn-Ce RFB that accounts for the crossover of different electroactive species through the membrane has been developed. All three modes of transport (migration, diffusion and convection), coupled with the electrode kinetics of the Zn/Zn(II) and Ce(III)/Ce(IV) redox couples as well as the HER and OER side reactions, are included in the model. The model has been successfully validated against measurements of the evolution of the cell voltage, negative and positive electrode potentials and ion crossover over the course of 5 charge-discharge cycles carried out in our laboratory. The validated model is then used to simulate battery behaviour under various operating conditions and with positive electrodes of different geometries. The results provide useful information for the future design of Zn- or Ce-based RFBs with the aim of further improving their performance.

Item: Application of GNSS Reflectometry for the Monitoring of Lake Ice Cover (University of Waterloo, 2025-01-20). Ghiasi, Yusof.

Lakes cover vast expanses of land in many regions of the Northern Hemisphere, and their presence has a significant impact on local weather and climate. The seasonal presence of ice cover affects the transfer of energy and heat between lakes and the overlying atmosphere, as well as various socio-economic activities, including transportation, recreation, and cultural practices. However, climate change is rapidly altering lake ice cover, its phenology and ice thickness, with meaningful implications for both human activities and the ecosystems they support. Monitoring the seasonal variability and changes in lake ice cover and thickness is crucial for understanding the impacts of climate change; however, traditional in-situ observations have declined over the last few decades, creating a need for innovative remote sensing approaches. Global Navigation Satellite System Reflectometry (GNSS-R) offers a novel, cost-effective method for monitoring lake ice dynamics.
Unlike traditional remote sensing techniques, GNSS-R utilizes existing satellite signals (known as signals of opportunity), providing high temporal and spatial resolution data that can be used to detect and analyze lake ice conditions. This thesis investigates the application of GNSS-R to lake ice remote sensing, focusing on the signals' sensitivity to different phases of lake ice and their potential for ice detection and the monitoring of lake ice phenology. The thesis begins with an exploration of the fundamentals of GNSS-R and their relevance to lake ice remote sensing. By examining the scattering mechanisms involved when GNSS signals interact with lake ice, this research establishes a theoretical framework that supports the subsequent experimental analyses. The research then evaluates the sensitivity of GNSS-R signals, particularly the Signal-to-Noise Ratio (SNR), to various lake ice conditions. Using data from the Cyclone Global Navigation Satellite System (CYGNSS) mission over Qinghai Lake, the thesis examines how SNR values change in response to freeze-up, ice cover, spring melt onset, and breakup. The results demonstrate that GNSS-R can effectively monitor lake ice dynamics with high temporal resolution, making it a valuable tool for tracking the seasonal evolution, inter-annual variability and changes in ice cover. Further investigation is conducted on the potential of hybrid compact polarimetry in GNSS-R for analyzing lake ice cover properties. Data from the Soil Moisture Active Passive Reflectometry (SMAP-R) mission acquired over large Canadian lakes are analyzed to assess the sensitivity of polarimetric GNSS-R signals to ice cover conditions. The study finds that hybrid compact polarimetry enhances the interpretation of GNSS-R data, particularly in distinguishing between different ice and water conditions. In addition, the use of machine learning, specifically random forests combining several polarimetric parameters, improves the accuracy of lake ice phenology detection compared to using each parameter alone. To deepen understanding of how lake ice modifies the reflectivity of GNSS signals, a multi-layer scattering model is developed and validated against CYGNSS data. The model simulates the interaction of GNSS signals with various lake ice layers and the underlying water interface, providing insights into the complex scattering processes that influence GNSS-R measurements. The successful validation of the model demonstrates its utility for improving the accuracy of lake ice phenology analysis and offers a robust tool for broader cryospheric applications. This thesis contributes to the field of remote sensing by demonstrating the effectiveness of GNSS-R for monitoring lake ice and by advancing the understanding of GNSS signal interactions with ice-covered surfaces. The findings highlight the potential of GNSS-R as a low-cost, high-resolution tool for tracking lake ice dynamics, with implications for climate monitoring, weather forecasting and environmental management.

Item: Transition to electric vehicles: the importance of macro and micro influences on spatial and temporal patterns (University of Waterloo, 2025-01-20). Chen, Yixin.

The climate crisis is widely recognized as being caused by unsustainable consumption and production patterns across various social domains, which motivates the demand for accelerated transformative change towards sustainability. Socio-technical transitions offer a path forward.
That said, a core impediment is an incomplete understanding of how multiple elements co-evolve in different contexts. Various lenses, theories, and approaches have been used to analyse and explain technology adoption and diffusion in societies; these can be characterised as macro-level ('structure') or micro-level ('agency'), but to date the linkages between them in understanding transition processes have been under-explored. This gap provides an opportunity for the thesis to ask in what ways macro-level and micro-level lenses can explain the spatial and temporal patterns of the transition to electric vehicles (EVs), an example of a transition for the decarbonization of mobility. With specific reference to Canada, the thesis aims to illuminate the multi-dimensionality and complexity of how the transition to EVs is unfolding, using a quantitative approach that includes indicator development and statistical modelling. The thesis adopts two complementary components. The first describes and explains the spatial and temporal patterns of the transition to EVs at a national level between 2017 and 2022, drawing upon the 'geography of transitions' literature and modelling quarterly secondary data on new EV registrations across seven Canadian provinces. The second seeks to understand and assess changes in consumers' likelihood and perceptions of purchasing EVs in one municipality, Waterloo Region, between 2020 and 2023, framed by 'diffusion of innovation' concepts and based on primary data from two public surveys. In both analyses, robust models highlighted the importance of various factors in EV adoption and diffusion. These macro-level and micro-level analyses both depict the transition to EVs in Canada as proceeding at a slow pace, with variations across space, time and society. The micro-level analysis further suggests that the transition is hampered by the resistance of nearly half of the population in the local context. Longitudinal dynamics of individual consumers' perceptions of EVs and differences and changes at the landscape level mutually reinforce each other. For example, consumers' recognition of EVs' environmental benefits has the most substantial influence on people's interest in EVs, which echoes the significant role of societal environmentalism, one representation of informal localized institutions at the provincial level, in driving the EV transition. The weight of economic considerations in individuals' likelihood to adopt EVs increased between 2020 and 2023, which aligns with the considerable influence of rising gasoline prices on the increase in new EV registrations in Canada. The findings of the two analyses raise concerns about whether Canada can achieve its commitment to 100% zero-emissions vehicle sales by 2035 and whether EVs can fully penetrate the Canadian market. The Canadian transition to EVs is a co-evolutionary process with multiple elements interacting with one another; therefore, no single policy or action can by itself accelerate the process. The heterogeneity across consumers highlights the importance of tailored strategies for different consumer segments and of longitudinal dynamics in investigations.
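The abstract does not name the statistical models used, so the following is only a generic illustration of the micro-level analysis style: a logistic model relating hypothetical survey covariates to stated likelihood of adopting an EV. Both the covariates and the data are invented:

    # Illustrative only: a logistic model of stated EV-adoption intent.
    # Covariates, coefficients, and data are invented, not the thesis's models.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500
    env_concern = rng.uniform(1, 5, n)        # environmental-benefit rating
    fuel_price  = rng.uniform(1.0, 2.2, n)    # gasoline price faced ($/L)
    X = np.column_stack([env_concern, fuel_price])

    # Synthetic ground truth: interest rises with both covariates.
    logit = -6.0 + 1.1 * env_concern + 1.5 * fuel_price
    y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

    model = LogisticRegression().fit(X, y)
    print("fitted coefficients:", model.coef_[0], "intercept:", model.intercept_[0])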
In conclusion, macro-level and micro-level lenses are both important in understanding socio-technical transitions due to their integration, synergy, and complementarity.

Item: Learning Design Parameters to Build Application Customizable Network-on-Chips for FPGAs (University of Waterloo, 2025-01-20). Malik, Gurshaant Singh.

We can exploit the configurability of Field Programmable Gate Arrays (FPGAs) and maximize the performance of communication-intensive FPGA applications by designing specifically customized Networks-on-Chip (NoCs) using Machine Learning (ML). As transistor density growth stalls, NoCs play an increasingly critical role in the deployment of FPGA applications for modern-day use cases. Unlike Application-Specific Integrated Circuits (ASICs), FPGA configurability allows the design of application-aware NoCs that can outperform statically configured NoCs in terms of both performance and efficiency. The conventional NoC design process is typically centered on universally sound, one-size-fits-all design decisions and does not take the underlying application into account. In contrast, we present application-aware designs that learn their NoC parameters by casting the NoC design space as a function of application performance using ML algorithms. The complex and non-obvious relationships between the large search space of NoC parameters and the performance of the underlying FPGA application necessitate a more efficient approach than manual hand-tuning or brute force. Modern ML algorithms have demonstrated a remarkable ability to generalize to complex representations of the world by extracting high-order, non-linear features from complex inputs. In this thesis, we identify 1) NoC topology, 2) flow control and 3) regulation rate as the key NoC design variables with the strongest influence on application performance, and we leverage two primary ML methodologies: 1) stochastic gradient-free evolutionary learning and 2) gradient-based supervised learning. First, we present NoC designs based on the Butterfly Fat Tree (BFT) topology and lightweight flow control. These BFT-based NoCs can customize their bisection bandwidth to match the application being routed while providing features such as in-order delivery and bounded packet delivery times. We present the design of routers with 1) latency-insensitive interfaces, coupled with 2) a deterministic routing policy and 3) round-robin scheduling at NoC ports. We evaluate our NoC designs under various conditions and deliver up to 3x lower latency and 6x higher throughput. We also learn the routing policy on a per-switch basis in an application-aware manner using Maximum Likelihood Estimation, decreasing latencies by a further ~1.1--1.7x over the static policy. Second, we overcome the pessimism in the routing analysis of timing-predictable NoCs through the use of "hybrid" application-customized NoCs. HopliteBuf NoCs leverage stall-free FIFOs for flow control under token bucket regulation. The static analysis, in the worst case, can deliver very large FIFO sizes and latency bounds. Alternatively, HopliteBP uses lightweight backpressure for flow control under similar injection regulation, but it suffers from severely pessimistic static analysis due to the propagation of backpressure to other switches. We show that a hybrid FPGA NoC that seamlessly composes both design styles on a per-switch basis delivers the best of both worlds.
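Schematically, per-switch configuration learning of this kind can be cast as repeatedly sampling configurations from per-switch probability distributions, scoring them, and refitting each distribution to the best samples, which is a maximum-likelihood update. The sketch below is a toy illustration: its cost function is a stand-in, not the thesis's NoC evaluation flow:

    # Toy per-switch policy learning. 'simulate' is a placeholder cost model;
    # the real work scores configurations on an actual NoC and workload.
    import random

    N_SWITCHES, POLICIES = 16, ["buf", "bp"]   # hypothetical per-switch choices

    def simulate(config):                      # stand-in for NoC evaluation
        return sum(i % 3 if p == "bp" else 1 for i, p in enumerate(config))

    probs = [{p: 1 / len(POLICIES) for p in POLICIES} for _ in range(N_SWITCHES)]
    for _ in range(30):                        # evolutionary iterations
        pop = [[random.choices(POLICIES, [pr[p] for p in POLICIES])[0]
                for pr in probs] for _ in range(64)]
        elite = sorted(pop, key=simulate)[:8]  # lowest-latency configurations
        for i in range(N_SWITCHES):            # MLE refit with smoothing
            counts = {p: 1 + sum(c[i] == p for c in elite) for p in POLICIES}
            total = sum(counts.values())
            probs[i] = {p: counts[p] / total for p in POLICIES}

    print([max(pr, key=pr.get) for pr in probs])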
We learn, specifically for the application being routed, the switch configuration through a novel evolutionary algorithm based on Maximum Likelihood Estimation (MLE). We demonstrate ~1--6.8x lower routing latencies and ~2--3x improvements in feasibility, while consuming only ~1--1.5x more FPGA resources. Third, we further improve the routability of a workload on the hybrid Buf-BP Hoplite NoC by learning to tune regulation rates for each traffic trace. We model the regulation space as a multivariate Gaussian distribution and capture critical dependencies between its parameters using the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). We also propose nested learning, which learns switch configurations and regulation rates in tandem, further lowering cost-constrained latency by ~1.5x and accelerating rates by ~3.1x. Finally, we propose a Graph Neural Network (GNN) based framework to accurately predict NoC performance in sub-second latencies for a variety of FPGA NoC designs and applications. Application-aware NoC design can involve thousands of incremental updates to the NoC design space, with each step requiring performance evaluation of the NoC configuration using slow and expensive conventional tooling. This presents a bottleneck to the adoption of application-aware FPGA NoC design. Instead of spending up to tens of wall-clock minutes simulating the NoC design at each step, we present a GNN-based framework that encodes any FPGA NoC and any FPGA application as graphs. We create a dataset of over 1.5 million samples to train GNNs to predict NoC routing latencies. The GNNs accelerate benchmarking run-times by up to ~148x (~506x on GPU) while preserving accuracies as high as 97.2%. Through this work, we observe that application-aware NoCs designed using ML algorithms such as MLE and CMA-ES can decrease routing latency by ~2.5--10.2x, increase workload feasibility by ~2--3x, and increase injection rates by up to ~3.1x. By leveraging GNNs trained with supervised learning, we can accelerate the design time of such NoCs by up to ~4.3x.

Item: Advancing sustainable packaging: The role of nanofibers in bioplastics (University of Waterloo, 2025-01-20). Mehinrad, Pardis.

In the last few decades, the packaging industry has become one of the fastest-growing industries worldwide, owing to changes in living standards, consumption habits, and the expansion of global trade. Bio-based materials have become one of the most active research areas in the packaging industry because of the environmental concerns associated with materials derived from petrochemical sources, such as resource depletion, recycling challenges, and poor biodegradability, which have driven the development of eco-friendly materials. Polymers are widely used in packaging, and synthetic polymers are employed extensively mainly because of their outstanding mechanical properties, effective barriers against oxygen and water, and ease of processing. However, they present significant downsides, such as poor degradability and challenges in recyclability; as a result, packaging waste constitutes a large portion of post-consumer solid waste, leading to ecological problems. Therefore, extensive research is being conducted to develop biopolymers for the packaging industry. The challenges associated with the global use of biopolymers include poor mechanical and barrier properties and high production costs.
To modify the properties of biopolymers, various methods can be employed, such as reinforcing the polymer matrix with nanomaterials, especially nanofibers. The first goal of this project was to optimize the process of preparing nanofibers derived from hemp. Pre-treatments were applied before the fibers underwent the refining process, both to reduce the number of steps required for refinement and to investigate their effect on the stability and diameter of the nanofibers produced. This approach not only saves time, energy, and costs but also enhances the overall efficiency of the process, representing a significant step forward for the industry. In this study, mechanical treatment was applied for the fibrillation of hemp fibers. This method has significant advantages over chemical treatment, particularly in reducing the amount of chemicals used, which aligns with one of the most important goals of this project: promoting a more sustainable and cost-effective approach. To improve the efficiency of the fibrillation process, several pre-treatments were applied. Among them, the pre-treatment involving fiber hydration by immersing the fiber in water for one hour, subjecting it to a strong vacuum for 30 minutes, and processing it in a pressure cooker at high temperature (≈120°C) and pressure (12 psi) for 10 minutes resulted in the smallest fiber size after eight passes. Furthermore, based on the stability test, this sample exhibited the highest stability, remaining stable after seven days. Another goal of this project was to improve the mechanical and barrier properties of biodegradable nanocomposite films for packaging applications. Polybutylene succinate (PBS) has high flexibility, high elongation at break, good biodegradability, and water resistance. However, due to its low molecular weight, low stiffness, poor oxygen resistance, and high cost, its potential applications are limited. One solution is therefore the addition of hemp nanofibers (HNF) to PBS to enhance biodegradation and reduce costs. In this study, nanocomposites of PBS and HNF at a 95/5 ratio were first prepared using an extruder and hydraulic press, and their barrier and mechanical properties were investigated. These properties were then compared with those of PBS/HNF nanocomposites containing beeswax and sodium dodecyl sulfate (SDS) at different ratios. The moisture content, water absorption capacity, and water solubility tests showed that adding beeswax reduced moisture content, water absorption, and water solubility, with the effects becoming more pronounced as the amount of beeswax increased. Similarly, introducing SDS as a surfactant decreased these properties further compared to adding beeswax alone, with greater reductions observed as the SDS concentration increased. Furthermore, the results of the water vapor permeability (WVP) test revealed that the incorporation of nanofibers resulted in a decrease in film permeability due to their hydrophilic nature, while beeswax, due to its hydrophobic nature, created a barrier that hindered the movement of water vapor molecules through the film. The extent of this decrease depends on the amount and distribution of the beeswax. When SDS was introduced into the film's formulation, its bridging effect could further reduce the WVP of the films, though only at low SDS concentrations.
Overall, the interactions among all components (PBS, hemp nanofibers, beeswax, SDS) can influence the final film structure. Additionally, mechanical tests demonstrated that adding HNF to PBS films increased tensile strength and modulus, though at the cost of a decrease in elongation at break. For samples with beeswax in the formulation, the flexibility of the films increased, resulting in greater elongation. In terms of tensile strength and modulus, beeswax improved the compatibility between PBS and HNF; this led to better dispersion of hemp nanofibers within the PBS matrix, resulting in a more uniform composite and improved tensile strength. Meanwhile, the addition of beeswax, owing to its plasticizing effect, is expected to reduce the tensile modulus. In the final compositions, SDS was added to the film formulation. At low concentrations, SDS behaves as a surfactant, reducing the surface tension between PBS and HNF. This leads to better dispersion of the nanofibers throughout the PBS matrix, which reduces stress concentration. Well-dispersed nanofibers create a more uniform stress distribution within the film, which can help prevent premature failure at specific points and allow more stretching before breaking, potentially increasing elongation. While SDS aids dispersion at lower concentrations, excess SDS can interact with the surfaces of both PBS and HNF, disrupting the natural interactions (such as hydrogen bonding) between them that contribute to the overall strength and integrity of the film; disrupting these interactions can make the film more susceptible to breaking under stress, potentially decreasing elongation. Meanwhile, the initial addition of SDS enhanced the composite's tensile strength, mainly because of improved PBS-HNF adhesion and better stress transfer from the polymer matrix to the fibers. Adding SDS beyond the optimal concentration resulted in phase separation, which acts as a weak point in the composite and thereby adversely affects tensile strength. Introducing SDS into the formulation also decreased the tensile modulus. Overall, the prepared nanocomposites exhibited promising properties for sustainable packaging applications. Nevertheless, additional research and development are essential to further improve and optimize the material properties for optimal performance.

Item: Exploring high-level transport programming through eXpress Data Path (University of Waterloo, 2025-01-20). Valenca Mizuno, Pedro Vitor.

The transport layer of the network stack is responsible for providing several essential services, such as congestion control and the detection and retransmission of lost packets. Moreover, this layer is constantly changing to satisfy new demands on the network, including increases in traffic, the leveraging of new hardware, and the release of new workloads. However, implementing these protocols is a considerably challenging task for several reasons. First, the available documentation for these algorithms is written in natural language and can be long, which may result in misinterpretations and, consequently, incorrect implementations. Second, while developing transport algorithms, programmers must deeply study the environment where the protocol will run to find the best-suited built-in helper functions and data structures available for their implementations.
For this reason, protocols are usually implemented as large sections of optimized code that are challenging to navigate, read, and modify. Third, since the transport layer sits between two other layers, namely application and network, transport algorithms must handle the details of these interactions, which can depend on the execution environment. In this thesis, we introduce Modular Transport Programming (MTP), an event-driven framework that allows the declaration of transport algorithms in a high-level yet precise language. MTP abstracts low-level details by providing a set of high-level built-in functions and constructs focused solely on transport programming. It also abstracts the interactions between the transport layer and its neighbours through interfaces responsible for such communication. Moreover, it improves the readability of MTP code through a modularized design that clearly shows how each module interacts with the others and how events are processed. To explore the feasibility and effectiveness of MTP, we chose to implement transport protocols in the Linux Kernel. The Linux Kernel and its transport layer implementations are among the most notable examples of the challenges stated above. These implementations can require thousands of lines of code and hundreds of functions, which are difficult to navigate due to the low-level C in which they are written. Additionally, transport protocol implementations in the Kernel are tightly linked with the neighbouring layers of the network stack, utilizing complex data structures for packet processing and the socket interface. Thus, implementing protocols from scratch or modifying existing ones can be a considerably difficult task. Nevertheless, instead of implementing protocols directly in the Linux Kernel, we opted to use the eXpress Data Path (XDP) framework as a first step. This framework allows simple and safe loading of new user-defined code into the Linux Kernel. Moreover, several characteristics of the framework are ideal for implementing transport layer protocols, such as its loading point in the Network Interface Card (NIC) driver and its support for Kernel bypass. This thesis therefore also introduces a back-end developed in the XDP framework, to which MTP code can be compiled using our code generator. During the development of this back-end, we explored XDP in depth to make sound design decisions, including utilizing the data structures best suited to our model, making the most of the helper functions available in its libraries, and working around the limitations of the framework. Finally, we show that it is possible to specify the logic of transport protocols in a high-level language and translate it to XDP. We implemented two protocols, TCP and RoCEv2, in the MTP language and successfully translated them with our compiler to the XDP back-end. Between two servers connected by 100Gbps links, the TCP implementation achieved 80Gbps of throughput with 16 cores processing packets. The RoCEv2 translation is functional but still needs further optimization to reach its expected performance. Lastly, we evaluate the strengths and weaknesses of XDP for transport programming.

Item: DINO Features for Salient Object Detection (University of Waterloo, 2025-01-20). Ding, Albert.

In the field of computer vision, it can be useful to model the visually "salient" areas in a scene viewed by a human.
One way to formulate this problem is to estimate how likely each pixel is to belong to an "object" (as opposed to the background); together these estimates form a "saliency" mask. This task is commonly known as Salient Object Detection (SOD). Supervised methods for this task are given a set of images and their corresponding pixel-precise saliency mask targets (often hand-labeled) and aim to learn the relationship between them. Unsupervised SOD methods, in comparison, attempt to identify salient objects by examining an image alone. This gives unsupervised methods the advantage of not needing expensive labels that are susceptible to human error. One good starting point for an unsupervised method is a technique called unsupervised feature learning. Interestingly, the Vision Transformer (ViT) [30], which struggled to find its place in the task of image classification, was able to produce features that are directly useful for SOD when trained in an unsupervised manner [11] called self-DIstillation with NO labels (DINO). The authors of LOST [97], which investigates DINO features further, found that the "keys" from the last attention layer stand out as the most useful. Melas-Kyriazi et al. [80] and TokenCut [120] explore how applying spectral clustering methods to these features can be a good way to perform computer vision tasks without supervision. In this thesis we continue to investigate these features for use in SOD by developing a K-means clustering-based method. From our experiments with a multitude of methods for using DINO features, we observe that the clusters are rich in salient information, which suggests that the features themselves are particularly useful for SOD when processed by clustering methods. We first apply K-means clustering to DINO features, then select some clusters as salient according to various heuristics, and upscale and post-process the resulting coarse saliency maps to obtain pseudo-ground truth. Finally, we train a SelfReformer [136] saliency model on our pseudo-ground truth. The most important step of our approach is the heuristic that selects salient clusters after K-means clustering. We conducted an extensive development and evaluation of many heuristics and, surprisingly, the simplest heuristic, which assigns the clusters not touching the image border to be salient, works best, outperforming the complex eigenvector-based method presented in [80], after SelfReformer training.

Item: WCET Preserving Memory Management for High Performance Mixed Criticality Systems (University of Waterloo, 2025-01-20). Lawand, Wafic.

This thesis explores innovative memory and cache management mechanisms for mixed-criticality systems that aim to improve system performance while maintaining predictability. We consider mixed-criticality systems in which requestors may issue latency-critical (LTC) and non-latency-critical (NLTC) requests. LTC requests must adhere to strict latency bounds imposed by safety-critical applications, but timely servicing of NLTC requests is necessary to maximize overall system performance in the average case. In this thesis, we focus on the challenges of processing LTC and NLTC memory requests in mixed-criticality systems and on the limitations of existing cache management mechanisms designed to improve average-case performance. Accordingly, we introduce two key contributions: one at the memory arbitration level and the other at the cache controller level.
In the first contribution, we propose DAMA, a dual arbitration mechanism that imposes an upper bound on the cumulative latency of LTC requests without unduly impacting the performance of NLTC requests. DAMA comprises a high-performance arbiter, a real-time arbiter, and a mechanism that constantly monitors the cumulative latency of requests suffered by each requestor. DAMA primarily executes in high-performance mode and switches to real-time mode only in the rare instances when its monitoring mechanism detects a violation of a task's timing guarantee. We demonstrate the effectiveness of our arbitration scheme by adapting a predictable prefetcher that issues NLTC requests and attaching it to the L1 caches of our cores. We show both formally and experimentally that DAMA provides timing guarantees for LTC requests while processing other NLTC requests. Our evaluation demonstrates that, with a negligible overhead of less than 1% on the cumulative latency bound of LTC requests, DAMA achieves average performance equivalent to a prefetcher processing requests under a high-performance arbitration scheme. In the second contribution, we introduce a novel hardware mechanism at the cache controller level that leverages the LRU age bits to perform duration-based cache locking. The proposed mechanism dynamically locks and unlocks cache lines for different durations at run-time, without the need to modify the program's code. We further devise a heuristic that analyzes a program's loop structure and selects the set of addresses to be locked in an L1 instruction cache alongside their locking durations. Evaluation results show that our duration-based locking mechanism significantly reduces initialization overhead and eliminates the need for program code modifications, achieving performance comparable to the dynamic approach, which adjusts locked program content at runtime.

Item: Data-Driven Simulation and Optimization of Renewable Energy Systems (University of Waterloo, 2025-01-20). Ye, Wenrui.

The transition to renewable energy systems is critical for mitigating climate change and reducing fossil fuel dependence. However, integrating these variable and intermittent sources into the existing grid raises challenges such as dynamic energy demand management and resource underutilization, leading to increased operational costs and hindering broader adoption. This thesis develops algorithms to optimize renewable energy systems, enhancing their integration and operational efficiency, and makes a significant contribution to the utilization, reliability, and economic viability of renewable energy systems in support of a smoother transition to sustainable energy practices. The thesis first enhances the energy generation of photovoltaic panels by optimizing their tilt angles to maximize solar energy capture under varying environmental conditions. Machine learning models provided accurate predictions of photovoltaic output, allowing data-driven insights into optimal system performance, and a subsequent optimization process identified the best tilt angles during operation. The results demonstrated an increase in annual energy output of up to 9.7% compared to fixed-tilt systems, confirming that dynamic tilt adjustment is an effective strategy for maximizing photovoltaic energy generation. The second part of the research focuses on optimizing the capacity of renewable energy system components, with a particular emphasis on energy storage systems such as batteries.
Item Data-Driven Simulation and Optimization of Renewable Energy Systems (University of Waterloo, 2025-01-20) Ye, Wenrui
The transition to renewable energy systems is critical to mitigating climate change and reducing fossil fuel dependence. However, integrating these variable and intermittent sources into the existing grid raises challenges such as dynamic energy demand management and resource underutilization, which increase operational costs and hinder broader adoption. This thesis develops algorithms to optimize renewable energy systems, enhancing their integration and operational efficiency, and makes a significant contribution to the utilization, reliability, and economic viability of renewable energy systems, supporting a smoother transition to sustainable energy practices.
The thesis first enhances the energy generation of photovoltaic panels by optimizing their tilt angles to maximize solar energy capture under varying environmental conditions. Machine learning models provided accurate predictions of photovoltaic output, allowing data-driven insights into optimal system performance, and a subsequent optimization process identified the best tilt angles during operation. The results demonstrated an increase in annual energy output of up to 9.7% compared to fixed-tilt systems, confirming that dynamic tilt adjustment is an effective strategy for maximizing photovoltaic energy generation.
The second part of the research focuses on optimizing the capacity of renewable energy system components, with particular emphasis on energy storage systems such as batteries. This part addressed the challenge of determining the optimal capacity of each component to meet energy demands efficiently while minimizing costs. A two-stage optimization approach was applied: first, a genetic algorithm generated candidate configurations with specific capacities; second, each configuration was simulated under an energy management algorithm to evaluate its performance. The optimized configurations achieved an overall system energy independence score of 0.51 and an 18.12% higher internal rate of return, validating the effectiveness of the integrated optimization approach for capacity planning and highlighting the importance of appropriately sized energy storage in enhancing system performance.
The final part introduces an advanced energy management algorithm inspired by Model Predictive Control, integrating both batteries and hydrogen storage to enhance renewable energy utilization. By employing time series data and transformer-based models, the system accurately predicts future energy demand, and these predictions drive a rolling-window optimization technique that uses machine learning for dynamic energy management. The inclusion of hydrogen storage allows excess renewable energy to be stored as hydrogen, providing a versatile energy carrier for applications beyond electricity and improving overall renewable energy utilization. This approach improved demand forecasting accuracy by 41.21% and increased the adjusted green hydrogen production rate from 29.54% to 54.3%, demonstrating that advanced predictive energy management strategies, combined with diverse energy storage solutions, significantly enhance system adaptability, efficiency, and renewable energy utilization.
Together, these studies demonstrate that optimizing component configurations and energy management strategies, while integrating advanced energy storage systems such as batteries and hydrogen, substantially improves the efficiency, reliability, and economic viability of renewable energy systems. The research provides valuable insights for integrating renewable energy into existing grids and supports the transition toward more sustainable and resilient energy infrastructures.
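To make the rolling-window idea concrete, here is a toy sketch in which a greedy battery dispatch stands in for the thesis's learned optimizer; the horizon, capacities, and signal names are all illustrative assumptions, not values from the study.

def rolling_dispatch(demand_forecast, solar_forecast, horizon=24,
                     capacity=100.0, soc=50.0):
    """Plan each step using a look-ahead window over forecast demand/solar."""
    actions = []
    steps = len(demand_forecast)
    for t in range(steps):
        net_now = solar_forecast[t] - demand_forecast[t]
        # Estimate how much storage the deficits in the coming window need.
        window_end = min(t + horizon, steps)
        future_deficit = sum(max(demand_forecast[h] - solar_forecast[h], 0.0)
                             for h in range(t + 1, window_end))
        if net_now >= 0:
            # Charge with surplus, but only as much as the battery can take
            # and the upcoming deficits can usefully consume.
            charge = min(net_now, capacity - soc, future_deficit)
            soc += charge
            actions.append(("charge", charge))
        else:
            discharge = min(-net_now, soc)  # cover the deficit from storage
            soc -= discharge
            actions.append(("discharge", discharge))
    return actions

In the thesis's framework, the forecasts would come from the transformer-based demand model, and the greedy rule would be replaced by the learned optimization step; only the rolling-horizon structure is illustrated here.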
Item Assessing the Compact Polarimetric Capabilities of the RADARSAT Constellation Mission for Monitoring Sub-Arctic Lake Ice Dynamics (University of Waterloo, 2025-01-20) Dezyani, Saba
Lake ice is a significant aspect of the physical landscape at northern latitudes and plays a crucial role in regulating weather and climate and in supporting various socio-economic activities. The sensitivity of lake ice development to air temperature makes it an effective indicator of climate change: changes in lake ice not only affect regional weather patterns but also contribute to global environmental shifts. Closely observing variations in lake ice is therefore important for understanding the influence lakes may have on regional weather patterns and environmental conditions. This study focuses on monitoring lake ice dynamics using RADARSAT Constellation Mission (RCM) compact polarimetric data, employing a combination of the first difference method and compact polarimetric (CP) parameter analysis. The research centers on three Canadian sub-Arctic lakes, with a primary focus on Kluane Lake, and examines phenological transitions through changes in backscatter intensity and CP parameters such as the Degree of Polarization (DoP), Circular Polarization Ratio (CPR), Relative Phase, and Alpha Angle.
The results reveal that freeze-onset and melt-onset events mapped using the first difference method align well with validation data from Landsat thermal imagery and meteorological records, highlighting the reliability of RCM data. Additionally, CP parameter analysis provides deeper insight into ice surface and subsurface conditions: DoP and CPR captured variations in surface roughness, while Relative Phase and Alpha Angle distinguished subsurface scattering and ice-water transitions. Mid-winter stability, reflected by minimal variability across CP parameters, emerged as the most consistent pattern, signifying a coherent solid or snow-covered ice surface. These results highlight the capacity of RCM data to capture key phenological events and demonstrate strong consistency with optical and thermal data sources. However, the research also identifies limitations related to the temporal resolution of RCM data.
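The first difference method used above amounts to flagging large changes between consecutive backscatter acquisitions. A minimal sketch, assuming a time series of backscatter values in dB and a hypothetical threshold:

def detect_transitions(backscatter_db, dates, threshold_db=3.0):
    """Flag dates where backscatter jumps between consecutive acquisitions."""
    events = []
    for i in range(1, len(backscatter_db)):
        delta = backscatter_db[i] - backscatter_db[i - 1]
        if abs(delta) >= threshold_db:
            events.append((dates[i], delta))  # the sign hints at freeze vs. melt
    return events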
Item Microbial ecology of nitrification in engineered water treatment systems (University of Waterloo, 2025-01-20) McKnight, Michelle Marie
Nitrification is performed primarily by chemolithoautotrophic microorganisms and is important for nitrogen transformation in aquatic environments. The understanding of nitrification has evolved over the years: it was long viewed as a two-step process in which ammonia-oxidizing bacteria (AOB) and ammonia-oxidizing archaea (AOA) oxidize ammonia to nitrite, and nitrite-oxidizing bacteria (NOB) oxidize nitrite to nitrate. More recent research has revealed the existence of complete ammonia-oxidizing (“comammox” or CMX) bacteria from the genus Nitrospira that are capable of oxidizing ammonia all the way to nitrate. With three groups of ammonia oxidizers often coexisting in the same environment, research is needed to understand the microbial ecology of nitrifying communities. These ammonia oxidizers play important roles in engineered systems, including wastewater treatment plants (WWTP) and aquarium biofilters, where they transform ammonia waste (NH₃/NH₄⁺) into less toxic nitrate (NO₃⁻) via nitrite (NO₂⁻). Prior to the discovery of comammox Nitrospira, research had shown that AOA dominate over AOB in freshwater aquarium biofilters.
In Chapter 2, aquarium biofilter microbial communities were profiled using 16S rRNA gene sequencing, and the abundances of all three known ammonia oxidizers were quantified using quantitative PCR (qPCR). Biofilter and water samples were collected from representative residential and commercial freshwater and saltwater aquariums. Distinct microbial communities were associated with freshwater and saltwater biofilters. Comammox Nitrospira amoA genes were detected in all 38 freshwater biofilter samples and were dominant in 30, whereas AOA were present in 35 freshwater biofilter samples but dominant in only 7. AOB were at relatively low abundance within biofilters, except in the aquarium with the highest ammonia concentration. In saltwater biofilters, AOA or AOB were differentially abundant, and no comammox Nitrospira were detected. Additional sequencing of Nitrospira amoA genes revealed differential distributions, suggesting niche adaptation based on water chemistry (e.g., ammonia, carbonate hardness, and alkalinity). Network analysis of freshwater microbial communities demonstrated positive correlations between nitrifiers and heterotrophs, suggesting metabolic and ecological interactions within biofilters. These results indicate that comammox Nitrospira play a previously overlooked but important role in home aquarium biofilter nitrification.
Following the identification of comammox Nitrospira among the dominant ammonia oxidizers in freshwater aquarium biofilters, Chapter 3 monitored microbial community succession and water chemistry in three independent home aquariums during their first 12 weeks after start-up, with weekly collection of biofilter bead and sponge samples. DNA extracted from biofilter samples was used for 16S rRNA gene sequencing to determine microbial community composition and for qPCR to quantify ammonia monooxygenase (amoA) genes. Water samples were also collected weekly for measurements of ammonia, nitrite, and nitrate. Biofilter nitrification reduced ammonia and nitrite concentrations below detectable limits by week 3 in two of the three aquariums, and all systems showed comparable nitrification activity by week 8. Detection of ammonia oxidizers by qPCR coincided with ammonia oxidation activity in all systems. The two aquariums in which nitrification occurred by week 3 contained live plants, whereas the aquarium with delayed nitrification did not, suggesting that live plants might provide an effective nitrifier inoculation source during aquarium establishment. Additionally, detected AOA showed a preference in biofilter material, being more abundant in bead samples than in sponge samples. Although the tested aquaria differed in the timing and prevalence of ammonia oxidizers during community establishment, samples from all three aquaria were consistently dominated by comammox Nitrospira by the end of 12 weeks. Metagenomic functional profiling of week-12 biofilter samples confirmed the presence of AOA amoA genes and comammox Nitrospira amoA genes, as detected by qPCR, in all aquaria; the nitrite oxidation marker gene nxrB was also detected for both comammox Nitrospira spp. and canonical NOB Nitrospira spp. Although this work sheds light on how ammonia oxidizers establish in residential freshwater aquaria, further research is needed to test factors that affect the establishment of nitrifier populations, such as inoculation sources, fish loads, and water chemistry.
Novel WWTP technologies, including membrane-aerated biofilm reactors (MABR), aim to improve nitrogen removal, reduce energy consumption, and improve nitrification in cold weather conditions. A municipal WWTP in Southern Ontario was upgraded with an MABR system in 2022, installed upstream of the existing conventional activated sludge (CAS) system; at the time of its installation, this MABR system was the largest in North America by media surface area. Chapter 4 evaluates the impact of the MABR upgrade on the CAS and MABR microbial communities: the microbial community of the CAS system was characterized before and after the upgrade to evaluate the effect of the upstream MABR installation on the functional potential of the downstream activated sludge. Microbial communities were characterized using 16S rRNA gene amplicon sequencing of the V4-V5 region, and additional MABR biofilm samples were characterized using metagenomic analysis to evaluate the functional potential of the biofilm for nitrification and denitrification.
Before the upgrade, the CAS system included the nitrifiers Nitrosomonas (AOB) and Nitrobacter (NOB), which exhibited seasonal differences in abundance and activity. Following the upgrade to the hybrid MABR-CAS system, seeding from the MABR biofilm increased the diversity of the CAS community, including its nitrifiers. Along with AOB, Nitrospira NOB and comammox Nitrospira were present in the MABR. Metagenomic profiling showed that the biofilm microbial communities were well equipped to perform nitrification, denitrification, and phosphate removal. Characterization of the plant's microbial communities showed that the MABR technology had a positive impact, increasing microbial diversity in the treatment system and expanding the inventory of nitrifiers with diverse metabolic capabilities.
Item TECTONIC EVOLUTION OF THE BAIE VERTE MARGIN, NEWFOUNDLAND (University of Waterloo, 2025-01-17) Scorsolini, Ludovico
Located along the Early Paleozoic Laurentian continental margin in Newfoundland, the Baie Verte Margin's tectonostratigraphy and tectono-metamorphic evolution have been controversial for decades. Here, the results of a detailed field, petrological, and geochronological study are presented, in which the Baie Verte Margin is subdivided into three tectono-metamorphic units separated by tectonic contacts: the East Pond Metamorphic Suite (EPMS) basement, the EPMS cover, and the Fleur de Lys Supergroup (FdLS). Each unit exhibits a distinct metamorphic and structural evolution recorded during the subduction, exhumation, and post-collisional history of this ancient margin. The combination of thermodynamic modelling, petrochronology, and structural analysis provides insight into the P-T-t-d paths of the studied units, allowing a better understanding of their roles during the evolution of the Taconic subduction system. High-pressure (HP) to ultra-high-pressure (UHP) conditions were reached between 483 and 475 Ma during the D1 phase, with the EPMS cover recording eclogite-facies metamorphism at ~2.8 GPa and 620°C. Subsequent decompression resulted in a β-shaped pressure-temperature-time (P-T-t) path, with near-isothermal decompression to ~2 GPa and heating to 860°C during exhumation. A multi-stage exhumation model is proposed for the EPMS eclogites: (1) buoyant rise through a low-density mantle wedge, and (2) subsequent ascent to shallower crustal levels, facilitated by external tectonic forces and slab break-off, as evidenced by Late Taconic magmatism. While the EPMS cover re-equilibrated at UHP conditions, the EPMS basement and FdLS experienced decompression and Barrovian metamorphism during late D1, indicating decoupling of the units during this stage. Coupling between the units occurred along a D2 shear zone during retrograde metamorphism, spanning 475–452 Ma. Two exhumation scenarios are proposed to explain the tectonic evolution of the margin: (i) following late-D1 detachment, the EPMS basement and FdLS were exhumed to crustal levels while the EPMS cover was subducted deeper into the mantle, and tectonic extrusion along D2 shear zones, potentially aided by melt weakening, then emplaced the EPMS cover between the two units; or (ii) sequential detachment occurred from the top to the bottom of the slab, resulting in deeper subduction of the lower units, followed by their exhumation through back-folding and crustal wedge thrusting.
The Silurian F3 folding deforms both the D1-2 structures in each unit and the D2 shear zones that bound them, suggesting that the continental wedges, which recorded different tectono-metamorphic paths after early D1, were juxtaposed before the onset of deformation associated with the Salinic Orogeny. Later deformation phases, D4 and D5, are probably related to tectono-metamorphic activity associated with the Acadian and Neo-Acadian orogenies. This research improves our understanding of the dynamic tectono-metamorphic evolution of the Baie Verte Margin, emphasizing the role of fluids, thermal perturbations, and deformation in driving metamorphic reactions and exhumation. The findings contribute to understanding the mechanisms controlling the evolution of HP-UHP terranes in subduction zones and highlight the complex interactions between subduction, exhumation, collision, and magmatism throughout the Taconic orogeny.
Item Code Generation and Testing in the Era of AI-Native Software Engineering (University of Waterloo, 2025-01-17) Mathews, Noble Saji
Large Language Models (LLMs) like GPT-4 and Llama 3 are transforming software development by automating code generation and test case creation. This thesis investigates two pivotal aspects of LLM-assisted development: the integration of Test-Driven Development (TDD) principles into code generation workflows, and the limitations of LLM-based test-generation tools in detecting bugs. LLMs have demonstrated significant capabilities in generating code snippets directly from problem statements, an increasingly automated process that mirrors traditional human-led software development, where code is often written in response to a requirement. Historically, TDD has proven its merit by requiring developers to write tests before the functional code, ensuring alignment with the initial problem statements. Applying TDD principles to LLM-based code generation offers one distinct benefit: it enables developers to verify the correctness of generated code against predefined tests. The first part of this thesis investigates if and how TDD can be incorporated into AI-assisted code-generation processes. We experimentally evaluate our hypothesis that providing LLMs such as GPT-4 and Llama 3 with tests in addition to the problem statements enhances code-generation outcomes, using established function-level code generation benchmarks such as MBPP and HumanEval. Our results consistently demonstrate that including test cases leads to higher success rates in solving programming challenges. We assert that TDD is a promising paradigm for helping ensure that the code generated by LLMs effectively captures the requirements.
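The TDD-style loop described above can be sketched in a few lines: give the model the problem statement together with the tests, then accept a candidate only once it passes them. Here llm_generate is a hypothetical stand-in for a model call, and the retry count is an assumption, not the thesis's protocol.

def tdd_generate(problem, tests, llm_generate, attempts=3):
    """Accept a candidate only once it passes the predefined tests.

    llm_generate(problem, tests) is a hypothetical stand-in for a call to a
    model such as GPT-4 or Llama 3 that returns candidate source code.
    """
    for _ in range(attempts):
        candidate = llm_generate(problem, tests)
        namespace = {}
        try:
            exec(candidate, namespace)  # define the function under test
            exec(tests, namespace)      # run the predefined assertions
        except Exception:
            continue                    # tests failed: request a fresh candidate
        return candidate                # verified against the tests
    return None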
As we progress toward AI-native software engineering, a logical follow-up question arises: why not allow LLMs to generate the tests as well? An increasing amount of research and commercial tooling now focuses on automated test case generation using LLMs. A concerning trend, however, is that these tools often generate tests by inferring requirements from the code itself, which runs counter to the principles of TDD and raises questions about their behaviour when the flawed assumption that the code under test is correct is violated. We therefore set out to critically examine whether recent LLM-based test generation tools, such as Codium CoverAgent and CoverUp, can effectively find bugs or instead unintentionally validate faulty code. Since bugs are exposed only by failing test cases, we explore the question: can these tools truly achieve the intended objectives of software testing when their test oracles are designed to pass? Using real human-written buggy code as input, we evaluate these tools and show how LLM-generated tests can fail to detect bugs and, more alarmingly, how their design can worsen the situation by validating bugs in the generated test suite and rejecting bug-revealing tests. These findings raise important questions about the validity of the design behind LLM-based test generation tools and about their impact on software quality and test-suite reliability. Together, these studies provide critical insights into the promise and pitfalls of integrating LLMs into software development processes, offering guidelines for improving their reliability and impact on software quality.
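A simple way to picture the evaluation criterion above: a useful generated test suite should pass on the fixed code but fail on the known-buggy version. The runner below is a simplification under that assumption; the thesis's actual evaluation of CoverAgent and CoverUp is more involved, and the directory layout and command are hypothetical.

import subprocess

def suite_reveals_bug(test_cmd, buggy_dir, fixed_dir):
    """A bug-revealing suite passes on the fixed code but fails on the bug."""
    passes_fixed = subprocess.run(test_cmd, cwd=fixed_dir).returncode == 0
    fails_buggy = subprocess.run(test_cmd, cwd=buggy_dir).returncode != 0
    return passes_fixed and fails_buggy

# Hypothetical usage: suite_reveals_bug(["pytest", "-q"], "proj_buggy", "proj_fixed")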