Civil and Environmental Engineering
Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/9906
This is the collection for the University of Waterloo's Department of Civil and Environmental Engineering.
Research outputs are organized by type (e.g., Master's Thesis, Article, Conference Paper).
Waterloo faculty, students, and staff can contact us or visit the UWSpace guide to learn more about depositing their research.
Recent Submissions
Item Chemo-rheological Characterization of Asphalt Binders Using Different Aging Processes (University of Waterloo, 2025-03-17) Sharma, Aditi; Baaj, Hassan; Tavassoti, Pejoohan
The performance and longevity of asphalt pavements depend heavily on the properties of asphalt binders, which are affected by aging, binder modifications, and the incorporation of reclaimed asphalt pavement (RAP) materials. However, significant gaps exist in understanding the long-term chemical and rheological changes induced by aging processes (particularly with respect to differences between thermo-oxidative aging and UV exposure), and in the use and standardization of chemical analytical techniques such as Fourier Transform Infrared (FTIR) and Nuclear Magnetic Resonance (NMR) spectroscopy for binder characterization. Furthermore, the behaviour of RAP-virgin binder blends, along with the influence of bio-based rejuvenators and anti-aging additives under different aging conditions, remains underexplored. Addressing these gaps is crucial to developing more durable, sustainable pavements. This thesis bridges these research gaps through a comprehensive investigation of chemo-rheological binder characterization, combining experimental testing with advanced analytical tools and varying aging methods. The findings offer essential insights into binder aging, rejuvenation strategies, and modification techniques, with significant implications for pavement durability and environmental sustainability. The first chapter presents an evaluation of Attenuated Total Reflection-Fourier Transform Infrared (ATR-FTIR) spectroscopy combined with functional group and multivariate analysis techniques to characterize asphalt binders. The research identifies challenges in repeatability across binder sources and aging states, demonstrating the importance of standardized protocols for improving reliability.
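As a concrete illustration of the functional group analysis mentioned above, the sketch below computes a carbonyl aging index from an FTIR spectrum by integrating band areas. The band limits and the synthetic Gaussian spectra are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def band_area(wavenumber, absorbance, lo, hi):
    """Trapezoidal area of the absorbance curve over one wavenumber band."""
    mask = (wavenumber >= lo) & (wavenumber <= hi)
    w, a = wavenumber[mask], absorbance[mask]
    return float(np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(w)))

def carbonyl_index(wavenumber, absorbance):
    """Carbonyl (C=O) band area normalised by aliphatic reference bands,
    a common FTIR indicator of oxidative binder aging."""
    co = band_area(wavenumber, absorbance, 1670.0, 1720.0)
    ref = band_area(wavenumber, absorbance, 1350.0, 1525.0)
    return co / ref

def gaussian(wavenumber, center, height, width=15.0):
    return height * np.exp(-((wavenumber - center) / width) ** 2)

# Synthetic spectra: aliphatic CH bands at 1376 and 1460 cm^-1 in both,
# plus a stronger carbonyl peak near 1700 cm^-1 in the "aged" binder.
wn = np.linspace(600.0, 4000.0, 4000)
fresh = gaussian(wn, 1376, 0.30) + gaussian(wn, 1460, 0.40) + gaussian(wn, 1700, 0.02)
aged = gaussian(wn, 1376, 0.30) + gaussian(wn, 1460, 0.40) + gaussian(wn, 1700, 0.15)
```

Because the index is a ratio of band areas, it is insensitive to uniform scaling of the spectrum, which is one reason such indices are attractive for comparing binders across labs.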
Repeatability, as described in AASHTO standards, is listed in the precision and bias statement as single-operator precision: the allowable difference between two test results measured under repeatability conditions (same asphalt binder, measured by the same operator, on the same piece of equipment, in the same lab). Principal Component Analysis (PCA) and k-means clustering successfully classified binder types and aging states, with large quantity (LQ) sample preparation yielding more consistent results than small quantity (SQ) preparation. These findings underscore the need for uniform procedures in binder analysis, addressing inconsistencies prevalent in the current literature. The second part of the thesis investigates the impact of Styrene-Butadiene-Styrene (SBS) polymer modification on binder performance and oxidative resistance. Using NMR and ATR-FTIR spectroscopy, along with PCA and Partial Least Squares Regression (PLSR), the research highlights the ability of SBS to enhance high-temperature performance and slow thermo-oxidative aging. This work not only confirms previous findings on SBS but also provides new insights into the molecular interactions contributing to aging resistance. The study fills a gap in understanding how SBS-modified binders behave under various aging scenarios, offering a deeper perspective on polymer-modified asphalt technologies. The thesis also addresses a critical gap related to UV-induced aging, which has been underexplored in comparison to thermo-oxidative aging. A novel UV aging chamber was developed to simulate real-world environmental conditions, incorporating UV exposure, water spray cycles, and controlled heating at 70°C. Comparative analysis revealed that different additives exhibit varying effectiveness under UV and thermo-oxidative conditions. Zinc diethyldithiocarbamate (ZDC) showed strong resistance to thermo-oxidative aging but limited efficacy under UV aging, while ascorbic acid (Vit.
C) accelerated aging under UV exposure, contrary to expectations. These findings emphasize the challenges involved in designing effective anti-aging strategies for asphalt binders, demonstrating the value of combining conventional rheological tests with spectroscopic techniques and further highlighting the need for more targeted approaches to additive selection and development. This thesis advances the understanding of asphalt binder behaviour and aging processes by integrating chemical, rheological, and multivariate analysis techniques. It offers critical contributions to the standardization of binder characterization protocols, the optimization of polymer-modified asphalt technologies, and the development of more effective anti-aging strategies. The research also demonstrates the potential of machine learning and artificial intelligence (AI) in predicting binder performance from spectroscopic data using multivariate analysis, paving the way for future innovations in asphalt binder characterization. In conclusion, the work in this thesis addresses significant gaps in the literature, providing new insights into aging mechanisms, additive/rejuvenation strategies, and RAP binder interactions. By combining chemical analysis, rheological testing, and multivariate techniques, this research contributes both to academic knowledge and practical pavement engineering, promoting the development of more sustainable, long-lasting asphalt pavements.
Item LiDAR-Driven Calibration of Microscopic Traffic Simulation for Balancing Operational Efficiency and Prediction of Traffic Conflicts (University of Waterloo, 2025-01-21) Farag, Natalie; Bachmann, Christian; Fu, Liping
Microscopic traffic simulation is a proactive tool for road safety assessment, offering an alternative to traditional crash data analysis.
Microsimulation models, such as PTV VISSIM, replicate traffic scenarios and conflicts under various conditions, thereby aiding in the assessment of driving behavior and traffic management strategies. When integrated with tools like the Surrogate Safety Assessment Model (SSAM), these models estimate potential conflicts. Research often focuses on calibrating these models based on traffic operation metrics, such as speed and travel time, while neglecting safety performance parameters. This thesis investigates the effects of calibrating microsimulation models for both operational metrics, including travel time and speed, and safety metrics, including traffic conflicts and Post Encroachment Time (PET) distribution, using LiDAR sensor data. The calibration process involves three phases: performance calibration, combined performance and safety calibration, and safety-only calibration. The results show that incorporating safety-focused parameters enhances the model's ability to replicate observed conflict patterns. The study highlights the trade-offs between operational efficiency and safety, with adjustments to parameters like standstill distance improving safety outcomes without significantly compromising operational metrics. Furthermore, there is a substantial difference in the calibrated minimum distance headway for the safety model, again reflecting the trade-off between operational efficiency and safety. While the operational calibration focuses on optimizing flow, the safety calibration prioritizes realistic conflict simulation, even at the cost of reduced flow efficiency.
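For readers unfamiliar with the PET surrogate measure used in the calibration, a minimal sketch follows. The 5 s conflict threshold and the event tuples are illustrative assumptions, not values taken from the thesis.

```python
def post_encroachment_time(t_first_exit, t_second_entry):
    """PET: the time gap between the first road user leaving a conflict
    area and the second road user entering it (seconds)."""
    return t_second_entry - t_first_exit

def count_conflicts(events, threshold=5.0):
    """Count interactions whose PET falls below a conflict threshold.
    Each event is (time first vehicle exits, time second vehicle enters)."""
    return sum(
        1
        for t_exit, t_entry in events
        if 0.0 <= post_encroachment_time(t_exit, t_entry) < threshold
    )

# Three observed interactions, with PETs of 1.5 s, 7.0 s, and 3.9 s.
events = [(10.0, 11.5), (20.0, 27.0), (30.0, 33.9)]
```

A lower PET indicates a nearer miss, so calibrating to the PET distribution rather than a single count preserves information about conflict severity.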
The research emphasizes the importance of accurately simulating real-world driver behavior through adjustments to parameters like the probability and duration of temporary lack of attention.
Item Reduced Order Geomechanics Models (University of Waterloo, 2025-01-14) Hatefi Ardakani, Saeed; Gracie, Robert
Computational techniques are commonly used for real-time simulation of complex geomechanics problems, such as hydraulic dilation stimulation. A significant challenge in this realm is that high-fidelity mathematical models, or full order models (FOMs), are computationally expensive, as they must span multiple spatial and temporal length scales, often including nonlinearities and thermo-hydro-mechanical processes. The computationally intensive nature of these simulations continues to pose challenges in parameter estimation, uncertainty quantification, and optimization applications, where hundreds to thousands of simulations are required to achieve a solution. Intrusive reduced order models (ROMs) have emerged as a method to derive and train a computationally efficient surrogate/proxy model using the FOM. This thesis seeks to bridge the gap in existing intrusive ROMs in reservoir engineering by introducing efficient ROMs that are capable of capturing the hydro-mechanical coupling behavior and path-dependent plastic deformation of rocks. A complex case involving hydraulic dilation stimulation is used to show the efficiency and accuracy of the ROM in addressing coupling, plasticity, and permeability enhancement features. First, an efficient and accurate ROM is proposed for nonlinear porous media flow problems, with specific application to a two-dimensional layered reservoir with a two-well system. Standard projection-based intrusive ROMs without hyper-reduction, such as proper orthogonal decomposition-Galerkin (POD-Galerkin), have not demonstrated efficacy in reducing the computational cost of the ROM for nonlinear problems.
In this context, we combine POD-Galerkin with the discrete empirical interpolation method (DEIM) as a hyper-reduction technique to reduce the size of the system of equations and accelerate the computation of nonlinear terms (the residual force vector and its Jacobian). A column-reduced Jacobian DEIM technique is employed to interpolate the Jacobian, leading to a significant reduction in the computational time of the online stage. The ROM is parameterized for the nonlinear transient injection rate (pumping schedule). Offline, training data is generated by FOM runs with simple constant injection rates. Online, the ROM demonstrates high accuracy and efficiency for complex and time-varying pumping schedules, including sinusoidal, high-frequency, and time-discontinuous pumping schedules that lie outside of the training regime. It is shown that the POD-DEIM ROM has about 10^3 times fewer degrees of freedom (DoFs) and is approximately 190 times faster than the FOM for a reservoir model with 3*10^4 DoFs, while maintaining an accurate solution in the online stage. The accuracy and efficiency of POD-DEIM motivate its potential use as a surrogate model in the real-time control and monitoring of fluid injection processes. Intrusive ROMs have faced considerable difficulties in accurately capturing the history-dependent nonlinear evolution of plastic strain. In the second objective, an intrusive ROM is developed and evaluated for a Drucker-Prager plasticity model, in which the material properties and cyclic load path are parametric inputs. By constructing multiple local DEIM (LDEIM) approximations in combination with clustering and classifier techniques, a fast and accurate ROM is achieved. The FOM consists of a two-dimensional finite element analysis (FEA) of a deformable solid with Drucker-Prager plasticity.
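The POD and DEIM building blocks described above can be sketched as follows. This is the generic textbook construction (SVD-based POD and the greedy DEIM point selection of Chaturantabut and Sorensen), applied here to an assumed toy snapshot matrix, not code from the thesis.

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD: the first r left singular vectors of the snapshot matrix."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def deim_indices(U):
    """Greedy DEIM interpolation point selection for the basis columns."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        # Interpolate column j at the points chosen so far, then pick the
        # location of the largest residual as the next measurement point.
        c = np.linalg.solve(U[np.ix_(idx, range(j))], U[idx, j])
        res = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(res))))
    return idx

# Snapshots of a 1-D field sampled at several "times" (columns).
x = np.linspace(0.0, 1.0, 50)
snapshots = np.column_stack([np.sin((k + 1) * np.pi * x) for k in range(5)])
phi = pod_basis(snapshots, 3)
points = deim_indices(phi)
```

Online, the nonlinear term is then evaluated only at the selected points and interpolated through the basis, which is what decouples the ROM's cost from the FOM dimension.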
Offline, the temporal and parameterized training data generated from FOM runs are classified using the k-means clustering algorithm, whereby LDEIM basis vectors are computed. Online, a nearest neighbor classifier identifies the appropriate LDEIM. The ROM has three hyper-parameters (the size of the ROM, the number of clusters, and the number of DEIM measurement points per cluster), influencing both accuracy and speed-up. In a micromechanics porous media problem, parameterized by Young's modulus and hardening modulus, the ROM's performance is demonstrated for inputs within and outside of the training domain; error and speed-up vary with the inputs: accuracy is highest for inputs within the training domain (errors of 1.0-3.5% versus 1.0-9.2%), while the speed-up varies from 106 to 134 times. For a cyclic plasticity problem, parameterized by load path, the ROM exhibits stable and accurate online performance with a substantial speed-up for test load paths. For FOMs with ~10^3 and ~5*10^4 DoFs, the speed-ups are 11 and 770 times, respectively. Larger speed-ups seem likely for larger FOMs. Finally, the ROM for nonlinear transient porous media flow as a diffusion problem is coupled with the ROM for plasticity to develop a novel ROM formulation for poroplasticity problems. This ROM aims to significantly reduce the computational costs of nonlinear, fully-coupled hydro-mechanical simulations in large-scale reservoirs. The developed mathematical model integrates a coupled system of equations from a two-dimensional FEA of the momentum and mass balance equations, equipped with Drucker-Prager plasticity and stress-dependent permeability enhancement models. The proposed ROM combines various reduction techniques, including POD-Galerkin to reduce the number of DoFs, DEIM to accelerate the computation of nonlinear terms, and local POD and local DEIM (LPOD/LDEIM) for further reductions in poroplasticity problems.
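The offline clustering and online nearest-neighbor selection of a local basis can be sketched as below. The two-dimensional toy snapshots and the deterministic seeding of the initial centroids are assumptions made for illustration.

```python
import numpy as np

def kmeans(X, centers, iters=20):
    """Plain k-means: rows of X are training snapshots; centers are the
    initial cluster centroids (chosen deterministically for the sketch)."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = np.argmin(d, axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def nearest_cluster(centers, state):
    """Online stage: nearest-neighbor classifier that picks which local
    (LDEIM-style) basis to use for the current state."""
    return int(np.argmin(((centers - state) ** 2).sum(axis=1)))

# Two well-separated snapshot families (think: elastic vs. plastic regimes).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (10, 2)), rng.normal(10.0, 0.3, (10, 2))])
centers, labels = kmeans(X, X[[0, 10]])
```

Each cluster would carry its own POD/DEIM basis; the classifier's job online is only to decide which of those precomputed bases the current state should use.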
LPOD and LDEIM classify the parameterized training data, obtained from offline FOM runs, into multiple subspaces with similar dynamic features. A new strategy for clustering and classification techniques tailored to the coupled formulation framework is introduced. The advantages of this ROM are demonstrated in a large-scale application involving hydraulic dilation stimulation of a reservoir with a horizontal well pair. The ROM is parameterized not only by the material properties but also by the injection rate. Its effectiveness is evaluated for more realistic use cases, where the ROM remains efficient for injection rates that extend beyond the training data. In large-scale subsurface flow modeling of hydraulic dilation stimulation, a speed-up of ~400 times is achieved, with the ROM reducing the model dimension from 10^5 DoFs to 100 DoFs. This substantial computational saving enables real-time analysis with the ROM and is even more pronounced in multi-query problems, where the model must be executed for multiple inputs and system configurations. This ROM has high potential for accelerating various problems, such as uncertainty quantification, design, history matching, and well control optimization. It is also recommended that the proposed ROM be adopted for other real-world subsurface applications, including conventional and unconventional oil and gas production, hydraulic fracturing, and carbon storage.
Item Finding Specific Industrial Objects in Point Clouds using Machine Learning and Procedural Scene Generation (University of Waterloo, 2025-01-06) Lopez Morales, Daniel; Haas, Carl; Narasimhan, Sriram
In the era of Industry 4.0 and the rise of Digital Twins (DT), the demand for enriched point cloud data has grown significantly. Point clouds allow seamless integration into Building Information Modeling (BIM) workflows, offering deeper insights into structures and enhancing the value of documentation, analysis, and asset management processes.
However, several persistent challenges limit the current effectiveness of point cloud methods in industrial settings. The first major challenge is the difficulty of identifying specific objects within point clouds. Finding and labeling individual objects in a complex 3D environment is technically demanding and error-prone. Manually processing these point clouds to locate specific objects is labor-intensive, time-consuming, and susceptible to human error. In large-scale industrial environments, the complexity of layouts and the volume of data make these manual methods impractical for efficient and accurate results. The second major challenge lies in the scarcity of industrial point cloud datasets necessary for training machine learning-based segmentation networks. Automating point cloud enrichment through machine learning relies heavily on the availability of high-quality datasets specific to industrial applications. Unfortunately, comprehensive datasets of this kind are either unavailable or proprietary, creating a significant barrier to developing effective segmentation networks. Furthermore, the few existing datasets often lack flexibility, being limited to the areas that have been scanned. This rigidity, combined with the time-consuming process of manually segmenting data, slows the development and deployment of scalable machine-learning solutions for point cloud segmentation. These limitations highlight the need for more flexible and adaptive solutions to efficiently address object detection, asset tracking, and inventory management in dynamic industrial scenarios. This research addresses these challenges by developing open-access, class-weight-balanced datasets specifically designed for 3D point cloud segmentation in industrial environments. The datasets integrate synthetic data with real-world industrial scans, offering a solution to the problem of imbalanced class distributions, which often hinder the accuracy of neural networks.
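One common way to realize the class balancing mentioned above is inverse-frequency weighting of the per-point segmentation loss. The sketch below is a generic recipe under that assumption, with hypothetical class labels; it is not the exact scheme used in the thesis.

```python
import numpy as np

def class_weights(labels, n_classes):
    """Inverse-frequency weights for a per-point cross-entropy loss,
    normalised so the average weight across classes equals 1."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    counts[counts == 0] = 1.0  # guard: classes absent from this scan
    w = 1.0 / counts
    return w * n_classes / w.sum()

# Toy per-point labels: 90 "structure" points and 10 "valve" points.
labels = np.array([0] * 90 + [1] * 10)
weights = class_weights(labels, 2)
```

The rare class receives a proportionally larger weight, so a network cannot minimize the loss simply by predicting the dominant class everywhere.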
Two methodologies for creating synthetic datasets were developed: one with random object placement, and a second based on a procedural generation pipeline that includes rules for object placement and for generating tube structures for industrial elements, filling the scene with objects of varied geometric features to understand the different effects that make a dataset realistic. This procedural generation technique provides a flexible method for dataset creation that can be adapted for different objects, point cloud scales, point densities, and noise levels. The dataset improves the generalization capabilities of machine learning models, making them more robust in identifying and segmenting objects within industrial settings. The research thus presents a methodology for efficiently and accurately identifying specific objects in point cloud scenes, alongside the two methodologies for creating open-access industrial datasets designed to train neural networks for segmentation. The object-finding methodology is crucial for multiple applications, including object detection, pose estimation, and asset tracking. Traditional methods struggle with generalization, often failing to differentiate between unique objects and general classes. The proposed methodology for specific object finding utilizes a point transformer network for point cloud segmentation and a fully convolutional geometric features network to enhance geometric features using color. A key innovation in this process is applying a color-based iterative closest point (ICP) algorithm to the output of the fully convolutional geometric features network.
This enables precise matching of segmented objects with a point cloud template, ensuring accurate object identification.
Item Land-to-Water Linkages: Nutrients Legacies and Water Quality Across Anthropogenic Landscapes (University of Waterloo, 2025-01-06) Byrnes, Danyka; Basu, Nandita
An increasing population and the intensification of agriculture have driven rapid changes in land use and increases in excess nutrients in the environment. Globally, excess nutrients in inland and coastal waters have led to persistent issues of eutrophication, ecosystem degradation, hypoxia, and drinking water toxicity. Over the past few decades, we have seen policies set to mitigate the degradation in water quality. The existing paradigm of water quality management is based on decades of research finding a linear relationship between net nitrogen inputs to the landscape and stream nitrogen exports. For instance, in the U.S., working groups have spent approximately a trillion dollars in response to these nutrient problems, seeking to improve water quality by upgrading wastewater treatment plants and implementing nutrient management plans to decrease watershed nitrogen and phosphorus inputs. Despite concerted efforts, in many cases we have not seen marked improvements in water quality. In cases where water quality has improved, it is frequently after decades of nutrient management. The lack of, or delay in, water quality improvement suggests the importance of other drivers in modulating the relationship between nutrient inputs and watershed exports. Indeed, watershed nutrient loads are not just a function of current-year nitrogen inputs but can also depend on the history of inputs to the watershed. However, we still have little understanding of how nutrient inputs relate to exports, and of the extent to which accumulated stores of nitrogen and phosphorus influence this relationship.
The central theme of my research has been an exploration of the history of anthropogenic nutrient use and the relationship between nutrient inputs and the response in water quality. Specifically, I have focused on the role of current nutrient inputs versus historical nutrient use in impacting water quality at the watershed scale, as well as the various landscape and climate controls that can mediate responses to changes in management. My research objectives are to (1) develop a multi-decadal mass balance of nitrogen and phosphorus at the sub-watershed scale across the contiguous U.S. in order to investigate (2) the relationship between watershed nitrogen inputs and export and the drivers of changes in watershed nitrogen export, (3) the magnitude, spatial distribution, and drivers of nitrogen retention and legacy stores, and (4) the use and management of phosphorus in agricultural landscapes in the context of both food security and environmental health. I began by developing county-scale nitrogen and phosphorus surplus datasets, TREND-N and TREND-P, for the contiguous U.S., with surplus defined as the difference between anthropogenic inputs (fertilizer, manure, domestic inputs, biological nitrogen fixation, and atmospheric deposition) and non-hydrological export (crop and pasture uptake). In Chapter 2, I present updates to the previously published TREND-N county-scale nitrogen mass balance dataset, improving the crop and pasture uptake and livestock excretion methods. In Chapter 3, I develop a new county-scale phosphorus surplus dataset using similar methods. These datasets were then downscaled to a 250 m gridded-scale dataset, known as gTREND-Nitrogen and gTREND-Phosphorus, a step led by my collaborator Shuyu Chang. These novel datasets serve as the foundational data for the subsequent chapters. Next, in Chapter 4, I explored the relationship between net nitrogen inputs and nitrogen export for over 400 watersheds across the U.S.
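The surplus accounting described above reduces to a simple mass balance. The sketch below uses illustrative per-hectare numbers, not values from the TREND datasets.

```python
def nutrient_surplus(fertilizer, manure, fixation, deposition, domestic,
                     crop_uptake, pasture_uptake):
    """Surplus = anthropogenic inputs minus non-hydrological exports
    (all terms in kg per hectare per year)."""
    inputs = fertilizer + manure + fixation + deposition + domestic
    exports = crop_uptake + pasture_uptake
    return inputs - exports

# Hypothetical county: 95 kg/ha of inputs against 70 kg/ha of uptake.
surplus = nutrient_surplus(fertilizer=50.0, manure=20.0, fixation=15.0,
                           deposition=8.0, domestic=2.0,
                           crop_uptake=60.0, pasture_uptake=10.0)
```

A positive surplus is nutrient mass left on the landscape each year, available to be exported to streams, stored in legacy pools, or (for nitrogen) lost to denitrification.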
I used the newly developed nitrogen surplus dataset to understand how watershed-scale nitrogen surplus magnitudes and exports change over time and examine how the relationships are influenced by both natural and anthropogenic controls within watersheds. To achieve this, I used a set of 492 watersheds with nitrogen input and export data spanning from 1990 to 2017. We found that 284 watersheds had a significant (p<0.1) increasing or decreasing trend in both nitrogen surplus and nitrogen load. Of these watersheds, we identified 62 where both nitrogen surplus and export have been significantly increasing over the last two decades. These input-driven watersheds are characterized by high livestock density, agricultural area, and tile drainage. In contrast, nitrogen surplus and export have been decreasing in 127 "bright spot" watersheds, characterized by high population density and urban land use. Nitrogen surplus is also decreasing in 60 "transitioning" watersheds, but export is increasing as nitrogen surplus decreases. We argue that these watersheds are transitioning from agriculture to more urban areas, such that fertilizer inputs have decreased, but the higher nitrogen export is driven by legacy nitrogen stores. Finally, we found 35 watersheds demonstrating a delayed response, with nitrogen export decreasing despite an increase in nitrogen surplus. Climate appears to be the driver of response in these watersheds, with aridity likely driving lower nitrogen export, despite increasing inputs. The four typologies of nitrogen inputs and export relationships suggest that watersheds can act as filters and modulate the movement of nitrogen. Our results provide insights into the complex dynamics of nitrogen surplus and export relationships, as well as how the landscape, climate, and legacy nitrogen can influence these relationships. 
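The four typologies identified above can be expressed as a small decision rule over the signs of the two significant trends. Treating every remaining sign combination as "delayed response" is a simplifying assumption of this sketch.

```python
def watershed_typology(surplus_trend, export_trend):
    """Classify a watershed from the signs of its significant trends in
    nitrogen surplus and nitrogen export (positive = increasing)."""
    if surplus_trend > 0 and export_trend > 0:
        return "input-driven"
    if surplus_trend < 0 and export_trend < 0:
        return "bright spot"
    if surplus_trend < 0 and export_trend > 0:
        return "transitioning"
    return "delayed response"  # surplus increasing while export decreases
```

In practice each trend would first be tested for significance (the text uses p < 0.1) before a watershed is assigned a typology.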
In Chapter 4, I analyzed relationships between changes in nitrogen inputs and export to understand what drives changes in watershed export, finding that legacy stores may be modulating the watershed response to changing net nitrogen inputs. However, we have limited knowledge of the magnitude and spatial distribution of legacy stores across North America. Therefore, in Chapter 5, we quantified how much nitrogen retention, defined here as the mass of nitrogen stored in legacy pools plus the nitrogen lost to denitrification, has accumulated in watersheds, and where it may be accumulating. To achieve this, we used existing datasets and machine learning algorithms to calculate the mass of ‘retained’ nitrogen in the landscape, defined as the nitrogen stored in the soil organic nitrogen pool, the groundwater pool, or lost through denitrification. Specifically, we built a random forest modeling framework trained on the watersheds’ nitrogen surplus and its components, loads, and characteristics to predict nitrogen loads at the HUC8 scale across the U.S. We then calculated retention for each HUC8 watershed as the difference between nitrogen surplus and predicted loads, and found that nitrogen retention is highest in the Midwestern and Eastern U.S. because of low exports in regions with high agricultural inputs or high population density. Next, we used a data-driven approach to estimate legacy stores by allocating the retained nitrogen mass into its legacy pools. We partitioned nitrogen retention in the Upper Mississippi region HUC8 watersheds into the mass stored in the groundwater pool, the mass stored in soil organic nitrogen pools, and the mass lost to denitrification. We found that, on average, 42% of the mass is stored in the soil organic nitrogen pool, 16.5% is stored in the groundwater pool, and 40% is lost to denitrification. While these two chapters focused on nitrogen, in my final chapter we shifted to explore phosphorus use in agricultural systems.
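The retention calculation and its allocation into legacy pools can be sketched as below, using the average Upper Mississippi shares reported above (42% soil organic N, 16.5% groundwater, 40% denitrification); the surplus and load values are illustrative.

```python
def nitrogen_retention(surplus, predicted_load):
    """Retention: the part of the N surplus not exported as stream load."""
    return surplus - predicted_load

def partition_retention(retention, f_soil=0.42, f_gw=0.165, f_denit=0.40):
    """Allocate retained N into legacy pools using average shares; the
    fractions are regional means, so they need not sum exactly to one."""
    return {
        "soil_organic_N": retention * f_soil,
        "groundwater": retention * f_gw,
        "denitrification": retention * f_denit,
    }

pools = partition_retention(nitrogen_retention(surplus=120.0, predicted_load=20.0))
```

Only the soil and groundwater fractions represent true legacy stores that can later leak back to streams; the denitrified fraction is permanently removed from the watershed.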
In my final chapter, we used the new gridded phosphorus surplus and components dataset to explore current and historical agricultural phosphorus use and management in landscapes within the context of both food security and environmental health. To characterize the extent of phosphorus depletion and excess, we employed indicators such as annual and cumulative phosphorus surplus and phosphorus use efficiency (PUE). We found that the evolution of agricultural phosphorus management is shaped by changing fertilizer management, the proliferation of concentrated animal operations, climate, and the landscape's memory of past phosphorus use. We further integrated both cumulative phosphorus surplus and PUE into a framework to quantify phosphorus sustainability in intensively managed landscapes. We found that in the 1980s, much of the agricultural land was undergoing ‘intensification,’ with positive and increasing cumulative stores because phosphorus inputs exceeded crop uptake (PUE < 1). By 2017, 29.5% of the agricultural land was undergoing ‘recovery’ and had positive cumulative phosphorus stores that were being depleted through improved phosphorus management (PUE > 1). However, 70% of the agricultural area in the U.S. is still undergoing ‘intensification,’ particularly in areas that receive more of their inputs from livestock manure, pointing to the need to treat manure as a resource instead of the current approach of treating it as a waste product. By using these novel datasets, we have been able to explore nutrient use across space and time and its impact on food security and environmental outcomes. I have made significant contributions towards expanding the discussion of nutrient use and fate, understanding the magnitude and distribution of cumulative net nutrient input stores in the landscape, and the ways in which intrinsic watershed properties, climate, land management, and historical nutrient use can modulate the relationship between inputs and export.
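The sustainability framework described above combines cumulative surplus with PUE. The sketch below encodes the two labelled regimes from the text; the third branch is an assumed catch-all for land without an accumulated store, not a category named in the thesis.

```python
def phosphorus_trajectory(cumulative_surplus, pue):
    """Framework from the text: land with positive, growing legacy stores
    and PUE < 1 is 'intensification'; land drawing down positive stores
    with PUE > 1 is 'recovery'."""
    if cumulative_surplus > 0 and pue < 1.0:
        return "intensification"
    if cumulative_surplus > 0 and pue >= 1.0:
        return "recovery"
    return "no legacy store"  # assumed label for the remaining case
```

Here PUE is crop phosphorus uptake divided by phosphorus inputs, so PUE > 1 means the crop is mining previously accumulated soil phosphorus.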
Overall, my findings underscore the importance of nuanced, place-based, and context-dependent nutrient management strategies, with a focus on manure management, to address the diverse challenges of different agricultural systems and prevent unintended environmental consequences.
Item Erosion Risk Modelling: An Improved Screening Tool for Urban Watershed Management (University of Waterloo, 2025-01-02) Thirimanne, Hettige Dona Thiruni Dulara; MacVicar, Bruce
Urbanization alters hydrological responses by increasing impervious surfaces, leading to elevated runoff, altered streamflow regimes, and heightened flood risks (Paul & Meyer, 2001; Walsh et al., 2012). The impact of land-use changes is a crucial consideration for urban watershed management (Bochis-Micu & Pitt, 2005; Walsh et al., 2012). SPINpy 2 is a screening tool that utilizes digital elevation model (DEM)-based methods of stream power mapping from Vocal Ferencevic and Ashmore (2012) and integrates land-use data into its modelling framework. This study presents the development of two of SPINpy 2's Land Use (LU) based analyses: i) the No Stormwater Management (NSM) Scenario and ii) the Engineered Stormwater Management Pond (ESM) Scenario. Incorporating Nature-based Solutions (NbS), such as stormwater management ponds, into SPINpy 2 allows it to model measures that alleviate the adverse effects of urbanization by promoting infiltration and stabilizing stream banks. This feature is particularly valuable for urban watersheds at high erosion risk, where NbS can help reduce the effects of impervious surfaces, lower flood risks, and stabilize channels. SPINpy 2 facilitates the modelling of NbS, assessing their effects on stream power, discharge, and erosion sensitivity, and providing a decision-support tool for urban watershed managers. It helps evaluate the long-term benefits of NbS in reducing runoff and enhancing ecosystem resilience.
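The DEM-based mapping rests on the total stream power relation omega = rho * g * Q * S. A minimal sketch follows; discharge is supplied directly here, whereas a DEM workflow such as SPINpy 2's would estimate it from drainage area.

```python
def total_stream_power(discharge, slope, rho=1000.0, g=9.81):
    """Total stream power per unit channel length, in watts per metre:
    omega = rho * g * Q * S, with Q in m^3/s and S the channel slope."""
    return rho * g * discharge * slope

# A 10 m^3/s reach on a 0.1% slope.
omega = total_stream_power(discharge=10.0, slope=0.001)
```

Because stream power scales with both discharge and slope, urbanization-driven increases in peak flow raise erosion risk most sharply on steep reaches, which is what a screening map of this quantity highlights.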
By modelling the effects of reducing peak flows on erosion risk, SPINpy 2 simulates how stormwater management measures can mitigate erosion and offers insights into effective strategies for enhancing channel stability. The model was applied to urbanized watersheds such as Cooksville Creek to assess its utility in high-risk environments. The simulation results provide insights into the potential of NbS to reduce flood risks and improve channel stability. The application of SPINpy 2 demonstrated that incorporating NbS significantly mitigates the impacts of urbanization. Comparisons between scenarios with and without NbS interventions highlighted the importance of infiltration-based solutions in stabilizing stream channels and reducing sediment transport. SPINpy 2 also provided spatially explicit maps showing locations of high erosion risk and areas where NbS would be most effective. The findings underscore the potential of SPINpy 2 as a decision-support tool for urban watershed managers. By simulating the impacts of land-use changes and NbS interventions, SPINpy 2 offers a proactive approach to addressing hydrological and geomorphological challenges posed by urbanization. The ability to model diverse NbS scenarios enhances the tool's applicability in high-risk watersheds, such as Cooksville Creek, where impervious surfaces dominate and flood risks are heightened. The results demonstrate that NbS can substantially reduce runoff and stabilize channels, promoting ecosystem resilience and sustainable development. Overall, SPINpy 2 serves as a screening tool for decision-makers, enabling them to simulate and evaluate the impacts of land-use changes and NbS interventions, promoting sustainable development and environmental stewardship in urban environments. Its comprehensive approach allows watershed managers to tackle the unique challenges posed by urbanization and supports the development of cost-effective and environmentally sound infrastructure and policies. 
This proactive, integrative approach positions SPINpy 2 as a key resource for managing urban watersheds.Item Development of Tools for Infrastructure Asset Management Cross-Asset Trade-off Analysis and Universal Performance Measure for Public Agencies(University of Waterloo, 2024-12-13) Posavljak, Milos; Tighe, SusanWhile the modern monetary system, the limited liability corporation, and modern public infrastructure trace their beginnings to past centuries, use of the term "asset" when referring to public infrastructure started only at the end of the 20th century. With advances in computer technology, New Zealand and Australian public agencies were early adopters and the first to benefit from mass data availability on road infrastructure. Soon after, North America and the rest of the developed world followed in adopting what today is commonly referred to as infrastructure asset management practices. Infrastructure's purpose remains the same as before: to support economic growth and societal accessibility. However, the new perspective of viewing it as an asset, rather than an almost naturally occurring, passive societal commodity, has brought forth demand for increased transparency and evidence-based decision-making. Appropriate timing and action relative to an asset's performance and societal growth demands require a complex socio(org)-technical system to maximize the asset's benefits to society and minimize the risk of it turning into a societal liability. The thesis presents an original approach to improving an organization's decision-making capabilities by operationalizing asset management processes within vertical and horizontal public agency structures, one which uses organizational behaviour theory and operational analysis, while leveraging civil engineering industry experience and engineering risk and reliability knowledge, to develop corporate-data-driven asset performance measures.
A novel horizontal information flow is mapped and introduced as the operationalizing asset management framework. It is used as a guide to shine a light on the asset management process complexities at the tactical and operational levels of organizations. A new operational perspective on the definition of asset management is argued, one which sees it as an equal partnership between engineering and financial professionals reinforced with administrative policies and procedures. The effects of the division of labour are reflected in the academic fields of engineering as well. Intra-departmental specializations within civil engineering include transportation, structural, and hydrology, with further branching within each. With respect to infrastructure asset management this is a necessity, as public agencies typically have a portfolio of varying assets ranging from roads, water distribution, sewer management, facilities, and parks, to name a few, for which different knowledge and skills are necessary in order to provide the expert-level management sought and claimed by managing agencies. As such, subject-matter experts along with finance professionals make up the core team, which functions within a compartmentalized structure of administrative policies and procedures. The two degrees of compartmentalization, one in the academic and the other in the corporate setting, have yielded organizationally siloed asset management processes competing for a single source of funding: public monies. Given that all assets are equally important in providing a singular infrastructure system, as experienced by the citizenry, the questions of which asset is a priority over another, and why, arise when there is a lack of funding for all within a particular time span. The research originally argued the need to use the inherent objectivity of monetary value to provide an objective method of cross-asset trade-off analysis to answer the "which".
Organizational theory and engineering experience are used to create new value from the untapped potential of existing organizational processes, creating one objective, level playing field from which evidence-based decisions can rapidly be made and cataloged in answering the "why". The research journey identified a significant bottleneck with the cross-asset item. Specifically, "field inspection of information" showed that the forecasting tools available to municipalities, within single asset classes, do not satisfy minimal scientific standards. Subsequently, it is argued that this is a naturally occurring limitation of the sample space, rather than a "continuous improvement item". The research found that forecasting infrastructure spending needs according to the scientifically unreliable Age-Based approach overestimates them by 335%. This is compared to the scientifically reliable Consumer-Based approach, which is based upon engineering risk and reliability.Item Advancing the efficient development and deployment of hydrologic and hydraulic models for large scale and real-time applications(University of Waterloo, 2024-12-12) Chlumsky, Robert; Craig, James R.; Tolson, BryanHydrologic and hydraulic models are tools typically applied, respectively, to simulate streamflow and to determine the depths and locations of flooding. While these tools are crucially important for predicting flood events, they require niche expertise and a high degree of effort to be developed and deployed effectively. This thesis aims to streamline the level of effort required to develop and deploy quality models within the typical workflows that support the simulation of flood events. First, the selection of optimal model structures within hydrologic models is addressed.
The blended hydrologic model, which allows the selection of the mathematical equations representing hydrologic-cycle processes to occur as part of a calibration exercise, is tested and shown to benefit both model performance and scientific utility. Second, the blended model is improved through an extensive empirical experiment that delivers a high-performing blended version 2 model. This model achieves high performance scores across the contiguous USA without a need to adjust the model structure, greatly reducing the time-consuming practical step of manually selecting the optimal model structure for a given watershed. Finally, a novel method for hydraulic modelling and flood mapping is introduced. Improved geospatial methods are paired with a one-dimensional hydraulic model solver and then benchmarked against conventional methods. The result is shown to provide improved accuracy of flood maps while maintaining a computational runtime that is suitable for large-scale and real-time applications. Overall, it is anticipated that this research benefits the development of crucial tools for predicting and simulating flooding.Item Increasing Nutrient Circularity and Reducing Water Pollution Through Anaerobic Digesters(University of Waterloo, 2024-12-11) Wallace, Nettie; Basu, Nandita; Mai, JulianeWhile the intensification of agricultural practices over the last few decades has increased livestock and crop production, it has also led to unintended environmental consequences such as harmful algal blooms, drinking water contamination, and increased emissions of greenhouse gases. Much of the increase in crop and livestock production can be attributed to a shift towards specialized agriculture, which has resulted in the decoupling and spatial separation of livestock and crop systems. The spatial separation of the two systems has disrupted the circular flow of nutrients in agricultural systems.
Relinking the livestock-nutrient economy has been identified as a strategy to reduce the overall environmental burden of the sector. The use of anaerobic digesters to manage livestock manure presents a promising pathway towards the recoupling of crop and livestock systems. Anaerobic digesters, also referred to as biodigesters, utilize anaerobic decomposition to transform organic wastes into valuable by-products. During the digestion process, methane, a potent greenhouse gas emitted in traditional manure management, is captured to produce biogas, a source of renewable energy. The process also produces digestate, a nutrient-rich effluent that can be applied to cropland as a fertilizer source. The nutrient-dense nature of digestate and the potential revenue from biogas production enable it to be economically transported over a greater distance than untreated manure, thereby providing a pathway to enhance nutrient circularity in spatially separated livestock and crop systems. However, there is concern that digestate use can result in greater nitrogen leaching losses than manure. The work presented in this thesis estimates the nitrogen leaching losses from corn and soybean cropland across 263 regions in Ontario and assesses the water quality implications of manure and digestate land application. To do this, a DeNitrification-DeComposition (DNDC) model was developed for each region. The models were calibrated individually to observed crop yields from 2011 and 2021. The calibrated models were able to capture the general magnitude and annual variation in reported corn and soybean yields across the study region. The median error between simulated and observed crop yields across all regions was 5.8% (mean absolute percent error). Corn crops were provided with synthetic fertilizer at an optimal rate, as determined by calibration.
The results of the calibration showed that observed crop yields across the study region could be met through the application of 69% of the nitrogen fertilizer purchased in Ontario in 2021. This finding suggests that corn nitrogen requirements are met through the application of purchased synthetic fertilizer while manure is applied to cropland in addition to crop needs. Next, I used livestock population data to estimate the quantity of manure nitrogen produced in each region. Using the calibrated DNDC models, I simulated a number of scenarios that explored various manure and digestate distribution configurations across the landscape. The results of this work show that when digestate was substituted for manure and subject to the same transportation constraints, the amount of nitrogen lost by leaching across the study region increased by 6% (from 46.77 to 49.42 kt N/yr). However, when the digestate distribution configuration was altered to reflect re-distribution from a centralized biodigester and its ability to be transported over a greater distance, the amount of nitrogen lost through leaching across the study region was reduced by 7% (to 43.42 kt N/yr). These findings show that when digestate was used as a direct substitute for manure and applied at equal rates based on total nitrogen content, it contributed to increased nitrogen leaching losses. However, when the distribution of the digestate was considered at a regional scale and the system dynamics of the biodigester were accounted for, the use of digestate reduced the total nitrogen leaching losses across the study region. This research shows that biodigesters can provide benefit to water quality when considered at a regional scale.Item Improving Short-term Streamflow Forecasting with Wavelet Transforms: A Large Sample Evaluation(University of Waterloo, 2024-12-11) You, John; Quilty, JohnAccurate streamflow forecasting is instrumental to water management, including flood preparation and drought monitoring.
The past two decades have seen a steady rise in the application of machine learning (ML) models to streamflow forecasting, given their ability to model highly nonlinear relationships, their moderate data requirements, and their accuracy. Successful application of ML to streamflow forecasting requires the modeler to select appropriate features (e.g., precipitation and air temperature) to use in an ML model. Since the original features can be insufficient, adding new features derived from existing ones, also known as feature engineering, is often needed to improve the accuracy of streamflow forecasts. Wavelet transforms (WTs) have become promising feature engineering methods for streamflow forecasting since they can decompose time series (e.g., precipitation) into multiple sub-series (coefficients). Each set of coefficients captures changes across different timescales (e.g., monthly, seasonal), allowing the variance of the original time series to be associated with specific timescales. The different coefficients extracted by WTs are then used as features in an ML model, often improving the accuracy of streamflow forecasts compared to using the original features alone. Furthermore, different wavelet filters can capture different behaviours of a given time series (e.g., trends, short-term transients), making some more suitable for particular applications (e.g., streamflow forecasting) than others. This leaves the modeler with the task of finding the right wavelet filter(s) for their application. Despite many existing studies coupling WTs and ML for streamflow forecasting, none have explored a large, hydro-climatically diverse sample of catchments to evaluate the impact of WTs on streamflow forecasting performance. Due to the small number of catchments included in existing streamflow forecasting studies using WT-based ML models, it is not clear how the performance of the adopted models generalizes to catchments with different characteristics.
In addition, approximately 90% of studies using WTs for hydrological forecasting misuse WTs. The most common issue is not taking proper precautions to address look-ahead bias (i.e., the use of 'future data'), invalidating the forecasts for real-world applications. Thus, this thesis seeks to address the abovementioned gaps in the literature by undertaking a large-sample case study involving 620 catchments across the contiguous United States, using best practices for WT-based streamflow forecasting at the daily timescale. The WT-generated features are used in long short-term memory networks (LSTMs) to produce streamflow forecasts. LSTMs are selected due to their exceptional streamflow forecasting performance compared to other commonly adopted models, as noted in the literature. In total, three LSTM configurations are considered: baseline LSTM (B-LSTM), wavelet LSTM (W-LSTM), and grid search LSTM (GS-LSTM). In the first configuration, a baseline LSTM model is developed for each catchment. In the second configuration, 33 different wavelet filters are used to engineer features based on several hydro-meteorological features (e.g., precipitation and air temperature), resulting in 33 different W-LSTM models for each catchment. For each catchment, the 33 different W-LSTM models are compared to the B-LSTM model to evaluate the impact of WTs on streamflow forecasting performance. In the third configuration, the B-LSTM models undergo hyper-parameter selection using grid search. This setup is used to test whether grid search has a greater impact on streamflow forecasting performance than WTs. All configurations are applied to one- and three-day-ahead streamflow forecasting. For the one-day-ahead forecast horizon, W-LSTM improves upon B-LSTM performance in 97% of catchments and improves upon GS-LSTM in 50% of catchments. For a forecast horizon of three days ahead, W-LSTM improves upon B-LSTM performance in 97% of catchments and improves upon GS-LSTM in 41% of catchments.
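The catchment-by-catchment comparisons above are typically scored with the Nash-Sutcliffe Efficiency (NSE), the metric whose out-of-sample value also serves as a skill threshold in this evaluation. A minimal sketch of the standard formula (illustrative, not the thesis code):

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 matches the skill
    of simply predicting the mean of the observations."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

For an out-of-sample (OOS) NSE, the metric is computed on a held-out evaluation period, and a floor such as OOS NSE > 0.4 screens out catchments where the baseline model has little skill to begin with.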
When considering only catchments where the B-LSTM model meets a minimum performance threshold (i.e., out-of-sample Nash-Sutcliffe Efficiency, OOS NSE, greater than 0.4), then for a forecast horizon of one day ahead, W-LSTM improves upon GS-LSTM in 60% of catchments, while for a forecast horizon of three days ahead, W-LSTM improves upon GS-LSTM in 70% of catchments. Certain wavelet filters perform better than others. For instance, the W-LSTM using the Morris Minimum Bandwidth 4.2 filter outperforms B-LSTM in over 50% of catchments (where B-LSTM has an OOS NSE greater than 0.4) for both forecast horizons. Overall, WTs provide the greatest improvement to forecasting performance (for both one- and three-day-ahead forecast horizons) in the D (snowy climates) and B (dry climates) Köppen classification regions. This finding presents a clear direction for researchers and practitioners when deciding whether WTs will benefit their streamflow forecasting models in their regions. This thesis is the first to use a large sample of catchments to demonstrate that WTs are useful for improving ML-based streamflow forecasts. These models can be used for reservoir management, early flood warning systems, irrigation, navigation, and many other water management applications. Future work can explore the combined optimization of wavelet filters and LSTM hyper-parameters to further improve upon the performance of the models reported in this thesis.
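One practical safeguard against the look-ahead bias noted earlier is to engineer wavelet features strictly from past samples. The sketch below is a simple causal, à trous-style Haar decomposition; it is illustrative only, and not the 33 wavelet filters or the boundary-handling procedure used in this thesis:

```python
import numpy as np

def causal_haar_features(x, levels=3):
    """Decompose a series into detail + approximation features using only
    past values, so the feature row at time t never depends on x[t+1:]."""
    approx = np.asarray(x, dtype=float).copy()
    feats = []
    for j in range(levels):
        lag = 2 ** j
        smooth = approx.copy()
        smooth[lag:] = 0.5 * (approx[lag:] + approx[:-lag])  # average with a past sample only
        detail = approx - smooth                             # band-limited fluctuation
        feats.append(detail)
        approx = smooth
    feats.append(approx)                # final low-frequency approximation
    return np.stack(feats, axis=1)      # shape (len(x), levels + 1)
```

Because each level averages the current value with one strictly earlier value, the features at time t are unaffected by later observations, and the additive reconstruction (the rows sum back to the original series) makes that easy to verify.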
Another worthwhile endeavor is to focus on modifications to the LSTM, such as quantile loss functions, Monte Carlo dropout connections, conformal prediction, and/or Bayesian methods, to generate probabilistic forecasts enabling risk-based solutions to water management problems.Item Finite Element Models for Multiscale Theory of Porous Media(University of Waterloo, 2024-12-03) Campos, Bruna; Gracie, RobertTheories of porous media aim to represent a combination of a solid matrix and a connected pore space, which may be occupied by fluids. Porous media applications are found in structural materials, geomechanics, biological tissues, and chemical filtration processes, amongst others. Early developments are attributed to Biot who, based on experimental data, derived a set of governing equations for consolidation problems. Concomitantly, other theories were based on mixture theory and the volume-averaging concept, establishing a clear connection between the micro and macro scales. The work presented by de la Cruz and Spanos (dCS) is an example of a volume-averaging theory. The Biot (BT) theory can be seen as a simplified version of the dCS formulation, since the former assumes a unique energy potential, restricts solid-fluid deformations to reciprocal interactions, and does not account for fluid viscous dissipation terms in the fluid stress definition. Due to the complexity of the governing equations of the dCS theory, corresponding numerical models have been scarce in the literature. To fill this gap, this research focuses on the development of novel finite element (FE) models for the dCS theory. The main applications are consolidation and wave propagation problems in water-saturated rock formations. Results are compared to the BT formulation, and the circumstances that can lead to discrepancies between these theories are studied. First, a novel FE method is developed for quasi-static solid deformations and transient pore fluid flow, where dynamic effects are neglected.
The governing equations are written in terms of solid displacement, fluid pressure, and porosity. Fully implicit time integration and a mixed-element formulation are employed to ensure stability. The convergence rate of the dCS FE model is shown to be optimal in a one-dimensional consolidation problem, noting that rates depend on how significant the solid-fluid coupling terms are relative to the uncoupled terms. Two-dimensional examples further attest to the robustness of the implementation and are shown to reproduce BT model results as a special case. Non-reciprocal solid-fluid interactions, inherent to the dCS theory, lead to significant differences depending on the properties of the porous media (e.g., permeability) and problem-specific constraints. Extending the transient formulation to include inertia (acceleration) terms, a three-field dCS FE model for dynamic porous media is presented, now formulated in terms of solid displacement, fluid pressure, and fluid displacement. Due to fluid viscous dissipation terms, wave propagation in the dCS theory yields an additional rotational wave compared to the BT theory. Besides the introduction of non-reciprocal solid-fluid interactions, the dCS model further accounts for a dimensionless parameter related to the macroscopic shear modulus. Space and time convergence rates are demonstrated in a one-dimensional case. A dimensionless analysis performed in the dCS framework revealed negligible differences between the BT and dCS models except when assuming high fluid viscosity. Domains with small characteristic lengths resulted in BT and dCS damping terms of the same order of magnitude. One- and two-dimensional examples showed that the dCS non-reciprocal interactions and the macroscopic shear modulus parameter are responsible for modifying wave patterns. A two-dimensional injection well simulation with water and slickwater showed higher wave attenuation for the latter.
Following the derivation of the dCS dynamic formulation, wave propagation phenomena in porous media are analyzed. The dCS slow S wave, which is essential in representing fluid vorticity and shear motion at high frequencies, is studied. Differences between dCS and BT results appear in the high-frequency range. Solid-fluid non-reciprocal interactions and changes in the macroscopic solid shear modulus lead to distinct P wave patterns. The influence of permeability, porosity, and dynamic viscosity is evaluated, showing that wave patterns are generally most affected in the ultrasonic frequency range. At low frequencies, BT and dCS yield the same results for saturated rocks, except when the slow P wave is non-dissipative. While the BT theory incorporates a correction factor to reproduce fluid behavior at high frequency, the dCS model naturally accounts for this effect due to the complete fluid stress tensor. High-frequency results from both theories, nonetheless, are discrepant. Since the dCS equations can be written in terms of different main variables, these can be chosen according to the problem setup. In this sense, a three-field dCS formulation is written in terms of solid displacement, fluid pressure, and relative fluid velocity. The verification study is in agreement with BT results, highlighting how the addition of fluid viscous dissipation terms does not influence load bearing. However, these terms are essential in representing fluid vorticity motion at high frequencies and in wave reflection/transmission problems. Optimal convergence rates for a one-dimensional example are obtained for same-order linear elements. Two-dimensional examples with sandstone and shale layers show how waves are transmitted and reflected at the domain interface in the low- and high-frequency regimes. The research development and findings reported in this thesis consist of novel FE models to represent the dCS porous media theory.
The results show how the dCS formulation is able not only to recover BT results but also to circumvent gaps in the BT theory. The dCS FE framework presented herein is also a foundation for future studies in the area. Examples are the expansion of wave propagation studies with a reduced number of assumptions, the combination of an inertial fracture flow model with a porous media representation of fractured rocks, the simulation of nonlinear solid matrix behavior, and the development of a dCS FE model for inertia-driven flow in porous media.Item Semi-Analytical Framework for Thermo-Mechanical Analysis of Energy Piles in Elastic and Elastoplastic Soils(University of Waterloo, 2024-10-29) Paul, Abhisek; Basu, DipanjanEnergy piles, or geothermal piles, are used to reduce the energy demand of a building by assisting in its heating and cooling as required. The advantage of energy piles over other ground heat exchangers is that piles are an integrated part of the foundation of a building, carrying the superstructure load. The same piles can be used to extract/inject heat from/to the ground, and that heat can be used in the cooling/heating of the building, thus reducing the building's energy demand for heating and cooling. Energy piles are subjected to mechanical load that comes from the superstructure as well as thermal load (temperature change) caused by the heat exchange operation. The combined mechanical and thermal load changes the behavior of the pile foundation. Because the length of the pile is much greater than its diameter, the temperature change affects the axial behavior of the pile rather than its lateral response. The settlement of an energy pile is different from that of a conventional pile foundation, as heating or cooling of the pile causes extension in some parts of the pile and compression in others.
In the available literature, the axial response of an energy pile under mechanical and thermal loads, in terms of vertical settlement, strain, and stress in the pile, has been calculated by modeling the soil with equivalent linear and nonlinear springs to represent pile-soil interaction. The representation of soil as springs does not take into account the effect of three-dimensional pile-structure interaction. Analysis of the energy pile considering the three-dimensional pile-structure interaction has been done in the literature using numerical methods, which are computationally expensive. Apart from that, the thermo-mechanical behavior of soil needs to be considered in the energy pile analysis because of the temperature change of the pile and soil and the heat exchange between the pile and the soil. In this context, the thermo-mechanical soil constitutive model should satisfy the laws of thermodynamics. The analysis of the energy pile with a thermodynamically acceptable thermo-mechanical soil constitutive model is lacking in the available literature. In this thesis, a continuum-based semi-analytical framework for the analysis of the energy pile that takes into account the three-dimensional pile-structure interaction is proposed. First, the semi-analytical framework for an energy pile that is embedded in multi-layered soil and subjected to mechanical axial load and thermal load is developed using the variational principle of mechanics by minimizing the potential energy of the pile-soil continuum, where both the pile material and the soil are considered to behave as linear elastic materials. In the next part of the thesis, a semi-analytical framework for the same energy pile is developed where the soil is modeled as an elastoplastic material.
The analysis framework for the energy pile in elastoplastic soil is developed from the laws of thermodynamics with an energy potential and a dissipation function, where the plastic behavior of soil is taken into account through the dissipation function. The derived analytical framework is used for the elastoplastic soil response with the Drucker-Prager constitutive model. The results from the present analysis are verified against available experimental and numerical results in the literature. The Drucker-Prager constitutive model does not take into account the effect of temperature on the stress-strain response of the soil. This constitutive model is therefore most suitable for modeling the thermo-mechanical behavior of sand, as the effect of temperature on the response of sand is not significant. Therefore, energy piles in sand can be analyzed using the present framework with the Drucker-Prager constitutive model. An energy pile in clay needs to be represented with a soil constitutive model that can capture the effect of temperature on the thermo-mechanical response of clay. In the third part of the thesis, the present analytical framework for an energy pile in elastoplastic soil is used to analyze the pile in clay with a thermo-mechanical constitutive model that takes into account the change in the mechanical response of clay due to temperature change. The thermo-mechanical model used in the analysis was developed using the hyperplasticity formalism, which satisfies the laws of thermodynamics. In all cases, the present analytical framework predicts the response of the energy pile under different loads (mechanical and thermal), in terms of vertical displacement and stress, with acceptable accuracy (with <10% difference between the results from the present analysis and finite element analyses/field tests) and in less computational time.
The present analysis runs approximately 10 and 5 times faster than axisymmetric finite element analysis for the elastic and elastoplastic cases, respectively. In the final part of the thesis, the stresses developed in an energy pile under mechanical and thermal loads are examined for different soil properties. A parametric study is conducted on the developed stresses in an energy pile in single- and two-layer soils under multiple loading conditions. A correlation between the applied mechanical and thermal loads is established, and under these loading conditions, the effects of soil layering and material properties on the developed stresses are observed. The conditions under which a tensile zone develops in an energy pile, and its extent, are identified from the parametric study.Item Rethinking Infrastructure Deconstruction Through Reality Data Capture and Interactive Simulations(University of Waterloo, 2024-10-22) Earle, Gabriel; Haas, Carl; Narasimhan, SriramThe demand for careful infrastructure deconstruction and disassembly increases every day, as the market for building material reuse grows, and the reality of construction landfill waste weighs on the environment. Mid-century nuclear power plants are one example of infrastructure reaching its life expectancy and requiring massive deconstruction initiatives. These projects span multiple decades and bear massive costs, making effective planning a requirement. However, despite active research efforts in infrastructure lifecycle stages such as design, construction, inspection, and maintenance, deconstruction remains relatively unexplored. Meanwhile, applicable technologies like autonomous robotics, virtual reality headsets, and 3D simulation software are more capable and cost-effective than ever. In this research, the use of reality data capture and virtual reality simulations as tools for planning deconstruction projects is studied.
Novel planning workflows using these technologies are designed, implemented, and tested for validation. The completion of this research yields the contribution of three novel planning methodologies. They are 1) a methodology for cutting and packing planning, 2) a methodology for building material reuse planning, and 3) a methodology for automated critical path schedule generation. Several scientific problems are addressed in support of developing and validating the proposed methodologies. While the methodologies each address a different workflow within deconstruction planning, they build upon each other in terms of the scientific problems that are addressed, with the final methodology bearing the most significant and novel contributions. In the methodology for cutting and packing, the topics of creating immersive virtual reality simulations from reality data as well as conducting human-computer collaborative cutting and packing planning based on reality data are addressed. An optimal reality data processing approach for deconstruction planning simulations is detailed based on a quantitative comparison of reality data capture methods as well as qualitative observations. A simulation environment with rich feedback based on the processed reality data is designed to enable human-computer collaborative cutting and packing planning. These developments build upon prior work with similar capabilities that did not incorporate reality data. In the methodology for building material reuse planning, the problem of simultaneously optimizing deconstruction planning and reuse-based architectural design is studied. A gamified virtual reality simulation is developed and presented to address the underconstrained nature of this problem, which enables users to create a high-level initial solution. A workshop with human participants is held, and quantitative metrics from questionnaires as well as qualitative feedback on the simulation are collected.
The feedback is analyzed in order to support future work in refining the methodology. In the methodology for automated critical path schedule generation, the most significant contributions of the thesis are presented: the problem of efficiently and automatically generating accurate critical path schedules based on a virtual reality simulation run is addressed. In this methodology, a series of novel algorithms are designed to deduce the requisite information to produce a critical path schedule that reflects a user's actions in a virtual reality environment. The algorithms support 1) detecting the occurrence of construction-centric actions within the virtual reality environment, 2) estimating the corresponding real-world durations of detected actions, and 3) deducing the precedence relationships of detected actions. Using this information, an automated approach is presented for assembling a critical path schedule that is enhanced with rich metadata about the planning process and that is not resource-constrained.

Item Removal of microplastics during drinking water treatment: Linking theory to practice to advance risk management (University of Waterloo, 2024-10-22) Gomes, Alice; Emelko, Monica B.; Anderson, William B.

Microplastics (MPs) have emerged in the past decade as widespread contaminants that are harmful to human and ecosystem health. While their removal from water may be similar to that of other particulate contaminants, characterizing that removal is complicated because MPs can undergo weathering, photolysis, and microbial degradation in the natural environment, resulting in the presence of functional groups (e.g., carbonyl, hydroxyl) on their surfaces, which may affect their removal during drinking water treatment.
Given that studies using seeded polystyrene microspheres/MPs as surrogates for oocysts have shown good (but sometimes variable) removals through conventional drinking water treatment composed of coagulation, flocculation, and sedimentation (CFS) followed by filtration, MPs are likely to be well removed in optimized conventional drinking water treatment plants. While many studies have focused on the removal of larger (>50 µm) microplastics, investigations of the removal of smaller (<10 µm) microplastics by drinking water treatment processes have been limited largely to case studies in which the foundational mechanisms necessary for maximizing treatment performance have only been superficially investigated, if at all. To address this gap, this study focused on whether MPs removal by conventional chemical pretreatment (i.e., coagulation, flocculation, and sedimentation) with alum aligns with the removal of other particles, including Cryptosporidium oocysts, for which particle destabilization is essential for removal. The study aimed to advance knowledge through three main objectives: (1) characterize MPs removal by CFS with different particle destabilization mechanisms and compare it to that of other important particulate contaminants (i.e., Cryptosporidium spp. oocysts), (2) evaluate the effect of particle size on MPs removal by CFS, and (3) assess the influence of weathering on MPs removal by CFS. To evaluate MPs removal by chemical pretreatment reliant on (1) adsorption and charge neutralization and (2) enmeshment in precipitate (i.e., sweep flocculation) particle destabilization mechanisms, bench-scale investigations of alum-based CFS (i.e., jar tests) were conducted with synthetic water using pristine and weathered PS microplastics of 1, 5, and 10 µm diameter. Several synthetic raw water matrices were explored to identify scenarios in which both particle destabilization mechanisms were clearly discerned.
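Treatment performance in jar tests of this kind is typically reported as the log removal of the spiked particles. A minimal calculation is sketched below; the doses and particle counts are hypothetical placeholders for illustration, not data from this study:

```python
import math

def log_removal(c0, c):
    """Log10 reduction of particle concentration across treatment."""
    return math.log10(c0 / c)

# hypothetical jar-test data: alum doses (mg/L) and matching
# post-sedimentation particle counts (particles/mL)
doses_mg_L = [0.0, 4.9, 9.7, 19.4, 38.8]
settled = [9.5e4, 2.1e4, 4.0e3, 9.0e2, 1.5e3]
c0 = 1.0e5  # spiked influent count, particles/mL

removals = [round(log_removal(c0, c), 2) for c in settled]
```

Plotting such log removals against dose (alongside zeta potential) is one way the two destabilization regimes, charge neutralization at lower doses and sweep flocculation at higher doses, can be distinguished.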
The final synthetic raw water was composed of deionized water spiked with sodium carbonate and kaolin (70 NTU) at pH 7.0. To demonstrate that MPs removal by CFS aligns with coagulation theory, sixteen alum doses between 0 and 38.8 mg/L were used to evaluate MPs removal by CFS. Turbidity reduction was also evaluated, and zeta potential was analyzed to identify maximal particle destabilization. MPs removal increased with particle size, aligning with gravitational settling theory. MPs removal during CFS with optimized particle destabilization was generally consistent with reported removals of other particles, including Cryptosporidium spp. oocysts, during optimized chemical pretreatment, thereby suggesting that similar approaches for risk management may be relevant to MPs. Notably, differences in pristine and weathered MPs removal by CFS were not significant under the conditions investigated, thereby suggesting that weathering does not affect MPs removal when particle destabilization by coagulant addition is optimized. This study bridges the gap between the theories of conventional drinking water treatment and concerns regarding the potential passage of MPs through drinking water treatment plants, demonstrating that MPs can be removed in the same manner as other colloidal particles using conventional chemical pretreatment and, by well-recognized theory-based extension, physico-chemical filtration.

Item Phosphorus Legacies and Water Quality Trajectories Across Canada (University of Waterloo, 2024-10-15) Malik, Lamisa; Basu, Nandita

Phosphorus (P) pollution in freshwater is a critical environmental issue, primarily driven by agricultural runoff, wastewater discharge, and industrial effluents. Across Canada, lakes such as Lake Erie and Lake Winnipeg experience severe and persistent algal blooms driven mainly by excess phosphorus loading.
Excessive phosphorus loading leads to eutrophication, which causes harmful algal blooms and hypoxia that disrupt aquatic life, reduce biodiversity, and impair water quality, making water unsafe for human consumption and recreation. Despite policies aimed at reducing phosphorus loading, such as improved farming practices and wastewater treatment upgrades, we have not seen a marked decrease in riverine loads. Phosphorus management goals often fall short due to the persistence of legacies (phosphorus that has accumulated in soils and sediments over decades of agricultural applications), which continue to release phosphorus into water bodies for decades after its initial application. Despite recognition of the existence and significant regional and global impact of legacy P on water quality and aquatic ecosystems, our understanding of the magnitude and spatial distribution of these P stores remains limited. Understanding legacy P stores and their contributors is crucial for efficiently managing water quality, highlighting the importance of studying these factors to develop more effective and sustainable management strategies. The central theme of this thesis is the exploration of the phosphorus legacy across various landscapes. My work has three objectives. First, I explore phosphorus legacies and water quality trajectories across the Lake Erie basin. Second, I quantify various legacy P stores and evaluate their current and future impacts on water quality. Third, I quantify phosphorus accumulation across Canada. In the first objective, I develop a comprehensive phosphorus budget for the Lake Erie Basin, a 60,000 km² transboundary region between the U.S. and Canada, by collecting, harmonizing, and synthesizing agricultural, climate, and population data. The phosphorus inputs included fertilizer, livestock manure, human waste, detergents, and atmospheric deposition, while outputs focused on crop and pasture uptake, covering a historical period from 1930 to 2016.
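The bookkeeping behind such a budget reduces to summing inputs, subtracting non-hydrological exports, and accumulating the annual surplus into a legacy store. An illustrative sketch with hypothetical county-level values (kt P/yr), not figures from this thesis:

```python
from itertools import accumulate

# hypothetical annual P budget terms for one county, kt P/yr
inputs = {"fertilizer": 12.0, "manure": 8.5, "human_waste": 1.2,
          "detergents": 0.4, "atmospheric_deposition": 0.3}
exports = {"crop_uptake": 14.0, "pasture_uptake": 2.1}  # non-hydrological exports

# annual P surplus = inputs minus non-hydrological exports
surplus = sum(inputs.values()) - sum(exports.values())

# legacy P: cumulative surplus over a multi-year record (hypothetical series)
annual_surplus = [surplus, 5.8, 4.9, 5.1]
legacy_store = list(accumulate(annual_surplus))  # kt P accumulated to each year
```

Repeating this per county and per year, over gridded input data, is what yields the kind of spatial surplus maps the thesis describes.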
The budget allowed us to calculate excess phosphorus as P surplus, defined as the difference between P inputs and non-hydrological exports. A random forest model was then employed to describe in-stream phosphorus export as a function of cumulative P surplus and streamflow. The results indicated a significant accumulation of legacy P in the watersheds of the Lake Erie Basin. Notably, higher legacy P accumulation corresponded strongly with greater manure inputs (R² = 0.46, p < 0.05), whereas fertilizer inputs showed a weaker relationship. For the second objective, I model the long-term nutrient dynamics of phosphorus across 45 watersheds in the Lake Erie basin using the ELEMeNT-P model. This aimed to quantify legacy phosphorus accumulation and depletion across different landscape compartments, including soils, landfills, reservoirs, and riparian zones, and to assess the potential for phosphorus load reductions under future management scenarios. The model sought to identify key legacy phosphorus pools and explore the feasibility of achieving significant reductions in phosphorus loading, with results indicating that 40% reductions are attainable only through aggressive management efforts. For the last objective, I develop a high-resolution phosphorus budget dataset for Canada, spanning the years 1961 to 2021, at both county and 250-meter spatial scales. This dataset aimed to capture phosphorus inputs from fertilizers, manure, and domestic waste, along with phosphorus uptake by crops and pastureland, across all ten provinces. With this dataset, I aim to better understand the state and progress of phosphorus management across space and time. The results reveal significant variation in P surplus attributable to differences in land use and management practices.
The highest surpluses were observed in southern Ontario and Quebec, at approximately 50 kilotons in 2021, contributing to an accumulation of over 2 teratons of phosphorus over the past 60 years.

Item Non-Stationary Stochastic Modelling of Climate Hazards for Risk and Reliability Assessment (University of Waterloo, 2024-10-01) Bhadra, Rituraj; Pandey, Mahesh D.

This thesis presents methodologies for studying the effects of climate change on natural hazards. The thesis is structured around three key aspects: first, the stochastic modelling of non-stationary hazards; second, the modelling of concrete degradation in a changing climate; and third, the economic risk evaluation associated with these non-stationary hazards. The initial focus of this thesis is on applying a non-stationary stochastic process to model the increasing frequency and intensity of climate-driven hazards in Canada. The early chapters provide an overview of the effects of climate change in Canada. To understand the trends and projections of various climatic variables, such as temperature, precipitation, and wind speed, recent studies and reports from Environment and Climate Change Canada, along with other relevant literature, are examined, and analyses are performed on model outputs from the Coupled Model Intercomparison Project Phase 6 (CMIP6). The overview highlights the growing occurrence and severity of climate hazards, including hurricanes, droughts, wildfires, and heatwaves, as supported by other independent studies. In light of these analyses, this study demonstrates the inadequacy of traditional stationary models for future predictions and risk assessments, thereby advocating for a shift to non-stationary frameworks. The thesis provides a robust theoretical foundation for non-stationary hazard modelling using stochastic process models. Traditional extreme value analysis (EVA) typically assumes stationarity.
However, this assumption is invalidated by gradual changes in the frequency and intensity of climate-driven hazards. This research proposes methodologies to model climatic hazards using a non-stationary stochastic shock process, specifically the non-homogeneous Poisson process (NHPP), to derive maximum value distributions over any finite period, not just annual maxima. These models account for changes in the underlying processes over time, providing a more accurate representation of climate-driven hazards by incorporating time-varying parameters that reflect the dynamic nature of climatic extremes. By integrating stochasticity and temporal variability, these stochastic process models offer a robust framework for predicting the future occurrence and intensity of climate-driven hazards. The proposed methods are demonstrated through the estimation of maximum value distributions for precipitation events using the Coupled Model Intercomparison Project Phase 6 (CMIP6) multi-model ensemble data, with an analysis of inter-model variability. Furthermore, the thesis presents a case study on modelling heatwaves to illustrate the application of these models to climatic data, particularly for events where the asymptotic assumptions of extreme value theory do not hold. Climate change will not only influence the loads and hazards on infrastructure, but will also exacerbate the degradation processes of structures due to harsher climatic conditions such as higher temperatures and increased humidity. To model these effects on the degradation of concrete bridges, simulations were conducted using physico-chemical concrete degradation processes. Based on the simulation results, non-stationary Markov transition probabilities were estimated for several key locations in Canada under various Shared Socioeconomic Pathway (SSP) scenarios. The final chapter of the thesis addresses the economic aspects of climate-driven hazards.
It includes derivations to estimate various statistics of damage costs, such as the mean, variance, moments, and distribution, resulting from a non-stationary hazard process. Analytical results were derived for several cases, such as identically and non-identically distributed loss magnitudes, and with or without discounting of losses to account for the effect of time when evaluating net present losses. This analysis offers valuable information for policy makers, engineers, and scientists involved in climate adaptation and mitigation efforts.

Item A Stochastic Framework for Urban Flood Hazard Assessment: Integrating SWMM and HEC-RAS Models to Address Watershed and Climate Uncertainties (University of Waterloo, 2024-09-25) Abedin, Sayed Joinal Hossain; MacVicar, Bruce

Urbanization significantly alters natural hydrological processes, leading to increased flood risks in urban areas. The potential damages caused by flooding in urban areas are widely recognized, making it crucial for urban residents to be well informed about flood risks to mitigate potential losses. Flood maps serve as essential tools in this regard, providing valuable information that aids effective planning, risk assessment, and decision-making. Despite floods being the most common natural disasters in Canada, many Canadians still lack access to high-quality, up-to-date flood maps. The occurrence of recent major flood events across the country has sparked renewed interest among government officials and stakeholders in launching new flood mapping initiatives. These projects are critical for enhancing flood risk management across communities. Traditional flood hazard mapping methods, based on deterministic approaches, often fail to account for the complexities and uncertainties inherent in urban flood dynamics, especially under rapidly changing climate conditions. Uncertainty affects every stage of flood mapping, influencing accuracy and reliability.
Recognizing this, recent studies advocate for stochastic approaches that explicitly incorporate these uncertainties. However, there is a lack of industry-standard tools that allow for convenient and comprehensive analysis of uncertainty, making it challenging to routinely incorporate uncertainty into flood hazard assessments in practice. This underscores the need for a robust framework to model flood uncertainty. While various models have been proposed to address this uncertainty, many remain conceptual or lack the necessary automation. Although no model is "perfect", the Storm Water Management Model (SWMM) and the Hydrologic Engineering Center's River Analysis System (HEC-RAS) are widely used for urban hydrology and channel hydraulics modeling, respectively, due to their robust physics-based approaches. Both SWMM and HEC-RAS have been enhanced with commercial and open-source extensions, built on algorithms written in various programming languages, to improve their utility, particularly for automating workflows to handle complex urban flood scenarios. While SWMM has more robust extensions, most available HEC-RAS extensions are designed for one-dimensional (1D) steady-state models, which lack the complexity needed for accurate urban flood modeling. The release of HEC-RAS 6.0, which allows for two-dimensional (2D) unsteady flow modeling and incorporates structures like bridges and weirs, marks a significant advancement for urban flood modeling. The current research was motivated by the perceived benefits of designing such extensions for automating workflows in recent versions of SWMM and HEC-RAS, as well as automating the coupling of these two models in a stochastic framework to facilitate the integration of uncertainty into existing flood hazard mapping workflows. This thesis introduces the SWMM-RASpy framework, a novel automated stochastic tool built using the open-source Python programming language.
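Latin Hypercube Sampling, the stratified scheme the framework uses to draw uncertain model inputs, can be sketched in a few lines of NumPy. This is an illustrative standalone implementation, not code from SWMM-RASpy, and the parameter bounds (Manning's roughness for channel and floodplain) are hypothetical:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=None):
    """LHS: one jittered sample per equal-probability stratum per dimension,
    with strata independently shuffled across dimensions."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    # jittered points in stratum i of [0, 1), i = 0..n-1, for each dimension
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])  # decorrelate strata across dims
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# hypothetical uncertain inputs: Manning's n for channel and floodplain
samples = latin_hypercube(100, [(0.025, 0.045), (0.05, 0.15)], seed=42)
```

Each row of `samples` would then parameterize one coupled SWMM/HEC-RAS run, and the ensemble of resulting flood maps supports the entropy and confidence mapping described below.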
SWMM-RASpy integrates SWMM's detailed urban hydrologic capabilities, such as dual-drainage modeling, with HEC-RAS's 2D unsteady hydraulic modeling, coupled with stochastic simulations through Latin Hypercube Sampling (LHS) to analyze the uncertainty in flood hazard mapping. The framework was demonstrated on the Cooksville Creek watershed, a highly urbanized area in Mississauga, Ontario, known for its susceptibility to flooding. An entropy map was successfully produced for the case study site, which better reflects the uncertainty of flooding and could be used to develop tailored flood planning and preparedness strategies for different zones within the site. This thesis also presents a detailed application of the SWMM-RASpy framework to assess flood hazards, with a specific focus on topography-based hydraulic uncertainties in the watershed, particularly surface roughness variability, which affects pedestrian safety during flood events. The study highlights that traditional hazard models, which focus mainly on residential buildings, do not adequately account for the risks to pedestrians, who account for a significant share of fatalities in flood events, especially in densely populated urban areas with high mobility. Three flood hazard metrics were developed and used to evaluate the flood risks to pedestrians given the uncertainty surrounding surface roughness: FHM1, based on inundation depth; FHM2, combining depth and velocity; and FHM3, incorporating depth, velocity, and duration. Key findings from the assessment indicate that surface roughness significantly affects pedestrian hazard estimation across the floodplain, making it a critical factor in flood hazard management. The FHM2 metric, which incorporates depth and velocity, was found to be highly sensitive to roughness variation, potentially leading to the misclassification of hazardous zones as safe and vice versa.
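The three metrics build on one another, from depth alone to depth-velocity to depth-velocity-duration. The simplified forms below are illustrative only; the thesis's exact formulations, weights, and thresholds are not reproduced here, and the duration normalization is an assumption:

```python
def fhm1(depth_m):
    """Depth-only hazard indicator (m)."""
    return depth_m

def fhm2(depth_m, velocity_ms):
    """Depth-velocity product (m^2/s), a common pedestrian-instability proxy."""
    return depth_m * velocity_ms

def fhm3(depth_m, velocity_ms, duration_h, t_ref_h=24.0):
    """Illustrative extension weighting the depth-velocity product by how long
    the cell stays flooded; t_ref_h is an assumed normalizing duration."""
    return fhm2(depth_m, velocity_ms) * (1.0 + duration_h / t_ref_h)

# a shallow but fast cell can rank as hazardous as a deep, slow one
cell_a = fhm2(0.3, 2.0)   # 0.3 m deep, 2.0 m/s
cell_b = fhm2(1.2, 0.5)   # 1.2 m deep, 0.5 m/s
```

Because velocity enters FHM2 multiplicatively, any roughness-driven spread in simulated velocities propagates directly into the metric, which is consistent with the sensitivity reported above.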
The inclusion of velocity in the hazard assessment, while improving accuracy, also increased variability, emphasizing the need for a balanced approach in flood risk evaluations. In contrast, the FHM3 metric, which includes flooding duration, showed minimal sensitivity to surface roughness uncertainty. The research also suggests that confidence maps, produced as part of the analysis and accounting for the estimated uncertainties in the hazard metrics propagated from surface roughness, can offer a more reliable alternative to traditional deterministic hazard maps. Lastly, the study emphasizes the importance of combining grid-level and zonal-level analyses for a more comprehensive understanding of flood hazards at different scales, thereby supporting more robust flood risk assessments. This thesis extends the application of the SWMM-RASpy framework to assess the impacts of climate change on flood hazards within the Cooksville Creek watershed. It examines how projected changes in rainfall intensity from Global Climate Models (GCMs) affect flood risks, particularly for urban buildings, as well as the importance of incorporating uncertainties from these projections into flood hazard assessments. The same hazard metrics used for pedestrian hazard assessment, FHM1, FHM2, and FHM3, were used to evaluate building hazards. The study predicts a significant increase in flood hazards within the watershed, with a substantial expansion of inundation areas affecting up to 40% more buildings when uncertainties are considered. The analysis shows that without considering uncertainties, FHM1 and FHM3 predict a higher number of damaged buildings than FHM2, with FHM1 predicting the highest number of affected buildings. This suggests that relying solely on FHM1 to estimate building hazards may be sufficient in similar climate change scenarios, although further investigations are needed.
However, when uncertainties are included, FHM2 shows a greater increase in the number of buildings at risk compared to FHM1 and FHM3, due to the larger uncertainty associated with velocity versus depth and duration. This underscores the need to incorporate uncertainty into flood hazard assessments to ensure a more comprehensive understanding of potential future damages. Overall, this study has made significant contributions to the field of urban flood hazard assessment by developing a robust method for incorporating and analyzing uncertainties, thereby supporting more effective flood management and resilience planning. Future research should apply the SWMM-RASpy framework to other watersheds and investigate additional hydrologic and hydraulic variables to further improve flood risk assessments.

Item Atmospheric Emissions Associated with the Use of Biogas in Ontario (University of Waterloo, 2024-09-24) Bindas, Savannah; Saari, Rebecca

This study aims to quantify the atmospheric emissions associated with an energy-from-waste transition in Ontario. Specifically, it explores the emissions from using livestock and food waste to produce biogas as a source of renewable natural gas. Biogas is one potential "closed loop" solution to waste management; however, transitioning to anaerobic digestion as a waste management strategy may introduce additional emissions. Each step along the anaerobic digestion process, from the transportation of feedstock to the storage of post-processed digestate, can release both greenhouse gas (GHG) and air pollutant emissions. Here, we quantified the net effects of a biogas transition on emissions in Ontario. We evaluated scenarios using up to 100% of available waste feedstocks in the province and compared these emissions to conventional manure management and landfilling.
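Net-GHG comparisons of this kind reduce to summing CO2-equivalents using global warming potentials. A sketch using IPCC AR5 GWP100 factors (AR6 values differ slightly) and entirely hypothetical emission totals, not the study's inventory:

```python
# GWP100 factors, tonnes CO2e per tonne of gas (IPCC AR5 values)
GWP = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

def co2e(emissions_t):
    """Convert a dict of per-gas emissions (tonnes) to tonnes CO2-equivalent."""
    return sum(mass * GWP[gas] for gas, mass in emissions_t.items())

# hypothetical annual totals, tonnes per gas
baseline = {"CO2": 1000.0, "CH4": 900.0, "N2O": 15.0}  # conventional manure mgmt
biogas = {"CO2": 1400.0, "CH4": 120.0, "N2O": 4.0}     # anaerobic digestion route

net_change = co2e(biogas) - co2e(baseline)  # negative => net GHG savings
```

Because CH4 carries a large GWP, even a modest reduction in methane from manure storage dominates such a comparison, which mirrors the manure-driven GHG pattern reported below.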
We found that emissions from current manure management strategies dominated GHG emissions and, as more manure was utilized in the biogas system, there was a drastic decrease in corresponding emissions of CH4, N2O, and CO2, all of which are potent GHGs. All scenarios showed emission reductions compared to the traditional practice. By the 75% biogas scenario, GHG emissions associated with the biogas process were balanced by the potential offsets from avoided synthetic fertilizer production, leading to negligible net GHG emissions from this system. In the 100% scenario, we observed that SOx, VOC, NH3, and PM2.5 emissions were increasingly offset by savings from avoided natural gas and synthetic fertilizer production. The important exceptions were the significant NH3 and PM2.5 emissions associated with conventional manure management. Because of this, we did not see net emissions savings from the biogas scenarios until the 100% run. These results highlight the atmospheric impacts of conventional waste management and demonstrate the potential for anaerobic digestion to mitigate these emissions.

Item Desorption of Per- and Polyfluoroalkyl Substances from Powdered and Colloidal Activated Carbon (University of Waterloo, 2024-09-24) Uramowski, Andrew; Pham, Anh; Thomson, Neil

Per- and polyfluoroalkyl substances (PFAS) are a group of synthetic chemicals with unique heat-resistant properties, leading to their usage in aqueous film-forming foams (AFFF) for fighting fuel-based fires at airports and military bases. The application of AFFF at these facilities has led to the contamination of groundwater with PFAS, which can threaten the safety of nearby drinking water, agricultural, and industrial supply wells, as well as downgradient surface water bodies. Drinking water supplies contaminated with PFAS can result in human exposure, which has been linked to developmental, immunological, endocrine, and cardiovascular disorders, and cancer.
Since PFAS are resistant to biodegradation and traditional destruction technologies, current remediation efforts are focused on immobilizing PFAS using adsorptive processes that sequester PFAS from the aqueous phase, concentrating them on an adsorbent medium. The injection of activated carbon (AC) particulate amendments into the subsurface has been suggested as a promising technique for the in situ immobilization of PFAS plumes and the protection of downgradient receptors. To predict the long-term performance of these AC barriers, a thorough understanding of adsorption and desorption processes is required. The objective of this research was to investigate the desorption behaviour of three PFAS (perfluorooctane sulfonic acid (PFOS), perfluorooctanoic acid (PFOA), and perfluorobutane sulfonic acid (PFBS)) from a powdered AC (PAC) and a colloidal AC (CAC). The research focused specifically on assessing whether desorption of PFOS, PFOA, and PFBS from these two AC materials was hysteretic. PFAS adsorption and desorption kinetic experiments using PAC or CAC were completed to determine the contact time required to reach near-equilibrium conditions. Adsorption experiments with PAC utilized the bottle-point method, and desorption experiments used a sequential desorption methodology in which the aqueous phase of desorption reactors was replaced with an adsorbate-free solution. Adsorption of PFAS by CAC was also investigated using the bottle-point method; however, desorption experiments were conducted using two different methods: (1) a sub-sampling methodology, where aliquots of slurry from a well-mixed adsorption isotherm bottle were diluted to initiate desorption, and (2) a whole-bottle dilution method, where the entire contents of adsorption reactors were diluted to larger volumes to initiate desorption. The results indicated that for experiments utilizing PAC, adsorption and desorption equilibrium was established for all compounds within 72 h.
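Hysteresis in such experiments is typically judged by whether desorption points fall outside a prediction band around the fitted adsorption isotherm. A simplified sketch using a Freundlich fit on hypothetical data; the study's actual isotherm model and band construction may differ:

```python
import numpy as np

# hypothetical adsorption isotherm: aqueous C (ug/L) vs. sorbed q (ug/mg)
C = np.array([1.0, 5.0, 20.0, 80.0, 300.0])
q = np.array([2.1, 6.0, 14.5, 35.0, 80.0])

# Freundlich model q = Kf * C**(1/n), linearized: log q = log Kf + (1/n) log C
n_inv, log_Kf = np.polyfit(np.log10(C), np.log10(q), 1)
Kf = 10.0 ** log_Kf

def q_fit(c):
    """Sorbed concentration predicted by the fitted Freundlich isotherm."""
    return Kf * c ** n_inv

# crude 95%-style half-band on the log-residuals of the adsorption fit
resid = np.log10(q) - np.log10(q_fit(C))
half_band = 2.0 * resid.std(ddof=2)

def hysteretic(c_des, q_des):
    """Flag a desorption point lying outside the adsorption prediction band."""
    return abs(np.log10(q_des) - np.log10(q_fit(c_des))) > half_band
```

Desorption points falling inside the band, as for PAC below, indicate reversible sorption; points consistently above it would suggest hysteresis (or, as found for the CAC sub-sampling data, a sampling artifact).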
Desorption of PFOS, PFOA, and PFBS from PAC did not demonstrate hysteresis, since all desorption data were contained within the 95% adsorption prediction band. For experiments using CAC, adsorption equilibrium was established by 120 h for all compounds, while desorption equilibrium was established by 120 h for PFOS and 72 h for PFOA and PFBS. Desorption data obtained using the sub-sampling method for PFOS, PFOA, or PFBS and CAC fell below and outside of the 95% adsorption prediction band. It was concluded that unrepresentative sub-sampling of the CAC slurry occurred in the desorption step of this method. When the whole-bottle dilution method was adopted for PFOS, desorption data were within the 95% adsorption prediction band, indicating no evidence of hysteresis under the experimental conditions used. Since the mass removed at each desorption step was extremely small compared to the sorbed fraction, desorption data did not reach aqueous equilibrium concentrations near the method detection limit. The absence of hysteretic behaviour in this research demonstrates that sorption processes are reversible over the concentration ranges explored. This reversibility implies that PFAS sorbed within AC barriers can be released when the groundwater concentration decreases, either due to temporal heterogeneity in concentration profiles or the depletion of the source zone.

Item Spatio-Temporal Analysis of Roundabout Traffic Crashes in the Region of Waterloo (University of Waterloo, 2024-09-18) Miyake, Ryoto; Fu, Liping

Roundabouts are increasingly implemented as safer alternatives to stop-controlled and signalised intersections, with the goal of reducing the severity and frequency of traffic crashes. The safety performance of roundabouts, however, is influenced by their geometric design, and the effects of geometric design variables on safety can vary across countries and regions. Despite this, there is limited research on these safety impacts within the Canadian context.
This study addresses this gap by using data from the Region of Waterloo, Ontario, to develop a safety performance function (SPF) using a negative binomial regression model. The model identified significant geometric design variables affecting collision frequency, such as inscribed circle diameter (ICD), entry angle, entry lane width, and number of entry lanes. The findings suggest that the safety impacts of geometric design in Canada may differ from those observed in other countries, highlighting the need for region-specific SPFs. Additionally, in areas where roundabouts are relatively new, the safety performance of roundabouts may be expected to fluctuate over time and across locations. However, spatio-temporal variations in roundabout safety have not been extensively studied. To fill this gap, a spatio-temporal analysis was conducted using Bayesian hierarchical models to capture spatial and temporal variations in collision frequency. The results reveal significant spatial autocorrelation, while no strong temporal patterns or novelty effect were detected within the scope of the data and modelling approach used in this analysis. This research advances the understanding of how geometric design and spatio-temporal factors influence roundabout safety, providing important insights for the planning and design of roundabouts. Moreover, it is pioneering in its application of spatio-temporal interaction effects in road safety analysis, demonstrating the potential of this approach for future studies.
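A negative binomial SPF of the log-linear form described here relates expected crash frequency to traffic exposure and geometric covariates. The sketch below evaluates such a function with hypothetical coefficients and an assumed covariate set; these are not the Region of Waterloo model's fitted values:

```python
import math

# hypothetical SPF coefficients (illustrative, not fitted values):
# ln(mu) = b0 + b1*ln(AADT) + b2*ICD + b3*entry_angle
b0, b1, b2, b3 = -7.2, 0.65, 0.012, 0.008
alpha = 0.4  # NB overdispersion parameter: Var = mu + alpha * mu**2

def predicted_crashes(aadt, icd_m, entry_angle_deg):
    """Expected annual crash frequency at a roundabout from the SPF."""
    return math.exp(b0 + b1 * math.log(aadt) + b2 * icd_m + b3 * entry_angle_deg)

mu = predicted_crashes(aadt=18000, icd_m=40.0, entry_angle_deg=25.0)
variance = mu + alpha * mu**2  # overdispersed relative to Poisson (Var = mu)
```

The negative binomial form is preferred over Poisson for crash counts precisely because its variance exceeds the mean, matching the overdispersion typically seen in collision data.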