Civil and Environmental Engineering
Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/9906
This is the collection for the University of Waterloo's Department of Civil and Environmental Engineering.
Research outputs are organized by type (e.g., Master Thesis, Article, Conference Paper).
Waterloo faculty, students, and staff can contact us or visit the UWSpace guide to learn more about depositing their research.
Recent Submissions
Item: Development of Tools for Infrastructure Asset Management Cross-Asset Trade-off Analysis and Universal Performance Measure for Public Agencies (University of Waterloo, 2024-12-13). Author: Posavljak, Milos

While the modern monetary system, the limited liability corporation, and modern public infrastructure trace their beginnings to past centuries, use of the term "asset" when referring to public infrastructure started only at the end of the 20th century. With advances in computer technology, New Zealand and Australian public agencies were early adopters and the first to benefit from mass data availability on road infrastructure. Soon after, North America and the rest of the developed world followed in adopting what is today commonly referred to as infrastructure asset management practice. Infrastructure's purpose remains the same as before: to support economic growth and societal accessibility. However, the new perspective of viewing it as an asset, rather than an almost naturally occurring, passive societal commodity, has brought demand for increased transparency and evidence-based decision-making. Appropriately timed action relative to an asset's performance and societal growth demands requires a complex socio(org)-technical system to maximize the asset's benefits to society and minimize the risk of it turning into a societal liability. This thesis presents an original approach to improving an organization's decision-making capabilities by operationalizing asset management processes within vertical and horizontal public agency structures. The approach uses organizational behaviour theory and operational analysis, while leveraging civil engineering industry experience and engineering risk and reliability knowledge, to develop corporate-data-driven asset performance measures. A novel horizontal information flow is mapped and introduced as the operationalizing asset management framework, and is used as a guide to shine a light on asset management process complexities at the tactical and operational levels of organizations. A new operational perspective on the definition of asset management is argued, one which sees it as an equal partnership between engineering and financial professionals reinforced with administrative policies and procedures. The effects of the division of labour are reflected in the academic fields of engineering as well. Intra-departmental specializations within civil engineering include transportation, structural, and hydrology, with further branching within each. With respect to infrastructure asset management this is a necessity, as public agencies typically have a portfolio of varying assets ranging from roads, water distribution, sewer management, and facilities to parks, for which different knowledge and skills are necessary to provide the expert-level management sought and claimed by managing agencies. As such, subject matter experts along with finance professionals make up the core team, which functions within a compartmentalized structure of administrative policies and procedures. These two degrees of compartmentalization, one in the academic and the other in the corporate setting, have yielded organizationally siloed asset management processes competing for a single source of funding: public monies. Provided that all assets are equally important in providing a singular infrastructure system, as experienced by the citizenry, the questions of which one is a priority over another, and why, arise when there is a lack of funding for all within a particular time span.
The research originally argued the need to use the inherent objectivity of monetary value to provide an objective method of cross-asset trade-off analysis to answer the "which", while organizational theory and engineering experience are used to create new value from the untapped potential of existing organizational processes, creating one objective level playing field from which evidence-based decisions can rapidly be made and catalogued in answering the "why". The research journey identified a significant bottleneck with the cross-asset item. Specifically, "field inspection of information" showed that the forecasting tools available to municipalities, within single asset classes, do not satisfy minimal scientific standards. Subsequently, it is argued that this is a naturally occurring limitation of the sample space, rather than a "continuous improvement item". The research found that forecasting infrastructure spending needs according to the scientifically unreliable Age-Based approach overestimates them by 335% compared to the scientifically reliable Consumer-Based approach, which is grounded in engineering risk and reliability.

Item: Increasing Nutrient Circularity and Reducing Water Pollution Through Anaerobic Digesters (University of Waterloo, 2024-12-11). Author: Wallace, Nettie

While the intensification of agricultural practices over the last few decades has increased livestock and crop production, it has also led to unintended environmental consequences such as harmful algal blooms, drinking water contamination, and increased emissions of greenhouse gases. Much of the increase in crop and livestock production can be attributed to a shift towards specialized agriculture, which has resulted in the decoupling and spatial separation of livestock and crop systems. This spatial separation has disrupted the circular flow of nutrients in agricultural systems, and relinking the livestock-nutrient economy has been identified as a strategy to reduce the overall environmental burden of the sector. The use of anaerobic digesters to manage livestock manure presents a promising pathway towards the recoupling of crop and livestock systems. Anaerobic digesters, also referred to as biodigesters, use anaerobic decomposition to transform organic wastes into valuable by-products. During the digestion process, methane, a potent greenhouse gas emitted in traditional manure management, is captured to produce biogas, a source of renewable energy. The process also produces digestate, a nutrient-rich effluent that can be applied to cropland as a fertilizer source. The nutrient-dense nature of digestate and the potential revenue from biogas production enable it to be economically transported over a greater distance than untreated manure, thereby providing a pathway to enhance nutrient circularity in spatially separated livestock and crop systems. However, there is concern that digestate use can result in greater nitrogen leaching losses than manure. The work presented in this thesis estimates the nitrogen leaching losses from corn and soybean cropland across 263 regions in Ontario and assesses the water quality implications of manure and digestate land application. To do this, a DeNitrification-DeComposition (DNDC) model was developed for each region, and the models were calibrated individually to observed crop yields from 2011 to 2021.
The calibrated models were able to capture the general magnitude and annual variation of reported corn and soybean yields across the study region, with a median error between simulated and observed crop yields of 5.8% (mean absolute percent error). Corn crops were provided with synthetic fertilizer at an optimal rate, as determined by calibration. The calibration showed that observed crop yields across the study region could be met through the application of 69% of the nitrogen fertilizer purchased in Ontario in 2021. This finding suggests that corn nitrogen requirements are met through the application of purchased synthetic fertilizer, while manure is applied to cropland in addition to crop needs. Next, I used livestock population data to estimate the quantity of manure nitrogen produced in each region. Using the calibrated DNDC models, I simulated a number of scenarios exploring various manure and digestate distribution configurations across the landscape. The results show that when digestate was substituted for manure and subjected to the same transportation constraints, the amount of nitrogen lost by leaching across the study region increased by 6% (from 46.77 to 49.42 kt N/yr). However, when the digestate distribution configuration was altered to reflect re-distribution from a centralized biodigester and its ability to be transported over a greater distance, the nitrogen lost through leaching across the study region was reduced by 7% (to 43.42 kt N/yr). These findings show that when digestate was used as a direct substitute for manure and applied at equal rates based on total nitrogen content, it contributed to increased nitrogen leaching losses. However, when the distribution of digestate was considered at a regional scale and the system dynamics of the biodigester were accounted for, the use of digestate reduced total nitrogen leaching losses across the study region. This research shows that biodigesters can benefit water quality when considered at a regional scale.
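To make the reported figures concrete, the short Python sketch below (illustrative only, not the thesis code; the function names and the MAPE formulation are assumptions) reproduces the two headline calculations: the mean absolute percent error used to score yield calibration, and the relative change in leaching between scenarios using the values reported above.

    import numpy as np

    def mape(observed, simulated):
        """Mean absolute percent error, as used to score yield calibration."""
        observed, simulated = np.asarray(observed), np.asarray(simulated)
        return 100.0 * np.mean(np.abs((simulated - observed) / observed))

    def percent_change(baseline, scenario):
        """Relative change in leaching vs. the manure-only baseline."""
        return 100.0 * (scenario - baseline) / baseline

    baseline_kt = 46.77      # manure-only leaching, kt N/yr (reported)
    substitute_kt = 49.42    # digestate substituted 1:1 for manure (reported)
    centralized_kt = 43.42   # redistribution from a centralized biodigester (reported)

    print(f"substitution:   {percent_change(baseline_kt, substitute_kt):+.1f}%")   # about +6%
    print(f"redistribution: {percent_change(baseline_kt, centralized_kt):+.1f}%")  # about -7%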
Item: Finite Element Models for Multiscale Theory of Porous Media (University of Waterloo, 2024-12-03). Author: Campos, Bruna

Theories of porous media aim to represent a combination of a solid matrix and a connected pore space, which may be occupied by fluids. Porous media applications are found in structural materials, geomechanics, biological tissues, and chemical filtration processes, amongst others. Early developments are attributed to Biot who, based on experimental data, derived a set of governing equations for consolidation problems. Concomitantly, other theories were based on mixture theory and the volume-averaging concept, establishing a clear connection between the micro and macro scales. The work presented by de la Cruz and Spanos (dCS) is an example of a volume-averaging theory. The Biot (BT) theory can be seen as a simplified version of the dCS formulation, since the former assumes a unique energy potential, restrains solid-fluid deformations to reciprocal interactions, and does not account for fluid viscous dissipation terms in the fluid stress definition. Due to the complexity of the dCS governing equations, corresponding numerical models are scarce in the literature. To fill this gap, this research focuses on the development of novel finite element (FE) models for the dCS theory. The main applications are consolidation and wave propagation problems in water-saturated rock formations. Results are compared to the BT formulation, and the circumstances that can lead to discrepancies between these theories are studied. First, a novel FE method is developed for quasi-static solid deformations and transient pore fluid flow, where dynamic effects are neglected. The governing equations are written in terms of solid displacement, fluid pressure, and porosity. Fully implicit time integration and a mixed-element formulation are employed to ensure stability. The convergence rate of the dCS FE model is shown to be optimal in a one-dimensional consolidation problem, considering that rates depend on how significant the solid-fluid coupling terms are compared to the uncoupled terms. Two-dimensional examples further attest to the robustness of the implementation and are shown to reproduce BT model results as a special case. Non-reciprocal solid-fluid interactions, inherent to the dCS theory, lead to significant differences depending on the properties of the porous medium (e.g., permeability) and problem-specific constraints. Extending the transient formulation to include inertia (acceleration) terms, a three-field dCS FE model for dynamic porous media is presented, now formulated in terms of solid displacement, fluid pressure, and fluid displacement. Due to fluid viscous dissipation terms, wave propagation in the dCS theory yields an additional rotational wave compared to the BT theory. Besides introducing non-reciprocal solid-fluid interactions, the dCS model further accounts for a dimensionless parameter related to the macroscopic shear modulus. Space and time convergence rates are demonstrated in a one-dimensional case. A dimensionless analysis performed in the dCS framework showed negligible differences between the BT and dCS models except when assuming high fluid viscosity. Domains with small characteristic lengths resulted in BT and dCS damping terms of the same order of magnitude. One- and two-dimensional examples showed that the dCS non-reciprocal interactions and the macroscopic shear modulus parameter are responsible for modifying wave patterns. A two-dimensional injection well simulation with water and slickwater showed higher wave attenuation for the latter. Following the derivation of the dCS dynamic formulation, wave propagation phenomena in porous media are analyzed. The dCS slow S wave, essential in representing fluid vorticity and shear motion at high frequencies, is studied. Differences between dCS and BT results appear in the high-frequency range. Solid-fluid non-reciprocal interactions and changes in the macroscopic solid shear modulus lead to distinct P wave patterns. The influence of permeability, porosity, and dynamic viscosity is evaluated, showing that wave patterns are generally most affected in the ultrasonic frequency range. At low frequencies, BT and dCS yield the same results for saturated rocks, except when the slow P wave is non-dissipative. While the BT theory incorporates a correction factor to reproduce fluid behavior at high frequency, the dCS model naturally accounts for this effect due to the complete fluid stress tensor. High-frequency results from the two theories nonetheless differ. Since the dCS equations can be written in terms of different main variables, these can be chosen according to the problem setup. In this sense, a three-field dCS formulation is written in terms of solid displacement, fluid pressure, and relative fluid velocity.
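As an aside on the convergence claims above: observed convergence rates in FE studies of this kind are typically estimated from errors on successively refined meshes. A minimal, generic sketch (not the thesis code; the error values are hypothetical) is:

    import math

    def observed_order(e_coarse, e_fine, h_coarse, h_fine):
        """Observed order of accuracy from errors on two mesh sizes."""
        return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

    # Hypothetical L2-norm errors on meshes with spacing h and h/2:
    p = observed_order(e_coarse=4.1e-3, e_fine=1.0e-3, h_coarse=0.1, h_fine=0.05)
    print(f"observed order ~ {p:.2f}")  # ~2, the optimal L2 rate for linear elements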
The verification study agrees with BT results, highlighting how the addition of fluid viscous dissipation terms does not influence load bearing. It is, however, essential in representing fluid vorticity motion at high frequencies and in wave reflection/transmission problems. Optimal convergence rates for a one-dimensional example are obtained for same-order linear elements. Two-dimensional examples with sandstone and shale layers show how waves are transmitted and reflected at the domain interface in the low- and high-frequency regimes. The research developments and findings reported in this thesis consist of novel FE models representing the dCS porous media theory. The results show how the dCS formulation is able not only to recover BT results but also to circumvent gaps in the BT theory. The dCS FE framework presented herein is also a foundation for future studies in the area; examples are the expansion of wave propagation studies with a reduced number of assumptions, the combination of an inertial fracture flow model with a porous media representation of fractured rocks, the simulation of nonlinear solid matrix behavior, and the development of a dCS FE model for inertia-driven flow in porous media.

Item: Semi-Analytical Framework for Thermo-Mechanical Analysis of Energy Piles in Elastic and Elastoplastic Soils (University of Waterloo, 2024-10-29). Author: Paul, Abhisek

Energy piles, or geothermal piles, reduce the energy demand of a building by assisting with its heating and cooling as required. Their advantage over other ground heat exchangers is that the piles are an integrated part of the building's foundation, carrying the superstructure load; the same piles can be used to extract heat from, or inject heat into, the ground, and that heat can be used in cooling or heating the building, thereby reducing the building's energy demand. Energy piles are subjected to mechanical load from the superstructure as well as thermal load (temperature change) caused by the heat exchange operation. The combined mechanical and thermal load changes the behavior of the pile foundation. Because the length of a pile is much greater than its diameter, the temperature change affects the axial behavior of the pile rather than its lateral response. The settlement of an energy pile differs from that of a conventional pile foundation, as heating or cooling causes extension in some parts of the pile and compression in others. In the available literature, the axial response of an energy pile under mechanical and thermal loads, in terms of vertical settlement, strain, and stress in the pile, is calculated by modeling the pile-soil interaction with equivalent linear and nonlinear soil springs. Representing the soil as springs does not account for three-dimensional pile-structure interaction. Analyses of energy piles considering three-dimensional pile-structure interaction have been carried out in the literature using numerical methods, which are computationally expensive. Moreover, the thermo-mechanical behavior of the soil needs to be considered in energy pile analysis because of the temperature change of the pile and soil, and the heat exchange between them. In this context, the thermo-mechanical soil constitutive model should satisfy the laws of thermodynamics.
Analysis of energy piles with a thermodynamically acceptable thermo-mechanical soil constitutive model is lacking in the available literature. In this thesis, a continuum-based semi-analytical framework for energy pile analysis that accounts for three-dimensional pile-structure interaction is proposed. First, a semi-analytical framework is developed for an energy pile embedded in multi-layered soil and subjected to mechanical axial load and thermal load, using the variational principle of mechanics: the potential energy of the pile-soil continuum is minimized, with both the pile material and the soil treated as linear elastic. In the next part of the thesis, a semi-analytical framework for the same energy pile is developed with the soil modeled as an elastoplastic material. This framework is derived from the laws of thermodynamics using an energy potential and a dissipation function, with the plastic behavior of the soil captured through the dissipation function. The derived analytical framework is applied to the elastoplastic soil response with the Drucker-Prager constitutive model, and the results are verified against available experimental and numerical results in the literature. The Drucker-Prager constitutive model does not account for the effect of temperature on the stress-strain response of the soil. This makes it most suitable for modeling the thermo-mechanical behavior of sand, for which the effect of temperature is not significant; energy piles in sand can therefore be analyzed using the present framework with the Drucker-Prager model. Energy piles in clay need to be represented with a soil constitutive model that can capture the effect of temperature on the thermo-mechanical response of clay. In the third part of the thesis, the present analytical framework for an energy pile in elastoplastic soil is used to analyze a pile in clay with a thermo-mechanical constitutive model that accounts for the change in the mechanical response of clay due to temperature change; this model was developed using the hyperplasticity formalism, which satisfies the laws of thermodynamics. In all cases, the present framework predicts the response of an energy pile under different mechanical and thermal loads, in terms of vertical displacement and stress, with acceptable accuracy (less than 10% difference relative to finite element analyses and field tests) and in less computational time: run times are approximately 10 and 5 times faster than axisymmetric finite element analysis for the elastic and elastoplastic cases, respectively. In the final part of the thesis, the stresses developed in an energy pile under mechanical and thermal loads are examined for different soil properties. A parametric study is conducted on the developed stresses in an energy pile in single- and two-layer soils under multiple loading conditions; a correlation between the applied mechanical and thermal loads is established, and the effects of soil layering and material properties on the developed stresses are observed. The development of a tensile zone in an energy pile, and its extent under certain conditions, are also identified in the parametric study.

Item: Rethinking Infrastructure Deconstruction Through Reality Data Capture and Interactive Simulations (University of Waterloo, 2024-10-22). Author: Earle, Gabriel

The demand for careful infrastructure deconstruction and disassembly increases every day as the market for building material reuse grows and the reality of construction landfill waste weighs on the environment. Mid-century nuclear power plants are one example of infrastructure reaching the end of its service life and requiring massive deconstruction initiatives. These projects span multiple decades and bear massive costs, making effective planning a requirement. However, despite active research efforts in infrastructure lifecycle stages such as design, construction, inspection, and maintenance, deconstruction remains relatively unexplored. Meanwhile, applicable technologies like autonomous robotics, virtual reality headsets, and 3D simulation software are more capable and cost-effective than ever. This research studies the use of reality data capture and virtual reality simulations as tools for planning deconstruction projects. Novel planning workflows using these technologies are designed, implemented, and tested for validation. The research contributes three novel planning methodologies: 1) a methodology for cutting and packing planning, 2) a methodology for building material reuse planning, and 3) a methodology for automated critical path schedule generation. Several scientific problems are addressed in support of developing and validating the proposed methodologies. While the methodologies each address a different workflow within deconstruction planning, they build upon each other in terms of the scientific problems addressed, with the final methodology bearing the most significant and novel contributions. In the methodology for cutting and packing, the topics of creating immersive virtual reality simulations from reality data and of conducting human-computer collaborative cutting and packing planning based on reality data are addressed. An optimal reality-data processing approach for deconstruction planning simulations is detailed, based on a quantitative comparison of reality data capture methods as well as qualitative observations. A simulation environment with rich feedback based on the processed reality data is designed to enable human-computer collaborative cutting and packing planning. These developments build upon prior work with similar capabilities that did not incorporate reality data. In the methodology for building material reuse planning, the problem of simultaneously optimizing deconstruction planning and reuse-based architectural design is studied. A gamified virtual reality simulation is developed and presented to address the underconstrained nature of this problem, enabling users to create a high-level initial solution. A workshop with human participants was held, and quantitative metrics from questionnaires as well as qualitative feedback on the simulation were collected; the feedback is analyzed to support future work in refining the methodology.
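For readers unfamiliar with it, the Drucker-Prager model referenced above uses the classical pressure-dependent yield function, shown here in one common sign convention (the thesis's exact parameterization is not given in this abstract):

    F(\sigma) = \sqrt{J_2} - \alpha I_1 - k = 0

where I_1 is the first invariant of the stress tensor, J_2 is the second invariant of the deviatoric stress, and \alpha and k are material constants, commonly matched to the Mohr-Coulomb cohesion and friction angle.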
In the methodology for automated critical path schedule generation, the most significant contributions of the thesis are presented: the problem of efficiently and automatically generating accurate critical path schedules based on a virtual reality simulation run is addressed. A series of novel algorithms is designed to deduce the information required to produce a critical path schedule reflecting a user's actions in a virtual reality environment. The algorithms support 1) detecting the occurrence of construction-centric actions within the virtual reality environment, 2) estimating the corresponding real-world durations of detected actions, and 3) deducing the precedence relationships among detected actions. Using this information, an automated approach is presented for assembling a critical path schedule that is enriched with metadata about the planning process and is not resource constrained.
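To illustrate the scheduling step that the detected actions, durations, and precedence relationships feed into, here is a minimal sketch of a critical-path forward pass (not the thesis implementation; the action names, durations, and dependencies are hypothetical):

    # Hypothetical action records deduced from a VR run: (duration_hours, predecessors)
    actions = {
        "cut_panel":  (4.0, []),
        "rig_crane":  (2.0, []),
        "lift_panel": (1.5, ["cut_panel", "rig_crane"]),
        "pack_panel": (3.0, ["lift_panel"]),
    }

    earliest_finish = {}

    def ef(name):
        """Earliest finish time via the CPM forward pass (memoized recursion)."""
        if name not in earliest_finish:
            dur, preds = actions[name]
            earliest_finish[name] = dur + max((ef(p) for p in preds), default=0.0)
        return earliest_finish[name]

    makespan = max(ef(a) for a in actions)
    print(f"project duration: {makespan} h")  # 8.5 h along cut_panel -> lift_panel -> pack_panel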
Item: Removal of microplastics during drinking water treatment: Linking theory to practice to advance risk management (University of Waterloo, 2024-10-22). Author: Gomes, Alice

Microplastics (MPs) have emerged in the past decade as widespread contaminants that are harmful to human and ecosystem health. While their removal from water may be similar to that of other particulate contaminants, its characterization is complicated because MPs can undergo weathering, photolysis, and microbial degradation in the natural environment, resulting in functional groups (e.g., carbonyl, hydroxyl) on their surfaces that may affect their removal during drinking water treatment. Given that studies using seeded polystyrene microspheres/MPs as surrogates for oocysts have shown good (but sometimes variable) removals through conventional drinking water treatment composed of coagulation, flocculation, and sedimentation (CFS) followed by filtration, MPs are likely to be well removed in optimized conventional drinking water treatment plants. While many studies have focused on the removal of larger (>50 µm) microplastics, investigations of the removal of smaller (<10 μm) microplastics by drinking water treatment processes have been limited largely to case studies in which the foundational mechanisms necessary for maximizing treatment performance have been only superficially investigated, if at all. To address this gap, this study focused on whether MP removal by conventional chemical pretreatment (i.e., coagulation, flocculation, and sedimentation) with alum aligns with the removal of other particles, including Cryptosporidium oocysts, for which particle destabilization is essential for removal. The study aimed to advance knowledge through three main objectives: (1) characterize MP removal by CFS with different particle destabilization mechanisms and compare it to that of other important particulate contaminants (i.e., Cryptosporidium spp. oocysts), (2) evaluate the effect of particle size on MP removal by CFS, and (3) assess the influence of weathering on MP removal by CFS. To evaluate MP removal by chemical pretreatment reliant on (1) adsorption and charge neutralization and (2) enmeshment in precipitate (i.e., sweep flocculation) as particle destabilization mechanisms, bench-scale investigations of alum-based CFS (i.e., jar tests) were conducted with synthetic water using pristine and weathered polystyrene microplastics of 1, 5, and 10 μm diameter. Several synthetic raw water matrices were explored to identify scenarios in which both particle destabilization mechanisms were clearly discerned. The final synthetic raw water was composed of deionized water spiked with sodium carbonate and kaolin (70 NTU) at pH 7.0. To demonstrate that MP removal by CFS aligns with coagulation theory, sixteen alum doses between 0 and 38.8 mg/L were used to evaluate MP removal by CFS. Turbidity reduction was also evaluated, and zeta potential was analyzed to identify maximal particle destabilization. MP removal increased with particle size, aligning with gravitational settling theory. MP removal during CFS with optimized particle destabilization was generally consistent with reported removals of other particles, including Cryptosporidium spp. oocysts, during optimized chemical pretreatment, suggesting that similar risk management approaches may be relevant to MPs. Notably, differences between pristine and weathered MP removal by CFS were not significant under the conditions investigated, suggesting that weathering does not affect MP removal when particle destabilization by coagulant addition is optimized. This study bridges the gap between the theory of conventional drinking water treatment and concerns regarding the potential passage of MPs through drinking water treatment plants, demonstrating that MPs can be removed in the same manner as other colloidal particles using conventional chemical pretreatment and, by well-recognized theory-based extension, physico-chemical filtration.
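Jar-test removal performance of the kind reported above is conventionally summarized as percent removal or as a log removal value (LRV), the convention used for oocysts. A short sketch (illustrative only; the particle counts are hypothetical):

    import math

    def percent_removal(c0, c):
        """Percent removal of particles (counts/mL) across CFS."""
        return 100.0 * (1.0 - c / c0)

    def log_removal(c0, c):
        """Log10 removal value (LRV), as used for Cryptosporidium credits."""
        return math.log10(c0 / c)

    # Hypothetical jar-test counts for 10 um microspheres (counts/mL):
    c0, c = 1.0e5, 2.0e2
    print(f"{percent_removal(c0, c):.1f}% removal = {log_removal(c0, c):.1f} log")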
Item: Phosphorus Legacies and Water Quality Trajectories Across Canada (University of Waterloo, 2024-10-15). Author: Malik, Lamisa

Phosphorus (P) pollution in freshwater is a critical environmental issue, primarily driven by agricultural runoff, wastewater discharge, and industrial effluents. Across Canada, lakes such as Lake Erie and Lake Winnipeg experience severe and persistent algal blooms driven mainly by excess phosphorus loading. Excessive phosphorus loading leads to eutrophication, causing harmful algal blooms and hypoxia that disrupt aquatic life, reduce biodiversity, and impair water quality, making water unsafe for human consumption and recreation. Despite policies aimed at reducing phosphorus loading, such as improved farming practices and wastewater treatment upgrades, riverine loads have not markedly decreased. Phosphorus management goals often fall short due to the persistence of legacies: phosphorus that has accumulated in soils and sediments over decades of agricultural applications and that continues to be released into water bodies long after its initial application. Despite recognition of the existence and the significant regional and global water quality impacts of legacy P, our understanding of the magnitude and spatial distribution of these P stores remains limited. Understanding legacy P stores and their contributors is crucial for efficiently managing water quality, underscoring the importance of studying these factors to develop more effective and sustainable management strategies. The central theme of this thesis is the exploration of phosphorus legacies across various landscapes, through three objectives: first, to explore phosphorus legacies and water quality trajectories across the Lake Erie basin; second, to quantify various legacy P stores and evaluate their current and future impacts on water quality; and third, to quantify phosphorus accumulation across Canada. In the first objective, I develop a comprehensive phosphorus budget for the Lake Erie Basin, a 60,000 km² transboundary region between the U.S. and Canada, by collecting, harmonizing, and synthesizing agricultural, climate, and population data. The phosphorus inputs included fertilizer, livestock manure, human waste, detergents, and atmospheric deposition, while outputs focused on crop and pasture uptake, covering the historical period from 1930 to 2016. The budget allowed us to calculate excess phosphorus as the phosphorus surplus, defined as the difference between P inputs and non-hydrological exports. A random forest model was then employed to describe in-stream phosphorus export as a function of cumulative P surplus and streamflow. The results indicated a significant accumulation of legacy P in the watersheds of the Lake Erie Basin. Notably, higher legacy P accumulation corresponded strongly with greater manure inputs (R² = 0.46, p < 0.05), whereas fertilizer inputs showed a weaker relationship. For the second objective, I model the long-term nutrient dynamics of phosphorus across 45 watersheds in the Lake Erie basin using the ELEMeNT-P model, quantifying legacy phosphorus accumulation and depletion across different landscape compartments, including soils, landfills, reservoirs, and riparian zones, and assessing the potential for phosphorus load reductions under future management scenarios. The model sought to identify key legacy phosphorus pools and explore the feasibility of achieving significant reductions in phosphorus loading, with results indicating that 40% reductions are attainable only through aggressive management efforts. For the last objective, I develop a high-resolution phosphorus budget dataset for Canada, spanning the years 1961 to 2021, at both county and 250-metre spatial scales. The dataset captures phosphorus inputs from fertilizers, manure, and domestic waste, along with phosphorus uptake by crops and pastureland, across all ten provinces, in order to better understand the state and progress of phosphorus management across space and time. The results reveal significant variation in P surplus attributable to differences in land use and management practices. The highest surpluses were observed in southern Ontario and Quebec, at approximately 50 kilotons in 2021, contributing to an accumulation of over 2 teratons of phosphorus over the past 60 years.
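The budget logic described above (surplus as inputs minus non-hydrological exports, accumulated over time into legacy P) is simple to express. A minimal sketch, with hypothetical budget terms rather than the thesis data:

    import numpy as np

    def p_surplus(inputs_kt, exports_kt):
        """Annual P surplus: inputs minus non-hydrological exports (kt P/yr)."""
        return sum(inputs_kt.values()) - sum(exports_kt.values())

    # Hypothetical single-year budget terms (kt P/yr):
    inputs_kt = {"fertilizer": 30.0, "manure": 25.0, "human_waste": 4.0,
                 "detergents": 1.0, "atmospheric_deposition": 0.5}
    exports_kt = {"crop_uptake": 40.0, "pasture_uptake": 8.0}

    annual = p_surplus(inputs_kt, exports_kt)   # 12.5 kt P in this illustrative year
    # Legacy accumulation is the running sum of annual surpluses over time:
    legacy = np.cumsum([annual] * 60)           # naive constant-surplus illustration
    print(f"annual surplus: {annual:.1f} kt; 60-yr legacy: {legacy[-1]:.0f} kt")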
Item: Non-Stationary Stochastic Modelling of Climate Hazards for Risk and Reliability Assessment (University of Waterloo, 2024-10-01). Author: Bhadra, Rituraj

This thesis presents methodologies for studying the effects of climate change on natural hazards. It is structured around three key aspects: first, the stochastic modelling of non-stationary hazards; second, the modelling of concrete degradation in a changing climate; and third, the economic risk evaluation associated with these non-stationary hazards. The initial focus is on applying a non-stationary stochastic process to model the increasing frequency and intensity of climate-driven hazards in Canada. The early chapters provide an overview of the effects of climate change in Canada. To understand the trends and projections of climatic variables such as temperature, precipitation, and wind speed, recent studies and reports from Environment and Climate Change Canada, along with other relevant literature, are examined, and analyses are performed on model outputs from the Coupled Model Intercomparison Project Phase 6 (CMIP6). The overview highlights the growing occurrence and severity of climate hazards, including hurricanes, droughts, wildfires, and heatwaves, as supported by other independent studies. In light of these analyses, the study demonstrates the inadequacy of traditional stationary models for future predictions and risk assessments, thereby advocating a shift to non-stationary frameworks. The thesis provides a robust theoretical foundation for non-stationary hazard modelling using stochastic process models. Traditional extreme value analysis (EVA) typically assumes stationarity; however, this assumption is invalidated by gradual changes in the frequency and intensity of climate-driven hazards. This research proposes methodologies to model climatic hazards using a non-stationary stochastic shock process, specifically the non-homogeneous Poisson process (NHPP), to derive maximum value distributions over any finite period, not just annual maxima. These models account for changes in the underlying processes over time, providing a more accurate representation of climate-driven hazards by incorporating time-varying parameters that reflect the dynamic nature of climatic extremes. By integrating stochasticity and temporal variability, these stochastic process models offer a robust framework for predicting the future occurrence and intensity of climate-driven hazards. The proposed methods are demonstrated through the estimation of maximum value distributions for precipitation events using the CMIP6 multi-model ensemble data, with an analysis of inter-model variability. Furthermore, the thesis presents a case study on modelling heatwaves to illustrate the application of these models to climatic data, particularly for events where the asymptotic assumptions of extreme value theory do not hold. Climate change will not only influence the loads and hazards on infrastructure; it will also exacerbate the degradation of structures through harsher climatic conditions such as higher temperatures and increased humidity. To model these effects on the degradation of concrete bridges, simulations were conducted using physico-chemical concrete degradation processes, and, based on the simulation results, non-stationary Markov transition probabilities were estimated for several key locations in Canada under various Shared Socioeconomic Pathway (SSP) scenarios. The final chapter addresses the economic aspects of climate-driven hazards. It includes derivations to estimate statistics of damage costs, such as the mean, variance, moments, and distribution, resulting from a non-stationary hazard process. Analytical results are derived for several cases: loss magnitudes that are identically or non-identically distributed, and losses with or without discounting to evaluate net present losses. This analysis offers valuable information for policy makers, engineers, and scientists involved in climate adaptation and mitigation efforts.
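One standard result consistent with the NHPP approach described above, shown here for orientation (the thesis's exact derivation may differ): if hazard events occur as an NHPP with time-varying rate \lambda(t), and the event magnitude at time t has distribution function F(x \mid t), then exceedances of a level x form a thinned NHPP, and the maximum M_T over a finite window [0, T] satisfies

    P(M_T \le x) = \exp\!\left(-\int_0^T \lambda(t)\,\bigl[1 - F(x \mid t)\bigr]\,dt\right),

since the maximum is at most x exactly when no exceedance of x occurs in [0, T]. This reduces to the classical stationary result when \lambda and F are constant in time and, as noted above, is not restricted to annual blocks.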
SWMM-RASpy integrates SWMM's detailed urban hydrologic capabilities, such as dual-drainage modelling, with HEC-RAS's 2D unsteady hydraulic modelling, coupled with stochastic simulations through Latin Hypercube Sampling (LHS) to analyze uncertainty in flood hazard mapping. The framework was demonstrated on the Cooksville Creek watershed, a highly urbanized area in Mississauga, Ontario, known for its susceptibility to flooding. An entropy map was produced for the case study site; such a map better reflects the uncertainty of flooding and could be used to develop tailored flood planning and preparedness strategies for different zones within the site. This thesis also presents a detailed application of the SWMM-RASpy framework to assess flood hazards, with a specific focus on topography-based hydraulic uncertainties in the watershed, particularly surface roughness variability, which affects pedestrian safety during flood events. The study highlights that traditional hazard models, which focus mainly on residential buildings, do not adequately account for the risks to pedestrians, who account for a significant share of fatalities in flood events, especially in densely populated urban areas with high mobility. Three flood hazard metrics were developed and used to evaluate flood risks to pedestrians given the uncertainty surrounding surface roughness: FHM1, based on inundation depth; FHM2, combining depth and velocity; and FHM3, incorporating depth, velocity, and duration. Key findings indicate that surface roughness significantly affects pedestrian hazard estimation across the floodplain, making it a critical factor in flood hazard management. The FHM2 metric, which incorporates depth and velocity, was found to be highly sensitive to roughness variation, potentially leading to the misclassification of hazardous zones as safe and vice versa. Including velocity in the hazard assessment, while improving accuracy, also increased variability, emphasizing the need for a balanced approach in flood risk evaluations. In contrast, the FHM3 metric, which includes flooding duration, showed minimal sensitivity to surface roughness uncertainty. The research also suggests that confidence maps, produced as part of the analysis and accounting for the uncertainties in the hazard metrics propagated from surface roughness, can offer a more reliable alternative to traditional deterministic hazard maps. Lastly, the study emphasizes the importance of combining grid-level and zonal-level analyses for a more comprehensive understanding of flood hazards at different scales, thereby supporting more robust flood risk assessments. This thesis further extends the SWMM-RASpy framework to assess the impacts of climate change on flood hazards within the Cooksville Creek watershed, examining how projected changes in rainfall intensity from Global Climate Models (GCMs) affect flood risks, particularly for urban buildings, and the importance of incorporating uncertainties from these projections into flood hazard assessments. The same hazard metrics used for the pedestrian assessment, FHM1, FHM2, and FHM3, were used to evaluate building hazards. The study predicts a significant increase in flood hazards within the watershed, with a substantial expansion of inundation areas affecting up to 40% more buildings when uncertainties are considered.
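As an aside on the LHS step in a framework like this (a minimal sketch, not SWMM-RASpy itself; the Manning's n bounds and sample size are assumed for illustration), SciPy's quasi-Monte Carlo module can generate stratified roughness samples to drive repeated coupled model runs:

    import numpy as np
    from scipy.stats import qmc

    # One uncertain input: floodplain Manning's n (illustrative bounds).
    sampler = qmc.LatinHypercube(d=1, seed=42)
    unit_samples = sampler.random(n=50)                       # stratified on [0, 1)
    n_values = qmc.scale(unit_samples, [0.03], [0.15]).ravel()

    for n_manning in n_values:
        # Each sample would parameterize one coupled SWMM/HEC-RAS run; the
        # ensemble of depth/velocity grids then feeds the hazard metrics
        # (FHM1-FHM3) and the entropy/confidence maps described above.
        pass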
The analysis shows that without considering uncertainties, FHM1 and FHM3 predict a higher number of damaged buildings than FHM2, with FHM1 predicting the highest number of affected buildings. This suggests that relying solely on FHM1 to estimate building hazards may be sufficient in similar climate change scenarios, although further investigation is needed. However, when uncertainties are included, FHM2 shows a greater increase in the number of buildings at risk than FHM1 and FHM3, due to the larger uncertainty associated with velocity compared to depth and duration. This underscores the need to incorporate uncertainty into flood hazard assessments to ensure a more comprehensive understanding of potential future damages. Overall, this study contributes significantly to urban flood hazard assessment by developing a robust method for incorporating and analyzing uncertainties, thereby supporting more effective flood management and resilience planning. Future research should apply the SWMM-RASpy framework to other watersheds and investigate additional hydrologic and hydraulic variables to further improve flood risk assessments.

Item: Atmospheric Emissions Associated with the Use of Biogas in Ontario (University of Waterloo, 2024-09-24). Author: Bindas, Savannah

This study quantifies the atmospheric emissions associated with an energy-from-waste transition in Ontario. Specifically, it explores the emissions from using livestock and food waste to produce biogas as a source of renewable natural gas. Biogas is one potential "closed loop" solution to waste management; however, transitioning to anaerobic digestion as a waste management strategy can itself generate additional emissions. Each step of the anaerobic digestion process, from the transportation of feedstock to the storage of post-processed digestate, can release both greenhouse gas (GHG) and air pollutant emissions. Here, we quantified the net emission effects of a biogas transition in Ontario, evaluating scenarios that use up to 100% of available waste feedstocks in the province and comparing these emissions to conventional manure management and landfilling. We found that emissions from current manure management strategies dominated GHG emissions, and, as more manure was utilized in the biogas system, there was a drastic decrease in corresponding emissions of CH4, N2O, and CO2, all potent GHGs. All scenarios showed emission reductions compared to traditional practice. By the 75% biogas scenario, GHG emissions associated with the biogas process were balanced by the potential offsets from avoided synthetic fertilizer production, leading to negligible net GHG emissions from the system. In the 100% scenario, we observed that SOx, VOC, NH3, and PM2.5 emissions were increasingly offset by emission savings in natural gas production and synthetic fertilizer production. The important exceptions were the significant NH3 and PM2.5 emissions associated with conventional manure management; because of these, net emission savings from the biogas scenarios did not appear until the 100% run.
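The net-balance bookkeeping behind statements like "balanced by the potential offsets" can be sketched in a few lines (values are hypothetical, not the study's inventory; category names are assumptions):

    # Hedged illustration of the net-GHG balance logic (kt CO2e/yr, hypothetical):
    process = {"feedstock_transport": 15.0, "digestion_fugitives": 40.0,
               "digestate_storage": 25.0}
    offsets = {"avoided_manure_mgmt": 45.0, "avoided_natural_gas": 20.0,
               "avoided_synthetic_fertilizer": 15.0}

    net = sum(process.values()) - sum(offsets.values())
    print(f"net GHG balance: {net:+.0f} kt CO2e/yr")  # ~0 mirrors the 75% scenario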
These results highlight the atmospheric impacts of conventional waste management and demonstrate the potential for anaerobic digestion to mitigate these emissions.

Item: Desorption of Per- and Polyfluoroalkyl Substances from Powdered and Colloidal Activated Carbon (University of Waterloo, 2024-09-24). Author: Uramowski, Andrew

Per- and polyfluoroalkyl substances (PFAS) are a group of synthetic chemicals with unique heat-resistant properties, leading to their use in aqueous film-forming foams (AFFF) for fighting fuel-based fires at airports and military bases. The application of AFFF at these facilities has led to the contamination of groundwater with PFAS, which can threaten the safety of nearby drinking water, agricultural, and industrial supply wells, as well as downgradient surface water bodies. Drinking water supplies contaminated with PFAS can result in human exposure, which has been linked to developmental, immunological, endocrine, and cardiovascular disorders, and to cancer. Since PFAS are resistant to biodegradation and traditional destruction technologies, current remediation efforts focus on immobilizing PFAS using adsorptive processes that sequester them from the aqueous phase, concentrating them on an adsorbent medium. The injection of activated carbon (AC) particulate amendments into the subsurface has been suggested as a promising technique for the in situ immobilization of PFAS plumes and the protection of downgradient receptors. Predicting the long-term performance of these AC barriers requires a thorough understanding of adsorption and desorption processes. The objective of this research was to investigate the desorption behaviour of three PFAS (perfluorooctane sulfonic acid (PFOS), perfluorooctanoic acid (PFOA), and perfluorobutane sulfonic acid (PFBS)) from a powdered AC (PAC) and a colloidal AC (CAC), focusing specifically on whether desorption of PFOS, PFOA, and PFBS from these two AC materials was hysteretic. PFAS adsorption and desorption kinetic experiments using PAC or CAC were completed to determine the contact time required to reach near-equilibrium conditions. Adsorption experiments with PAC used the bottle-point method, and desorption experiments used a sequential desorption methodology in which the aqueous phase of desorption reactors was replaced with an adsorbate-free solution. Adsorption of PFAS by CAC was also investigated using the bottle-point method; however, desorption experiments were conducted using two different methods: (1) a sub-sampling methodology, in which aliquots of slurry from a well-mixed adsorption isotherm bottle were diluted to initiate desorption, and (2) a whole-bottle dilution method, in which the entire contents of adsorption reactors were diluted to larger volumes to initiate desorption. The results indicated that for experiments using PAC, adsorption and desorption equilibrium was established for all compounds within 72 h. Desorption of PFOS, PFOA, and PFBS from PAC did not demonstrate hysteresis, since all desorption data were contained within the 95% adsorption prediction band. For experiments using CAC, adsorption equilibrium was established by 120 h for all compounds, while desorption equilibrium was established by 120 h for PFOS and 72 h for PFOA and PFBS. Desorption data obtained using the sub-sampling method for PFOS, PFOA, or PFBS and CAC fell below and outside the 95% adsorption prediction band; it was concluded that unrepresentative sub-sampling of the CAC slurry occurred in the desorption step of this method.
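For context on the "95% adsorption prediction band" used as the hysteresis criterion here: adsorption data for activated carbon are commonly fit to an isotherm such as the Freundlich model (a common choice for PFAS on AC; the specific isotherm fitted in this thesis is not named in the abstract),

    q_e = K_F\, C_e^{1/n}, \qquad \log q_e = \log K_F + \tfrac{1}{n}\,\log C_e,

where q_e is the sorbed-phase concentration, C_e the aqueous equilibrium concentration, and K_F and 1/n fitted constants. A prediction band around the log-linearized fit then provides the hysteresis test: desorption points falling inside the band indicate reversible (non-hysteretic) sorption, while points falling outside it suggest hysteresis or a methodological artifact.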
When the whole-bottle dilution method was adopted for PFOS, desorption data were within the 95% adsorption prediction band, indicating no evidence of hysteresis under the experimental conditions used. Since the mass removed at each desorption step was extremely small compared to the sorbed fraction, the desorption data did not reach aqueous equilibrium concentrations near the method detection limit. The absence of hysteretic behaviour in this research demonstrates that sorption processes are reversible over the concentration ranges explored. This reversibility implies that PFAS sorbed within AC barriers can be released when the groundwater concentration decreases, whether due to temporal heterogeneity in concentration profiles or depletion of the source zone.

Item: Spatio-Temporal Analysis of Roundabout Traffic Crashes in the Region of Waterloo (University of Waterloo, 2024-09-18). Author: Miyake, Ryoto

Roundabouts are increasingly implemented as safer alternatives to stop-controlled and signalised intersections, with the goal of reducing the severity and frequency of traffic crashes. The safety performance of roundabouts, however, is influenced by their geometric design, and the effects of geometric design variables on safety can vary across countries and regions. Despite this, there is limited research on these safety impacts within the Canadian context. This study addresses the gap by using data from the Region of Waterloo, Ontario, to develop a safety performance function (SPF) using a negative binomial regression model. The model identified significant geometric design variables affecting collision frequency, such as inscribed circle diameter (ICD), entry angle, entry lane width, and number of entry lanes. The findings suggest that the safety impacts of geometric design in Canada may differ from those observed in other countries, highlighting the need for region-specific SPFs. Additionally, in areas where roundabouts are relatively new, the safety performance of roundabouts may be expected to fluctuate over time and across locations; however, spatio-temporal variations in roundabout safety have not been extensively studied. To fill this gap, a spatio-temporal analysis was conducted using Bayesian hierarchical models to capture spatial and temporal variations in collision frequency. The results reveal significant spatial autocorrelation, while no strong temporal patterns or novelty effect were detected within the scope of the data and modelling approach used. This research advances the understanding of how geometric design and spatio-temporal factors influence roundabout safety, providing important insights for the planning and design of roundabouts. Moreover, it pioneers the application of spatio-temporal interaction effects in road safety analysis, demonstrating the potential of this approach for future studies.
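To illustrate the SPF form described above, a negative binomial SPF is typically fit as a log-linear model of crash counts. A hedged sketch with synthetic data (not the thesis model; the covariate choices, values, and the fixed dispersion parameter are all assumptions):

    import numpy as np
    import statsmodels.api as sm

    # Synthetic illustration: annual crashes at six roundabouts.
    aadt = np.array([8000, 12000, 15000, 20000, 9000, 18000])  # entering traffic
    icd = np.array([36.0, 40.0, 45.0, 50.0, 38.0, 48.0])       # inscribed circle diameter, m
    crashes = np.array([2, 3, 5, 8, 1, 6])

    # SPF form: E[crashes] = exp(b0 + b1*ln(AADT) + b2*ICD)
    X = sm.add_constant(np.column_stack([np.log(aadt), icd]))
    spf = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
    print(spf.params)  # fitted b0, b1, b2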
Item: Quantifying gas phase chlorine compounds in sewer system headspaces (University of Waterloo, 2024-09-18). Author: Sun, Xiaoyu

Keywords: chlorine; chloramine; chlorine gas sensor; mass transfer; deterministic model.

Item: Deep Learning-Based Probabilistic Hierarchical Reconciliation for Hydrological and Water Resources Forecasting (University of Waterloo, 2024-09-10). Author: Jahangir, Mohammad Sina

Accurate, probabilistic, and consistent forecasts at different timescales (e.g., daily, weekly, monthly) are important for effective water resources management. When the different timescales are considered together as a hierarchical structure, there is no guarantee that forecast models developed independently for each timescale in the hierarchy will produce consistent forecasts. For example, there is no guarantee that one- to seven-day-ahead forecasts from one model will sum to the weekly forecast from another model. Significant efforts have been made in the time-series forecasting community over the last two decades to solve this problem, resulting in the development of temporal hierarchical reconciliation (THR) methods. Until recently, however, THR methods had not been explored for hydrological and water resources forecasting. The main goal of this research is to introduce THR to the field of hydrological and water resources forecasting and to merge it with the latest advancements in deep learning (DL), providing researchers and practitioners with a state-of-the-art model that can produce accurate, probabilistic, and consistent multi-timescale forecasts. To achieve this goal, the research follows three interconnected objectives, each including a main contribution to the field of DL-based hydrological forecasting. In the first main contribution, the potential of THR to produce accurate and consistent hydrological forecasts was verified for the first time in hydrology through a large-scale precipitation forecasting experiment using 84 catchments across Canada. Three THR methods were coupled with three popular time-series forecasting models (exponential time-series smoothing, artificial neural network, and seasonal auto-regressive integrated moving average) for annual precipitation forecasting at monthly (12 steps ahead), bi-monthly (6 steps ahead), quarterly (4 steps ahead), 4-monthly (3 steps ahead), semi-annual (2 steps ahead), and annual (1 step ahead) timescales. It was confirmed that utilizing THR not only guarantees forecast consistency across all timescales but can also improve forecast accuracy. DL models are increasingly being used for hydrological modelling, particularly lumped simulation, due to their ability to capture complex non-linear relationships within hydrological data and their efficiency in deployment. Likewise, the application of DL to hydrological forecasting has recently gained momentum. DL models can extract complex patterns from meteorological forcing data (e.g., precipitation) to forecast future streamflow, often producing forecasts that are more accurate than those of current conceptual models. However, due to uncertainty in the phenomena affecting hydrological processes, accurate probabilistic forecast models are necessary to provide insight for informed water management decisions. In the second main contribution, two novel state-of-the-art sequence-to-sequence probabilistic DL (PDL) models were developed, tested, and applied for short-term (one- to seven-day-ahead) streamflow forecasting in over 75 catchments with varied hydrometeorological properties across the continental United States (CONUS) and Canada. The two models, a quantile-based encoder-decoder and a conditional variational auto-encoder (CVAE), showed superior performance compared to the benchmark long short-term memory (LSTM) network in terms of forecast accuracy and reliability. Specifically, the CVAE, a generative DL model that can estimate the magnitudes of different sources of uncertainty (e.g., aleatoric, epistemic), proved effective in producing reliable forecasts at longer lead times (three to seven days ahead).
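To make the THR consistency idea concrete, here is a sketch of one standard reconciliation scheme from the THR literature, an OLS projection (not necessarily the method used in this thesis; the forecast values are hypothetical). Incoherent daily and weekly base forecasts are projected onto the space of forecasts that add up exactly:

    import numpy as np

    daily_fc = np.array([2.0, 2.2, 2.1, 2.4, 2.3, 2.0, 1.9])  # base daily forecasts
    weekly_fc = 16.0   # base weekly forecast; note sum(daily_fc) = 14.9 != 16.0

    # Summing matrix: weekly total on top, the seven days below.
    S = np.vstack([np.ones((1, 7)), np.eye(7)])
    y_hat = np.concatenate([[weekly_fc], daily_fc])

    # OLS reconciliation: project base forecasts onto coherent combinations.
    beta = np.linalg.solve(S.T @ S, S.T @ y_hat)
    y_tilde = S @ beta
    assert np.isclose(y_tilde[0], y_tilde[1:].sum())  # coherent by construction

The HDL model described in the third contribution below embeds this kind of reconciliation inside the network as a trainable layer rather than applying it as a post-processing step.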
DL models are increasingly being used for hydrological modeling, particularly for lumped simulation, due to their ability to capture complex non-linear relationships within hydrological data as well as their efficiency in deployment. Likewise, the application of DL for hydrological forecasting has gained momentum recently. DL models can extract complex patterns from meteorological forcing data (e.g., precipitation) to forecast future streamflow, often leading to forecasts that are more accurate than those of current conceptual models. However, due to uncertainty in the phenomena affecting hydrological processes, it is necessary to develop accurate probabilistic forecast models to provide insights for informed water management decisions. In the second main contribution of this research, two novel state-of-the-art sequence-to-sequence probabilistic DL (PDL) models were developed, tested, and applied for short-term (one- to seven-day-ahead) streamflow forecasting in over 75 catchments with varied hydrometeorological properties across the continental United States (CONUS) and Canada. The two designed models, namely the quantile-based encoder-decoder and the conditional variational auto-encoder (CVAE), showed superior performance compared to the benchmark long short-term memory (LSTM) network in terms of forecast accuracy and reliability. Specifically, the CVAE, a generative DL model that can estimate the magnitudes of different sources of uncertainty (e.g., aleatoric, epistemic), proved effective in producing reliable forecasts at longer lead times (three to seven days ahead). Given the introduction of THR to the field of hydrological forecasting through the first main contribution, there is no guidance on how to couple THR with the latest advancements in DL, especially PDL, to produce accurate and consistent probabilistic hydrological forecasts. Furthermore, existing methods for combining THR with DL models, particularly PDL models, suffer from several limitations. Firstly, almost all approaches treat THR as a post-processing step. Secondly, existing THR methods often lack adaptability, meaning they are unable to adjust properly to changing data distributions or new information. Finally, there is limited research on implementing probabilistic THR, a crucial aspect of making probabilistic forecasts consistent. As the third main contribution, a hierarchical DL model (HDL) was introduced in which THR is integrated directly into the DL model. Specifically, a custom THR layer was developed that can be combined with any DL model, much like an LSTM layer or a linear layer, to produce the proposed HDL. This integrated approach (via the new THR layer) allows any DL model to leverage temporal information across multiple timescales during training, perform probabilistic THR, and remain efficient for real-time application. Furthermore, the proposed HDL is based on auto-regressive normalizing flows, a state-of-the-art generative DL model that is more flexible than the CVAE in that it can non-parametrically estimate the probability distribution of the target variable (e.g., streamflow). HDL was tested on more than 400 catchments across CONUS for weekly streamflow forecasting at daily (seven steps ahead) and weekly (one step ahead) timescales. The performance of HDL was benchmarked against LSTM variants. HDL produced forecasts with substantially higher accuracy than the LSTM variants while simultaneously generating consistent forecasts at both daily and weekly timescales, without the need for post-processing (as required by the vast majority of THR methods). The implementation of THR as a neural network layer allows it to be seamlessly combined with other DL layers. For example, the new THR layer could be coupled with physics-based differentiable routing layers for multi-timescale distributed hydrological forecasting. It is expected that HDL will serve as a benchmark against which future THR methods are compared for streamflow forecasting. Furthermore, given the generality of the approach, HDL can be used for forecasting other important variables within hydrology (e.g., soil moisture) and water resources (e.g., urban water demand), as well as in other disciplines, such as renewable energy (e.g., solar power).

Item Fatigue and Fracture Behaviour of Steel Wire-Arc Additively Manufactured Structural Materials (University of Waterloo, 2024-09-04) Lee, Jun Seo
There is increasing demand for automation, which has influenced many industries to find ways to integrate it into their markets. Within the civil sector, the integration of automation into the various stages of a project unlocks opportunities that were previously difficult to achieve.
Architectural designs that once appeared unachievable due to high planning and manufacturing costs can now be realized by embedding automation and robotics in the early stages of project development. The wire arc additive manufacturing (WAAM) process is an additive manufacturing (AM) process that allows efficient fabrication of structural elements. This process, also referred to as gas-metal arc additive manufacturing (GMAAM), uses directed energy deposition (DED) to create components. In the WAAM process specifically, a metal wire is fed into an electric arc and welded into a designed shape. For structural steel fabricators, this automated technology could reduce supply chains, part inventories, and scrap waste, and will help improve the digitalization of the fabrication process. Moreover, the WAAM process allows the fabrication of customized connection nodes and unique structural shapes that are difficult to achieve with conventional subtractive manufacturing. Despite the many potential advantages of the WAAM process, research is needed before WAAM structural steel can be used in the civil engineering sector. Mechanical properties such as the elastic modulus, yield strength (YS), and ultimate tensile strength (UTS) of WAAM material must be tested and validated. In addition, WAAM steel can have a very rough and wavy surface due to the additive manufacturing process. The rough surface can create stress concentration zones within the material that may affect its fatigue performance. Although this can be mitigated by post-processing steps such as surface milling, it is important to study the material in its as-built state, as milling is an additional fabrication step that adds time and cost and may not be necessary for some projects and applications. This thesis aims to explore the material properties, fracture toughness, and fatigue behaviour of WAAM steel components. Through experimental testing, mechanical properties such as the elastic modulus, yield strength, and ultimate tensile strength are determined. Further test results include Charpy V-notch impact tests to determine fracture toughness, as well as tests to determine crack propagation properties. Lastly, the experimental program included testing the fatigue behaviour of WAAM steel for both smooth and rough (as-fabricated) specimens. The experimental program also examined the effects of weld direction by including tests on specimens oriented both perpendicular and parallel to the weld. The fatigue data collected from the experimental program were used to plot a stress-life (S-N) curve for WAAM steel (see the fitting sketch below). The data were then statistically analyzed and compared to current codes such as CSA S6 and S16. It was found that the fatigue behaviour of the WAAM steel depended on the weld direction. Specimens oriented parallel to the weld showed behaviour similar to CSA Detail Category B, while specimens oriented perpendicular to the weld showed behaviour similar to CSA Detail Category E. A metallurgical study showed that the microstructure of the WAAM steel resembled that of welded steel components. Grain sizes and boundaries indicated differences between the as-deposited zones and the reheated zones. The reheated zones, where the addition of new layers disturbed the microstructure, consisted of finer grains expected to exhibit greater toughness.
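To relate the S-N comparison above to how such curves are typically fit, here is a minimal sketch. The stress ranges and cycle counts are invented for illustration and do not reproduce the thesis data.

```python
# Hypothetical sketch of fitting a stress-life (S-N) curve of the form N = A * S^(-m)
# by linear regression in log-log space; the data below are illustrative only.
import numpy as np

stress_range = np.array([220.0, 180.0, 150.0, 120.0, 100.0])  # MPa (hypothetical)
cycles = np.array([1.2e5, 3.0e5, 7.5e5, 2.1e6, 5.0e6])         # cycles to failure (hypothetical)

# log10(N) = log10(A) - m * log10(S); the fitted slope is negative
slope, logA = np.polyfit(np.log10(stress_range), np.log10(cycles), 1)
m = -slope

print(f"fitted slope m = {m:.2f}, log10(A) = {logA:.2f}")
# Predicted life at a 140 MPa stress range:
print(f"predicted N at 140 MPa: {10**(logA - m * np.log10(140)):.3e} cycles")
```

Code comparisons such as those against CSA detail categories are usually made by plotting the fitted curve against the category design curves in the same log-log space.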
Lastly, a linear elastic fracture mechanics (LEFM) model was used to predict the fatigue lives of the WAAM steel fatigue specimens in the perpendicular- and parallel-to-weld orientations. The model was able to predict the fatigue behaviour of the WAAM steel specimens, but the results were strongly dependent on the assumed surface stress concentration factors, Kt. More research is needed to obtain Kt values that will enable greater accuracy in determining the fatigue behaviour of WAAM steel.
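A minimal illustration of the kind of LEFM fatigue-life integration referred to above follows; the Paris-law constants, geometry factor, crack depths, and Kt below are assumptions for illustration, not the thesis's calibrated values, and applying Kt uniformly over the crack depth is a simplification.

```python
# Hypothetical LEFM crack-growth sketch using the Paris law da/dN = C * (dK)^m.
# All parameter values are illustrative assumptions, not the thesis model.
import numpy as np
from scipy.integrate import quad

C, m = 2.0e-12, 3.0   # assumed Paris constants (mm/cycle, MPa*sqrt(mm))
Y = 1.12              # assumed geometry factor for a shallow surface crack
Kt = 1.8              # assumed surface stress concentration factor
d_sigma = 120.0       # applied stress range, MPa (hypothetical)
a0, af = 0.2, 10.0    # initial and final crack depths, mm (hypothetical)

def dN_da(a):
    """Cycles consumed per unit crack growth at depth a."""
    dK = Kt * Y * d_sigma * np.sqrt(np.pi * a)
    return 1.0 / (C * dK**m)

N, _ = quad(dN_da, a0, af)  # fatigue life = integral of dN/da from a0 to af
print(f"predicted life: {N:.3e} cycles")
```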
Item Development of Improved Methods to Establish Toughness Requirements for North American Steel Highway Bridges (University of Waterloo, 2024-09-04) Chien, Michelle
Brittle fracture is a major concern for structural engineers, as it can have significant consequences in terms of safety and cost. Although modern-day occurrences are rare, it is well known that they can occur without warning and may lead to the sudden closure of a bridge, loss of service, expensive repairs, and/or loss of property or life. In Canada, steel bridge fracture is a more significant concern due to the harsh climate present through much of the country, which, if the toughness properties are improperly specified, is sufficient to put many steels on the lower shelf of the toughness-temperature curve. The provisions for avoidance of brittle fracture in various bridge design codes vary in complexity. The existing Canadian CSA standards take a fairly simplistic approach to design against brittle fracture, using design tables with two temperature zones. Depending on the minimum mean daily temperature of the location of interest, one can determine the Charpy V-notch testing requirements for the grade of steel. However, it is known that temperature is not the only factor that plays a role in the fracture behaviour of steels. Other factors influencing fracture, such as plate thickness, crack size, demand-to-capacity ratio, and considerations related to traffic, are currently neglected. It is generally known, for example, that thin plates (e.g., less than 12.5–19.0 mm in bridge applications) are less susceptible to brittle fracture, due to the rolling reduction ratio at the mill. However, for the same steel grade (with a small distinction between base and weld metal), the same CVN requirements apply to a wide range of plate thicknesses (i.e., from the minimum allowed for corrosion considerations up to 100 mm). The existing CSA standards also assign responsibility for identifying fracture-critical members (FCMs) to the design engineer, though regulations on how to identify them are limited and vague, leaving much to engineering judgement. A comparison of brittle fracture design provisions around the world reveals that more sophisticated approaches to modelling and understanding brittle fracture in existing and new bridges have been developed than those currently in use in North America. One of these more involved methods is the fracture mechanics method in the European EN 1993-1-10 standard, which allows factors such as plate thickness, crack size, and strain rate to be considered. This standard also gives designers the option of using a simplified method or a much more involved, fracture mechanics-based approach. While the current Canadian brittle fracture provisions generally appear to be meeting the needs of code users, two issues are noteworthy. The first, which has already been alluded to, is that the North American provisions offer less flexibility and guidance for handling unusual situations than the Eurocode methods. The 'one size fits all' approach in the Canadian design standards may not be optimal and may result in structures being overdesigned or under-designed, leading to inefficiencies in safety and cost. This highlights the need to answer questions regarding the feasibility of allowing reduced toughness requirements for bridges fabricated with thinner plates or experiencing lower traffic volumes or demand-to-capacity ratios. The second issue is that few studies worldwide have attempted to assess the level of reliability against brittle fracture provided by any of the existing design provisions. The lack of a probabilistic assessment of brittle fracture risk in Canada, and the few such studies globally, highlights a gap in the current understanding and implementation of these design standards. This thesis includes a literature review on: 1) factors affecting material toughness, 2) common methods of evaluating toughness, 3) North American and European brittle fracture provisions, and 4) previous work on design code calibration and reliability analysis for steel structures subject to various failure modes, including brittle fracture. A comparison of the North American and European design provisions using the example of a typical steel-concrete composite highway bridge is then presented. For this case study, it was found that North American codes are typically more conservative than the Eurocode for bridge elements made with thinner plates and less conservative for elements made with thicker plates. Following this, the fracture mechanics-based European brittle fracture limit state is evaluated in a probabilistic framework using Monte Carlo simulation (MCS; see the sketch following this abstract). To do this, statistical distributions are established for the various input parameters, and, in particular, statistical models for the live traffic load and temperature are established. Prior to application of the model, a calibration step is performed to establish a design crack depth. Sensitivity studies are then performed in which key input parameters are varied to examine how the failure probability is affected by variations in each parameter. The work is then cast in a time-dependent reliability framework, using historical temperature and traffic data, to determine the failure probability with temperature and traffic loading fluctuating throughout the year. This time-dependent model is then used to assess the reliability level provided by the current Canadian brittle fracture provisions. Given the plate thickness, crack size, load levels, and geographical temperature data, the annual probability of failure and annual reliability index, β, are obtained. The obtained reliability indices are compared with a target reliability index to assess the extent to which the Canadian provisions provide consistent and adequate levels of reliability against brittle fracture. On the basis of the results, the North American brittle fracture design provisions are critically assessed, and new design tools arising from these probabilistic studies are presented. Opportunities for improvement in the existing Canadian standards and areas warranting further study are lastly highlighted.
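For readers unfamiliar with the MCS reliability machinery used above, here is a minimal sketch of estimating a failure probability and reliability index for a generic limit state g = R - S. The distributions below are illustrative assumptions, not the calibrated statistical models from the thesis.

```python
# Hypothetical Monte Carlo reliability sketch for a limit state g = R - S;
# distribution choices and parameters are illustrative assumptions only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1_000_000

# Assumed "resistance" (e.g., fracture toughness) and "demand" (e.g., crack driving force)
resistance = rng.lognormal(mean=np.log(120.0), sigma=0.20, size=n)
demand = rng.lognormal(mean=np.log(60.0), sigma=0.35, size=n)

pf = np.mean(resistance - demand < 0.0)  # P[g < 0]
beta = -norm.ppf(pf)                     # reliability index

print(f"estimated Pf ~= {pf:.2e}, beta ~= {beta:.2f}")
```

In a time-dependent setting, the same estimator is applied with the demand distribution varying over the year (e.g., with temperature and traffic), and the annual Pf aggregated accordingly.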
Item Costs and Benefits of Building Airtightness Improvements for Air Pollution Exposure and Human Health (University of Waterloo, 2024-08-29) Salehi, Amir Reza
Air pollution is the largest global environmental health threat; fine particulate matter (PM₂.₅) alone, as the most harmful air pollutant, is associated with millions of premature deaths each year. Most studies focus on the impacts of changes in outdoor PM₂.₅ on human health. This overlooks the fact that most exposure to PM₂.₅ of outdoor origin occurs indoors, as people tend to spend most of their time indoors and outdoor pollution infiltrates through the building envelope. Specifically, Americans spend almost 70% of their time in their homes, and approximately 50% of outdoor-origin PM₂.₅ health burdens are due to residential exposure. To better understand the effect of this infiltration on human health, and to explore opportunities for improvement, this study examines the health impact of enhancing building airtightness, particularly in single-family homes, where approximately 75% of the U.S. population resides. This thesis conducts a historical study of modeled daily average PM₂.₅ levels between 1980 and 2010 in the United States to examine the national and spatial costs and benefits of improving the airtightness of these homes to mitigate the health effects of air pollution. To achieve this, an integrated modeling framework was developed, incorporating mass balance modeling (see the sketch following this abstract), health impact modeling, and economic modeling. This framework was used to establish baseline and alternative levels of exposure to outdoor-origin PM₂.₅ across the historic building stock in the contiguous United States under the current state and a post-intervention state. Subsequently, it evaluates the health benefits and retrofit costs associated with improving airtightness levels. The primary scenarios evaluated involve enhancing building air sealing to meet the standards mandated by the International Energy Conservation Code (IECC) 2018. Secondary scenarios of 20%, 25%, 40%, and 60% air-leakage reductions were also considered. This study analyzes the benefits of improving three distinct building age groups. The results reveal that enhancing the airtightness of single-family homes to IECC 2018 mandates would require interventions costing approximately $105 ($102, $107) billion nationally. However, they could save about 44,611 (29,831, 58,905) lives annually and deliver annual health benefits valued at up to $356 ($45, $1,173) billion in 2020 USD. The result is an annual net benefit of approximately $251 (-$62, $1,067) billion in 2020 USD in the intervention year. This study also indicates that older homes, particularly those constructed before 1940, exhibit the greatest reductions in indoor PM₂.₅ levels from outdoor sources. These homes demonstrate a potential annual benefit of $55 ($7, $193) billion in 2020 USD and 7,104 (4,655, 9,832) lives saved, translating to about $3,066 ($390, $10,759) in 2020 USD in benefits per resident annually. On a per-house basis, the cost of improvements in these older homes averages $1,686 ($1,616, $1,756) in 2020 USD, while the net benefit per resident can reach up to $2,263 (-$442, $9,825) in 2020 USD in the year of intervention. Significant spatial variability in benefits exists, with the greatest impacts observed in the eastern U.S. due to higher regional pollution levels and leakier homes. Further, there is uncertainty associated with model parameters, particularly the health response to exposure. Despite these uncertainties, most interventions studied show large mean net benefits. These findings strongly suggest that targeted enhancement of building airtightness can substantially benefit public health and should be considered by decision-makers when designing building standards or developing retrofit plans.
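The mass balance component referenced above can be illustrated with a minimal steady-state infiltration sketch. The functional form C_in = P·λ·C_out/(λ + k) is a standard model for indoor PM₂.₅ of outdoor origin; the parameter values below are assumptions for illustration, not the thesis inputs.

```python
# Minimal steady-state mass balance sketch for indoor PM2.5 of outdoor origin.
# Standard functional form; all parameter values are illustrative assumptions.
P = 0.8         # envelope penetration efficiency (assumed)
lam_inf = 0.5   # air infiltration rate, 1/h (assumed, pre-retrofit)
k_dep = 0.2     # indoor deposition loss rate, 1/h (assumed)
c_out = 10.0    # outdoor PM2.5, ug/m^3 (assumed)

def indoor_pm25(p, lam, k, c_outdoor):
    """Steady-state indoor concentration of outdoor-origin PM2.5."""
    return p * lam * c_outdoor / (lam + k)

before = indoor_pm25(P, lam_inf, k_dep, c_out)
after = indoor_pm25(P, lam_inf * 0.75, k_dep, c_out)  # e.g., a 25% air-leakage reduction
print(f"infiltration factor before: {before / c_out:.2f}, after: {after / c_out:.2f}")
```

The change in the infiltration factor under each retrofit scenario is what feeds the downstream health impact and economic models.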
Item An Investigation of Factors Affecting the Adsorption of Per- and Polyfluoroalkyl Substances (PFAS) on Colloidal Activated Carbon (CAC): Implications for In-situ Immobilization of PFAS (University of Waterloo, 2024-08-28) Gilak Hakimabadi, Seyfollah
The immobilization of per- and polyfluoroalkyl substances (PFAS) by colloidal activated carbon (CAC) barriers has been proposed as a potential in-situ method to mitigate the transport of PFAS plumes in the subsurface. However, if PFAS are continuously released from a source zone, adsorptive sites on CAC will eventually become saturated, at which point breakthrough of PFAS from a barrier will occur. To predict the long-term effectiveness of CAC barriers, it is important to investigate the factors that may affect the adsorption of PFAS on CAC. The objective of this research is to investigate some of these factors by answering the following questions: (1) How do co-contaminants, aquifer materials, and typical groundwater constituents affect the adsorption of PFAS by CAC? and (2) How does reducing the particle size of activated carbons (ACs) affect their physico-chemical properties and ability to adsorb PFAS? To address the first research question, the adsorption of seven anionic PFAS on a polymer-stabilized CAC (i.e., PlumeStop®) and a polymer-free CAC was investigated using batch experiments (Chapter 3). The research employed synthetic solutions consisting of one PFAS, 1 mM of sodium bicarbonate (NaHCO₃), and inorganic and organic solutes, including Na⁺, Cl⁻, Ca²⁺, dissolved organic carbon (DOC), diethylene glycol butyl ether (DGBE), trichloroethylene (TCE), benzene, 1,4-dioxane, and ethanol. It was observed that the affinity of PFAS to CACs was in the following order: PFOS > 6:2 FTS > PFHxS > PFOA > PFBS > PFPeA > PFBA. This result indicates that hydrophobic interaction was the predominant adsorption mechanism and that hydrophilic compounds such as PFBA and PFPeA will break through CAC barriers first. The partition coefficient Kd (see the computation sketch following this abstract) for the adsorption of PFAS on the polymer-stabilized CAC was 1.3–3.5 times smaller than the Kd for the adsorption of PFAS on the polymer-free CAC, suggesting that the polymers decreased the adsorption, presumably due to competition. Thus, the PFAS adsorption capacity of PlumeStop CAC barriers could increase once the polymers are biodegraded and/or desorbed. The affinity of PFOS and PFOA to CAC increased when the ionic strength of the solution increased from 1 to 100 mM, or when the concentration of Ca²⁺ increased from 0 to 2 mM. In contrast, less mass of PFOS and PFOA was adsorbed in the presence of 1–20 mgC/L Suwannee River fulvic acid, which represented dissolved organic carbon, or in the presence of 10–100 mg/L DGBE, a major component of some aqueous film-forming foam (AFFF) formulations. Therefore, information on the occurrence of DGBE and other glycol ethers in AFFF-impacted groundwater is needed to assess whether the effect of these species on CAC barrier performance is appreciable.
The presence of 0.5–4.8 mg/L benzene or 0.5–8 mg/L TCE, co-contaminants that may commingle with PFAS at AFFF-impacted sites, diminished PFOS adsorption but had no effect on, or slightly enhanced, PFOA adsorption. When the initial concentration of TCE was 8 mg/L, the Kd (514 ± 240 L/g) for the adsorption of PFOS was approximately 20 times lower than that in the TCE-free system (Kd = 9,579 ± 829 L/g). Therefore, the effect of TCE and benzene may depend on the type of PFAS. To gain insight into the effect of aquifer materials and water chemistry, the adsorption of PFOS, PFOA, and PFBS on CAC was investigated in the presence of six aquifer materials. Further, the removal of five PFAS (PFOS, PFOA, PFHxA, PFHxS, and 6:2 FTS) from six actual groundwater samples was studied (Chapter 4). Under the experimental conditions employed, the mass of PFBS, PFOA, and PFOS removed from solution in the presence of CAC and aquifer materials was 2 to 4 orders of magnitude greater than the mass removed when only aquifer materials were present. It was also observed that the presence of aquifer materials did not appreciably affect the adsorption of PFBS, PFOA, and PFOS on CAC. In the experiments with six actual groundwater samples, the affinity of the studied PFAS to CAC was in the following order: PFOS > 6:2 FTS > PFOA ~ PFHxS > PFHxA, except for two instances of 6:2 FTS being the compound removed to the greatest extent. The adsorption affinity trend among the studied PFAS is consistent with adsorption being driven by the hydrophobic effect. Principal component analyses (PCA) of the results from the experiments with aquifer materials demonstrated that the correlation between the partition coefficient Kd for each PFAS and Ca²⁺ and DOC was the opposite of the correlations observed in Chapter 3. In the groundwater experiments, the correlation between Kd for each PFAS and ionic strength and Ca²⁺ was also the opposite of the correlations observed in Chapter 3. These opposite effects were hypothesized to be due to a complex interplay among the various parameters affecting the adsorption of PFAS on CAC, which may confound the effect of each parameter. The results of this study indicate that laboratory experiments designed to evaluate the retention of PFAS in a CAC barrier should employ site-specific groundwater and aquifer materials. To address the second research question, four commercial ACs (three granular and one powdered) were pulverized by grinding and micromilling to create powdered activated carbons (PACs) and CACs, and the adsorption of PFBS, PFOA, and PFOS on these adsorbents (11 in total) was investigated (Chapter 5). All three PFAS were adsorbed less by CACs (d₅₀ = 1.2–2.5 μm) than by their parent PACs (d₅₀ = 12–107 μm). A detailed characterization of the properties (surface area, micropore and mesopore volumes, pHpzc, and surface elemental composition) of these adsorbents suggests that the reduced adsorption capacity of CACs was likely the result of AC oxidation during milling, which decreased surface hydrophobicity. Granular activated carbons (GACs, 425–1,700 μm) adsorbed PFAS less than PACs and CACs, partly due to the slow rate of adsorption. Of all the ACs, the materials made from wood possessed the greatest surface area and porosity but adsorbed PFAS the least. The repulsion between the negatively charged surface of these wood-based ACs (pHpzc = 5.1) and the negatively charged head group of PFBS, PFOA, and PFOS molecules was identified as the dominant factor inhibiting adsorption. The results of this study suggest that the adsorption kinetic advantage of CACs may be achieved at the expense of reduced adsorption affinity and that the role of electrostatic interaction between PFAS and AC should be considered when selecting an AC for PFAS treatment applications.
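The batch partition coefficient Kd reported throughout this abstract follows from a simple mass balance on a bottle-point experiment; here is a minimal computation sketch with illustrative values, not measured data from the thesis.

```python
# Minimal sketch of computing a batch partition coefficient Kd = q_e / C_e;
# all values are illustrative assumptions, not measured data.
C0 = 100.0   # initial aqueous PFAS concentration, ug/L (assumed)
Ce = 2.0     # equilibrium aqueous concentration, ug/L (assumed)
V = 0.25     # solution volume, L (assumed)
m_ac = 5e-3  # mass of colloidal activated carbon, g (assumed)

qe = (C0 - Ce) * V / m_ac  # sorbed concentration by mass balance, ug/g
Kd = qe / Ce               # partition coefficient, L/g
print(f"qe = {qe:.0f} ug/g, Kd = {Kd:.0f} L/g")
```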
Item Self-prestressing Iron-based Shape Memory Alloy (Fe-SMA) Epoxy Composite for Active Reinforced Concrete Shear Strengthening (University of Waterloo, 2024-08-23) Pinargote Torres, Johanna
In Canada, the rapid deterioration of aging reinforced concrete (RC) structures has become a continuing issue, with more than 40% of bridges over 40 years old and 38% in poor or fair condition, necessitating billions for rehabilitation (Cusson & Isgor, 2004; Lafleur, 2023). The loss of strength and stiffness in RC bridge structures can be attributed to age and exposure, and it has been exacerbated by increases in freight weight and traffic, extreme freezing/thawing cycles, and climate change. Shear is a particularly concerning RC failure mode due to its brittle and abrupt nature. Hence, various shear-strengthening mechanisms have been developed. Most of these mechanisms involve fiber-reinforced polymers (FRP) and are passive, acting only after the structure experiences damage. Active (prestressing) mechanisms have gained attention due to their ability to act immediately after application, reducing crack widths and propagation. However, implementing shear prestressing is complex, often requiring expensive and impractical large jacking equipment. Smart materials such as iron-based shape memory alloys (Fe-SMAs) have the potential to enable cost-efficient and simple shear strengthening and retrofitting techniques. Fe-SMAs exhibit a thermomechanical property known as the shape memory effect (SME) that allows the material to return to its undeformed shape upon reaching an activation temperature, which can be achieved with resistive heating. If the material is restrained, the Fe-SMA can self-prestress an element without the need for jacking tools (see the sketch following this abstract). This project presents an experimental study on the shear-strengthening feasibility and capacity of a near-surface bonded (NSB) active Fe-SMA epoxy composite. The composite consists of U-bent strips embedded into grooves filled with epoxy. After the epoxy cures, the Fe-SMA strips are heated to at least 180 °C with an electric current to self-prestress the concrete. Three shear-critical RC beams were cast, with one beam used as a control and the other two shear strengthened. Two Fe-SMA ratios were assessed: 0.05% and 0.1%. The strengthened beams exhibited about a 27% increase in strength, along with reduced crack widths and stirrup stresses. The NSB Fe-SMA strips interrupt the formation and widening of diagonal cracks; however, increasing their ratio may not translate into an increase in shear strength. A dense NSB Fe-SMA-to-concrete interface weakens the stirrup plane, creating horizontal cracks running along the top face of the beam (at the ends of the Fe-SMA U-wrapped strips) in the compression region and causing separation of the side concrete cover. Additional insights on active shear strengthening are provided by two FEA parametric studies using Vector2, evaluating prestress level and Fe-SMA ratio. This project assesses the shear-strengthening effect of near-surface bonded (NSB) active Fe-SMA epoxy composites on the load-displacement response, crack widths, and reinforcement stresses of shear-critical RC specimens.
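As a back-of-envelope illustration of the self-prestressing idea above, the force developed by restrained, thermally activated strips is roughly the recovery stress times the restrained cross-sectional area. The recovery stress, strip geometry, and strip count below are assumptions for illustration only, not values from this thesis.

```python
# Hypothetical sketch of the self-prestressing force from restrained Fe-SMA strips;
# every value below is an illustrative assumption, not thesis data.
n_strips = 6        # number of U-bent strips crossing the shear span (assumed)
width_mm = 20.0     # strip width, mm (assumed)
thickness_mm = 1.5  # strip thickness, mm (assumed)
sigma_rec = 350.0   # recovery stress after thermal activation, MPa (assumed)

area_mm2 = width_mm * thickness_mm                  # cross-sectional area per strip leg
force_per_leg_kN = sigma_rec * area_mm2 / 1e3       # MPa * mm^2 -> N, then -> kN
total_kN = n_strips * 2 * force_per_leg_kN          # two legs per U-bent strip (assumption)

print(f"per-leg force: {force_per_leg_kN:.1f} kN, total prestress: {total_kN:.1f} kN")
```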
Item Investigation of the Interrelationships between Orthophosphate Corrosion Inhibitors, Monochloramine Residual, Biofilm Development, and Nitrification in Chloraminated Drinking Water Distribution Systems (University of Waterloo, 2024-08-21) Badawy, Mahmoud
Lead contamination in drinking water distribution systems (DWDS) due to pipe corrosion is a human health concern. Orthophosphate, used to control corrosion, creates passive films that limit lead release. At the same time, it may enhance biofilm growth, monochloramine decay, and nitrification potential, since phosphorus is an essential nutrient for microorganisms. However, information on these effects in previous studies is limited and contradictory, which may be attributed to variations in nutrient limitation in the waters used across those studies. Specifically, phosphate addition may enhance microbiological growth in phosphorus-limited water, yet most previous studies did not examine phosphorus limitation in the water employed in their experiments. Biofilm growth and monochloramine breakdown have also not been tracked concurrently in most previous studies, which could be the key to understanding how orthophosphate affects monochloramine decay. Furthermore, there is a lack of research on the effect of phosphate on nitrification in real-world DWDS; hence, more research is needed. The main goal of this thesis was to investigate the effect of orthophosphate on biofilm development by assessing microbiological growth, biofilm formation potential, and metabolic activity, in addition to monitoring the effects of orthophosphate on monochloramine decay and nitrogenous compounds. These parameters were monitored simultaneously in both the presence and absence of orthophosphate to facilitate a more comprehensive understanding of its effects. This objective was achieved primarily through experiments with bench-scale flow-through model distribution systems (MDSs) and additional laboratory batch tests using treated water from a Great Lakes utility. In the first phase of this study, initial batch tests indicated that the test water used throughout the thesis is phosphorus-limited. Subsequently, in a 3-month experiment with four MDSs fed with chloraminated water (2 mg Cl₂/L) and orthophosphate doses of 0 to 4 mg PO₄³⁻/L, it was found that increasing the orthophosphate dose enhanced biofilm growth and monochloramine decay (measured as total chlorine) in the MDSs, with the largest increases between 1 and 2 mg PO₄³⁻/L. A positive relationship between biofilm microbiological growth and total chlorine decay coefficients indicates that the higher monochloramine decay due to orthophosphate addition is attributable to increased microbial activity. In the second phase, the impacts of monochloramine doses of 2 and 3 mg Cl₂/L were explored with and without 2 mg PO₄³⁻/L of orthophosphate over 108 days. The presence of orthophosphate enhanced both the growth and development of the biofilm and the rates of monochloramine degradation, as observed in the first phase. Increasing the monochloramine dose from 2 to 3 mg Cl₂/L slightly reduced microbiological growth and noticeably decreased first-order monochloramine decay coefficients (measured as total chlorine).
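For context on the decay coefficients reported throughout this abstract, here is a minimal sketch of how a first-order total chlorine decay coefficient is typically fit from time-series data; the sample times and concentrations below are invented for illustration.

```python
# Minimal sketch of fitting a first-order decay coefficient, C(t) = C0 * exp(-k*t),
# by log-linear regression; the data are illustrative, not from the thesis.
import numpy as np

t_days = np.array([0.0, 1.0, 2.0, 4.0, 6.0])       # sample times, days (assumed)
total_cl = np.array([2.0, 1.7, 1.45, 1.05, 0.76])  # total chlorine, mg Cl2/L (hypothetical)

# ln(C) = ln(C0) - k * t, so the slope of ln(C) versus t gives -k
slope, intercept = np.polyfit(t_days, np.log(total_cl), 1)
k = -slope
print(f"first-order decay coefficient k = {k:.3f} 1/day, fitted C0 = {np.exp(intercept):.2f} mg/L")
```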
Despite this reduction in decay coefficients, free ammonia levels increased with the higher monochloramine dose due to a greater total ammonia presence. A strong correlation was also noted between total chlorine decay coefficients and biofilm microbiological parameters. Additionally, orthophosphate increased the genetic diversity within biofilm communities, whereas increasing the monochloramine dose noticeably reduced genetic diversity. In the third phase, the effect of residence time (6 days and 12 days) on monochloramine decay in the presence and absence of orthophosphate (2 mg PO₄³⁻/L) was studied using two MDSs. It was found that the longer residence time of 12 days led to higher microbial activity, monochloramine decay coefficients (measured as total chlorine), and nitrite formation compared with the shorter residence time of 6 days. Additionally, orthophosphate enhanced microbiological growth, monochloramine decomposition, and nitrite formation at the 12-day site, whereas its impact was less pronounced and only became evident after day 62 at the 6-day site. First-order total chlorine decay coefficients and nitrite concentrations remained stable throughout the experiment in both MDSs at the 6-day residence time. However, at the 12-day residence time, monochloramine decay progressively increased over time, accompanied by a rise in nitrite formation by the end of the experiment. Links between monochloramine decay and biofilm microbiological parameters were again noted. These correlations suggest that the increase in monochloramine decomposition, whether resulting from increased residence time or from the addition of orthophosphate, was largely driven by microbiological growth and activity. In the fourth phase, the results from the previous phases were evaluated on another phosphorus-limited water source with different water chemistry. A water source selected from batch testing of different sources and the reference water from the earlier phases were chloraminated at 2 mg Cl₂/L and tested in four MDSs: two fed with the reference water (one control and one with 2 mg PO₄³⁻/L) and two fed with the selected water source (one control and one with 2 mg PO₄³⁻/L). The effects of orthophosphate on the two water sources, in terms of biofilm growth and development and monochloramine decomposition, were similar. These similar impacts indicate that the results obtained in the previous phases may be valid for other phosphorus-limited water sources, even those with different chemical compositions. In the final phase, a batch-test study was conducted on samples from a full-scale DWDS that employs monochloramine and orthophosphate to assess monochloramine decay and nitrification potential. This study was compared with an earlier study conducted before the introduction of orthophosphate, which utilized samples from the same DWDS sampling sites and identical batch-testing procedures. Monochloramine decomposition due to microbiological processes was found to be higher at points farther along the DWDS, with longer residence times, after adding orthophosphate. Nitrite formation during batch tests using samples collected from locations far from the distribution system's entrance was also greater after adding orthophosphate, indicating a higher potential for nitrification. Monochloramine decay due to chemical processes was similar before and after orthophosphate addition.
In conclusion, orthophosphate promoted biofilm formation, genetic diversity, and nitrification potential, which in turn increased monochloramine decay. To mitigate these effects, the thesis recommends strategies including decreasing orthophosphate dosages, increasing monochloramine dosages, and shortening residence times, while closely monitoring water quality parameters, especially nitrification indicators.