Civil and Environmental Engineering
Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/9906
This is the collection for the University of Waterloo's Department of Civil and Environmental Engineering.
Research outputs are organized by type (e.g., Master's Thesis, Article, Conference Paper).
Waterloo faculty, students, and staff can contact us or visit the UWSpace guide to learn more about depositing their research.
Recent Submissions
Item: Unraveling the Influence of Natural Organic Matter on Lead Release in Drinking Water Distribution Systems (University of Waterloo, 2025-11-10), by Mosavari Nezamabad, Nastaran

Lead release in drinking water distribution systems remains a critical public health challenge, governed by the chemistry of corrosion scales and their interactions with bulk water chemistry. Parameters such as pH, dissolved inorganic carbon, residual disinfectants, aluminum residuals, and corrosion inhibitors exert direct control over the solubility, transformation, and stability of lead-bearing components. Natural organic matter (NOM), a heterogeneous assemblage of compounds produced by biological and geochemical processes, further complicates these processes. NOM can enhance metal mobilization through surface complexation, colloid stabilization, and modification of corrosion product surfaces. This thesis investigates the role of NOM in lead release in drinking water systems, with emphasis on its characterization, its influence on corrosion scales, and its effects on lead release under variable chemical conditions. NOM from riverine and lacustrine sources in Southern Ontario, Canada, was analyzed using complementary techniques including liquid chromatography-organic carbon detection (LC-OCD), fluorescence excitation-emission matrix (FEEM) spectroscopy, and solid-state 13C NMR. These methods identified distinct compositional differences between riverine and lacustrine NOM. Such compositional contrasts translate into differential affinities for forming complexes with corrosion products and varying propensities to generate mobile colloids. A series of bench-scale galvanic corrosion cells simulating partial lead service line replacement was used to assess the effects of NOM type, NOM concentration, pH, dissolved inorganic carbon (DIC), residual aluminum, and orthophosphate on lead release under chloraminated conditions.
Two-level factorial or half-fractional designs were used for the experiments. Statistical analyses, including generalized additive mixed models (GAMMs) and factorial analysis, were employed to evaluate the influence of the studied factors and their combined impacts on lead release. These analyses consistently demonstrated that higher pH and DIC reduced total and dissolved lead release, while NOM enhanced dissolved lead mobilization. While aluminum had minimal impact under NOM-free conditions, its presence under NOM-rich conditions reduced lead concentrations. NOM also diminished the effectiveness of orthophosphate in reducing lead concentrations, even at elevated pH. Corrosion scales were analyzed to identify mineral phases and morphologies. Scale analysis confirmed the predominance of smaller hydrocerussite and/or cerussite in the presence of NOM. Collectively, these findings provide mechanistic insight into the interplay between NOM, water chemistry, and corrosion control strategies. This research advances understanding of NOM-metal interactions and supports the development of more resilient strategies for managing lead in drinking water systems.

Item: Experimental Investigation of the Structural Performance of Engineered Bamboo Composites (Dragonwood) in Jacking Applications (University of Waterloo, 2025-10-15), by Alayande, Tolulope

Climate change has driven a shift toward bio-based building materials. Many innovative bio-based structural products with exceptional strength and resistance, such as mass timber and bamboo composites, have emerged from advances in adhesive chemistry and manufacturing to mitigate climate issues. This study investigated the structural performance of an engineered bamboo composite, namely Dragonwood, in jacking applications.
While tropical hardwoods have been the common choice for jacking applications across North America, their declining quality is raising serious concerns about their performance in such applications. Engineered bamboo composites (EBCs) have been observed to be highly durable, resistant to fungal infection, and to show a high strength-to-weight ratio. In terms of environmental sustainability, EBCs are a strong substitute for tropical hardwoods, since hardwood can take 60-80 years to mature while bamboo matures in 2-6 years. Furthermore, with EBCs showing superior long-term behavior under varying environmental exposure, many heavy-duty engineering companies involved in day-to-day jacking applications are incentivized to gravitate toward EBCs. The mechanical properties of the Dragonwood specimens were investigated experimentally in bending and compression based on the principal modes of failure, to ascertain the performance of these beams in different orientations and loadings. Since the manufacture of Dragonwood involves face-bonding two sections of parallel strand bamboo (PSB) boards to form a larger laminated veneer bamboo (LVB) section, glue-line failure was also observed. It was found that in bending, the primary failure mode in the specimens was simple tension, as the fibers on the underside of the beams broke under deflection. Horizontal shear failure was also observed along the glue line in some of the Dragonwood specimens. In compression, the failure mode of the Dragonwood specimens was overall section buckling characterized by cracks and splits at the ends. Regardless, the specimens still demonstrated bilinear behavior in both bending and compression, as observed in the stress-strain graphs. With the Ekki specimens, the flexural response was independent of orientation. However, in compression, vertically oriented Ekki specimens failed sooner than horizontally oriented ones.
From the experimental results, the strength and modulus properties were evaluated, the resulting deviations and variances were obtained, and Kolmogorov-Smirnov (K-S) tests were employed to assess goodness of fit against known theoretical distributions. The K-S tests showed that the experimental data conformed to the normal distribution.

Item: Municipal Wastewater Sludge as a Sustainable Bioresource: Spatial Analysis Across Ontario (University of Waterloo, 2025-10-08), by Granito Gimenes, Camila

Effective wastewater sludge management is critical for sustainable wastewater treatment, nutrient recovery, and environmental protection. However, Ontario lacks data on sludge generation and nutrient content, particularly across diverse facility sizes and treatment processes. This study aims to fill that gap by estimating wastewater sludge generation, nitrogen and phosphorus content, and disposal practices across 548 municipal wastewater treatment plants (WWTPs) in Ontario. Using a combination of facility-level annual reports, the Wastewater Systems Effluent Regulations (WSER) database, Ontario Clean Water Agency (OCWA) records, and the National Pollutant Release Inventory (NPRI), treatment-specific coefficients were developed from a subset of plants with complete data to extrapolate sludge generation and nitrogen and phosphorus mass for facilities lacking direct data. For 2022, the most recent year with complete data, Ontario's WWTPs generated an estimated 356,265 ± 35,859 dry metric tons of sludge, a per capita generation of 23.5 kg/person/year that falls within the range reported in the literature (17.8-31.0 kg/person/year). Nutrient content analysis revealed median concentrations of 29 kg of phosphorus and 42 kg of nitrogen per dry metric ton of sludge, for an estimated 9,937 ± 1,837 metric tons of phosphorus and 15,302 ± 9,044 metric tons of nitrogen generated per year in wastewater sludge.
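As a quick consistency check on the per-capita figure quoted in the sludge study above, dividing the reported provincial total by the population served reproduces roughly 23.5 kg/person/year. The served-population value below is an illustrative assumption on the order of Ontario's population, not a number from the thesis:

```python
# Back-of-envelope check of the reported per-capita sludge figure.
# The served-population value is an illustrative assumption, not thesis data.
total_sludge_dry_tonnes = 356_265      # reported Ontario total for 2022
served_population = 15_160_000         # assumed population served (illustrative)

kg_per_person_year = total_sludge_dry_tonnes * 1000 / served_population
print(round(kg_per_person_year, 1))    # ~23.5, inside the 17.8-31.0 literature range
```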
Over 50% of the nutrients are concentrated in larger, anaerobic digester-equipped facilities located primarily in Southern Ontario. Incineration accounts for the end use of 30% of the total sludge generated, resulting in the loss of its nutrients. In contrast, agricultural disposal, practiced by 140 facilities, allows for nutrient recovery from 26% of total sludge generated. Spatial and process-level analysis revealed that plant size and stabilization method are predictors of disposal type. Large plants (influent ≥ 37,850 m³/day), which are more likely to operate aerobic or anaerobic digesters, tend to adopt more sustainable disposal methods when conditions permit (e.g., during appropriate seasons). In contrast, small facilities (influent ≤ 3,785 m³/day) often lack advanced stabilization and are more likely to rely on less sustainable practices such as landfilling. Many of these facilities also lack consistent reporting, making it difficult to track sludge generation and disposal pathways. By quantifying sludge generation and nutrient flows across Ontario, this study provides a baseline for evidence-based decision-making. The data can be used by municipalities and regulators to identify areas with high biosolids generation, expose data gaps, and target specific regions for further study or investment. These findings highlight the need for provincial-level data transparency and targeted strategies to promote nutrient circularity in municipal sludge management, particularly by addressing the data and resource gaps at smaller facilities.
While this study provides a valuable province-level overview, a key limitation is the reliance on extrapolated data for facilities lacking complete records, underscoring the need for improved, standardized reporting and new methodologies with more data in the future.

Item: Automating Construction Material Sourcing and Distribution for Circularity (University of Waterloo, 2025-09-23), by Olumo, Adama

Circularity in the construction industry is developing, with increasing emphasis on extracting resources from existing infrastructure. Given the growing amount of resources embedded in the current housing stock, sustainability within the industry is critical. To support the large-scale reuse of Reclaimed Construction Materials (RCMs) through active reuse strategies, it is essential to develop tools and frameworks for sourcing RCMs. This study contributes to that effort by providing insights into the creation of such frameworks and emphasizing the value of material reuse within the construction sector. Although material reuse is considered an excellent circular strategy, its application across the industry still faces technical, social, and environmental limitations. A significant drawback of material reuse is the complexity of finding RCMs that fit a design with limited alterations required for use. Furthermore, the environmental and economic cost of acquiring and reusing RCMs is high compared with acquiring New Construction Materials (NCMs). Additionally, there is limited insight into options for restoring existing building resources before replacement. Therefore, this thesis develops decision support frameworks for component-level and assembly-level assessment of RCMs.
The component-level assessment tool is designed to integrate 3D scanning, Optimization Programming Languages (OPL), Life Cycle Assessment (LCA), and Building Information Modeling (BIM) tools to create an enhanced digital supply sourcing system, whereby RCMs at secondary sources can be found with basic required information such as cost, proximity, and dimensions to enable planning and implementation. The component-level assessment framework is refined and extended through a policy assessment study, demonstrating its adaptability to diverse challenges presenting both risks and potential benefits for policy implementation. This thesis is fundamentally based on real-world data gathered from RCM stores, and it challenges current building design practices that deem material reuse problematic by enabling flexible sourcing of used and new building materials and providing an assessment framework for selecting appropriate restoration strategies. The approach shifts the social perspective toward partial integration of RCMs, at varying levels, in new building projects.

Item: Methods for Modelling Wetlands in Hydrologic Models (University of Waterloo, 2025-09-22), by Tucker, Madeline Gabriela

Wetlands are abundant natural systems that serve as important ecosystems, mechanisms for nutrient filtering and storage, and providers of flood mitigation services. Wetlands strongly influence the hydrologic response and water balance of a landscape. The practice of water resources management often relies on numerical computer models that represent hydrologic features within a watershed, such as wetlands, lakes, and rivers, to accurately simulate the movement of water. However, representation of wetlands in hydrologic models is challenged by their small-scale nature, numerous classification schemes that are not readily associated with a water balance conceptual model, and sometimes complex hydrology.
A shortcoming of existing wetland modelling studies is the lack of representation of multiple wetland types, often due to the complexity that accompanies wetland classification schemes. In this study, we address three research objectives: 1) to inventory existing wetland modelling methods and develop a catalogue of conceptual-numerical wetland modelling methods in hydrology based on wetland classifications and numerical water balance equations; 2) to implement conceptual-numerical wetland modelling methods in a regional hydrologic model case study and evaluate model performance to determine the impact of wetlands on simulation results; and 3) to examine how available wetland mapping products can inform wetland modelling. A hydrologic model of the Nipissing watershed in Ontario was built using the Raven Hydrologic Framework and calibrated in a multi-objective calibration against both high- and low-flow objective functions in three modelling scenarios. The first modelling scenario (Scenario 1) contained no wetland representation; the second (Scenario 2) contained explicit representation of one wetland conceptual-numerical model type; and the third (Scenario 3) contained explicit representation of three wetland conceptual-numerical model types based on the connectivity of wetlands to modelled streams and lakes. Calibration results indicated good model performance for all scenarios, as an adequate performance threshold of 0.50 was achieved for both the Kling-Gupta Efficiency (KGE) and the log-transformed Nash-Sutcliffe Efficiency (logNSE). In calibration, Scenario 2 most often outperformed Scenario 1 (the no-wetland scenario) at individual calibration gauges and Scenario 3 (the most complex wetland scenario), due to Pareto solution uncertainty and site-specific properties.
Validation results indicated that Scenario 3 most often outperformed the other two scenarios across multiple performance metrics at individual flow gauges and handled low flows especially well, as shown by low-flow performance metrics and hydrographs. This is attributed to Scenario 3 storing the most water in wetland depressions of all modelling scenarios, through abstraction, lateral diversion of water accounting for wetland contributing areas, and groundwater process parameters set up for each simulated wetland type. The median and spread of percent bias across all flow gauges decreased significantly, by 15%, from Scenario 2 to Scenario 3, highlighting the importance of low-flow accuracy to hydrologic model performance. Flow duration curves and hydrographs plotted by flow gauge demonstrated that site-specific properties of the study area and of individual gauge drainage areas can affect simulation results. No relationship was found between gauge drainage area, wetland coverage percent by area, and model performance at individual gauges in this study. Four wetland mapping datasets in Ontario were compared to select a wetland data input for the Nipissing model. From this comparison, a formalized checklist is provided for modellers to use as a reference when making similar comparisons between their own wetland mapping products. It is recommended that wetland mapping product comparisons for project suitability be performed by first comparing wetland coverage between datasets using wetland polygon coverage by area, then comparing spatial variability between datasets by inspecting areas of overlap and non-overlap, and finally comparing data attributes, particularly wetland classifications and any discrepancies between dataset attributes. While the results of this study demonstrate the importance of low-flow accuracy to model performance through the representation of wetlands, improvements could be made to aid future studies.
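For reference, the two calibration metrics used in the wetland study above can be written in a few lines of NumPy. This is a generic sketch of the standard formulas (KGE and NSE computed on log-transformed flows), with made-up flow series for illustration, not code or data from the thesis:

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta Efficiency: 1 indicates a perfect fit."""
    r = np.corrcoef(sim, obs)[0, 1]        # correlation component
    alpha = np.std(sim) / np.std(obs)      # variability ratio
    beta = np.mean(sim) / np.mean(obs)     # bias ratio
    return 1 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

def log_nse(sim, obs, eps=1e-6):
    """Nash-Sutcliffe Efficiency on log-transformed flows (emphasizes low flows)."""
    ls, lo = np.log(sim + eps), np.log(obs + eps)
    return 1 - np.sum((ls - lo)**2) / np.sum((lo - np.mean(lo))**2)

obs = np.array([1.0, 2.0, 4.0, 8.0, 5.0, 3.0])   # illustrative observed flows
sim = np.array([1.1, 1.9, 4.2, 7.5, 5.3, 2.8])   # illustrative simulated flows
print(kge(sim, obs) > 0.5, log_nse(sim, obs) > 0.5)  # both clear the 0.50 threshold
```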
It is recommended that future studies select a watershed with high-quality flow and meteorological data, basins with varying wetland coverage, and little to no water regulation influence (e.g., hydroelectric dams). It is also recommended that the wetland conceptual-numerical models presented in this thesis be further tested on watersheds of different sizes, with different combinations of wetland types and varying degrees of complexity.

Item: Quantifying and Mitigating Uncertainty in Crash Risk Prediction for Road Safety Analysis (University of Waterloo, 2025-09-17), by Aminghafouri, Reza

Road safety analysis is a cornerstone of traffic safety management programs like Vision Zero, which aim to eliminate fatalities and serious injuries on roadways. Central to road safety analysis is the ability to accurately predict crash risk; however, this task is challenged by significant uncertainty arising from the random nature of crashes (aleatoric uncertainty) and limitations in data and modeling (epistemic uncertainty). These uncertainties can lead to the misidentification of hazardous locations, resulting in false positives and false negatives, and to the inefficient allocation of limited safety resources. While numerous statistical models exist for risk prediction, most traditional crash-based approaches provide simple point estimates, failing to formally quantify the inherent uncertainty in their predictions. Proactive conflict-based analysis has emerged as a promising alternative that avoids direct reliance on sparse crash data, but its application introduces new methodological challenges. The reliability of conflict-based predictions is not well understood, and key methodological choices, such as the duration of data collection and the selection of analytical thresholds for Extreme Value Theory (EVT) models, introduce significant, often unaddressed, uncertainty into the results.
To overcome these challenges, this thesis systematically develops and evaluates a framework to quantify, investigate, and reduce critical sources of uncertainty in road safety analysis. First, to quantify the impact of uncertainty on network screening, a frequentist approach is employed to establish a joint confidence region (CR) for hotspot rankings, moving beyond simple point estimates. This is achieved by first estimating the confidence interval (CI) of risk for each location using a hierarchical Full Bayesian (FB) model that considers both crash frequency and severity. Second, this research investigates a primary source of data uncertainty in conflict-based analysis by systematically assessing the relationship between sample size and prediction reliability using a unique, year-long LiDAR dataset and a Bayesian Peak-Over-Threshold (POT) EVT model. Third, to address methodological uncertainty in EVT, an automated and objective approach for threshold selection is developed and validated, comparing a Sequential Goodness-of-Fit Selection Method (SGFSM) with an Automatic L-moment Ratio Selection Method (ALRSM) to reduce analytical subjectivity. The analysis demonstrates that explicitly accounting for uncertainty can lead to substantially different hotspot identifications, revealing that rankings based on point estimates alone may be unreliable. The sample size analysis reveals that the common practice of using short-term conflict data is inadequate for reliable collision predictions, a finding that challenges the validity of a significant portion of the existing literature on conflict-based safety analysis. Finally, the automated threshold selection approach, particularly the L-moment-based approach, proves to be a robust and objective method that improves the accuracy of crash risk estimation. 
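The peaks-over-threshold step at the core of the EVT analysis above can be illustrated generically: pick a threshold, keep the exceedances, and fit a Generalized Pareto Distribution to them. The sketch below uses synthetic conflict-severity data, an arbitrary 95th-percentile threshold, and a simple method-of-moments GPD fit; the thesis uses Bayesian estimation and data-driven threshold selection (SGFSM/ALRSM), which this illustration does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(7)
conflicts = rng.exponential(scale=1.5, size=5000)   # synthetic severity data (assumption)

u = np.quantile(conflicts, 0.95)                    # candidate threshold (illustrative)
excess = conflicts[conflicts > u] - u               # peaks-over-threshold exceedances

# Method-of-moments fit of the Generalized Pareto Distribution to the excesses:
# for GPD, mean = sigma/(1-xi) and var = sigma^2 / ((1-xi)^2 (1-2xi)).
m, s2 = excess.mean(), excess.var(ddof=1)
xi = 0.5 * (1 - m**2 / s2)                          # shape estimate
sigma = 0.5 * m * (1 + m**2 / s2)                   # scale estimate
print(len(excess), round(xi, 2), round(sigma, 2))   # xi near 0 for exponential data
```

For exponential parent data the limiting GPD shape is 0, so the fitted shape should come out close to zero; heavier-tailed conflict data would give a positive shape.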
Collectively, this research provides researchers and practitioners with an evidence-based methodology to understand, quantify, and mitigate key uncertainties in road safety analysis, fostering more reliable safety assessments and a more effective allocation of resources.

Item: Evaluating Window View Quality of Building-Integrated Photovoltaic Using Immersive Virtual Reality (University of Waterloo, 2025-09-16), by Yekeh, Sorour

Building-Integrated Photovoltaic (BIPV) windows have gained increasing attention due to their on-site energy generation capability and potential to reduce operational carbon, but their impact on occupants' view experience, in terms of view clarity and privacy, has not been addressed. This research addresses this gap by assessing how various design approaches for mono- and poly-crystalline silicon BIPV windows affect occupant satisfaction from a human-centric perspective, combining subjective experience with objective analysis of BIPV window performance. For this purpose, an immersive virtual reality (IVR) test was developed that enabled more than 70 participants to assess BIPV window views in a virtual office setting. The IVR approach is a scalable process for early-stage façade assessment, particularly useful where building a physical prototype is expensive, time-consuming, or not possible. Solar cell size and visible light transmittance (TVIS) were varied to assess trade-offs among view clarity, privacy, and overall satisfaction. The configurations selected by the participants were further analyzed in terms of annual daylighting performance and PV energy generation. This analysis included performance metrics such as daylight autonomy and glare probability, as well as potential generated electricity (kWh/m²/year), to paint a picture of how user-preferred BIPV configurations would perform in terms of energy generation and daylighting.
The results show a trend whereby BIPV configurations with higher TVIS were preferred overall by participants for better view clarity and higher overall satisfaction, even if view privacy was slightly compromised. A BIPV window with full-size cells and TVIS = 0.48 was found to be the preferred configuration, providing both view satisfaction and acceptable PV output. This scalable, repeatable, and cost-effective approach enables researchers and designers to simulate and assess occupant perception of BIPV window views, offering valuable insights for sustainable, occupant-centered façade design. These insights aim to guide improvements in BIPV window design, balancing maximized energy generation with window functionality and occupant comfort.

Item: Enhanced Constraints Representation and MTSP Schedule Optimization for Repetitive Projects Considering Environmental Sustainability (University of Waterloo, 2025-09-04), by Saeed, Fam

The Canadian construction industry faces serious challenges: project delays (affecting 25-50% of large infrastructure projects), cost overruns (totaling around $91 billion between 2017 and 2023), and a contribution of nearly 39% to Canada's annual greenhouse gas emissions. Many infrastructure projects, e.g., bridges, highways, and multi-school rehabilitation, involve tasks that are repetitive in nature. Thus, repetitive scheduling techniques offer multiple benefits, including economies of scale and learning curve savings. However, planning such projects remains complex, particularly when units are not identical in size and multiple crews are involved to fast-track the project. In response, this research developed novel Multi-Travelling Salesman Problem (MTSP)-based schedule optimization models offering an enhanced constraints representation for optimizing key repetitive construction projects.
The proposed research framework incorporates three modules to handle various types of repetitive projects (linear and scattered). It also produces more sustainable schedules by integrating environmental impacts into the optimization process. Unlike existing models, the proposed modules formulate the scheduling problem as an MTSP in which crews move across units to complete their assigned tasks while meeting time and budgetary constraints. One of the modules also quantifies, monetizes, and incorporates key environmental impacts into the optimization process, accounting for variations in construction methods and materials and their effects on time, cost, and sustainability. The optimized schedules are then visualized through legible charts, GIS maps, and intuitive construction simulations. The formulation of these modules is unique. Its novelty beyond existing repetitive scheduling models stems from: (1) handling multiple crews simultaneously by incorporating crew routing sequences as key scheduling variables through an MTSP formulation; (2) introducing adjacency constraints to maintain, for all crews, both work continuity and unit linearity, if needed, while accounting for transition times between units; (3) adding new criteria to multi-mode construction scheduling models, where each task can be executed using optimal alternative methods and materials affecting time, cost, and environmental considerations; and (4) generating sustainable construction schedules by quantifying and integrating environmental impacts into the optimization process. The proposed model is validated against previous models in the literature through several case studies. Not only do these modules provide more efficient computational performance, but they also demonstrate significant improvements across various optimization objectives, such as project duration, crew transition time, crew idle time, environmental impacts, total cost, and various combinations thereof.
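To make the MTSP framing above concrete, here is a toy brute-force version: two crews are routed over four work units so as to minimize the finish time of the later crew (the makespan), with per-unit work durations and unit-to-unit transition times. All numbers are invented for illustration, and the sketch deliberately omits the continuity, adjacency, multi-mode, and environmental terms that the thesis modules add:

```python
from itertools import permutations

# travel[i][j]: transition time between units i and j; work[i]: task duration at unit i
# (all values are illustrative, not from the thesis)
travel = [[0, 2, 9, 5],
          [2, 0, 4, 7],
          [9, 4, 0, 3],
          [5, 7, 3, 0]]
work = [6, 4, 8, 5]
units = range(4)

def route_time(route):
    """Total time for one crew: work at each unit plus transitions along the route."""
    t = sum(work[u] for u in route)
    t += sum(travel[a][b] for a, b in zip(route, route[1:]))
    return t

# Enumerate every ordered split of the units between two crews (feasible for tiny cases)
best = None
for order in permutations(units):
    for k in range(1, len(order)):               # split point between crew 1 and crew 2
        makespan = max(route_time(order[:k]), route_time(order[k:]))
        if best is None or makespan < best[0]:
            best = (makespan, order[:k], order[k:])
print(best)   # (minimum makespan, crew 1 route, crew 2 route)
```

Real instances are far too large for enumeration, which is why the thesis casts the problem as an optimization model; this sketch only shows the decision structure (who visits which units, in what order).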
The proposed research framework provides significant benefits as a scalable and adaptable solution for managing complex repetitive projects, enhancing the delivery of infrastructure projects and offering substantial benefits to the multi-billion-dollar construction industry.

Item: Radon Exposure in Canadian Residences: Predictive Models, Low-Cost Sensor Evaluation, and Multi-Family Building Assessments (University of Waterloo, 2025-08-28), by Giang, Amanda

Radon gas is a naturally occurring carcinogen that infiltrates homes and buildings, where it can accumulate to dangerous levels. Although it is the second leading cause of lung cancer in Canada, public testing and mitigation remain limited. Modern changes to residential construction, particularly the move toward airtight designs, have inadvertently contributed to increased radon levels in Canada. Alongside geological variations and the emergence of new radon detection tools, the interplay of all these factors needs further exploration. This thesis examines residential radon exposure across Canada through three key research objectives: 1) the development of predictive models for radon risk; 2) the evaluation of low-cost electronic radon monitors (ERMs); and 3) the characterization of radon patterns in Canadian high-rise multi-family buildings. To address the first objective, random forest regression models were developed using national and regional datasets from Health Canada (HC) and the British Columbia Radon Data Repository (BCRDR), respectively. These models integrated spatial, seasonal, and building-specific variables to estimate residential radon levels. The BCRDR model explained up to 47% of the variance in radon concentrations, whereas the HC model explained a lower 27%, with forward sortation area consistently ranking as the most influential predictor in both models.
However, the limited cross-dataset performance of the BCRDR model applied to the HC data highlights the ongoing challenge of transferring models from one region to another. The second objective involved short- and long-term colocation tests of several low-cost ERMs against reference-grade instruments in residential settings. Results show that while a few ERMs achieved acceptable agreement with reference values (within 10% relative percent error), most demonstrated substantial variability, even among sensors of the same model. Observed issues included delayed response times and wide limits of agreement. These findings emphasize the limitations of deploying ERMs for consumer or research use without regulatory supervision. For the third objective, radon levels were monitored in a 12-storey multi-family student residence building at the University of Waterloo. Average radon concentrations were relatively low (3.7-8.9 Bq/m³); however, concentrations exceeding 50 Bq/m³ were recorded during periods of reduced ventilation on all floors. Notably, the conventional assumption that radon concentrations decrease with building height was challenged by the observation of the highest average concentration on the 12th floor, albeit at low absolute values. Collectively, these results provide insights into the limitations of predictive modelling as a substitute for real-world testing, the performance and reliability of low-cost ERMs for testing, and the complex nature of indoor radon distribution, particularly in multi-family buildings.
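The "variance explained" figures quoted for the radon models above are coefficients of determination (R²). As a generic reminder of how that metric is computed, here is a short NumPy sketch with invented radon-like values (not the thesis data):

```python
import numpy as np

def r_squared(pred, obs):
    """Coefficient of determination: fraction of observed variance explained."""
    ss_res = np.sum((obs - pred) ** 2)               # residual sum of squares
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)       # total sum of squares
    return 1 - ss_res / ss_tot

obs = np.array([40.0, 85.0, 120.0, 60.0, 200.0, 95.0])    # illustrative radon, Bq/m3
pred = np.array([55.0, 90.0, 100.0, 80.0, 160.0, 105.0])  # illustrative predictions
print(round(r_squared(pred, obs), 2))
```

An R² of 0.47, as reported for the BCRDR model, means slightly under half of the spatial and temporal variability in measured radon is captured by the predictors.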
A coordinated strategy integrating predictive risk mapping, validated ERMs, and targeted testing can inform evidence-based policies and ensure that radon risk is addressed across all housing types and Canadian regions.

Item: Computer Vision Based High-Fidelity Mapping and 3D Reconstruction for Civil Infrastructure Inspection (University of Waterloo, 2025-08-26), by Bajaj, Rishabh

Globally, infrastructure is deteriorating due to aging structures and delays in timely and effective rehabilitation. As a result, there has been growing demand for efficient and scalable methods to assess the condition of civil infrastructure. Traditional inspection practices, which rely heavily on manual visual assessments, are time-consuming, labour-intensive, and prone to human error, so there is growing interest in automating them. A common approach involves producing 3D reconstructions of civil structural components with off-the-shelf Structure-from-Motion (SfM) software to create digital twins for measurement and analysis. However, this method faces several challenges. First, the black-box nature of current SfM software is not optimized for the visually degenerate surfaces common in civil infrastructure and lacks transparency for error diagnosis when a reconstruction fails. Second, existing methods often fail to provide end-to-end support for extracting measurements that comply with structural inspection manuals. This thesis proposes an open-access, end-to-end framework for enabling vision-based structural inspection through high-fidelity 3D reconstructions. The motivation is to address key technical and scientific challenges in adopting computer vision tools for infrastructure assessment by providing field-deployable and standards-compliant solutions that enhance both visualization and quantification of structural conditions.
The methodologies developed in this thesis support inspections of structural components at small, medium, and large scales. The first and second parts of this thesis address small-scale inspection. In the first part, high-fidelity reconstructions from smartphone-based LiDAR sensors are utilized to extract concrete surface roughness profiles. Point cloud processing methods are calibrated against existing subjective field tools used by inspectors to ensure compatibility and enable classification of roughness profiles within current inspection frameworks. The second part addresses deployment challenges by introducing a reconstruction tool that uses only images. This enables small-scale reconstruction in environments where LiDAR or high-end equipment is unavailable. By removing hardware constraints and validating the proposed tools through field deployment, this thesis demonstrates the practical feasibility of an open, vision-based inspection workflow for performing surface roughness measurements. The third part focuses on medium-scale inspection. An image-based 3D reconstruction pipeline is developed, followed by integration with an interactive segmentation algorithm. The AI-based segmentation method is integrated with 3D models to detect and quantify a common defect, concrete spalling, in accordance with structural inspection standards. Finally, a large-scale multi-resolution map (MRM) reconstruction workflow is developed for constructing 3D maps with varying resolutions by integrating LiDAR sensor-based 3D maps (coarse resolution) with maps built from images (fine resolution). This method uses a novel image-based localization algorithm to precisely align the two maps into a cohesive 3D point cloud. To facilitate MRM, an in-house, cost-effective, and portable backpack-based scanner and mapper is designed to collect large-scale colourized LiDAR maps.
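Quantifying a segmented defect such as spalling ultimately reduces to converting mask pixels into physical area. The sketch below illustrates that conversion only; the mask and the 5 mm/pixel ground resolution are hypothetical, not values from the thesis.

```python
# Convert a binary segmentation mask of a spall into a physical area by
# multiplying the pixel count by the ground area each pixel covers.
# The mask and the 0.005 m/pixel resolution are illustrative assumptions.
def spall_area_m2(mask, metres_per_pixel):
    pixels = sum(sum(row) for row in mask)
    return pixels * metres_per_pixel ** 2

mask = [[0, 1, 1, 0],
        [1, 1, 1, 0],
        [0, 1, 0, 0]]  # 6 pixels flagged as spalled concrete
print(round(spall_area_m2(mask, 0.005), 6))  # 6 * 0.005^2 = 0.00015 m^2
```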
Experimental results are presented for each method, including field tests, demonstrating the accuracy and utility of the proposed system for real-world inspections. The major contribution of this work is bridging the gap between academic research and practical implementation in infrastructure inspection, advancing toward a more intelligent, scalable, and accessible inspection paradigm.

Item: River Resilience Requires Sufficient Floodplains: Experimental Insights from a Novel Flume Study Investigating Meander Constriction (University of Waterloo, 2025-08-22) Messenger-Lehmann, Rachael

Globally, riparian zones are in poor condition. Numerous anthropogenic watershed modifications negatively affect the water quality, resilience, and habitat diversity of river systems. Research on the effects of constricting meandering rivers is limited, leaving few methods for optimizing riparian zone widths in ways that maintain adequate corridors or floodplains to support natural river processes and protect public safety. The goal of our research was to determine the effects of constraining the floodplain of a meandering river. Specifically, we studied the effects of constraining the Bow River in Calgary, where the floodplain corridor is currently facing extensive development pressures. To achieve these goals, a mobile-bed laboratory experiment was conducted. The experiment involved developing and then constricting a gravel-bed meandering river from an initially straight channel. Alfalfa was grown alongside the river and within the floodplains to provide necessary bank strength. During the experiment, sediment leaving the flume was collected and aerial images were captured to allow topographic and sediment transport analysis. Results showed that as alfalfa grew, bank strength increased, limiting meander evolution.
Despite the relatively fixed meanders, findings suggest that floodplain constraints significantly reduce river-floodplain connectivity, alter a channel's flow regime by increasing velocity and flow depth, increase sediment transport, and narrow channel widths. The results of this study will inform river management practices, emphasize creating room for rivers as a nature-based solution, and improve laboratory methods for investigations of meandering rivers.

Item: Modelling and Analysis of Retrofit Strategies for Mid-Rise Multi-Unit Residential Buildings in Toronto: Advancing Energy Efficiency, Comfort, Resilience, and Decarbonization (University of Waterloo, 2025-08-15) Minniti, Alexis

Over the past century, global temperatures have risen significantly due to human activity, contributing to more frequent and intense extreme weather events such as heatwaves. This underscores the urgent need to reduce emissions from the building sector while enhancing the climate resilience of existing buildings. Most of the buildings that will be in use in Canada by 2050 have already been constructed; therefore, the existing building stock must be retrofitted to enhance energy efficiency and increase occupant safety and comfort under extreme future climate conditions. The objective of this study is to evaluate the effectiveness of various energy conservation measures (ECMs) in advancing energy efficiency and decarbonization, while enhancing thermal resilience and reducing thermal stress, for mid-rise multi-unit residential buildings (MURBs) in Toronto. The ECMs evaluated in this study included the effective thermal resistances (R-values) of the wall assembly and the roof assembly, window thermal conductance (U-factor), infiltration, roof colour, lighting power density (LPD), equipment power density (EPD), and domestic hot water (DHW) peak flow rate.
Mechanical system upgrades, including the addition of cooling and ventilation, as well as the electrification of space heating and DHW, were also studied. A numerical archetype building energy model was calibrated with actual energy consumption data from an existing MURB. This baseline model was then used to (1) develop a surrogate model for predicting energy use intensity (EUI) and (2) evaluate the effectiveness of the ECMs under current and future climate conditions. A multilayer perceptron (MLP, a feedforward neural network) and a fourth-degree polynomial surrogate model achieved coefficient of determination (R²) values of at least 99.98% and 99.82%, respectively, and normalized root mean squared errors (NRMSE) of at most 0.17% and 0.55%, respectively. The polynomial equations can easily be used to predict the EUI of similar MURBs for a variety of ECMs. The numerical and surrogate models were then used to evaluate the impact of various ECMs on reducing EUI, advancing building decarbonization, and improving occupant thermal comfort. The building envelope ECMs were successful at reducing the EUI, and adding cooling and ventilation maintained safe and comfortable indoor conditions throughout the year. Electrification was an effective method for reducing operational carbon emissions due to Ontario's low-carbon electricity grid. The combination of envelope retrofits and mechanical system upgrades resulted in a 57% reduction in EUI (from 270 kWh/m²/year to 116 kWh/m²/year), a 90% reduction in operational carbon emissions (from 43 kg CO₂e/m²/yr to 4 kg CO₂e/m²/yr), maintained comfortable conditions for 97% of the year (8,481 of 8,760 hours), and maintained safe conditions for all hours of the year.
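The two goodness-of-fit metrics quoted above (R² and NRMSE) can be sketched directly. The EUI values below are hypothetical placeholders, not results from the thesis; only the metric definitions are being illustrated, with NRMSE normalized by the range of the observed values (other normalizations exist).

```python
def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

def nrmse(actual, predicted):
    """Root mean squared error normalized by the range of the actual values."""
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    return (mse ** 0.5) / (max(actual) - min(actual))

# Hypothetical EUI values (kWh/m2/year) for a few ECM combinations
actual    = [270.0, 220.0, 180.0, 140.0, 116.0]
predicted = [269.5, 220.8, 179.6, 140.3, 116.2]

print(round(r_squared(actual, predicted), 4), round(nrmse(actual, predicted), 4))
```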
Under predicted future climate conditions, EUI was lower due to the reduced heating demand; however, the thermal resilience metrics decreased by up to 47%, highlighting the need to consider both energy efficiency and thermal resilience when retrofitting MURBs.

Item: Health and economic benefits of reducing air pollution exposure through adaptation and mitigation under a changing climate (University of Waterloo, 2025-08-06) Sparks, Matthew Steven

Air pollution is the world's largest environmental health risk. Even in countries with perceived clean air, like the United States of America (U.S.) and Canada, ambient air pollution still contributes to approximately 150,000 and 17,500 annual premature mortalities, respectively. Air pollution is expected to worsen under climate change, leading to increases in mean ozone and PM2.5 concentrations and higher extreme values. These changes could lead to more air quality alerts, which are triggered when Air Quality Index (AQI) values exceed certain thresholds. Though they are the main medium for communicating air pollution risk to the public, the effect of climate change on air quality alerts has not been previously studied. The effectiveness of air quality alerts, and the adaptation behaviors they recommend, is also not well known. Few studies have investigated how people respond to air quality alerts, and none have looked at how behavioral responses may change in the future. Even fewer studies quantify the health benefits that adapters, or those who respond to air quality alerts, receive from their adaptation. This is critical to understand, as air pollution is a significant public health threat now and, without emission reductions, is expected to worsen this century. The studies included in this dissertation use modeled future data to elucidate how air quality alerts driven by PM2.5 and ozone change throughout the 21st century.
They identify who is affected by the increase in air quality alerts and model how these populations might respond. They use detailed time-use, location, and building parameter data to provide improved estimates of adaptation behaviors. Across the three studies, we find adaptation - including limiting time outdoors, masking, and reducing infiltration - to be useful in reducing ambient air pollution exposure. However, adaptation benefits are not distributed evenly across the population. Certain populations, like seniors (aged 65+), receive much higher benefits than other groups. So too do those who have cleaner environments in which to adapt. Reducing outdoor concentrations, through policy addressing climate change or air pollution, reduces the need to adapt and protects those who cannot adapt. However, the studies herein show that behavioral change must also be considered, as it can either offset or amplify health improvements from ambient pollution reduction.

Item: Human-centric Path Planning and Motion Behaviour Analysis in Hazardous Environments (University of Waterloo, 2025-06-13) Gao, Noreen

Hazardous work environments, such as nuclear facilities and construction sites, present critical safety challenges that need more attention. While nuclear operations expose workers to imperceptible radiation risks, construction activities involve physically demanding tasks that contribute to ergonomic injuries such as musculoskeletal disorders (MSDs). Persistent risks in hazardous environments require more effective solutions to address their unique occupational challenges through advanced technologies. This research explores the potential of augmented reality (AR) and virtual reality (VR) for improving safety through human-centric design, focusing on path planning in radiation environments and human motion analysis in masonry as potential use cases.
The research addresses practicality issues in implementation and assesses the feasibility of the developed solutions. The first study develops an AR-based path planning system for radiation environments. Existing path planning algorithms prioritize only exposure minimization and produce routes poorly suited to human navigation, such as zigzagging paths with too many turns or paths that are unnecessarily long simply to follow the lowest doses. To overcome this, I propose a two-stage human-centric path planning framework. First, the A*-based algorithm is enhanced with a novel multi-objective cost function to generate candidate paths that balance cumulative radiation exposure, travel distance, and the number of turns. Second, a parameter-sweep procedure is introduced to select Pareto-optimal solutions. Unlike traditional methods that prioritize either radiation reduction or path length, this framework offers users a variety of path options tailored to their specific needs. Also, by favouring fewer turns for easier navigation, this approach is more intuitive than traditional robot-centric path planning methods, offering greater flexibility and safety for workers in real-world applications. The second study evaluates VR's efficacy for training in masonry work to reduce ergonomic risks like MSDs while lifting heavy blocks. While VR training has proven effective in various fields, its effectiveness in teaching proper ergonomic posture and reducing injury risks has not been thoroughly explored. This study conducted experiments to compare real lifts (lifting physical blocks in a real-world setting) and VR lifts (lifting virtual weightless blocks in a VR-simulated environment), assessing motion behaviour in both contexts. In both experiments, while performing the same tasks of lifting blocks, participants wore a motion capture suit to record the motion data.
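The first stage described above, A* with a cost that blends cumulative dose, travel distance, and a turn penalty, can be sketched on a small grid. This is a minimal illustration of the idea, not the thesis's implementation: the dose map, the weights, and the cost form are hypothetical, and the Pareto parameter sweep is omitted.

```python
import heapq, itertools

# Multi-objective A* sketch: step cost = w_dose * dose(cell) + w_dist + turn
# penalty. States carry a heading so turns can be counted. All parameter
# values are illustrative assumptions.
def plan(dose, start, goal, w_dose=1.0, w_dist=1.0, w_turn=0.5):
    rows, cols = len(dose), len(dose[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    def h(cell):  # Manhattan distance; admissible since each move costs >= w_dist
        return w_dist * (abs(cell[0] - goal[0]) + abs(cell[1] - goal[1]))

    tie = itertools.count()  # tiebreaker so the heap never compares states
    frontier = [(h(start), next(tie), 0.0, start, None, [start])]
    best = {}
    while frontier:
        _, _, g, cell, heading, path = heapq.heappop(frontier)
        if cell == goal:
            return path, g
        if best.get((cell, heading), float("inf")) <= g:
            continue
        best[(cell, heading)] = g
        for move in moves:
            r, c = cell[0] + move[0], cell[1] + move[1]
            if 0 <= r < rows and 0 <= c < cols:
                turn = w_turn if (heading is not None and move != heading) else 0.0
                g2 = g + w_dose * dose[r][c] + w_dist + turn
                heapq.heappush(frontier, (g2 + h((r, c)), next(tie), g2,
                                          (r, c), move, path + [(r, c)]))
    return None, float("inf")

dose = [[0, 0, 0],
        [0, 9, 0],   # a radiation hotspot the planner should route around
        [0, 0, 0]]
path, cost = plan(dose, (0, 0), (2, 2))
print(cost, path)  # cost 4.5: four unit moves plus one 0.5 turn penalty
```

Sweeping `w_dose`, `w_dist`, and `w_turn` and keeping the non-dominated (dose, length, turns) results would correspond to the second, Pareto-selection stage the abstract describes.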
The collected data were processed for analysis using the Rapid Upper Limb Assessment (RULA, a standard test for ergonomic risk), followed by a detailed analysis of the scores for body sections, including the upper arm, lower arm, neck, and trunk. Experimental results demonstrate a statistically significant difference in motion behaviour between VR and real-life tasks, particularly in the trunk and neck. We conclude that VR training developments for the trades must recognize this limitation.

Item: Analytical Seismic Performance Assessment of Braced Timber Frames with Shape Memory Alloy Fasteners (University of Waterloo, 2025-06-13) Fierro Orellana, Javier Sebastian

The increasing adoption of mass timber construction requires a thorough understanding of its structural performance. Braced timber frames (BTFs) commonly serve as lateral load-resisting systems in mass timber buildings; however, further insight into their seismic performance, particularly regarding ductility and energy dissipation, remains essential. Shape memory alloy (SMA) dowels at BTF connections provide potential advantages due to their superelasticity, enabling large displacements with minimal permanent deformation. This analytical research investigates the seismic performance of BTFs with SMA dowel-type connections compared to traditional steel connections. The methodology involves developing a numerical framework in OpenSees to capture the nonlinear behaviour of BTFs with steel and SMA dowel-type connections. The framework involves calibrating uniaxial material models at the connection level based on experimental hysteresis results. Brace-level models incorporate asymmetrical deformation at each brace end, as documented in the literature. Frame-level models facilitate pushover and time history analyses.
This approach bridges the gap between experimental observations of connections and structural system applications, specifically evaluating how connection-level self-centering translates into system-level performance in moderate seismic zones in Canada. Using this numerical framework, the self-centering capacity of SMA dowel-type connections is assessed at the system level. The seismic response of SMA-connected frames is compared to that of traditional steel-connected frames using BTF building prototypes. Key findings show that SMA connections exhibit superior self-centering, substantially reducing residual drift compared to steel connections and thus enhancing seismic resilience. Although SMA-connected frames experienced higher peak drifts, their minimal permanent deformations significantly reduced post-earthquake repair needs. Numerical results indicated that SMA connections increased peak interstorey drift by 8–32% but significantly reduced residual drift, by approximately 90% in both prototypes. Peak floor accelerations decreased by 3–13% with SMA connections. These findings confirm that SMA dowels effectively improve post-earthquake performance without compromising structural strength. This will guide future code developments and research directions for BTF systems incorporating SMA dowel-type connections.

Item: Estimating annual average daily pedestrian traffic volume at intersections (University of Waterloo, 2025-05-22) Tito Pereira Sobreira, Lucas

Pedestrian exposure at intersections is a critical input for jurisdictions developing pedestrian-centric strategies and is typically quantified as the annual average daily pedestrian traffic (AADPT). The ability to estimate AADPT across all intersections depends on the availability of pedestrian volume data – specifically, the number of sites with continuous count (CC) stations, sites with only short-term counts (STCs), and sites with no available data. At sites with CC stations, AADPT can be directly calculated.
Where only STCs are available, expansion factors derived from CC stations are used to expand STCs to AADPT. For sites lacking pedestrian volume data, Direct-Demand (DD) models are commonly employed to estimate pedestrian exposure based on land use, socioeconomic, and transportation attributes. This paper-based PhD thesis addresses gaps in the existing literature by exploring methods for estimating pedestrian exposure at intersections under varying data availability scenarios. Objectives 1 and 2 focus on estimating pedestrian exposure at sites with no available data in jurisdictions that lack sufficient sites to locally develop DD models. Objective 1 examines the direct (or naïve) spatial transferability of DD models developed in other jurisdictions. Performance was found to be inconsistent, varying based on the similarity between the characteristics of the jurisdiction where the model was developed and where it was applied. In Objective 2, it is assumed that a limited number of CC sites are available within the study jurisdiction, enabling the adjustment (or local calibration) of transferred DD models. This approach improved the transferability of DD models, achieving performance levels comparable to those of locally developed models. Continuing with jurisdictions that lack sufficient data to develop their own DD model, Objective 3 proposes a novel method to estimate pedestrian exposure leveraging large language models (such as ChatGPT), satellite imagery, and pedestrian volume data from a few key sites. The performance of this method rivaled – and in some cases surpassed – existing conventional methods like DD models, all without requiring complex statistical models or extensive datasets. Objective 4 shifts the focus to sites where STCs are available, providing a detailed application of the expansion factor method to expand 8-hour STCs.
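The expansion-factor method described above can be sketched in its simplest form: at a CC site, AADPT is known, so the ratio of AADPT to the pedestrians counted there during the same 8 hours gives a factor that expands an 8-hour STC at a comparable site to an AADPT estimate. All counts below are hypothetical, and real applications (as the thesis details) use factor groups and seasonal adjustments rather than a single ratio.

```python
# Simplest form of the expansion-factor method; all counts are hypothetical.
def expansion_factor(cc_aadpt, cc_count_same_hours):
    """Factor derived at a continuous-count (CC) site: AADPT divided by the
    pedestrians counted there during the short-term count's 8 hours."""
    return cc_aadpt / cc_count_same_hours

def estimate_aadpt(stc_count, factor):
    """Expand an 8-hour short-term count (STC) to an AADPT estimate."""
    return stc_count * factor

factor = expansion_factor(cc_aadpt=1200, cc_count_same_hours=480)  # 2.5
print(estimate_aadpt(stc_count=400, factor=factor))
```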
Specifically, it examines grouping sites with CC stations (where expansion factors are derived) into factor groups based on similar activity patterns. To achieve this, temporal indicators capturing different types of seasonality were developed, and models were built to associate STC sites with an appropriate factor group. The results indicated only marginal improvements when using the factor group approach compared to applying a single factor group (i.e., the average across all sites). Objective 5 examines the scenario in which CC stations are unavailable within a jurisdiction, preventing the local calculation of expansion factors. The spatial transferability of expansion factors across jurisdictions is investigated. The findings indicate that transferring expansion factors across jurisdictions with similar characteristics in terms of weather and school holiday periods proved to be of practical value, with only minor performance degradation compared to expansion factors developed within the local jurisdiction. Objective 6 evaluates various methods for estimating pedestrian exposure under different data availability scenarios. The performance of these methods was analyzed based on the number of sites with available data, resulting in practical guidelines for practitioners. Specifically, it provides recommendations on the minimum number of sites with pedestrian volume data needed for the local development of DD models, compared to alternative approaches such as spatial transferability and local calibration of DD models. Objective 7 introduces a novel approach to disaggregating pedestrian volumes from the intersection level to the crosswalk level. Land use models were developed and compared to methods that incorporate crosswalk-specific volume data from STCs. 
Overall, methods using STC data outperformed the land use models.

Item: Electrochemical Modeling of Bioenergy Generation from Wastewater by Microbial Fuel Cells (University of Waterloo, 2025-05-09) Li, Yiming

As global water scarcity and environmental pollution continue to escalate, innovative wastewater treatment technologies are needed to ensure sustainable water resource management. Conventional wastewater treatment methods, such as activated sludge processes, are energy-intensive, costly, and contribute significantly to greenhouse gas emissions. Microbial fuel cells (MFCs) present a promising alternative, harnessing electroactive bacteria to simultaneously degrade organic pollutants and generate electricity. By leveraging microbial metabolism, MFCs can convert chemical energy in wastewater into usable electrical energy, offering a dual benefit of pollution reduction and renewable energy production. This study focuses on developing a numerical simulation framework to optimize MFC performance, with an emphasis on real-world application at the Guelph Water Resource Recovery Centre (WRRC). A steady-state microbial fuel cell model was developed and validated using experimental data from previous studies. The model employs a finite difference method to solve mass balance equations for key reactants and products, including acetate, dissolved CO₂, protons, and oxygen. The simulation results highlight the influence of various operational parameters—such as substrate concentration, internal resistance, wastewater flow rate, and temperature—on the performance of a dual-chamber MFC. The study further compares MFC efficiency with conventional wastewater treatment processes, demonstrating a significantly higher chemical oxygen demand (COD) removal rate in MFCs (0.0633 kg COD/m³/day), which is approximately 4.7 times greater than that observed at the WRRC.
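The finite-difference approach mentioned above can be illustrated on the simplest relevant case: a steady-state mass balance for one species diffusing across a chamber with first-order consumption, D·c″ − k·c = 0, discretized with central differences and solved with the tridiagonal (Thomas) algorithm. The diffusivity, rate constant, boundary concentrations, and grid are hypothetical placeholders; the thesis's model couples several species and electrochemical terms.

```python
# Finite-difference sketch of a steady-state mass balance: D*c'' - k*c = 0
# with fixed boundary concentrations. All parameter values are illustrative.
def solve_steady_state(D, k, c_left, c_right, n=51, L=1.0):
    h = L / (n - 1)
    m = n - 2  # number of interior unknowns
    # Interior equation at node i: D*(c[i-1] - 2c[i] + c[i+1])/h^2 - k*c[i] = 0
    sub = [D / h**2] * m            # sub-diagonal
    diag = [-2 * D / h**2 - k] * m  # main diagonal
    sup = [D / h**2] * m            # super-diagonal
    rhs = [0.0] * m
    rhs[0] -= D / h**2 * c_left     # fold boundary values into the RHS
    rhs[-1] -= D / h**2 * c_right
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, m):
        f = sub[i] / diag[i - 1]
        diag[i] -= f * sup[i - 1]
        rhs[i] -= f * rhs[i - 1]
    x = [0.0] * m
    x[-1] = rhs[-1] / diag[-1]
    for i in range(m - 2, -1, -1):
        x[i] = (rhs[i] - sup[i] * x[i + 1]) / diag[i]
    return [c_left] + x + [c_right]

profile = solve_steady_state(D=1e-9, k=0.0, c_left=1.0, c_right=0.0)
# With k = 0 the exact solution is a straight line, so the midpoint is 0.5
print(round(profile[len(profile) // 2], 6))
```

Setting `k > 0` bends the profile toward zero in the interior, which is the qualitative effect of consumption terms in the MFC mass balances.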
The results emphasize the role of microbial activity and electrochemical interactions in optimizing power generation and pollutant degradation. Key limitations such as oxygen transport restrictions, internal resistance, and pH imbalances were identified, suggesting areas for improvement in MFC design. Numerical simulations were further extended to model full-scale integration within the WRRC, providing insights into the feasibility of MFC technology as an alternative treatment strategy. Despite challenges in large-scale deployment, MFCs show strong potential for reducing wastewater treatment energy demands and mitigating environmental impacts. This research contributes to the advancement of MFC applications in wastewater treatment by demonstrating the effectiveness of numerical modeling in predicting and optimizing system performance.

Item: Upscaling and downscaling snow processes with machine learning in watershed models (University of Waterloo, 2025-05-08) Burdett, Hannah

Hydrologic models play a vital role in understanding and predicting the movement of water within watersheds, providing essential insights for effective management and sustainability of water resources. However, watersheds exhibit significant heterogeneity in their landscape properties and complex responses to spatiotemporal variations in climatic inputs. This variability introduces a gap between the representation of physical processes at the point scale and their behaviour at the watershed scale, making it challenging to accurately capture the full complexity of the hydrologic cycle across different spatial scales. Bridging this gap requires the identification of effective scaling approaches tailored to capture the complexities across scales. Scaling approaches seek to translate information from one scale to another, whether moving from a smaller to a larger scale (upscaling) or from a larger to a smaller scale (downscaling).
Although various approaches in the literature have been applied to develop scaling methods for forcing variables, such as precipitation and temperature, and fluxes (e.g., evapotranspiration), there is a notable gap in deriving and applying scaling techniques for snow-related variables, such as snow water equivalent (SWE), snowmelt, or sublimation. Addressing this gap may help improve hydrologic model accuracy in snow-dominated regions, where snow dynamics significantly influence water availability and watershed resources. The primary objective of this thesis is to develop, implement, and evaluate machine learning-based upscaling methodologies to aid in understanding the relationship between local-scale snow-related variables, landscape heterogeneity, and the large-scale hydrologic response of a catchment. Such methods are useful for effectively simulating the net impact of local variability in snow processes without resorting to fine-resolution models. A secondary focus of this research aims to identify the conditions under which emergent constitutive relationships specific to snow-related fluxes are (or are not) valid and to assess the transferability of these relationships. Finally, this work introduces a machine learning-based downscaling approach that refines large-scale mean model outputs into localized snow states and fluxes. Together, these scaling techniques explore the potential of machine learning to address challenges in hydrologic scaling specific to snow-related fluxes.

Item: Real-Time Short-Term Intersection Turning Movement Flows Forecasting Using Deep Learning Models for Advanced Traffic Management and Information Systems (University of Waterloo, 2025-05-07) Zhang, Ce

Traffic congestion remains a persistent challenge in urban transportation systems, causing excessive travel delays, increased fuel consumption, and severe environmental pollution.
To address these issues, Advanced Traffic Management and Information Systems (ATMIS) have been developed, integrating real-time traffic monitoring, adaptive control strategies, and data-driven decision-making to enhance overall traffic efficiency. A crucial component of ATMIS is the real-time forecasting of intersection Turning Movement Flows (TMFs), which provides essential data for optimizing signal timings, improving vehicle routing, and implementing proactive congestion mitigation strategies. By leveraging accurate TMF predictions, transportation agencies can dynamically adjust traffic signals, enhance intersection operations, and reduce delays, ultimately improving urban mobility and minimizing environmental impacts. While numerous traffic forecasting models exist, they face significant limitations in capturing the complex spatial and temporal patterns inherent in intersection-level TMFs, as they primarily rely on historical traffic data without adequately modeling these dependencies. Moreover, most existing approaches fail to incorporate exogenous factors, such as weather conditions, road characteristics, and other time-dependent variables, which significantly influence traffic flow but are often ignored. These shortcomings lead to poor generalization performance when applied to hold-out intersections (few-shot) and unseen regions (zero-shot), making them less effective in real-world dynamic traffic environments. To overcome these challenges, this study systematically develops and evaluates a deep learning-based TMF forecasting framework designed for improved generalization and interpretability. First, we employ a Parallel Bidirectional LSTM (PB-LSTM) with a multilayer perceptron (MLP) to capture both long-term seasonality and spatial dependencies, thereby enhancing the model's transferability and improving performance on hold-out intersections.
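Generalization claims like the hold-out-intersection evaluation above boil down to scoring forecasts against counts never seen in training. A minimal sketch follows, with a seasonal-naive baseline standing in for the deep models and hypothetical 15-minute turning counts; neither the data nor the baseline comes from the thesis, but this is the kind of reference any forecasting framework is compared against.

```python
def mae(actual, predicted):
    """Mean absolute error between observed and forecast counts."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def seasonal_naive(history, season, horizon):
    """Forecast each future step with the value observed one season earlier."""
    return [history[-season + i % season] for i in range(horizon)]

# Hypothetical 15-min left-turn counts at a hold-out intersection:
# two "days" of 4 periods each, so the seasonal period is 4
history = [10, 30, 55, 20, 12, 28, 60, 18]
actual_next_day = [11, 31, 58, 19]

forecast = seasonal_naive(history, season=4, horizon=4)
print(forecast, round(mae(actual_next_day, forecast), 2))
```

A learned model is only worth deploying on unseen intersections if it beats such a baseline on exactly this kind of held-out score.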
Second, we integrate an encoder-decoder architecture using the Deep Autoregressive (DeepAR) model, which enables probabilistic forecasting and quantifies uncertainty, ensuring robust predictions under varying traffic conditions. Third, we leverage the Temporal Fusion Transformer (TFT) to assess the relative importance of external covariates, such as weather conditions and road characteristics, improving interpretability and model reliability by identifying speed zone, road category, hour of the day, and temperature as key influential factors. Finally, we explore the potential of TimesFM, a decoder-only model, to enhance zero-shot learning capabilities, demonstrating strong performance in previously unseen intersections and new city datasets, particularly when enhanced with EMD and RF. To evaluate model performance, we conduct a series of experiments, including hold-out intersection tests, cross-city generalization assessments, and evaluations under extreme weather conditions, to assess robustness and adaptability. Experimental results highlight the effectiveness of integrating exogenous factors and hybrid modeling approaches in improving real-time TMF forecasting accuracy, generalizability, and robustness under dynamic conditions. These insights provide valuable contributions to the development of scalable and interpretable deep learning models for intersection-level traffic flow prediction, supporting more adaptive and data-efficient traffic management strategies.

Item: Effect of Biofilm Formation on the Sorption of Per- and Polyfluoroalkyl Substances to Colloidal Activated Carbon (University of Waterloo, 2025-04-29) Moran, Erica Lynne

Per- and polyfluoroalkyl substances (PFAS) are a class of contaminants that have garnered increasing concern due to their widespread presence and harmful effects on humans and ecosystems.
PFAS enter the environment via many different pathways, with the release of PFAS-containing aqueous firefighting foams being a major source of groundwater contamination. Because PFAS are highly resistant to most chemical and biological degradation processes, they are currently removed from groundwater mainly by ex-situ adsorption, which is expensive and energy-intensive. Recently, activated carbon (AC) permeable reactive barriers (PRBs) have been proposed and used in-situ to limit the downgradient migration of PFAS by groundwater. AC PRBs are created by injecting powdered activated carbon (PAC) or colloidal activated carbon (CAC) into the subsurface to generate a stationary zone that removes PFAS by adsorption. As with any adsorption technology, however, PFAS breakthrough will occur once adsorptive sites in the barrier are exhausted. To improve our understanding of the ability of AC PRBs to adsorb PFAS, and of their longevity, there is a need for research that evaluates the adsorption of PFAS on AC and the factors affecting this process. The research reported in this thesis focused on one potential influencing factor, namely biofilm. Specifically, the objectives of this study were, first, to evaluate whether a biofilm can form on small (<5 µm) CAC particles and, second, to examine the impact that biofilm may have on the adsorption of PFAS to CAC. To address the first objective, the growth of Pseudomonas putida (P. putida), an aerobic bacterium, in the absence of particulate and in the presence of either CAC or fine silica was investigated. P. putida was selected because it has been shown to readily form a biofilm, is not infectious to humans, is commonly found in the environment, and has applications in the bioremediation of organic contaminants.
Analyses of the bacterial samples by confocal laser scanning microscopy (CLSM) indicated that the bacteria remained planktonic when no particulate was present but formed a biofilm consisting of cells and CAC or sand particles held together by extracellular polymeric substances (EPS). Over seven days of growth, the biofilm formed on CAC increased in thickness and decreased in roughness as it developed and formed more cohesive structures. Results suggest that P. putida is capable of forming a biofilm on CAC particles. Rather than the classical depiction of a biofilm adhered to a single surface, the P. putida biofilm formed on an aggregate of CAC particles, which were held together by EPS. To address the second objective, the adsorption of perfluorooctane sulfonate (PFOS, a hydrophobic PFAS) and perfluoropentane carboxylate (PFPeA, a hydrophilic PFAS) on virgin and biofilm-coated CAC was investigated. P. putida was grown in the presence of CAC, and either PFOS or PFPeA was added to the microcosms once a biofilm was formed. Because the adsorption of PFAS to CAC is known to be impacted by the presence of dissolved organic carbon (DOC), experiments were also conducted to determine the impact of broth (used for culture growth) concentration on the extent of PFAS sorption to CAC and the development of the biofilm. In the experiments without bacteria, the amount of PFOS adsorbed to CAC decreased as the concentration of broth was increased. The relationship between aqueous and sorbed PFOS could not be described by a linear, Freundlich, or Langmuir isotherm model, likely due to competitive sorption between the DOC present in the broth and PFOS. In the experiments with P. putida, it was observed that as the broth concentration increased, the biofilm became thicker and smoother, as the additional broth appeared to have aided biofilm development.
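The three isotherm models named above each relate the aqueous concentration C to the sorbed amount q per mass of carbon. The sketch below states the standard forms with hypothetical parameter values (Kd, Kf, n, qmax, KL are placeholders, not fitted values from the thesis); fitting these forms to batch data and finding none adequate is what led the study to invoke competitive sorption with DOC.

```python
# Standard isotherm forms relating aqueous concentration C (mg/L) to sorbed
# amount q (mg/g). All parameter values below are hypothetical placeholders.
def linear_isotherm(C, Kd):
    return Kd * C                      # q = Kd * C

def freundlich(C, Kf, n):
    return Kf * C ** (1.0 / n)         # q = Kf * C^(1/n)

def langmuir(C, qmax, KL):
    return qmax * KL * C / (1.0 + KL * C)  # q = qmax*KL*C / (1 + KL*C)

C = 0.5  # illustrative aqueous PFOS concentration, mg/L
print(round(linear_isotherm(C, Kd=40.0), 2),
      round(freundlich(C, Kf=30.0, n=2.0), 2),
      round(langmuir(C, qmax=60.0, KL=4.0), 2))
```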
Subsequent experiments, conducted with 3 mg/L broth and 80 mg/L broth (representing low and high DOC concentrations, respectively), revealed that the majority of PFOS sorption on virgin and biofilm-coated CAC occurred during the first three days, and that the biofilm resulted in a decrease in PFOS adsorption. This decreased adsorption is presumed to be due to the biofilm blocking sorption sites. For PFPeA, limited sorption occurred, and no significant difference was observed between the amounts adsorbed in the bacteria-free CAC and P. putida-containing CAC systems. The difference in sorption between PFOS and PFPeA was attributed to decreased hydrophobic interactions between CAC and the shorter fluorinated tail of PFPeA. The results of this study improve our understanding of how biofilm may impact CAC PRBs implemented for the management of PFAS. Biofilm can form on cell-sized particles and, as a result, may reduce the adsorption of long-chain compounds, such as PFOS. The effect of biofilm on the adsorption of short-chain compounds, such as PFPeA, may be less prominent than for PFOS, as the extent of sorption is comparatively limited. Further investigation is required to evaluate the impacts of biofilm on CAC sorption of other PFAS, the interactions of biofilm with other groundwater parameters, and the extent to which biofilm plays a role in the longevity of CAC PRBs in column- or field-scale studies.