Civil and Environmental Engineering
Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/9906
This is the collection for the University of Waterloo's Department of Civil and Environmental Engineering.
Research outputs are organized by type (e.g., Master Thesis, Article, Conference Paper).
Waterloo faculty, students, and staff can contact us or visit the UWSpace guide to learn more about depositing their research.
Browsing Civil and Environmental Engineering by Title
Now showing 1–20 of 909
Item: A Route-Choice Model for Predicting Pedestrian Behaviour and Violations (University of Waterloo, 2024-08-19). Lehmann Skelton, Christopher.
Pedestrians exhibit diverse behaviours, including crossing violations. Traditionally, development of behavioural models has been divided into route choice and crossing behaviour. Route choice models are stochastic and focused on crowd dynamics, while crossing behaviour models are probabilistic or deterministic and focused on local-level behaviours. Route choice and crossing behaviour are often addressed separately, but they are inherently related. This research proposes a new pedestrian simulation model in which pedestrians navigate through an intersection or mid-block environment, modelled as a grid. Each cell is assigned a cost that varies over time based on the presence of nearby vehicle traffic or changes to signal indications. Each pedestrian perceives the costs in the environment uniquely depending on their own personal preferences, such as desired crossing gap or comfort committing a violation, and seeks to minimize their total path cost. Pedestrians who are more comfortable committing violations perceive a lower cost for committing a violation. This approach integrates crossing behaviour with route choice and models the trade-offs of engaging in a particular behaviour. The proposed model is calibrated using video data. The model was applied to three case studies: a stop-controlled intersection, a mid-block crossing, and two crosswalks along the minor approach of a signalized intersection. The model simulates the trade-offs between walking on different surfaces, as well as the trade-off between waiting for a gap in traffic to cross versus diverting to the nearest designated crosswalk. In the third case study, the model successfully reproduced the proportion of pedestrians crossing against the signal for the north leg crosswalk but did not reproduce the proportion of violations for the south leg crosswalk, which is across a private access. Further investigation should be undertaken into the causes of this and other differences.
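The core mechanism described in this abstract, a grid of cell costs that each pedestrian perceives differently before taking the least-cost path, can be illustrated with a minimal sketch. This is not the thesis model: the grid, the costs, and the agent-specific violation penalty below are assumptions made purely for illustration.

```python
# A minimal sketch (not the thesis model): Dijkstra over a 4-connected grid
# where some cells carry an extra agent-specific "violation" cost.
import heapq

def least_cost_path(grid_cost, violation_cells, violation_penalty, start, goal):
    """Least-cost path; violation cells cost extra, scaled by an agent-specific
    penalty (lower penalty = more comfortable committing the violation)."""
    rows, cols = len(grid_cost), len(grid_cost[0])
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            return d
        if d > dist.get(cell, float("inf")):
            continue
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                step = grid_cost[nr][nc]
                if (nr, nc) in violation_cells:
                    step += violation_penalty  # perceived cost of violating
                if d + step < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = d + step
                    heapq.heappush(queue, (d + step, (nr, nc)))
    return float("inf")

# Row 1 is a road; the only legal crossing is at column 6.
grid = [[1] * 7 for _ in range(3)]
jaywalk = {(1, c) for c in range(6)}   # road cells crossed against the rules
print(least_cost_path(grid, jaywalk, violation_penalty=0.5, start=(0, 0), goal=(2, 0)))
print(least_cost_path(grid, jaywalk, violation_penalty=20.0, start=(0, 0), goal=(2, 0)))
```

With a low violation penalty the agent crosses the road cells directly (total cost 2.5); with a high penalty the same agent detours to the designated crossing (total cost 14.0), which is the trade-off the model is described as capturing.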
Item: A Stochastic Framework for Urban Flood Hazard Assessment: Integrating SWMM and HEC-RAS Models to Address Watershed and Climate Uncertainties (University of Waterloo, 2024-09-25). Abedin, Sayed Joinal Hossain.
Urbanization significantly alters natural hydrological processes, leading to increased flood risks in urban areas. The potential damages caused by flooding in urban areas are widely recognized, making it crucial for urban residents to be well-informed about flood risks to mitigate potential losses. Flood maps serve as essential tools in this regard, providing valuable information that aids in effective planning, risk assessment, and decision-making. Despite floods being the most common natural disasters in Canada, many Canadians still lack access to high-quality, up-to-date flood maps. The occurrence of recent major flood events across the country has sparked renewed interest among government officials and stakeholders in launching new flood mapping initiatives. These projects are critical for enhancing flood risk management across communities. Traditional flood hazard mapping methods, based on deterministic approaches, often fail to account for the complexities and uncertainties inherent in urban flood dynamics, especially under rapidly changing climate conditions. Uncertainty affects every stage of flood mapping, influencing accuracy and reliability. Recognizing this, recent studies advocate for stochastic approaches to explicitly incorporate these uncertainties. However, there is a lack of industry-standard tools that allow for a convenient and comprehensive analysis of uncertainty, making it challenging to routinely incorporate uncertainty into flood hazard assessments in practice. This underscores the need for a robust framework to model flood uncertainty. While various models have been proposed to address this uncertainty, many remain conceptual or lack the necessary automation. Although no model is "perfect", the Storm Water Management Model (SWMM) and the Hydrologic Engineering Center's River Analysis System (HEC-RAS) are widely used for urban hydrology and channel hydraulics modeling, respectively, due to their robust physics-based approaches. Both SWMM and HEC-RAS have been enhanced with commercial and open-source extensions, built on algorithms written in various programming languages, to improve their utility, particularly for automating workflows to handle complex urban flood scenarios. While SWMM has more robust extensions, most available HEC-RAS extensions are designed for one-dimensional (1D) steady-state models, which lack the complexity needed for accurate urban flood modeling. The release of HEC-RAS 6.0, which allows for two-dimensional (2D) unsteady flow modeling and incorporates structures like bridges and weirs, marks a significant advancement for urban flood modeling. The current research was motivated by the perceived benefits of designing such extensions for automating workflows in recent versions of SWMM and HEC-RAS, as well as automating the coupling of these two models in a stochastic framework to facilitate the integration of uncertainty into existing flood hazard mapping workflows. This thesis introduces the SWMM-RASpy framework, a novel automated stochastic tool built using the open-source Python programming language. SWMM-RASpy integrates SWMM's detailed urban hydrologic capabilities, such as dual-drainage modeling, with HEC-RAS's 2D unsteady hydraulic modeling, coupled with stochastic simulations through Latin Hypercube Sampling (LHS) to analyze the uncertainty in flood hazard mapping. The framework was demonstrated on the Cooksville Creek watershed, a highly urbanized area in Mississauga, Ontario, known for its susceptibility to flooding. An entropy map was successfully produced for the case study site, which better reflects the uncertainty of flooding and could be used to develop tailored flood planning and preparedness strategies for different zones within the site. This thesis also presents a detailed application of the SWMM-RASpy framework to assess flood hazards, with a specific focus on topography-based hydraulic uncertainties in the watershed, particularly surface roughness variability, which affects pedestrian safety during flood events. The study highlights that traditional hazard models, which focus mainly on residential buildings, do not adequately account for the risks to pedestrians, who account for a significant share of fatalities in flood events, especially in densely populated urban areas with high mobility. Three flood hazard metrics were developed and used to evaluate the flood risks to pedestrians given the uncertainty surrounding surface roughness: FHM1, based on inundation depth; FHM2, combining depth and velocity; and FHM3, incorporating depth, velocity, and duration.
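The abstract names the three metrics but not their functional forms, so the sketch below uses assumed forms (a depth measure, a depth-velocity product, and a duration-weighted variant) purely to illustrate how such metrics would be evaluated per grid cell across stochastic realizations.

```python
# Illustrative only: the functional forms and the duration scale below are
# assumptions, not the thesis definitions of FHM1/FHM2/FHM3.
import numpy as np

def fhm1(depth_m):
    """Depth-only hazard (m)."""
    return depth_m

def fhm2(depth_m, velocity_ms):
    """Depth-velocity hazard (m^2/s), a common flood-intensity product."""
    return depth_m * velocity_ms

def fhm3(depth_m, velocity_ms, duration_h, duration_scale_h=24.0):
    """Depth-velocity hazard weighted up by flooding duration (assumed form)."""
    return depth_m * velocity_ms * (1.0 + duration_h / duration_scale_h)

# One grid cell under three stochastic realizations (e.g., roughness samples):
depth = np.array([0.3, 0.5, 0.9])   # m
vel = np.array([0.4, 0.8, 1.5])     # m/s
dur = np.array([2.0, 6.0, 12.0])    # h
print(fhm1(depth), fhm2(depth, vel), fhm3(depth, vel, dur))
```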
Key findings from the assessment indicate that surface roughness significantly affects pedestrian hazard estimation across the floodplain, making it a critical factor in flood hazard management. The FHM2 metric, which incorporates depth and velocity, was found to be highly sensitive to roughness variation, potentially leading to the misclassification of hazardous zones as safe and vice versa. The inclusion of velocity in the hazard assessment, while improving accuracy, also increased variability, emphasizing the need for a balanced approach in flood risk evaluations. In contrast, the FHM3 metric, which includes flooding duration, showed minimal sensitivity to surface roughness uncertainty. The research also suggests that confidence maps, produced as part of the analysis and accounting for estimated uncertainties surrounding the hazard metrics propagated from surface roughness, can offer a more reliable alternative to traditional deterministic hazard maps. Lastly, the analysis emphasizes the importance of combining grid-level and zonal-level analyses for a more comprehensive understanding of flood hazards at different scales, thereby supporting more robust flood risk assessments. This thesis extends the application of the SWMM-RASpy framework to assess the impacts of climate change on flood hazards within the Cooksville Creek watershed. It examines how projected changes in rainfall intensity from Global Climate Models (GCMs) affect flood risks, particularly for urban buildings, as well as the importance of incorporating uncertainties from these projections into flood hazard assessments. The same hazard metrics used for pedestrian hazard assessment, FHM1, FHM2, and FHM3, were used to evaluate building hazards. The study predicts a significant increase in flood hazards within the watershed, with a substantial expansion of inundation areas affecting up to 40% more buildings when uncertainties are considered. The analysis shows that without considering uncertainties, FHM1 and FHM3 predict a higher number of damaged buildings than FHM2, with FHM1 predicting the highest number of affected buildings. This suggests that relying solely on FHM1 to estimate building hazards may be sufficient in similar climate change scenarios, although further investigations are needed. However, when uncertainties are included, FHM2 shows a greater increase in the number of buildings at risk compared to FHM1 and FHM3, due to the larger uncertainty associated with velocity versus depth and duration. This underscores the need to incorporate uncertainty into flood hazard assessments to ensure a more comprehensive understanding of potential future damages. Overall, this study has made significant contributions to the field of urban flood hazard assessment by developing a robust method for incorporating and analyzing uncertainties, thereby supporting more effective flood management and resilience planning. Future research should apply the SWMM-RASpy framework to other watersheds and investigate additional hydrologic and hydraulic variables to further improve flood risk assessments.
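The stochastic engine named in this abstract is Latin Hypercube Sampling. A minimal sketch of that sampling step, using SciPy's qmc module, follows; the two uncertain parameters (Manning's roughness and a rainfall-intensity multiplier) and their bounds are illustrative assumptions, not values from the thesis.

```python
# A minimal LHS sketch with scipy.stats.qmc; parameters and bounds are assumed.
from scipy.stats import qmc

sampler = qmc.LatinHypercubeSampler = qmc.LatinHypercube(d=2, seed=42)
unit_samples = sampler.random(n=100)             # 100 stratified points in [0,1)^2
lower, upper = [0.02, 0.15], [0.9, 1.4]
lower, upper = [0.02, 0.9], [0.15, 1.4]          # bounds: roughness n, rain factor
params = qmc.scale(unit_samples, lower, upper)   # columns: Manning's n, rain factor

for n_manning, rain_factor in params[:3]:
    # Each row would parameterize one coupled SWMM / HEC-RAS run in such a
    # framework; here we just show the sampled values.
    print(f"n = {n_manning:.3f}, rainfall multiplier = {rain_factor:.2f}")
```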
Item: AADT Estimation Models and Analytical Comparison of Pedestrian Safety Risk Evaluation Methods for Signalized Intersections (University of Waterloo, 2021-01-13). Xaykongsa, Alan.
Pedestrian road safety is a priority for Canadian municipalities due to the particularly large social and economic impact that pedestrian-vehicle collisions can have on society. There is a constant need to improve roads to make them safer for all users, but especially for vulnerable road users such as pedestrians. Despite the existence of several methods to prioritize locations for improvements in pedestrian safety, there is no consensus on which method should be used. In this thesis, several methods were identified which could be used to prioritize sites for pedestrian safety improvement (i.e., network screening), specifically signalized intersections, using their geometric, operational, and land-use characteristics. Three methods were selected for further investigation: the NCHRP ActiveTrans Priority Tool (APT), the FHWA Pedestrian Intersection Safety Index (Ped ISI), and the ODOT Pedestrian Intersection Risk Score. Traffic volume data in the form of annual average daily traffic (AADT) are required as input for these methods. Given that AADT values are frequently not available for all intersections, another objective of this thesis was to develop a set of multiple linear regression models for the AADT of signalized intersection legs. Site data for both safety method application and AADT estimation modelling were collected for 438 Niagara Region signalized intersections from site imagery, GIS, and other online sources. Using existing AADT data as the dependent variable, six multiple linear regression models were developed. Each model is structured to be applied when different geometric, operational, and land-use characteristics are available as inputs. As one might expect, the models with the highest predictive power were those that also required the greatest amount of knowledge about existing conditions. Nevertheless, all models were shown to be statistically significant and provided reasonably strong to very strong predictive power. The AADT estimation models were used to estimate AADT for intersections for which observed AADT was not available. The pedestrian safety risk evaluation methods were then each applied to rank the 438 signalized intersections in Niagara Region. The potential for safety improvement (PSI) method was also applied to obtain a ranking based on collision frequency. Overall, the rankings were found to be very different between methods, as demonstrated by high measures of rank error (relative rank error weighted averages ranged from a low of 34% to more than 80%). This revealed a substantial challenge for practitioners, as they would be faced with substantially different and inconsistent site prioritization results depending on which ranking method was chosen, despite all methods aiming to provide measures of pedestrian risk. To explain the differences in ranking between methods, the contribution of each method's input variables and their correlations were examined. A combination of differences in the inclusion of input variables, their influence on the levels of risk for sites within a given method, and poor correlation among surrogate variables was suggested as the explanation for the lack of similarity between rankings of different methods. The question of which method should be applied could not be answered. Several considerations for choosing a method were discussed. Further research into development of a robust pedestrian safety evaluation method was recommended.
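A minimal sketch of fitting one such multiple linear regression AADT model is shown below. The predictors (lane count, posted speed, nearby population density) and the synthetic data are illustrative stand-ins; the thesis models use geometric, operational, and land-use characteristics of the Niagara Region intersections.

```python
# A minimal OLS sketch of an AADT estimation model; data and predictors assumed.
import numpy as np

rng = np.random.default_rng(0)
n = 438  # number of signalized intersections in the study
lanes = rng.integers(2, 7, size=n)
speed = rng.choice([40, 50, 60, 70], size=n)
density = rng.uniform(500, 5000, size=n)
aadt = 2500 * lanes + 80 * speed + 1.5 * density + rng.normal(0, 2000, size=n)

X = np.column_stack([np.ones(n), lanes, speed, density])  # design matrix
beta, *_ = np.linalg.lstsq(X, aadt, rcond=None)           # OLS coefficients
pred = X @ beta
r2 = 1 - np.sum((aadt - pred) ** 2) / np.sum((aadt - aadt.mean()) ** 2)
print("coefficients:", np.round(beta, 2), " R^2 =", round(r2, 3))
```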
Item: Acceptance Criteria for Ultrasonic Impact Treatment of Highway Steel Bridges (University of Waterloo, 2012-08-30). Tehrani Yekta, Rana.
The need for rehabilitation of bridges has become a critical challenge due to aging and an increase in traffic loads. Many of these bridges are exceeding their design fatigue lives, and those that are structurally deficient need to be rehabilitated or replaced. The parts of steel bridges most susceptible to fatigue cracking are the welds, due to high stress concentrations, tensile residual stresses, and imperfections resulting from the welding process. Inspection and repair of welds are difficult, and elimination of welded details is not possible in steel bridge construction. Ultrasonic impact treatment (UIT) is a promising and innovative post-weld treatment (PWT) method for improving the fatigue performance of existing welded steel and steel-concrete composite structures such as highway bridges. The fatigue resistance of treated joints is enhanced by improving the geometry of the weld toe and introducing compressive residual stresses. However, a lack of tools for quality assurance has slowed UIT's adoption by bridge authorities. The current study was undertaken to examine the fatigue performance of structural steel welds subjected to UIT at various levels, including intentional under-treatment and over-treatment, and to relate the fatigue performance of the treated welds to geometric and metallurgical properties measured to control the treatment quality. The last objective was to use the laboratory results to develop acceptance criteria for the quality control of UIT in bridge applications. Fatigue tests of non-load-carrying fillet welded attachments were conducted on properly treated, under-treated, and over-treated weld toes. Statistical analyses of the fatigue life data were performed and crack growth was monitored using the alternating current potential drop (ACPD) method. Measurement of local properties (such as weld toe geometry, local hardness, and residual stresses) and examination of the weld toe microstructure were also performed on the untreated and treated welds. The effects of weld toe geometry on the local stresses in the untreated and treated welds were also investigated using elastic finite element analysis (FEA) to obtain the stress concentration factor (SCF) for the different treatment cases and to examine the changes in the SCF for the different weld toe geometries. Based on the statistical analysis performed in this research, the results illustrated that UIT significantly improved the fatigue lives of weld details regardless of the investigated level of treatment quality. The fatigue lives of welded details under constant amplitude (CA) loading and constant amplitude loading with under-loads (CA-UL) were increased by up to 30 and 27 times, respectively. On average, the fatigue life of the treated weld details was slightly lower under CA-UL than under CA loading. Treatment quality had little impact on the mean of the S-N curves. However, it did impact the design (95% survival probability) S-N curves, with the curve associated with proper treatment slightly higher than the curves for poor or unknown treatment quality. Local near-surface microhardness and compressive residual stresses were greatest for the over-treated welded details, followed by the properly treated and then the under-treated welded details. Increasing the treatment speed resulted in a greater reduction in the surface microhardness and compressive residual stresses than decreasing the treatment intensity. Finite element analyses showed that changes in weld toe geometry due to UIT can cause a decrease in the SCF near the surface of the treated weld toe.
The SCF was the lowest for the properly treated steel specimens and slightly higher for the under-treated specimens. For the over-treated specimens, the SCFs were nearly as high as for the untreated weld. The SCF increased as the flange thickness increased up to 19 mm; with a further increase to 38 mm, the SCF did not change substantially. The work presented herein demonstrated that indent depth measurements from the base metal side, commonly used for quality control, may not identify over-treatment on their own. Indent depth measurements from both the weld and the base metal sides, obtained by measurement of weld toe impressions, offer a good alternative means for identifying over-treatment. However, for identifying under-treatment, indent depth measurements should be used in conjunction with visual inspection for traces of the original weld toe.

Item: Achieving contrapuntal balance at a lake outlet: Restoring salmon spawning habitat with gravel augmentation design (University of Waterloo, 2024-04-09). Iun, Megan.
The degradation of gravel bed rivers in logging and climate-impacted watersheds has led to an increase in salmonid habitat conservation and restoration efforts. Gravel augmentation is one such technique that seeks to restore altered aquatic habitat and sediment transport dynamics in sediment-starved streams by placing gravel pads in the river. The pads act as a sediment source that can be reworked by flow or fauna to suit their ecological needs, thereby allowing the river to "heal" itself without imposing static structural designs on a natural environment. Common points of failure for these projects include: (1) the immediate scour of the gravel pad, which can wipe out buried eggs; (2) the infilling of the gravel interstices with fine particles, which can limit oxygenated hyporheic flow and choke the buried eggs; and (3) low utilization by the target species due to unsuitable hydraulic conditions for spawning. The risk of the first two hazards can be limited by designing the gravel placements downstream of lakes, as the upstream lake can buffer peak flows to its outlet stream and trap fine sediment. Much like composing counterpoint in music, a delicate balance is needed among multiple engineering criteria to identify the optimal area along the accelerating channel length where most criteria are fulfilled. Despite this understanding and the common use of lake outlets for these projects in industry, there are few design guidelines tailored for these environments. This project evaluates the design criteria of a gravel augmentation project for salmon spawning habitat restoration at a lake outlet. A calibrated two-dimensional (2D) hydrodynamic model was developed using TELEMAC-2D. The sediment transport predictions from the model were verified by tracking tracer stones equipped with Radio Frequency Identification (RFID) tags after one year. The model results were used to identify optimal placement areas. The 2D model was found to adequately capture the measured depth-averaged velocities when set with appropriate boundary conditions and calibrated with the roughness coefficient. However, the roughness-dependent shear velocity calculation in the model formulation results in a direct dependency of the model outputs on the sole calibration parameter. This relationship is highly sensitive and creates modeling artefacts when using roughness zones to calibrate the model.
The tracer results indicate that the model appears to overpredict sediment transport due to the limitations of the deterministic approach to assessing grain mobility, which is inherently a stochastic process. Although there is likely error associated with incorrect assumptions about critical mobility thresholds and representative grain sizes, the relative lack of tracer mobility during the study year limits the current ability to revise these assumptions. Nevertheless, the results show that the application of 2D hydrodynamic models to sediment transport predictions should be approached with caution and should be accompanied by field validation to ensure confidence in the model conclusions. Using the model as is, the optimization analysis results suggest that designing to prioritize longevity may require compromising the other design criteria. Conversely, optimizing the design to minimize the risk of fines infilling severely limits the available placement areas. Prioritizing stability may decrease the likelihood of immediate utilization but may be required due to the uncertain availability of funding for re-injecting gravel after the initial construction.

Item: Acoustic Monitoring for Leaks in Water Distribution Networks (University of Waterloo, 2020-04-21). Cody, Roya.
Water distribution networks (WDNs) are complex systems that are subjected to stresses due to a number of hydraulic and environmental loads. Small leaks can run continuously for extended periods, sometimes indefinitely, undetected due to their minimal impact on the global system characteristics. As a result, system leaks remain an unavoidable reality, and water loss estimates range from 10% to 25% between treatment and delivery. This is a significant economic loss due to non-revenue water and a waste of a valuable natural resource. Leaks produce perceptible changes in the sound and vibration fields in their vicinity, and this aspect has been exploited in various techniques to detect leaks today. For example, vibrations of the pipe wall in metal pipes and acoustic energy in the vicinity of the leak have both been exploited to develop inspection tools. However, most techniques in use today suffer from the following: (i) they are primarily inspection techniques (not monitoring) and often involve an expert user to interpret inspection data; (ii) they employ intrusive procedures to gain access into the WDN; and (iii) their algorithms remain closed, and publicly available blind benchmark tests have shown that the detection rates are quite low. The main objective of this thesis is to address each of the aforementioned three problems existing in current methods. First, a technology conducive to long-term monitoring will be developed, which can be deployed year-round in a live WDN. Secondly, this technology will be developed around existing access locations in a WDN, specifically fire hydrant locations. To make this technology suitable for operation in cold climates such as Canada's, it will be deployed from dry-barrel hydrants. Finally, the technology will be tested with a range of powerful machine learning algorithms, some new and some well-proven, and results published in the open scientific literature. In terms of the technology itself, unlike a majority of technologies that rely on accelerometer or pressure data, this technology relies on the measurement of the acoustic (sound) field within the water column. The problem of leak detection and localization is addressed through a technique called linear prediction (LP).
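For readers unfamiliar with LP, a minimal sketch follows: fit an order-p all-pole model to an acoustic frame by solving the autocorrelation normal equations. The synthetic "hydrophone" signal, sample rate, and model order are illustrative, not the thesis configuration.

```python
# A minimal linear prediction (LP) sketch; signal and parameters are assumed.
import numpy as np
from scipy.linalg import solve_toeplitz

def lp_coefficients(frame, order):
    """Return LP coefficients a[1..p] minimizing the forward prediction error."""
    x = frame - frame.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:]   # autocorrelation r[0..]
    return solve_toeplitz((r[:order], r[:order]), r[1:order + 1])

fs = 8000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 450 * t) + 0.3 * rng.normal(size=fs)  # tone + noise

a = lp_coefficients(signal, order=8)
# The prediction residual shrinks when the model captures the spectrum:
pred = np.convolve(signal, np.concatenate(([0.0], a)))[:len(signal)]
print("residual/signal energy:", np.sum((signal - pred) ** 2) / np.sum(signal ** 2))
```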
Extensively used in speech processing, LP is shown in this work to be effective in capturing the composite spectrum effects of radiation, pipe system, and leak-induced excitation of the pipe system, with and without leaks, and thus has the potential to be an effective tool to detect leaks. The relatively simple mathematical formulation of LP lends itself well to online implementation in long-term monitoring applications and hence motivates an in-depth investigation. For comparison purposes, model-free methods, including a powerful signal processing technique and a technique from machine learning, are employed. In terms of leak detection, three data-driven anomaly detection approaches are employed, and the LP method is explored for leak localization as well. Tests were conducted on several laboratory test beds with increasing levels of complexity, and in a live WDN in the city of Guelph, Ontario, Canada. Results from this study show that the LP method developed in this thesis provides a unified framework for both leak detection and localization when used in conjunction with semi-supervised anomaly detection algorithms. A novel two-part localization approach is developed which utilizes LP pre-processed data in tandem with the traditional cross-correlation approach. Results of the field study show that the presented method is able to perform both leak detection and localization using relatively short time signal lengths. This is advantageous in continuous monitoring situations as it minimizes the data transmission requirements, the latter being one of the main impediments to full-scale implementation and deployment of leak-detection technology.
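The traditional cross-correlation step that the two-part approach builds on can be sketched as follows: estimate the arrival-time difference of the leak noise at two sensors, then convert it to a position along the pipe. The sensor spacing, wave speed, and synthetic signals below are illustrative assumptions.

```python
# A minimal cross-correlation localization sketch; all values are assumed.
import numpy as np

fs = 10_000                  # sample rate (Hz)
c = 450.0                    # acoustic wave speed in the pipe (m/s), assumed
L = 100.0                    # distance between the two sensors (m)
d1 = 30.0                    # true leak distance from sensor 1 (m)

rng = np.random.default_rng(2)
leak = rng.normal(size=fs)   # broadband leak noise
lag = int(round((L - 2 * d1) / c * fs))       # arrival-time difference (samples)
s1 = leak + 0.2 * rng.normal(size=fs)
s2 = np.roll(leak, lag) + 0.2 * rng.normal(size=fs)

xcorr = np.correlate(s2 - s2.mean(), s1 - s1.mean(), mode="full")
dt = (np.argmax(xcorr) - (fs - 1)) / fs       # estimated delay (s)
d1_hat = (L - c * dt) / 2                     # standard two-sensor formula
print(f"estimated leak position: {d1_hat:.1f} m from sensor 1 (true {d1} m)")
```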
Item: Acquiring Multimodal Disaggregate Travel Behavior Data Using Smart Phones (University of Waterloo, 2013-01-25). Taghipour Dizaji, Roshanak.
Despite the significant advances that have been made in traffic sensor technologies, there are only a few systems that provide measurements at the trip level and fewer yet that can do so for all travel modes. On the other hand, traditional methods of collecting individual travel behavior data (i.e., manual or web-based travel diaries) are resource intensive and prone to a wide range of errors. Moreover, although dedicated GPS loggers provide the ability to collect detailed travel behavior data with less effort, their use still faces several challenges, including the need to distribute and retrieve the logger; the potential need to have the survey participants upload data from the logger to a server; and the need for survey participants to carry another device with them on all their trips. The widespread adoption of smart phones provides an opportunity to acquire travel behavior data from individuals without the need for participants to record trips in a travel diary or to carry dedicated recording devices with them on their travels. The collected travel data can then be used by municipalities and regions for forecasting travel demand or for analyzing the travel behavior of individuals. In the current research, a smartphone-based travel behavior surveying system is designed, developed, and pilot tested. The custom software written for this study is capable of recording the travel characteristics of individuals over the course of any period of time (e.g., days or weeks) and across all travel modes. In this system, a custom application on the smart phone records the GPS data (using the onboard GPS unit) at a prescribed frequency and then automatically transmits the data to a dedicated server. On the server, the data are stored in a dedicated database and then processed using trip characteristic inference algorithms. The main challenge with the implemented system is the need to reduce the amount of energy consumed by the device to calculate and transmit the GPS fixes. In order to reduce the power consumption of the travel behavior data acquisition software, several techniques are proposed in the current study. Finally, in order to evaluate the performance of the developed system, first the accuracy of the position information obtained from the data acquisition software is analyzed, and then the impact of the proposed methods for reducing the battery consumption is examined. In conclusion, the results of the implemented system show that collecting individual travel behavior data through the use of GPS-enabled smart phones is technically feasible and would address most of the limitations associated with other survey techniques. According to the results, the accuracy of the GPS positions and speeds collected through the implemented system is comparable to GPS loggers. Moreover, the proposed battery reduction techniques are able to reduce the battery consumption rate from 13.3% per hour to 5.75% per hour (i.e., a 57% reduction) when the trip maker is non-stationary and from 5.75% per hour to 1.41% per hour (i.e., a 75.5% reduction) when the trip maker is stationary.

Item: Active Acoustics for Monitoring Unreported Leaks in Water Distribution Networks (University of Waterloo, 2021-09-08). Kafle, Marshal Deep.
Water distribution networks (WDNs) are critical infrastructure elements conveying water through thousands of kilometers of pipes. Pipes, one of the most critical elements in such systems, are subjected to various structural and environmental degradation mechanisms, eventually leading to leaks and breaks. Timely detection and localization of such leaks and bursts are crucial to managing the loss of this valuable resource, maintaining hydraulic capacity, and mitigating serious health risks which can potentially arise from such events. Much of the literature on leak detection has focused on passive methods: recording and analyzing acoustic signatures produced by leaks using passive piezo-acoustic or pressure devices. Passive acoustic methods have received disproportionate attention both in terms of research and practical implementation for leak (or burst) detection and localization. Despite their popularity, passive methods have been shown not to be reliable in detecting and localizing small leaks in full-scale systems, primarily due to acoustic signal attenuation and poor signal-to-noise ratios, especially in plastic materials. In this dissertation, an active method is explored, which uses an acoustic source to generate acoustic signatures inside a pipe network. A combination of an active source and hydrophone receivers is demonstrated in this thesis as a viable method for monitoring leaks in water distribution pipes. The dissertation presents experimental results from two layouts of pipes, one a simple straight section and another a more complex network with tees and bends, with an acoustic source at one end and hydrophones at strategic locations along the pipe. For leak detection, the reflected and transmitted energy measured using hydrophone receivers is used to determine the presence of a leak. To this effect, new leak indicators such as power reflection and transmission coefficients, power spectral density, reflected spectral density, and transmission loss are developed.
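One of the indicator families named above can be sketched by comparing Welch power spectral density estimates of the source and transmitted signals to form a transmission-loss estimate. The signals are synthetic and the 10 dB attenuation is an illustrative assumption.

```python
# A minimal PSD-based transmission-loss sketch; signals and band are assumed.
import numpy as np
from scipy.signal import welch

fs = 20_000
rng = np.random.default_rng(3)
source = rng.normal(size=fs)                 # active acoustic source signal
transmitted = 10 ** (-10 / 20) * source + 0.05 * rng.normal(size=fs)

f, psd_in = welch(source, fs=fs, nperseg=2048)
_, psd_out = welch(transmitted, fs=fs, nperseg=2048)

transmission_loss_db = 10 * np.log10(psd_in / psd_out)
band = (f > 500) & (f < 5000)                # assumed band of interest
print(f"mean transmission loss in band: {transmission_loss_db[band].mean():.1f} dB")
```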
Experimental results show that the method developed in this thesis can detect leaks robustly and has significant potential for use in pressurized water distribution systems. This thesis also presents a new framework for active method-based localization. Starting with a simple straight section for a proof-of-concept study and moving to lab-based WDNs, several methods are explored that simultaneously detect and locate a leak. The primary difficulty in detecting and estimating the location of a leak is overcome through a statistical treatment of time delays associated with multiple acoustic paths in a reverberant environment, estimated using two approaches: (i) a classical signal decomposition technique (Prony's / matrix pencil method (MPM)), and (ii) a clustering pre-processing approach called mean-shift clustering. The former works on the cross-correlation of acoustic data recorded at two locations, while the latter operates on acoustic sensor data from a single location. Both methods are tested and validated using experimental data obtained from a laboratory testbed and are found to detect and localize leaks in plastic pipes effectively. Finally, time delay estimates obtained from Prony's / MPM are used in conjunction with the multilateration (MLAT) technique and an extended Kalman filter (EKF) for localization in more complex WDNs. This study shows that the proposed active technique can detect and reliably localize leaks and has the potential to be applied to complex field-scale WDNs.

Item: Active Retrofit of Shear Critical RC Components Using Self-Prestressing Iron-Based Shape Memory Alloys (University of Waterloo, 2023-09-26). González Góez, Miguel.
With growing traffic demands and structural degradation accelerated by climate change, there is a critical need for continued advancement of concrete repair and strengthening technologies to enable extended bridge service life. Specifically, transitioning to cost-effective reinforced concrete (RC) retrofit strategies that further enhance durability under service loading conditions and minimize damage development under extreme hazard conditions, which are more probable to occur in longer-lasting concrete structures, is a key element of next-generation concrete bridges. This thesis explores the use of iron-based shape memory alloys (Fe-SMAs) as an active shear retrofit strategy for RC components. SMAs are a class of smart materials with the unique property of the shape memory effect, allowing them to fully recover plastic deformations when subsequently heated. By installing pre-strained Fe-SMA strips and activating the shape memory effect, an active pressure can be introduced to help close cracks and apply a confining stress in the concrete. The primary objective is to evaluate the performance and practicality of Fe-SMA as an active shear strengthening technique in comparison to traditional methods such as external steel reinforcement and fibre-reinforced polymer (FRP) composites. The experimental phase of this study involved conducting push-off tests on RC specimens retrofitted with pre-deformed Fe-SMA strips. The goal was to assess the efficiency of active shear retrofitting. Additionally, finite element analysis (FEA) simulations were employed to model Fe-SMA-retrofitted RC structures and investigate their behaviour under shear loading conditions.
Key findings indicate that Fe-SMA retrofits, through transverse prestressing as part of the active retrofit strategy, contributed to notable improvements in the stiffness and strength of damaged RC components. Similar to passive retrofit methods, increased shear capacity was observed with higher levels of transverse reinforcement. Notably, combinations of substantial shear reinforcement ratios and elevated transverse prestressing provided the most significant gains in shear strength. Furthermore, the addition of prestressed Fe-SMA retrofits was found to effectively reduce shear crack widths and mitigate the progression of subsequent shear crack width growth. This study not only demonstrates the potential of Fe-SMA as a promising solution for active shear strengthening but also contributes to the development of future field-scale tests. The presence of Fe-SMA in damaged structures offers the prospect of multiple improvements, marking a significant advancement in the field of shear retrofitting.

Item: Activity Analysis for Continuous Productivity Improvement in Construction (University of Waterloo, 2010-08-30). Gouett, Michael C.
In the construction industry, onsite labour is one of the most variable and costly factors affecting project profits. Due to the variable nature of construction labour and its correlation with profits, construction managers require a comprehensive understanding of the activities of workers onsite. For project success, it is important that workers spend the majority of their time installing materials which advance the project. This material installation time is known in the construction industry as "direct-work" or "tool time". Site management should continuously seek to improve the direct-work rate through the life of the project. A review of the literature indicates that no existing workface assessment method provides: (1) a detailed description of worker activities, and (2) a continuous productivity improvement process to help management identify productivity inhibitors affecting site labour, develop a plan to reduce or eliminate these issues, and measure improvements as a result of these changes. In response to this need, this research has focused on the development of a workface assessment method called activity analysis. Activity analysis is a continuous productivity improvement process which efficiently measures the time expenditure of workers onsite and identifies productivity inhibitors that management must reduce or eliminate to provide workers with more time for direct-work activities. Six case studies were conducted to verify the feasibility of the activity analysis process. Further, cyclical data from two major construction firms were collected and statistically analyzed to validate the hypothesis that activity analysis can improve direct-work rates. It has been concluded that activity analysis, as a continuous productivity improvement process, is feasible and, when continually applied to a construction site, can significantly improve direct-work rates through the life of a project.
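The measurement at the heart of activity analysis is a work-sampling proportion: the share of observed worker time spent on direct work. A minimal sketch of that arithmetic, with a normal-approximation confidence interval, follows; the observation counts are illustrative, not thesis data.

```python
# A minimal work-sampling sketch; the counts below are assumed for illustration.
import math

def direct_work_rate(direct_obs, total_obs, z=1.96):
    """Point estimate and 95% CI for the proportion of time on direct work."""
    p = direct_obs / total_obs
    half_width = z * math.sqrt(p * (1 - p) / total_obs)
    return p, (p - half_width, p + half_width)

p, ci = direct_work_rate(direct_obs=412, total_obs=1000)
print(f"direct-work rate: {p:.1%}, 95% CI: ({ci[0]:.1%}, {ci[1]:.1%})")
```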
Item: Activity-Based Data Fusion for the Automated Progress Tracking of Construction Projects (University of Waterloo, 2012-03-09). Shahi, Arash.
In recent years, many researchers have investigated automated progress tracking for construction projects. These efforts range from 2D photo-feature extraction to 3D laser scanners and radio frequency identification (RFID) tags. A multi-sensor data fusion model that utilizes multiple sources of information would provide a better alternative than a single-source model for tracking project progress. However, many existing fusion models are based on data fusion at the sensor and object levels and are therefore incapable of capturing critical information regarding a number of activities and processes on a construction site, particularly those related to non-structural trades such as welding, inspection, and installation activities. In this research, a workflow-based data fusion framework is developed for construction progress, quality, and productivity assessment. The developed model is based on tracking construction activities as well as objects, in contrast to the existing sensor-based models that are focussed on tracking objects. Data sources include high-frequency automated technologies, including 3D imaging and ultra-wideband (UWB) positioning. Foreman reports, schedule information, and other data sources are included as well. Data fusion and management process workflow implementation via a distributed computing network and archiving using a cloud-based architecture are both illustrated. Validation was achieved using a detailed laboratory experimental program as well as an extensive field implementation project. The field implementation was conducted using five months of data acquired on the University of Waterloo Engineering VI construction project, yielding promising results. The data fusion processes of this research provide more accurate and more reliable progress and earned value estimates for construction project activities, while the developed data management processes enable the secure sharing and management of construction research data with construction industry stakeholders as well as with researchers from other institutions.

Item: Adaptable Three Dimensional System for Building Inspection Management (University of Waterloo, 2012-08-30). Abou Shaar, Belal.
Sustaining the safety and operability of civil infrastructure assets, including buildings, is a complex undertaking that requires a perpetual cycle of inspection and subsequent decisions on renewal fund allocation. However, inspection, which is the basis for all subsequent decisions, is a complex task to manage, particularly when a large number of assets are involved. The current lack of a structured process with visual referencing, as well as the high subjectivity and inflexibility to changing inspection requirements, makes current inspections very costly and time-consuming. This research improves the building inspection process by introducing a 3D system for inspection management that has four unique features: (1) a structured assessment approach that considers multiple organizations, buildings, and inspectors, using a GIS interface; (2) a 3D visual referencing method for marking problem areas during inspections to facilitate all on-site inspections, thus reducing time and cost; (3) a visual guidance module to reduce inspection subjectivity; and (4) a flexible module for designing different assessment types. The proposed inspection management system creates 3D building plans from 2D Computer-Aided Drawing (CAD) files to provide location referencing that enhances inspection effectiveness. The visual guidance system allows inspectors with various experience levels to perform consistent inspections and requires less training, thus reducing costs.
Flexible inspection generation also allows a variety of inspection types, such as condition and level of service, to be readily incorporated. A computerized prototype system has been developed using the Windows Presentation Foundation's XAML markup language with underlying C# programming on a tablet computer for experimentation. The thesis provides a detailed description of system development and reports the benefits of the system on a sample inspection. Accordingly, the system has proven most useful for large organizations that own a large number of building assets requiring frequent inspections.

Item: The Adaption of a Rapid Screening Test to Rank the Corrosion Behaviour of Stainless Steel Reinforcing Bars in Concrete (University of Waterloo, 2018-08-21). Loudfoot, Peter.
The costs associated with aging reinforced concrete infrastructure in Ontario continue to rise as highway infrastructure, such as bridges, continuously deteriorates. The use of de-icing salts on these bridges in the winter often leads to the corrosion of the reinforcing steel, cracking the concrete and reducing the service life of the structure. To meet a minimum required service life of 75 years for bridge infrastructure, the Ministry of Transportation of Ontario uses stainless steel grades UNS S31653 (316LN) and UNS S31803 (2205) for corrosion resistance [1], [2]. However, with the wide variety of existing stainless steel grades and the continuous development of new grades, the selection of the most appropriate grade of stainless steel for current projects is limited by the time necessary to determine their corrosion resistance under realistic conditions. An experimental project was undertaken to determine, through a rapid screening test, whether less costly grades of stainless steel would be competitive with 2205 or 316LN in their corrosion resistance. The relative corrosion resistance of three grades of stainless steel was compared: UNS S24100 (XM-28) and S32304 (2304), with S31803 (2205) as the "control". The main objectives for this project were as follows: (1) to experimentally assess and evaluate the parameters of the Rapid Screening Test such that recommended parameters can be used to compare the relative corrosion resistance of new and existing grades of stainless steel, and (2) to assess the impact of the parameters on the probability of corrosion for each tested stainless steel grade using statistical analyses. The experimental procedure involved casting stainless steel specimens into concrete with admixed chloride concentrations of 4, 6, or 7.5% by mass of cementitious materials, measuring the open circuit potentials (Ecorr) of the specimens from 24 to 48 hours after casting, and immediately applying an anodic polarization potential of +100, +200, +300, or +400 mV to the specimens for 96 hours after the Ecorr monitoring period. During the applied polarization potential period, corrosion current density (icorr) values were monitored. Corrosion initiation was considered to have occurred if the icorr of a specimen surpassed the proposed pass/fail limit of 0.025 A/m² for more than 2 hours. All specimens were autopsied at the end of the test and visually examined for any signs of corrosion. The results of the electrochemical testing and the observations made during and after autopsying the bars were found to differ. The detection of corrosion initiation in the electrochemical testing was then changed to reflect the results of the autopsied bars.
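The pass/fail rule described above (icorr exceeding 0.025 A/m² for more than 2 hours) can be sketched as a simple scan for a sustained threshold exceedance in the monitored time series. The sampling interval and synthetic series below are illustrative assumptions.

```python
# A minimal sketch of the sustained-exceedance criterion; data are synthetic.
import numpy as np

def corrosion_initiated(icorr, dt_hours, limit=0.025, min_hours=2.0):
    """True if icorr stays above `limit` for a continuous run longer than `min_hours`."""
    run = 0.0
    for value in icorr:
        run = run + dt_hours if value > limit else 0.0
        if run > min_hours:
            return True
    return False

dt = 5 / 60  # assumed 5-minute reading interval, in hours
t = np.arange(0, 96, dt)
icorr = 0.01 + 0.02 * (t > 48) + 0.001 * np.random.default_rng(4).normal(size=t.size)
print(corrosion_initiated(icorr, dt))  # exceeds 0.025 A/m^2 after hour 48 -> True
```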
If any bar had an increase in icorr of at least one order of magnitude, it was considered to have corroded. Logistic regression models were created to model the probability of corrosion in each of the tested grades of stainless steel based on the electrochemical testing and autopsy results. It was determined that increasing the admixed chloride concentration of the concrete has a far more significant impact on the corrosion initiation of the bars than the applied polarization potential. Theoretical critical chloride thresholds of the XM-28, 2304, and 2205 bars directly exposed to admixed chlorides in concrete were estimated to be 7.1%, 7.1%, and 9.4% by mass of cementitious materials, respectively. However, based on the limited pitting corrosion damage observed in the photomicrographs, it is believed that 2304 has a higher chloride threshold than XM-28. The threshold values of the 2304 and 2205 specimens may not be accurately estimated due to the imbalanced number of corroded versus non-corroded specimens for each of these grades. Based on the experimental and analytical results, 7.5% admixed chlorides by mass of cementitious materials and an applied polarization potential of +300 mV are the recommended parameters for the Rapid Screening Test. The proposed relative ranking of stainless steel specimens is based on the number of corroded specimens, the order of magnitude of the corrosion rates experienced by the specimens, and the severity of the pitting corrosion observed on the specimens. The ranking of the relative corrosion performance of the stainless steel grades tested in the Rapid Screening Test is as follows, in order of most to least resistant: 2205, 2304, and XM-28. Based on the results of this test, it is recommended that 2304 would be a suitable alternative to 2205 as corrosion-resistant reinforcing bars in concrete highway structures. It should be noted that chloride concentrations in excess of 5% by mass of cementitious material in concrete highway structures have not been reported in the available literature to date.

Item: Adaptive Traffic Signal Optimization Using Bluetooth Data (University of Waterloo, 2017-10-23). Zarinbal Masouleh, Amir.
Traffic congestion continues to increase in urban centers, causing significant economic, environmental, and social impacts. However, as urban areas become more densely developed, opportunities to build more roadways decline and there is an increased emphasis on maximizing the utilization of existing facilities through the use of more effective advanced traffic management systems (ATMS). In recent years, Bluetooth technology has been adapted for use as a sensor for measuring vehicle travel times along a segment of roadway. However, the use of Bluetooth technology in ATMS, such as advanced traffic signal controls, has been limited, in part because there has been little research conducted to investigate the opportunity to estimate intersection control delay and desired travel time using Bluetooth-collected data. Furthermore, until recently, there has been a lack of tools, such as simulation, to predict the behavior of the system before it is developed and deployed. Therefore, the main intention of this research was to develop a robust framework for adaptive traffic signal control using Bluetooth detectors as the main source of data. In this thesis, I addressed the aforementioned challenges and proposed the first integrated framework for real-time adaptive traffic signal management using Bluetooth data through the following steps:
1. I developed a state-of-the-art simulation framework to simulate the Bluetooth detection process. The simulation model was calibrated and validated using field data collected from two custom-built Bluetooth detectors. The proposed simulation framework was combined with commercially available traffic microsimulation models to evaluate and develop the use of Bluetooth technology within ATMS.
2. I proposed two methods for estimating control delay at signalized intersections using Bluetooth technology. Method 1 requires data from a single Bluetooth detector deployed at the intersection and estimates delay on the basis of the measured Bluetooth dwell time. This model has a high level of accuracy when the queue length is less than the effective range of the Bluetooth detectors but performs poorly when queues exceed the detector range. Method 2 uses Bluetooth-measured travel time to estimate control delay; it requires data from two Bluetooth detectors, one at the intersection and one upstream, and can be used regardless of the length of queues. The methods were evaluated using a custom-built simulation framework and field data. The results show that the proposed methods provide a mean absolute error of 3 seconds, indicating that they are suitable for estimating control delay at signalized intersections.
3. I proposed a method to dynamically optimize the green splits of a signalized intersection on the basis of control delay estimated from Bluetooth detector data. Evaluation using a simulated hypothetical intersection demonstrated that the proposed method reduces average delay in comparison with fixed signal timing and a fully actuated controller.
4. I proposed and evaluated a method to estimate the desired travel time of the platoon in real time and then use this estimate to optimize the offsets of signalized intersections in a corridor. The proposed method was evaluated using simulated data for a range of traffic demands and weather conditions, and the results indicate that it can provide up to a 75% reduction in average vehicle delay for vehicles discharging in the coordinated phase in comparison with fixed offset timing in different weather conditions.

Item: Addressing the Uncertainty Due to Random Measurement Errors in Quantitative Analysis of Microorganism and Discrete Particle Enumeration Data (University of Waterloo, 2010-10-08). Schmidt, Philip J.
Parameters associated with the detection and quantification of microorganisms (or discrete particles) in water, such as the analytical recovery of an enumeration method, the concentration of the microorganisms or particles in the water, the log-reduction achieved using a treatment process, and the sensitivity of a detection method, cannot be measured exactly. There are unavoidable random errors in the enumeration process that make estimates of these parameters imprecise and possibly also inaccurate. For example, the number of microorganisms observed divided by the volume of water analyzed is commonly used as an estimate of concentration, but there are random errors in sample collection and sample processing that make these estimates imprecise. Moreover, this estimate is inaccurate if poor analytical recovery results in observation of a different number of microorganisms than were actually present in the sample.
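The bias described in the last sentence can be shown with a minimal simulation: if each organism present is observed only with probability equal to the analytical recovery, the naive count-per-volume estimate understates the true concentration. The concentration, volume, and recovery values below are illustrative assumptions.

```python
# A minimal recovery-bias simulation; all parameter values are assumed.
import numpy as np

rng = np.random.default_rng(5)
true_conc = 12.0   # organisms per litre
volume_l = 10.0    # volume analyzed per sample
recovery = 0.4     # mean analytical recovery of the enumeration method

present = rng.poisson(true_conc * volume_l, size=10_000)   # organisms in sample
observed = rng.binomial(present, recovery)                 # organisms counted
naive = observed / volume_l
print(f"naive estimate: {naive.mean():.2f} /L vs true {true_conc} /L")
print(f"recovery-corrected: {(naive / recovery).mean():.2f} /L")
```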
In this thesis, a statistical framework (using probabilistic modelling and Bayes' theorem) is developed to enable appropriate analysis of microorganism concentration estimates given information about analytical recovery and knowledge of how various random errors in the enumeration process affect count data. Similar models are developed to enable analysis of recovery data given information about the seed dose. This statistical framework is used to address several problems: (1) estimation of parameters that describe random sample-to-sample variability in the analytical recovery of an enumeration method, (2) estimation of concentration, and quantification of the uncertainty therein, from single or replicate data (which may include non-detect samples), (3) estimation of the log-reduction of a treatment process (and the uncertainty therein) from pre- and post-treatment concentration estimates, (4) quantification of random concentration variability over time, and (5) estimation of the sensitivity of enumeration processes given knowledge about analytical recovery. The developed models are also used to investigate alternative strategies that may enable collection of more precise data. The concepts presented in this thesis are used to enhance analysis of pathogen concentration data in Quantitative Microbial Risk Assessment so that computed risk estimates are more predictive. Drinking water research and prudent management of treatment systems depend upon collection of reliable data and appropriate interpretation of the data that are available.

Item: Addressing Uncertainty in Health Impacts of Air Pollution under Climate Change Mitigation (University of Waterloo, 2022-01-18). Nallathamby, Punith.
Simulations to evaluate climate policy require long run times, but little evidence exists on how long is enough, especially for policy impacts related to air pollution. Air pollution and climate change are the two leading global environmental issues affecting human health. Climate change can increase air pollution, an effect called the "climate penalty". Climate policy can thus reduce air pollution, offering "co-benefits" for human health and the economy. However, climate policy makers lack robust information on these air pollution related co-benefits, due partly to uncertainty in the co-benefits themselves. One source of uncertainty is the natural variability in the climate system; another is the response of the human system, including human health and the economy, to changes in air pollution. Natural variability obscures the effects of climate policy on air pollution and its associated health impacts. However, the computational cost of modelling health responses under many future climate scenarios means little is known about the size of this effect or its implications for policy evaluation. This study seeks to address these gaps by determining minimum simulation lengths needed to address natural variability. It employs a novel analysis of results from a previously developed integrated modelling framework. This framework implemented global climate policies consistent with the Paris Agreement on Climate Change. It captured resulting changes to illness and premature death in the United States associated with outdoor concentrations of air pollutants, including ozone and fine particulate matter, and the resulting economic damages.
Five initializations of the climate system and 30-year modelling periods resulted in 150 annual simulations for each pollutant (ozone and fine particulate matter), policy scenario (reference, a policy that meets a 2-degree warming target, and a policy meeting 2.5 degrees), and time period (2050 and 2100). In this new analysis of these results, climate policies were found to produce large co-benefits that were highest in the Eastern US and increased from 2050 to 2100. These co-benefits also had significant uncertainty related to both natural variability and uncertainty in health and economic responses ("health-related uncertainty"). Uncertainty due to natural variability was reduced by sampling within the annual simulations and averaging their results together. This process was continued until all initializations fell within the 95% confidence interval of health-related uncertainty. At this point, the simulation length was deemed sufficient to filter out natural variability. The simulation length required was found to vary depending on the signal-to-noise ratio (SNR), where co-benefits are the signal and the spread due to natural variability is the noise. SNR values increased over time from 2050 to 2100. In 2050, some regions, like the Midwest, showed a lower SNR and a greater influence of natural variability. For these cases, eight years or more of simulation were needed to address natural variability. For cases with high SNR, as in 2100, less than three years were needed for all regions in the US. This work demonstrates the effect of natural variability on air quality co-benefits and provides insights to inform simulation lengths to address it.
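The averaging logic described above can be sketched as follows: average more and more simulated years until every initialization's running mean falls inside a fixed confidence band. The synthetic co-benefit values and the band half-width are illustrative stand-ins for the thesis data and its health-related confidence interval; only the 5 x 30 layout mirrors the description.

```python
# A minimal convergence sketch; values, noise, and band width are assumed.
import numpy as np

rng = np.random.default_rng(6)
signal = 100.0                                         # "true" co-benefit (arbitrary units)
noise = 40.0                                           # natural variability
annual = signal + rng.normal(0, noise, size=(5, 30))   # 5 initializations x 30 years

half_width = 15.0                                      # stand-in for health-related CI
for n_years in range(1, 31):
    means = annual[:, :n_years].mean(axis=1)           # running mean per initialization
    if np.all(np.abs(means - means.mean()) < half_width):
        print(f"{n_years} years of simulation suffice to filter variability")
        break
```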
Item An Advanced Construction Supply Nexus Model (University of Waterloo, 2013-04-22T22:48:12Z) Safa, Mahdi
The complex and challenging process of construction supply chain management can involve tens of thousands of engineered components, systems, and subsystems, all of which must be designed in a multi-party and collaborative environment, the complexity of which is vastly increased in the case of megaprojects. A comprehensive Advanced Construction Supply Nexus Model (ACSNM) was developed as a computational and process-oriented environment to help project managers deal efficiently and effectively with supply chain issues: fragmentation, resource shortages, design delays, and planning and scheduling deficiencies, all of which result in decreased productivity, cost and time overruns, conflicts, and time-consuming legal disputes. To mitigate the effects of these difficulties, four new prototype systems were created: a front-end planning tool (FEPT), a construction value packaging system (CVPS), an integrated construction materials management (ICMM) system, and an ACSNM database. Because these components are closely interdependent elements of construction supply nexus management, the developed model incorporates cross-functional integration. This research therefore effectively addresses process management, process integration, and document management, features not included in previous implementations of similar models for construction-related applications. This study also introduces new concepts and definitions, such as construction value packages comprising value units that form the scope of value-added work, defined by type, stage in the value chain, and other elements such as drawings and specifications. The application of the new technologies and methods reveals that the ACSNM has the potential to improve the performance and management of the enterprise-wide supply chain. Through opportunities provided by industry partners Coreworx Inc. and Aecon Group Inc., the elements of the developed model have been validated with respect to implementation using data from several construction megaprojects. The model is intended to govern current supply nexus processes associated with such megaprojects but may be general enough for eventual application in other construction sectors, such as multi-unit housing and infrastructure.
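The value-package concept described in this abstract lends itself to a simple data structure. The following Python sketch is purely illustrative; the field names and schema are assumptions, as the thesis does not publish its data model.

```python
from dataclasses import dataclass, field

@dataclass
class ValueUnit:
    """One unit of value-added work, following the abstract's description:
    a type, a stage in the value chain, and supporting documents. The exact
    schema here is an assumption for illustration."""
    work_type: str
    value_chain_stage: str
    documents: list[str] = field(default_factory=list)   # e.g. drawings, specs

@dataclass
class ValuePackage:
    """A construction value package: a named scope of work composed of
    value units, in the spirit of the CVPS component of the ACSNM."""
    package_id: str
    units: list[ValueUnit] = field(default_factory=list)

    def all_documents(self) -> set[str]:
        # every drawing or specification referenced across the package
        return {doc for unit in self.units for doc in unit.documents}
```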
Item Advanced Traffic Signal Control Using Bluetooth Detectors (University of Waterloo, 2018-03-14) Hart-Bishop, Jordan
Traditionally, signal timing plans are developed for the expected traffic demands at an intersection. This approach generally offers the best operation for typical conditions. However, when the traffic demand varies, a signal timing plan developed for typical conditions may not be adequate, resulting in significant congestion and delay. Many techniques have been developed to address these variations, and they fall into one of two categories: (1) if the variations follow a consistent temporal pattern, a set of fixed-time signal timing plans can be developed, each for a specific time of day; (2) if the variations cannot be predicted a priori, a system that measures traffic demands and alters signal timings in real time is desired. This research focuses on improving the latter approach with a novel application of Bluetooth detector data. Conventional traffic responsive plan selection (TRPS) systems rely extensively on traffic sensors (typically loop detectors or equivalent), which are costly to install and maintain and provide information about traffic only at the points where they are installed. This thesis explores the use of Bluetooth detectors as an alternative data source for TRPS because of their ease of installation and their capability to provide information over an area rather than at a single point. This research uses both simulated and field traffic data associated with Bluetooth detectors. The field and simulated traffic data were from a section of Hespeler Road in Cambridge, Ontario, bounded by Ontario Highway 401 to the north and Highway 8 to the south. The study corridor is approximately 5.0 kilometres long and consists of 14 signalized intersections. To determine the potential of Bluetooth detectors as a data source, several measures of performance were considered for use in a Bluetooth-based system. The viability of each was assessed in microsimulation experiments, and Bluetooth travel time was found to be the most accurate at identifying true traffic conditions. On the basis of the simulation results, a field pilot study was designed: Bluetooth detectors and conventional traffic detectors were installed at study intersections along the Hespeler Road corridor to measure real traffic conditions. From these measurements, an algorithm was developed to determine when traffic conditions varied from the expected conditions. The final stage of the research evaluated the proposed algorithm in a controlled simulation environment with known atypical traffic patterns; the algorithm was capable of identifying the atypical conditions that were simulated based on field conditions. The key findings of this research are that (1) Bluetooth detectors are able to provide measured travel times from individual vehicles with sufficient accuracy, and with sufficient sample sizes, that the aggregated travel time information can be used to identify the traffic conditions at a signalized intersection; and (2) these measurements can be used, instead of data from conventional traffic detectors, to determine when to switch from time-of-day fixed-time traffic signal control to TRPS control.
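The core of such a detection scheme can be illustrated with a short Python sketch. This is not the thesis's algorithm; the statistical test, the thresholds, and the names are assumptions chosen to show the travel-time-based idea.

```python
import numpy as np

def flag_atypical(travel_times, expected_mean, expected_std,
                  min_samples=10, z_thresh=2.0):
    """Flag an interval as atypical when the mean matched Bluetooth travel
    time deviates from the expected (historical) mean for that time of day
    by more than z_thresh standard errors. A sketch of the detection idea
    only; the thesis's algorithm and thresholds are not reproduced here.

    travel_times: matched Bluetooth travel times (seconds) in this interval."""
    n = len(travel_times)
    if n < min_samples:                      # too few matched devices to judge
        return False
    std_err = expected_std / np.sqrt(n)      # standard error of the mean
    z = (np.mean(travel_times) - expected_mean) / std_err
    return abs(z) > z_thresh                 # atypical -> consider a TRPS switch

# e.g. flag_atypical(times_5min, expected_mean=310.0, expected_std=40.0)
```

Because Bluetooth matches span a corridor segment rather than a point, one flagged interval can reflect conditions across several intersections at once, which is precisely the advantage over loop detectors noted above.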
Item Advanced Water Treatment Strategies for the Removal of Natural and Synthetic Organic Contaminants (University of Waterloo, 2014-01-24) Halevy, Patrick
Prior to full-scale implementation of process modifications at the Brantford WTP, a pilot-scale treatability study was conducted to investigate intermediate ozonation/AOP and to determine the most suitable granular media (anthracite, GAC, and Filtralite®) for deep-bed biological filtration. The primary objectives of this research were to provide insight into the destruction of natural and synthetic organics and to assess ozonated and halogenated DBP formation. Ozone alone was unable to achieve the 1-log removal target for geosmin or MCPA unless disinfection-level dosages were applied, and no improvement was observed when hydrogen peroxide was added. A major obstacle to the implementation of ozonation in bromide-laden source waters is the formation of bromate. Bromate formation correlates directly with ozone dose, and at disinfection-level ozone dosages bromate is likely to exceed regulatory limits. However, adding hydrogen peroxide reduced the amount of bromate formed, and in most cases levels fell below regulatory limits. A linear correlation was established between bromate inhibition and increasing H2O2/O3 ratio at constant ozone dose. Among the three filtration media investigated, only GAC achieved 1-log removal for both geosmin and MCPA; its superiority over anthracite and Filtralite® was attributed to its adsorption affinity. Filtralite® and anthracite media were both ineffective for MCPA removal because MCPA is non-biodegradable under conventional water treatment conditions. At a 1 mg/L ozone dose, GAC and Filtralite® filters achieved a 1-log geosmin removal; in contrast, a 1.44 mg/L ozone dose was required to meet this target with anthracite. Ozone followed by biological filtration was very effective for controlling distribution system TTHM production regardless of filter media, with levels well below current and anticipated provincial regulatory limits. The combination of intermediate ozonation followed by deep-bed biological filtration is well suited to treating Grand River water. Scale-up considerations include pairing the proper filter media with the size of the ozone generator. The two best treatment scenarios were: Option 1, select GAC media and size the ozone generator to produce a 1 mg/L dose; Option 2, select anthracite media and size the ozone generator to deliver a 2 mg/L dose.
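The 1-log removal target used throughout this abstract is a standard log-reduction measure; a one-line Python helper makes the arithmetic explicit (the function name and example concentrations are illustrative).

```python
import numpy as np

def log_removal(c_in, c_out):
    """Log-reduction of a contaminant across a treatment step: a 1-log
    removal corresponds to 90% destruction (e.g. of geosmin or MCPA)."""
    return np.log10(c_in / c_out)

log_removal(100.0, 10.0)   # -> 1.0, i.e. exactly the 1-log target
```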
Item Advancements in bender-element testing - frequency effects (University of Waterloo, 2019-09-24) Irfan, Muhammad
Modern building and bridge codes require the seismic design of foundations and structures, for which evaluating the soil's response to dynamic loads is an essential requirement. The dynamic soil response is governed by dynamic properties such as the shear modulus (wave velocity) and damping ratio. These soil dynamic properties are typically measured in the laboratory using a bender element (BE) system or a resonant column (RC) device. However, the operating frequency ranges of BEs (e.g. 1 to 15 kHz) and the RC (e.g. 20 to 220 Hz) are not representative of typical earthquake loads (e.g. 0.1 to 10 Hz). In addition, there are significant limitations in BE and RC testing which reduce their reliability; thus, current seismic designs could be either conservative or unsafe. A major limitation in BE testing is that there is no standard procedure, mostly because the soil-BE interaction is not well understood and the characterization of a BE inside a soil specimen has not been possible. In RC testing, on the other hand, the soil dynamic properties cannot be evaluated simultaneously as a function of frequency and strain: in a typical narrow-band resonant column test (e.g. sine sweep, random noise), the induced shear strains are different at each frequency component. Therefore, the main objectives of this study are to better understand the soil-BE interaction, which provides the basis for the development of reliable guidelines for BE testing, and to verify the BE test results using the standard RC device.

The main objectives are achieved by testing the BE with a state-of-the-art laser vibrometer and a newly developed transparent soil to measure the actual response of the bender element transmitter (Tx) and receiver (Rx) inside different media, such as air, liquids, and sand, under different confinements. The dynamic characteristics of the Tx are then measured using advanced modal analysis techniques originally developed for structural applications (e.g. Blind Source Separation). The modal analysis is used to investigate whether the different BE vibration modes correspond to those of a cantilever beam, as currently assumed, or a cantilever plate. The Rx is also studied to assess the effects of compressional waves and of the total damping of the BE system inside the medium on the evaluation of the shear wave velocity of the soil. In addition, the dependence of the Rx output voltage on the applied strain is investigated at different confining pressures. The thesis concludes with the dynamic characterization of a sensitive clay (Leda or Champlain Sea clay) that is present in large areas of Eastern Canada; BE and RC tests are performed on unique undisturbed samples.

All results presented in this study represent the averages of multiple tests (more than 10 for RC and more than 500 for BEs). In all cases, the maximum coefficient of variation was 3%, which demonstrates the repeatability of the measurements. Contrary to a common assumption in BE testing, measurements on the transparent soil show that the Tx response inside the specimen is significantly different from the actual input voltage. In addition, BE measurements in soil and oil show that the time delay between the input excitation and the Tx response is not constant but decreases with increasing frequency. Results from the modal analysis of the Tx show a cantilever beam deformation (2D) only for the first mode of the Tx response in air and liquids; the response inside the soil specimen (no confinement) shows cantilever plate behaviour (3D). The excitation frequency in BE tests should not be held constant, as is commonly done; rather, it should be increased at each confinement level to match the increase in natural frequency and improve the signal-to-noise ratio. The overall damping ratio of the Tx increases up to 30% with confinement because of the soil-BE interaction, causing additional challenges in the evaluation of the shear wave velocity and damping ratio from BE tests. The measured BE-system response shows significant p-wave interference that affects the evaluation of the shear wave velocity and must be carefully accounted for in the correct interpretation of the results. The p-wave interference is clearly observed when the Rx response is measured inside different liquids, and it increases with increasing excitation frequency. The Rx response in the transparent soil shows that the participation of high frequencies and the interference of p-waves increase with confinement. The p-wave arrivals mask the shear wave arrivals, which can lead to the overestimation of shear wave velocity by more than 25%. The results from the RC and BE tests on fused quartz and Leda clay specimens confirm the conclusion that high input frequencies enhance the generation of p-waves. The theoretical relationship between the maximum BE displacement and the maximum input voltage for the Tx, or the maximum output voltage for the Rx, is verified for the first time for liquids and sands at no confining pressure: the peak displacement at the tip of the BE increased linearly with the input voltage because the maximum displacement of a piezoelectric transducer is proportional to the applied voltage. RC and BE tests performed on four Leda clay samples showed the effects of shear strain, confinement, and excitation frequency on the shear modulus and damping ratio of the Leda clay. The effect of frequency is evaluated using a recently proposed methodology called the 'carrier frequency' (CF) method. The stiffest sample displayed the highest degradation with increasing shear strain. A 15% difference was observed between the shear wave velocity estimates from the RC and BE tests. The RC tests at frequencies below 100 Hz showed no effect of loading frequency on the shear modulus and damping ratio; however, BE tests at frequencies centred at 12 kHz did show a 15% change in wave velocity. This change could be attributed to the loading frequency or to the complex interaction between p-waves and s-waves in BE testing. Loading frequency in BE tests does have a significant effect on the results, with up to 40% error in the estimation of s-wave velocity, as the interaction between p-waves and s-waves increases with frequency.
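For context on the measurement this abstract repeatedly discusses, the following Python sketch shows a common textbook interpretation of a BE test: shear wave velocity from the cross-correlation travel time between transmitter and receiver signals. It is not the thesis's procedure, and, as the thesis itself shows, near-field p-wave interference can bias such picks; the names and the tip-to-tip convention are illustrative assumptions.

```python
import numpy as np

def shear_wave_velocity(tx, rx, fs, tip_to_tip):
    """Estimate shear wave velocity (m/s) from bender-element signals via the
    cross-correlation delay between the transmitter (tx) and receiver (rx)
    records, sampled at fs (Hz), over the tip-to-tip distance (m). A common
    interpretation method only: p-wave interference, which increases with
    excitation frequency and confinement, can bias this pick, so the result
    should be checked against independent arrival-picking methods."""
    tx = tx - np.mean(tx)                              # remove DC offsets
    rx = rx - np.mean(rx)
    xcorr = np.correlate(rx, tx, mode="full")          # alignment at all lags
    lag = np.argmax(xcorr) - (len(tx) - 1)             # delay in samples
    travel_time = lag / fs                             # seconds
    return tip_to_tip / travel_time
```

An early p-wave arrival superimposed on the receiver record can pull the correlation peak forward, shortening the apparent travel time and inflating the velocity, which is consistent with the >25% overestimation the thesis reports when p-wave arrivals mask the shear wave.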