Theses

The theses in UWSpace are publicly accessible unless restricted due to publication or patent pending.

This collection includes a subset of theses submitted by graduates of the University of Waterloo as a partial requirement of a degree program at the Master's or PhD level. It includes all electronically submitted theses. (Electronic submission was optional from 1996 through 2006 and became the default submission format in October 2006.)

This collection also includes a subset of UW theses that were scanned through the Theses Canada program. (The subset includes UW PhD theses from 1998 to 2002.)

Recent Submissions

  • Item
    Methods for Improving Performance of Precision Health Prediction Models
    (University of Waterloo, 2024-09-20) Krikella, Tatiana
    Prediction models for a specific index patient that are developed on a similar subpopulation have been shown to perform better than one-size-fits-all models. These models are often called personalized predictive models (PPMs), as they are tailored to specific individuals with unique characteristics. In this thesis, through a comprehensive set of simulation studies and data analyses, we investigate the relationship between the size of the similar subpopulation used to develop the PPMs and model performance. We propose an algorithm which fits a PPM using the size of similar subpopulation that optimizes both model discrimination and calibration, since calibration is criticized as being assessed less often than discrimination in predictive modelling. We do this by proposing a loss function, used when tuning the size of subpopulation, that extends a Brier score decomposition and consists of separate terms corresponding to model discrimination and calibration. We allow flexibility through the use of a mixture loss term to emphasize one performance measure over the other. Through simulation study, we confirm previously investigated results and show that the relationship between the size of subpopulation and discrimination is, in general, inverse: as the size of subpopulation increases, the discrimination of the model deteriorates. Further, we show that the relationship between the size of subpopulation and calibration is quadratic in nature, so both small and large sizes of subpopulation result in relatively well-calibrated models. We investigate the effect of patient weighting on performance and conclude, as expected, that the choice of the size of subpopulation has a larger effect on the PPM's performance than the weight function applied. We apply these methods to a dataset from the eICU database to predict the mortality of patients with diseases of the circulatory system. We then extend the algorithm by proposing a more general loss function which allows further flexibility in choosing the measures of model discrimination and calibration included in the function used to tune the size of subpopulation. We also recommend bounds on the grid of values used in tuning to reduce the computational burden of the algorithm. Prior to recommending bounds, we further investigate the relationship between the size of subpopulation and discrimination, as well as between the size of subpopulation and calibration, under 12 different simulated datasets, to determine whether the results from the previous investigation were robust. We find that the relationship between the size of subpopulation and discrimination is always inverse, and that the relationship between the size of subpopulation and calibration, although not entirely consistent among the 12 cases, shows that a small size of subpopulation is good, if not optimal, in many of the cases we considered. Based on this study, we recommend a lower bound on the grid of values of 20% of the entire training dataset, and an upper bound of either 50% or 70% of the training dataset, depending on the interests of the study. We apply the proposed methods to both simulated and real data, specifically the same dataset from the eICU database, and show that the results previously seen are robust, and that the choice of measures for the general loss function has an effect on the optimal size of subpopulation chosen.
    Finally, we extend the algorithm to predict the longitudinal, continuous outcome trajectory of an index patient, rather than a binary outcome. We investigate the relationship between the size of subpopulation and the mean absolute error, and find that performance improves drastically up to a point and then stabilizes, with the model fit to the full training data emerging as optimal, though only slightly better than a model fit to 60% of the subpopulation. As these results are counter-intuitive, we present three other simulation studies which show that they stem from predicting the trajectory of a patient, rather than from predicting a continuous outcome. Although why this is the case remains an open research question, we speculate that, since a personalized approach still achieves performance comparable to the full model, these results can be attributed to testing the methods on a small sample size. Due to the computational intensity of the methods, however, testing on a larger sample size to generalize these results is currently impractical. Areas of future work include improving the computational efficiency of these methods, which would allow investigating these same relationships under more complex models, such as random forests or gradient boosting. Further investigation of personalized predictive model performance when predicting a trajectory should also be considered. The methods presented in this thesis will be packaged into an R package to allow for greater usability.
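
The tuning loss at the heart of the algorithm can be illustrated with a small sketch. The Python snippet below is a toy rendition, not the thesis's exact decomposition: it blends a discrimination term (one minus a pairwise AUC) with a binned calibration term through a mixture weight alpha, and scans candidate subpopulation fractions over the recommended 20%-70% grid; the helper `fit_ppm_and_score` is hypothetical.

```python
import numpy as np

def mixture_loss(y_true, y_prob, alpha=0.5, n_bins=10):
    """Toy tuning loss in the spirit of a Brier-score decomposition:
    alpha * (1 - AUC) + (1 - alpha) * binned calibration error."""
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    # Calibration: weighted squared gap between mean predicted risk and
    # observed event rate within probability bins (reliability-style term).
    bins = np.clip((y_prob * n_bins).astype(int), 0, n_bins - 1)
    calib = sum((bins == b).mean()
                * (y_prob[bins == b].mean() - y_true[bins == b].mean()) ** 2
                for b in range(n_bins) if (bins == b).any())
    # Discrimination: 1 - AUC, computed over all positive/negative pairs.
    pos, neg = y_prob[y_true == 1], y_prob[y_true == 0]
    auc = ((pos[:, None] > neg[None, :]).mean()
           + 0.5 * (pos[:, None] == neg[None, :]).mean())
    return alpha * (1.0 - auc) + (1.0 - alpha) * calib

# Scan the recommended grid of subpopulation fractions (20% to 70%).
for frac in np.arange(0.2, 0.71, 0.1):
    pass  # y_true, y_prob = fit_ppm_and_score(train, index_patient, frac)
          # then keep the frac that minimizes mixture_loss(y_true, y_prob)
```
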
  • Item
    Fabrication and Characterization of Nanoparticle Microporous Layers on Platinized Titanium Fiber Felt for Electrolyzer Anodes
    (University of Waterloo, 2024-09-20) Jamali, Nooruddin
    This study is concerned with the incorporation of various nanoparticles into microporous layers (MPLs) on titanium fiber felts for use at the anode in proton exchange membrane (PEM) water electrolyzers. The nanoparticle MPLs were coated onto Ti fiber felt using various methods. Three types of nanoparticles were utilized: indium tin oxide (ITO), tin (Sn) and titanium (Ti). The ITO and Sn nanoparticles were applied using an electrospraying technique, with Nafion as a binder (in the case of ITO) to ensure adhesion to the felt substrate and polyvinylpyrrolidone (PVP) as a surfactant to prevent nanoparticle sedimentation. This method resulted in uniformly smooth coatings. In contrast, Ti nanoparticles were deposited via a solvent evaporation method without a binder, followed by sintering of the nanoparticle-coated Ti felt at 750°C for 1 hour under an argon atmosphere. The resulting MPLs underwent comprehensive characterization, including surface imaging via scanning electron microscopy (SEM), assessments of permeability and porosity, and measurements of electrical conductivity. The final and critical phase of characterization involved testing the samples in a laboratory-scale water electrolyzer. The electrolyzer setup included titanium bipolar plates with a once-through 2.1 x 2.1 cm flow field leading to the membrane electrode assembly with an active area of 0.9 x 2.0 cm. All cells used to characterize performance consisted of a commercial carbon fiber cathode coated with an MPL (SGL 22BB) and a Hydrion N-115 catalyst-coated membrane. The tests revealed that the performance using sintered MPLs was superior to that of the electrosprayed MPLs and surpassed that of the baseline case (Ti felt with no coating). The sintered Ti coating with the lowest loading performed best, indicating that the rougher and thinner MPL was the better choice. The poor performance of the electrosprayed MPLs is attributed to higher interparticle resistance due to the presence of non-conducting materials (dispersant and binder), as reflected in the lower conductivity of these MPLs.
  • Item
    Competing effects of arterial pressure and carbon dioxide on cerebrovascular regulation during exercise and orthostatic stress
    (University of Waterloo, 2024-09-20) Hedge, Eric Thomas
    The human brain is highly sensitive to changes in cerebral blood flow. Multiple integrated and redundant regulatory mechanisms act simultaneously to ensure adequate cerebral perfusion and removal of waste products. However, the contribution of different cerebrovascular control mechanisms to the increase in cerebral blood flow during exercise, or to the reduction in flow during orthostatic stress, is controversial, especially regarding the competing roles of arterial pressure and CO2. Therefore, the purpose of this thesis was to identify which regulatory factors play prominent roles in modulating cerebral blood flow during and following transitions in exercise intensity or posture change. This was accomplished through a series of experiments that evaluated cerebrovascular responses to moderate- and high-intensity interval exercise, bed rest, and orthostatic stress tests to pre-syncope. Through causal time-series modeling, it was identified that cerebral autoregulation effectively minimized the effects of exercise-induced increases in mean arterial pressure (MAP) on middle cerebral artery blood velocity (MCAv), and that changes in estimated arterial partial pressure of CO2 (PaCO2) largely dictated MCAv dynamics in response to step changes in work rate. These findings sharply contrast with recent attempts to characterize the increase in MCAv at the onset of exercise as a mono-exponential response. Sex-specific effects of MAP and end-tidal PCO2 on MCAv were identified while standing following two weeks of bed rest in post-menopausal women and similar-aged men, with reduced end-tidal PCO2 contributing to reductions in men and lower MAP contributing to reductions in women. Vertebral artery blood flow was also identified as an important factor potentially mediating cerebrovascular-respiratory interactions during orthostatic stress in the progression to syncope. Overall, the results of these experiments demonstrate important connections between the cerebral vasculature and respiratory control during exercise and orthostatic stress, enhancing our fundamental understanding of cerebrovascular control and the integrative cerebrovascular cascade leading to syncope.
  • Item
    Statistical Methods for Joint Modeling of Disease Processes under Intermittent Observation
    (University of Waterloo, 2024-09-20) Chen, Jianchu
    In studies of life history data, individuals often experience multiple events of interest that may be associated with one another. In such settings, joint models of event processes are essential for valid inferences. Data used for statistical inference are typically obtained through various sources, including observational data from registries or clinics and administrative records. These observation processes frequently result in incomplete histories of the event processes of interest. In settings where interest lies in the development of conditions or complications that are not self-evident, data become available only at periodic clinic visits. This thesis focuses on developing statistical methods for the joint analysis of disease processes involving incomplete data due to intermittent observation. Many disease processes involve recurrent adverse events and an event which terminates the process. Death, for example, terminates the event process of interest and precludes the occurrence of further events. In Chapter 2, we present a joint model for such processes which has appealing properties due to its construction using copula functions. Covariates have a multiplicative effect on the recurrent event intensity function given a random effect, which is in turn associated with the failure time through a copula function. This permits dependence modeling while retaining a marginal Cox model for the terminal event process. When these processes are subject to right-censoring, simultaneous and two-stage estimation strategies are developed based on the observed data likelihood, which can be implemented by direct maximization or via an expectation-maximization algorithm; the latter facilitates semi-parametric modeling of the terminal event process. Variance estimates are derived based on the missing information principle. Simulation studies demonstrate good finite sample performance of the proposed methods and high efficiency of the two-stage procedure. An application to a study of the effect of pamidronate on reducing skeletal complications in patients with skeletal metastases illustrates the use of this model. Interval-censored recurrent event data can occur when the events of interest are only evident through intermittent clinical examination. Chapter 3 addresses such scenarios and extends the copula-based joint model for recurrent and terminal events proposed in Chapter 2 to accommodate interval-censored recurrent event data resulting from intermittent observation. Conditional on a random effect, the intensity for the recurrent event process has a multiplicative form with a weakly parametric piecewise-constant baseline rate, and a Cox model is formulated for the terminal event process. The two processes are then linked via a copula function, which defines a joint model for the random effect and the terminal event. The observed data likelihood can be maximized directly or via an EM algorithm; the latter facilitates a semi-parametric terminal event process. A computationally convenient two-stage estimation procedure is also investigated. Variance estimates are derived and validated by simulation studies. We apply this method to investigate the association between a biomarker (HLA-B27) and joint damage in patients with psoriatic arthritis. Databases of electronic medical records offer an unprecedented opportunity to study chronic disease processes. In survival analysis, interest may lie in studying the effects of time-dependent biomarkers on a failure time through Cox regression models.
    Often, however, it is too labour-intensive to collect and clean data on all covariates at all times, and in such settings it is common to select a single clinic visit at which variables are measured. In Chapter 4, we consider several cost-effective ad hoc strategies for inference, consisting of: 1) selecting either the last or the first visit for a measurement of the marker value, and 2) using the measured value with or without left-truncation. The asymptotic bias of estimators based on these strategies arising from misspecified Cox models is investigated via a multistate model constructed for the joint modeling of the marker and failure processes. An alternative method for efficient selection of individuals is discussed under a budgetary constraint, and the corresponding observed data likelihood is derived. The asymptotic relative efficiency of regression coefficients obtained from the Fisher information is explored and an optimal design is provided under this selection scheme.
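
The Chapter 2 construction can be made concrete with a small simulation sketch. The snippet below is illustrative only: it draws a gamma frailty and an exponential terminal time whose dependence is induced by a Clayton copula (one convenient copula family; the abstract does not fix this choice), then generates recurrent events whose rate is scaled by the frailty. All rates and dependence parameters are arbitrary stand-ins, not fitted values.

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

rng = np.random.default_rng(11)
theta = 2.0                          # Clayton dependence parameter (illustrative)
phi = 0.5                            # frailty variance; gamma frailty has mean 1
lam_D, lam_R, tau = 0.2, 1.0, 5.0    # terminal rate, recurrence rate, censoring time

def one_subject():
    # Sample (U, V) from a Clayton copula via the conditional inverse method.
    u, w = rng.uniform(size=2)
    v = (u ** -theta * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    z = gamma_dist.ppf(u, a=1 / phi, scale=phi)   # frailty with gamma marginal
    d = -np.log(1 - v) / lam_D                    # exponential terminal time
    end = min(d, tau)                             # administrative censoring
    n = rng.poisson(z * lam_R * end)              # frailty-scaled recurrent count
    return z, d, np.sort(rng.uniform(0, end, n))

z, d, times = one_subject()
print(f"frailty={z:.2f}, terminal time={d:.2f}, recurrent events={len(times)}")
```
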
  • Item
    Below the Plains: Navigating Groundwater Depletion in Kansas through Collective Action
    (University of Waterloo, 2024-09-20) Michaud, Melanie
    In the context of increasing groundwater depletion and the critical need for sustainable water management, my research examines Kansas's transition toward enhanced groundwater conservation through the lens of the Multi-Level Perspective (MLP) framework. This study focuses on the roles of state actors, policy entrepreneurs, local experiments like real-world labs, and the influence of landscape factors (external pressures) and cultural values in driving sustainability transitions. Kansas, facing significant groundwater depletion, provides a compelling case to explore how conservation initiatives, such as Local Enhanced Management Areas (LEMAs), emerged and gained acceptance in a traditionally depletion-oriented agricultural regime. Guided by the research objectives to understand how policy diffusion occurred, how actors changed roles, and how the state was involved in shaping transitions, I employed a qualitative approach. My research uses document analysis, interviews, and case studies of Kansas's Groundwater Management Districts (GMDs) and LEMA policies to investigate the factors driving the adoption of conservation measures. The case of the Sheridan 6 LEMA serves as a pivotal example of a "real-world lab" that influenced the broader adoption of conservation practices across Kansas and the subsequent passage of state legislation mandating groundwater management plans for all GMDs. The findings reveal that real-world labs like Sheridan 6 provided empirical evidence demonstrating that conservation could be achieved without economic harm, which built trust among local stakeholders and influenced the shift from depletion to conservation practices. Landscape factors like groundwater depletion and regulatory threats interacted with cultural values like preserving family legacies and local control, pushing incumbent regime actors to adopt conservation measures. Policy entrepreneurs, including state officials and GMD staff, played a central role in framing conservation in ways that aligned with these cultural values, leveraging political opportunities, and building coalitions that supported policy change. The research also challenges traditional views of the state's passive role in transitions, illustrating how state actors actively created and nurtured niche innovations such as LEMAs. This research contributes to the MLP literature by addressing gaps related to the role of the state and the uneven impacts of landscape pressures and cultural values in influencing conservation behaviors across the GMDs. By integrating insights from the Kansas case, this study offers broader implications for water management in other regions. It highlights the importance of empowering policy entrepreneurs, leveraging local experiments, and understanding the interaction between landscape pressures and cultural values to drive sustainability transitions.
  • Item
    Digital Twin-Enhanced Radar and Joint Communication-Sensing Systems: Application in Accurate Fall Severity Classification and Beyond
    (University of Waterloo, 2024-09-20) Elbadrawy, Abdelrahman
    The growing population of seniors presents a significant challenge to healthcare systems worldwide. According to the United Nations Population Fund (UNFPA), people aged 65 and older make up 10.3% of the global population, a share expected to reach 20.7% by 2074. The World Health Organization (WHO) reports that by 2030, one in six people will be over the age of 60. This increase in the elderly population poses a serious challenge to healthcare providers at retirement homes because of the need to provide individual care to their residents. Falls stand out as the predominant cause of injury and death among the elderly. As stated by the National Council on Aging (NCOA), 1 in 4 Americans aged 65 and older falls each year, which equates to 14 million people. The NCOA also reports that the cost of treating injuries resulting from falls is expected to reach $101 billion by 2030. Moreover, the Centers for Disease Control and Prevention (CDC) reports that the risk of falling doubles after a first fall. This places a significant burden on healthcare providers to assess the severity of falls and provide immediate care for those in need. In this study, we present a novel approach for fall detection, leveraging radar-based sensing systems as well as joint communication-sensing systems and advanced digital twin simulations. The choice of radar technology is rooted in its capability for high-resolution detection of micro-movements and its inherent respect for individual privacy, as it does not require visual imaging. Moreover, the choice of joint communication-sensing systems is motivated by the growing potential of 5G technology in enabling real-time sensing along with communication. Both systems have the capability of utilizing more physical resources, enabling greater resolution enhancement and more accurate detection. Both systems offer a non-intrusive and privacy-preserving solution for fall detection, ensuring the safety and dignity of the elderly. The integration of digital twins, replicating a diverse array of human physiology and fall dynamics, allows for extensive, varied, and ethical training of sophisticated machine learning algorithms without the constraints and ethical concerns of using human subjects. Our proposed methodology has led to significant advancements in the accuracy and sensitivity of detecting and assessing fall severity, especially in diverse populations and scenarios. We observed notable improvements in the system's ability to discern subtle variations in falls, a critical factor in elderly care where such incidents can have serious health implications. Our approach not only sets a new benchmark in fall detection technology but also demonstrates the vast potential of combining radar and joint communication-sensing technology with digital simulations in medical research. This research paves the way for innovative patient monitoring solutions, offering a beacon of hope in improving senior care and proactive health management. In this study, the digital twin environment was created for both systems, radar and 5G, to simulate various fall scenarios under different conditions. For both systems, the simulated data was used to train machine learning models to detect the severity of falls, verifying the proposed methodology for fall severity classification in an ideal environment. Furthermore, the correlation between the simulation and measurement results is presented.
Measurement campaigns were conducted for both systems to validate the simulation results and to demonstrate the feasibility of the proposed methodology in real-world scenarios. Employing convolutional neural networks for the radar system, we obtained an accuracy of 99.45% using simulated data and 81.25% using measured data in detecting the severity of falls. The analysis addressed various parameters distinguishing different scenarios, including fall speed and the participant’s body size. On the other hand, for the 5G system, we achieved an accuracy of 92.46% using simulated data and 88.9% using measured data in detecting the severity of falls.
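
A minimal sketch of the kind of classifier involved is below; the layer sizes, input shape, and three-class severity split are assumptions for illustration, not the architecture used in the thesis.

```python
import torch
import torch.nn as nn

class FallSeverityCNN(nn.Module):
    """Illustrative CNN for classifying radar micro-Doppler spectrograms
    into fall-severity classes (the architecture here is a placeholder)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):  # x: (batch, 1, doppler_bins, time_bins)
        return self.classifier(self.features(x).flatten(1))

model = FallSeverityCNN()
logits = model(torch.randn(8, 1, 64, 128))   # dummy spectrogram batch
print(logits.shape)                           # torch.Size([8, 3])
```
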
  • Item
    An Investigation into Automatic Photometric Calibration
    (University of Waterloo, 2024-09-20) Feng, Chun-Cheng
    Photometric calibration is a critical process that ensures uniformity in brightness across images captured by a camera. It entails the identification of a function that converts scene radiance into the pixel values of an image. The goal of the process is to estimate the three photometric parameters: camera response function, vignette, and exposure. A significant challenge in this field is the heavy reliance of current photometric calibration methods on ground truth information, which is often unavailable in general scenarios. To address this, we investigate our proposed simple method, New Photometric Calibration (NPC), which eliminates the need for ground truth data. Firstly, we integrated our photometric calibration algorithm with a long-term pixelwise tracker, MFT, enhancing the system's robustness and reliability. Since MFT effectively handles occlusion and reduces drift, it results in more stable trajectories. By incorporating MFT to track feature points across frames and using the trajectories as corresponding points, we can utilize the pixel intensities of corresponding points to forgo the need for exposure ground truth during initialization. Subsequently, we independently optimize the photometric parameters to sidestep the exponential ambiguity problem. Our experiments demonstrate that our method achieves results comparable to those utilizing ground truth information, as evidenced by comparable root mean square errors (RMSE) of the three photometric parameters. In scenarios without ground truth data, NPC outperforms existing methods. This indicates that our approach maintains the accuracy of photometric calibration and can be applied to arbitrary videos where ground truth information is not provided. In conclusion, our research represents a significant advancement in the field of photometric calibration. We investigate a novel and effective method that requires no ground truth information during the photometric calibration process. Our approach incorporates the use of a robust tracker, enhancing the trajectories of feature points, thereby improving the overall performance of our method. Furthermore, our model not only bypasses the exponential ambiguity problem inherent in the optimization process but also addresses the challenges associated with the traditional reliance on ground truth information, outperforming previous photometric calibration methods when the input lacks ground truth data.
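
The role tracked correspondences play can be sketched with a stripped-down toy: assuming a linear response and negligible vignetting, log I[t, p] = log e_t + log L_p for frame t and tracked point p, and log-exposures are recoverable by linear least squares. This is a didactic reduction under those stated assumptions, not the NPC method itself; exposures are identifiable only up to a global scale, which is the flavor of ambiguity the full problem exhibits.

```python
import numpy as np

rng = np.random.default_rng(0)
T, P = 10, 50                           # frames, tracked points
log_e = rng.normal(0, 0.3, T)           # true log-exposures (unknown)
log_L = rng.normal(0, 1.0, P)           # true log-radiance per track
log_I = log_e[:, None] + log_L[None, :] + rng.normal(0, 0.01, (T, P))

# Build the design matrix for log I[t, p] = log e_t + log L_p.
A = np.zeros((T * P, T + P))
rows = np.arange(T * P)
A[rows, np.repeat(np.arange(T), P)] = 1.0      # exposure coefficients
A[rows, T + np.tile(np.arange(P), T)] = 1.0    # radiance coefficients
sol, *_ = np.linalg.lstsq(A, log_I.ravel(), rcond=None)

# Gauge freedom: a constant can shift between exposures and radiances,
# so pin the mean log-exposure to zero before comparing to ground truth.
est_e = sol[:T] - sol[:T].mean()
print(np.allclose(est_e, log_e - log_e.mean(), atol=0.05))   # True
```
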
  • Item
    Black-Box Barriers Behind Registration-Based Encryption
    (University of Waterloo, 2024-09-19) Sarfaraz, Sara
    Registration-Based Encryption (RBE) is a cryptographic primitive designed to achieve the functionalities of Identity-Based Encryption (IBE) while avoiding the key-escrow problem. In an RBE system, participants generate their own secret and public keys and register their public keys with a transparent entity known as the Key Curator (KC), who does not possess any secret information. The KC's role is limited to managing the public keys, effectively eliminating the central key management issues inherent in IBE. Early constructions of RBE relied on non-black-box techniques, incorporating advanced cryptographic primitives such as indistinguishability obfuscation or garbled circuits. However, non-black-box constructions often face practical inefficiencies. Recent works have shown that black-box constructions of RBE are achievable, though these constructions often involve a relaxed model where the Common Reference String (CRS) can grow with the number of registered users. This work investigates the minimal assumptions needed for black-box constructions of RBE. Specifically, we explore whether it is possible to construct RBE schemes using assumptions comparable to those used in public-key encryption, or algebraic assumptions that hold in the generic group model. We present the first black-box separation results for RBE that extend beyond the implications of the known relationship between RBE and public-key encryption. We demonstrate that neither trapdoor permutations nor the generic group model, including Shoup's model, suffices on its own to serve as the basis for constructing RBE schemes. Furthermore, we demonstrate that even a relaxed version of RBE, where all keys are registered and compressed simultaneously, cannot be constructed from these primitives in a black-box manner.
  • Item
    Tactile Narratives in Virtual Reality
    (University of Waterloo, 2024-09-19) Kunjam, Punit
    This research explores how to design haptic feedback systems in virtual reality (VR) environments to support relational presence, a concept central to the design of Virtual Learning Environments. Unlike traditional presence, which focuses on simulation, interactivity, and user satisfaction, relational presence emphasizes engagement with values such as impression, witnessing, self-awareness, awareness of difference, interpretation, inquiry, and affective dissonance. The objective is to develop haptic designs that enhance these aspects of relational presence, facilitating users’ engagement with challenging content that prompts thoughtful questions about responsibility and action. The key technical contributions to this project include the design and implementation of the haptic feedback system, VR User Interface, the development of the Sankofa Interface, and enhancements to the Macaron haptic editor. These efforts were essential in aligning haptic feedback with emotional and narrative arcs, helping to refine the interaction design and ensure its effectiveness in promoting relational presence. The integration of advanced programming techniques and responsive haptic feedback synchronized with audio-visual elements contributed to the creation of a cohesive and relationally enhanced VR environment. Through these technical developments, this research identified mechanisms and design criteria that could help align haptic feedback with emotional and narrative arcs, potentially enhancing users’ connection to the content and fostering a more reflective and empathetic engagement. By focusing on creating interactions that resonate emotionally and cognitively with users, rather than just achieving representational fidelity, this research contributes design principles that enable haptic technology to foster a richer, more reflective user experience in educational and narrative-driven VR applications.
  • Item
    Waring rank, Border rank and support concentration of partials
    (University of Waterloo, 2024-09-19) Sanyal, Abhiroop
    In this thesis, we study the classical problem of decomposing a homogeneous polynomial into a sum of powers of linear forms with a minimal number of summands (known as the Waring rank of the polynomial) and related problems. The case of quadratic polynomials and binary forms was studied by Sylvester in his seminal paper of 1851, and the case of generic polynomials was resolved more than a century later by Alexander and Hirschowitz in 1995. The problem is NP-hard computationally, and finding the Waring rank of several interesting classes of polynomials, for example the general n x n symbolic determinant/permanent, remains an open problem. An important parameter in the study of this problem is the dimension D of the vector space of partial derivatives of the given polynomial. It is known that if the Waring rank of a polynomial in n variables of degree d is s, then D is at most s(d+1). A longstanding conjecture states that given D, the Waring rank is upper bounded by a polynomial in n, d, and D. To study this conjecture, we restrict to a very special class of polynomials with no redundant variables (also called concise), which we call 1-support concentrated polynomials, defined by the following property: given such a polynomial f, all its partial derivatives can be obtained as linear combinations of derivatives with respect to powers of a fixed set of n linearly independent linear forms. A crucial property of such f is that the dimension of the partial derivatives of f in any degree is at most n. We show that the converse is true: any concise polynomial for which the dimension of partial derivatives in any degree is less than n is also 1-support concentrated. We also generalize an example given by Stanley to give an explicit class of concise polynomials ST(n,d) in O(max{n^d, d^n}) variables of degree d that is 1-support concentrated. A polynomial is a direct sum if it can be written as a sum of two polynomials in distinct sets of variables, up to a linear change of variables. A polynomial f is a limit of direct sums if there is a sequence of polynomials, each a direct sum, converging to f. Necessary and sufficient conditions for a polynomial to be a direct sum or a limit of direct sums were extensively studied by Buczyńska et al. and Kleppe. We show that any concise 1-support concentrated polynomial in n variables with degree d > 2n is a limit of direct sums. We also show that ST(n,d) (which does not satisfy the previous degree hypothesis) is a limit of direct sums. The border rank of a homogeneous polynomial f is the minimal r such that there is a sequence of polynomials, each with Waring rank at most r, converging to f. The debordering question is as follows: given f with border Waring rank r, what is the best upper bound on the Waring rank of f in terms of n, r, and d? The best-known bound is due to Dutta et al., in 2024. In the context of this problem, it is interesting to find examples f for which the Waring rank of f is strictly greater than its border Waring rank. We show that ST(3,4) and ST(2,d), for any d > 2, have this property.
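
The bound D ≤ s(d+1) quoted above follows from a short counting argument; one standard way to see it (supplied here as a reasoning aid, not quoted from the thesis):

```latex
% If f = \sum_{i=1}^{s} \ell_i^{d}, then any partial derivative of order k
% maps each \ell_i^{d} to a scalar multiple of \ell_i^{\,d-k}, so
\partial^{\alpha} f \in \operatorname{span}\{\ell_1^{\,d-k},\dots,\ell_s^{\,d-k}\},
\qquad k = |\alpha|.
% Summing the dimension bound s over the d+1 possible orders k = 0,\dots,d:
\dim \operatorname{span}\{\partial^{\alpha} f : \alpha\} \;\le\; \sum_{k=0}^{d} s \;=\; s\,(d+1).
```
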
  • Item
    The Effects of Temperature on Lithium-Ion Battery Cells and Packs
    (University of Waterloo, 2024-09-19) Mevawalla, Anosh

    In the United States, transportation accounts for 28% of total greenhouse gas emissions. Electric vehicles are a significant step toward lowering emissions. Lithium-ion batteries are critical to the commercialization of electric vehicles; however, batteries are temperature-sensitive, and sub-optimal temperatures can cause degradation, loss of power, loss of voltage, and thermal runaway. A lightweight, safe, and effective heat management system improves the vehicle's mileage, speed, safety, and longevity. Research into the effects of temperature on lithium-ion batteries and battery packs is therefore essential to developing electric vehicles that can be widely adopted by the public. Models that quickly and accurately forecast temperature and voltage from operational parameters can avoid thermal runaway, increase charging speed, prevent lithium plating, and increase cycle life.
    The work consists of a thorough investigation of the effect of temperature, at both the cell and pack level, on various battery parameters such as state of health, internal resistance, capacity, and performance. Battery models based on both equivalent circuits and physiochemical models are produced, and various battery pack designs are investigated. The effects of temperature on overpotential, current density, capacity, and cycle life are also modeled. The thesis is divided into four parts:


    Part 1:

    This section presents mathematical models for quick calculation that can be used in battery management systems (BMS) and battery thermal management systems (BTMS). It introduces two distinct models: an internal resistance (Rint) model and a physiochemical diffusion/Butler-Volmer-based 1-D partial differential model. The Rint model incorporates a relationship between internal resistance, state of charge (SOC), and C-rate. The investigations use thermocouples on both the battery's surface and tabs. At 4C, the battery temperature rose from 22.00°C to 47.40°C, while the tab temperature went from 22°C to 52.94°C. Simulation results are compared to experimental data at various C-rates (1, 2, 3, and 4C) at 22°C. The simulations indicate accurate temperature prediction using a simple Rint model. The reduced physiochemical model, with only three partial differential equations (PDEs), achieves accuracy comparable to the Rint model. The Rint model accurately predicts battery internal resistance using a Pearson curve and a hyperbolic sine function, based on current and state of charge.
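
The structure of such a quick-calculation model can be sketched in a few lines: ohmic heat generation from the internal resistance minus convective loss, integrated with explicit Euler steps. Every parameter value below is an illustrative stand-in (a constant resistance rather than the fitted Pearson/hyperbolic-sine R(SOC, I) surface, and a hypothetical 3.2 Ah cell for the C-rate arithmetic):

```python
import numpy as np

def simulate_temperature(I, R=0.015, m=0.45, cp=1100.0, hA=0.12,
                         T_amb=22.0, dt=1.0, t_end=900.0):
    """Lumped 0-D Rint thermal model: dT/dt = (I^2*R - hA*(T - T_amb)) / (m*cp).
    Units: A, ohm, kg, J/(kg*K), W/K, degC, s. All values are illustrative."""
    T, trace = T_amb, []
    for _ in np.arange(0.0, t_end, dt):
        q_gen = I ** 2 * R             # irreversible ohmic heating, W
        q_loss = hA * (T - T_amb)      # convective loss to ambient, W
        T += dt * (q_gen - q_loss) / (m * cp)
        trace.append(T)
    return np.array(trace)

# A 4C discharge of a hypothetical 3.2 Ah cell draws 12.8 A and lasts ~900 s.
print(f"end-of-discharge surface temperature: {simulate_temperature(12.8)[-1]:.1f} C")
```
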


    Part 2:

    This section showcases three electrothermal equivalent circuit models with multiple input parameters (SOH, SOC, current, and temperature). The models allow us to estimate parameters like internal impedance using practical inputs, unlike traditional physiochemical models that rely on experimentally unavailable quantities like porosity and tortuosity. The study simulates the internal impedance of a LiFePO4 battery at various ambient temperatures (5, 15, 25, 35, 45 °C), discharge rates (1, 2, 3C), and SOHs (90%, 83%, 65%). The internal impedance surface fit experimental observations with a Pearson coefficient of 0.945. Three thermal models incorporated the internal resistance surface model. The first two thermal models were 0D and did not account for the battery's thermal conductivity. The first model assumed simple heating from internal resistance and convective energy loss, while the second incorporated the reversible heat term of the Bernardi equation. The third model was a 2D model that retained the earlier heat source terms while adding a tab junction heating source term. The 2D model was solved with a basic Euler approach and a finite central difference method. The 0D thermal models had R2 values of 0.9964 for the simple internal resistance model and 0.9962 for the model with reversible heating. The R2 for the 2D thermal model was 0.996.
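
The 2D model's numerical scheme (explicit Euler in time, central differences in space) can be illustrated with a generic sketch. Geometry, material properties, and source magnitudes below are placeholders, not the thesis's fitted values; the tab-junction term appears as an extra heat source in a few cells near one edge:

```python
import numpy as np

nx = ny = 40
dx = 0.005                           # grid spacing, m
k, rho, cp = 0.9, 2000.0, 1100.0     # W/(m*K), kg/m^3, J/(kg*K) (placeholders)
alpha = k / (rho * cp)
dt = 0.2 * dx ** 2 / alpha           # well inside the explicit stability limit

T = np.full((ny, nx), 25.0)          # initial temperature field, degC
q = np.full((ny, nx), 2e3)           # bulk internal-resistance heating, W/m^3
q[:3, nx // 3: 2 * nx // 3] = 3e4    # extra tab-junction heating near one edge

for _ in range(2000):
    # Central-difference Laplacian; boundary rows are re-pinned each step.
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx ** 2
    T += dt * (alpha * lap + q / (rho * cp))
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 25.0   # fixed-temperature edges

print(f"hot-spot temperature: {T.max():.1f} C")      # peaks near the tab cells
```
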


    Part 3:

    This section reports experimental data and model results for a LiFePO4 cell at C-rates of 1C, 2C, 3C, and 4C and at an ambient temperature of approximately 23°C. During the experiments, thermocouples were installed on the battery's surface. Experiments were carried out under continuous-current discharge. Temperature increased with C-rate on both the surface and the tabs. At 4C, the battery temperature climbed from 22°C to 47.40°C, while the tab temperature increased from 22°C to 52.94°C. Simulation results indicate that the cathode generates more heat than the anode, with electrolyte resistance being the dominant source of heat. Battery temperature was highest near the tabs and within the battery's internal space. Simulation of the lithium concentration in the battery revealed that the anode had a more uniform concentration than the cathode. These findings can aid in the precise design and control of Li-ion batteries.


    Part 4:

    The experimental setup consisted of 7 Panasonic NCA cells connected in parallel, with each cell rated at 3.2 Ah capacity. Individual cell capacities were measured and averaged; the experimentally determined value was 3.11 Ah. The arrangement had no BMS, and the batteries were allowed to equilibrate to a steady voltage at the end of discharge. The limiting current of the cells was low, posing fewer safety issues. The cooling method tested was ambient air cooling, with all trials taking place at an ambient temperature of around 25°C. The pack's thermal behavior was measured at six different constant-current discharge rates: 0.5C, 0.75C, 1C, 1.25C, 1.5C, and 1.75C.

  • Item
    Improving OLAP Workload Performance in Apache Ignite
    (University of Waterloo, 2024-09-19) Dodds, Mark
    Apache Ignite is an open-source system that is used to process database workloads. It utilizes Apache Calcite as the underlying query engine to parse SQL queries into relational algebra operators that are passed to the Ignite execution engine. This thesis investigates the performance of online analytical processing (OLAP) workloads under varying memory and data distribution settings using Apache Ignite with Apache Calcite. From these empirical studies, practical strategies to decrease response times for OLAP queries are designed and implemented in Apache Ignite. Studies using the TPC-H benchmark demonstrate that each strategy yields performance improvements across multiple queries and that, with all strategies enabled, average performance improves for all queries in the workload.
  • Item
    Design, Synthesis, and Biological Evaluation of Novel Phenoselenazine Derivatives as Amyloid Aggregation Inhibitors
    (University of Waterloo, 2024-09-19) Abdallah, Ahmed
    One of the leading challenges of modern medicine is Alzheimer's disease (AD), a chronic and debilitating neurodegenerative disorder that poses a global health threat with profound implications for individuals and societies. The identification of AD in 1907 can be attributed to the pioneering research of a German psychiatrist, Dr. Alois Alzheimer, who first identified two prevalent pathological features, plaques and tangles, in the brain of his patient. These distinct plaques are made up of an amyloid protein called beta-amyloid (Aβ), the chief component of AD's plaques and a principal culprit throughout the progression of AD. Recent noteworthy accomplishments with monoclonal antibodies (mAbs) have marked a pivotal milestone, ushering in a new era in which treatments targeting the amyloid cascade in Alzheimer's disease have emerged as a plausible avenue. Consequently, the amyloid cascade hypothesis has become the dominant framework for developing diagnostics and therapies for AD. Despite the scientific breakthroughs made in the last few decades, there remains a notable lack of effective treatments for impeding disease progression. Therefore, researchers are now more desperate than ever to develop amyloid-cascade-targeted small molecules, aiming to pave the way toward successful outcomes in AD treatment, as small molecules have a number of advantages over biological therapies. In this regard, the thesis research presented herein aimed to design and develop novel small molecules that have the potential to reduce or prevent disease progression by targeting the aggregation cascade of the two common forms of Aβ, known as Aβ42 and Aβ40. In addition, our ring scaffold was able to target another key factor in AD pathology, reactive oxygen species (ROS), and has the potential to mitigate their toxicity. A library of 47 compounds based on a novel fused tricyclic ring template was designed and developed by incorporating a selenium atom as part of the heterocyclic ring, to obtain the phenoselenazine (PSZ) derivatives. The synthesized compound libraries were evaluated as potential inhibitors of Aβ42 aggregation by carrying out fluorescence aggregation kinetics experiments, transmission electron microscopy studies, neuroprotection experiments in mouse hippocampal HT22 neuronal cells exposed to Aβ42, evaluation of their antioxidant properties, blood-brain barrier permeability experiments, and computational modeling studies.
  • Item
    The importance of incidence angle for GLCM texture features and ancillary data sources for automatic sea ice mapping.
    (University of Waterloo, 2024-09-19) Pena Cantu, Fernando Jose
    Sea ice is a critical component of Earth's polar regions. Monitoring it is vital for navigation and construction in the Arctic and crucial for understanding and mitigating the impacts of climate change. Synthetic aperture radar (SAR) imagery, particularly dual-polarized SAR, is commonly used for this purpose due to its ability to penetrate clouds and provide data in nearly all weather conditions. However, relying solely on HH and HV polarizations for automated sea ice mapping models has limitations, as different ice types and conditions may yield similar backscatter signatures. To enhance the accuracy of these classification models, researchers have explored the integration of additional features, including hand-crafted texture features, learned features, and supplementary data sources. This thesis makes two main contributions to the field of automated sea ice mapping. The first contribution investigates the dependence of gray level co-occurrence matrix (GLCM) texture features on incidence angle (IA) and its impact on sea ice classification. The methodology involved extracting GLCM features from SAR images in dB units and analyzing their dependence on IA using linear regression and class separability metrics. In addition, a Bayesian classifier was trained to compare classification performance with and without incorporating the IA dependence. The results indicated that the IA effect had a minor impact on classification performance (≈ 1%), with linear regression results indicating that the IA dependence accounts for less than approximately 10% of the variance in most cases. The second contribution evaluates the importance of various data inputs for automated sea ice mapping using the AI4Arctic dataset. A U-Net based model was trained with SAR imagery, passive microwave data from AMSR2, weather data from ERA5, and ancillary data. Ablation studies and the addition of individual data inputs were conducted to assess their impact on model performance. The results demonstrated that including AMSR2, time, and location data significantly increased model performance, especially the classification accuracy of major ice types in stage of development (SOD). ERA5 data had mixed effects, as it was found not to increase performance when AMSR2 was already included. These findings are critical for the development of more accurate and efficient automated sea ice mapping systems. The minimal impact of IA dependence on GLCM features suggests that accounting for IA may not be necessary, simplifying the feature extraction process. Identifying the most valuable data inputs allows for the optimization of model performance, ensuring better resource allocation and enhanced operational capabilities in sea ice monitoring. This research provides a foundation for future studies and developments in automated sea ice mapping, contributing to more effective climate monitoring and maritime navigation safety.
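
The first contribution's measurement pipeline can be sketched as: quantize a dB-scaled patch on a fixed global range, compute a GLCM texture feature, and regress it on IA to see how much variance IA explains. The snippet below runs on synthetic Gaussian patches whose spread is made to vary with IA purely for illustration; the bin range and noise model are assumptions, not the AI4Arctic data.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
LEVELS = 32
BINS = np.linspace(-30.0, 0.0, LEVELS - 1)   # fixed global dB quantization

def glcm_contrast(patch_db):
    """Mean GLCM contrast of one patch over two offsets (0 and 90 degrees)."""
    q = np.clip(np.digitize(patch_db, BINS), 0, LEVELS - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=LEVELS, symmetric=True, normed=True)
    return graycoprops(glcm, "contrast").mean()

# Synthetic HH-like patches whose spread (hence texture) varies with IA.
ia = rng.uniform(20.0, 45.0, 200)            # incidence angles, degrees
feats = np.array([glcm_contrast(rng.normal(-15.0, 1.0 + 0.05 * a, (32, 32)))
                  for a in ia])
slope, intercept = np.polyfit(ia, feats, 1)
r2 = np.corrcoef(ia, feats)[0, 1] ** 2       # share of variance tied to IA
print(f"slope per degree: {slope:.3f}, R^2: {r2:.2f}")
```
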
  • Item
    Comparing wildfire recovery at a bog and a fen along a burn severity gradient
    (University of Waterloo, 2024-09-19) Wegener, Emma
    Wildfires are increasing in intensity and severity, emitting carbon (C) stored in soil and biomass to the atmosphere. This is of heightened importance in peatlands, which have deep deposits of combustible soil. Carbon in peatlands has been accumulating for millennia, as organic matter input has exceeded decomposition and combustion. The presence of saturated soils inhibits the rapid oxidation of dead organic matter, thereby limiting C losses through decomposition. Carbon accumulation in peatlands is supported by the adaptations of characteristic vegetation assemblages, which can grow quickly and in high abundance, increasing the rate of C accumulation, or reducing rates of decay. As wildfires become increasingly severe, collecting data over a range of burn severities, in an array of peatland types, to better characterize rates of recovery is paramount. Thus, I measured C fluxes, plant functional traits, and plant community composition at a bog and a fen along a burn severity gradient, with the aim of gaining a better understanding of the influence of burn severity on recovery in different peatland types. I found that, six to seven years following wildfire, biomass accumulation was greater at the fen than the bog, especially at the moderately burned fen, which had nearly 12-fold the biomass of the moderately burned bog; however, the plant community composition was dominated by opportunistic plants such as Betula glandulosa that were not characteristic of the unburned treatment. Plant functional traits suggested that response to disturbance differs among plant types along the burn severity gradient at each peatland type, where LDMC regularly decreased along the burn severity gradient and either SLA or height increased. Understory GEP and ER were significantly greater at the fen than the bog, although NEE was not statistically different, as sequestration and efflux at each site balance to approximately 0 g C m-2 day-1. These results should be considered alongside tree and vegetation surveys, which suggest that while ground-level fluxes may be similar, overstory contributions at each site are crucial to consider, as they contribute to the C storage capacity of the unburned sites but are missed by ground-level fluxes. A great amount of C is held in trees at the unburned sites, less at the moderately burned sites due to competition, and there is a high number of tree seedlings at the severely burned sites. Methane fluxes, however, appear to recover more slowly following deeper peat combustion, with moderate burns trending toward pre-burn conditions, while the severe burns have limited efflux of CH4, which may suggest a reduction in substrate quality or a soil microbial community that has not yet recovered. The results of this study emphasize the need for more nuanced consideration of burn severity in peatland management and research. Severe burns are becoming more common with climate change, and incorporation of burn severity into global C models is necessary to ensure accurate estimates of C losses from wildfire. This study also highlights the importance of distinguishing between bogs and fens in ecological modeling, as applying the same rates of C efflux or accumulation to both could lead to significant inaccuracies. Understanding these differences is crucial for the prediction of C dynamics in peatland ecosystems, particularly in the context of increasing wildfire frequency and intensity.
  • Item
    Knee Kinematics and Kinetics During a Dynamic Balance Task and Gait in Those With and Without Generalized Joint Hypermobility
    (University of Waterloo, 2024-09-19) Grad, Dalia
    Symptomatic generalized joint hypermobility (GJH) is a life-long condition characterized by a predisposition to joint dislocations and subluxations, disturbed proprioception, chronic pain and fatigue, degenerative joint disease, and disability. Disease burden is amplified by delayed diagnosis, which is, in part, due to the current reliance on an unvalidated diagnostic measure of symptomatic GJH, the Beighton Score. Biomechanics has the potential to improve the identification of GJH. While no patterns have emerged that appear specific to GJH in gait, stair climbing or vertical jumping, biomechanical characteristics of postural stability appear distinct in GJH. The overall purpose of this study was to test whether performance of a dynamic balance test, the modified Star Excursion Balance Test (mSEBT), on stable and unstable surfaces, distinguishes between GJH and non-GJH in age- and sex-matched adults. A secondary objective was to determine the associations of performance on dynamic balance tasks with (i) the current diagnostic criteria and (ii) a measure of disease impact. It was hypothesized that maximum reach distance (MRDcomp) and maximum knee flexion angle (MKAcomp) would be smaller, and that centre of pressure total excursion (COPTEcomp) and dynamic knee stiffness (DKS) would be greater, in those with GJH versus those without GJH. It was also hypothesized that disease impact would share a stronger association with MRDcomp than the current diagnostic criteria. This cross-sectional study compared two age- (24.6 ± 4.1 years) and sex- (26 females, 2 males) matched, non-athlete groups with and without GJH. From the entire sample, one participant met the criteria for symptomatic GJH. Kinematic and kinetic data were captured synchronously with research-grade motion capture (Optotrak Certus, Northern Digital Inc., Waterloo, ON, CA) and an in-ground force plate (OR6-7, Advanced Mechanical Technologies Inc., Watertown, MA, USA). First, participants performed a dynamic balance task, the mSEBT, in three conditions: stable (no foam surface), unstable (foam surface), and stable and timed. Performance on the mSEBT was measured, and MKAcomp and COPTEcomp were also measured during the mSEBT. Second, DKS was averaged over five gait trials at a standardized speed (1.0 m/s). A two-way mixed analysis of variance was used to model the main effects of group and condition on MRDcomp, MKAcomp, and COPTEcomp. A Mann-Whitney U test was used to compare DKS in the non-dominant leg of both groups. Two hierarchical multiple regressions were used to determine whether there is an association between (i) the current diagnostic criteria and MRDcomp and (ii) disease impact and MRDcomp, with physical activity (International Physical Activity Questionnaire) as a covariate. No significant main effect of group was found for MRDcomp (p = 0.26), showing there was no difference between GJH and non-GJH groups in MRDcomp. No significant main effect of group was found for COPTEcomp (p = 0.99), showing there was no difference between GJH and non-GJH groups in COPTEcomp. No significant main effect of group was found for MKAcomp (p = 0.45), showing there was no difference in maximum knee flexion between non-GJH and GJH groups during the mSEBT. No significant difference was found between GJH and non-GJH groups for DKS in the timed condition (p = 0.22).
    The regression models identified that the diagnostic criteria (Beighton Score) (R2 = 0.07; p = 0.90) and disease impact (Bristol Impact of Hypermobility Questionnaire) (R2 = 0.08; p = 0.95) were not associated with MRDcomp. The results of this study indicate that performance on the mSEBT and DKS do not differ between GJH and non-GJH groups in this sample of non-athlete university graduate and undergraduate students. Additionally, a measure of disease impact does not associate with performance on the mSEBT any better than the current diagnostic criteria in this study's sample. Strengths of the study include using a combination of novel clinical and biomechanical methods and measures in those with GJH. Future work on the clinical use of the mSEBT and DKS may consider recruiting those with symptomatic GJH and/or older participants with GJH.
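
For readers unfamiliar with this analysis pair, the sketch below shows the shape of both tests on simulated stand-in data: a Mann-Whitney U comparison of DKS between groups, and a two-step hierarchical regression checking whether the Beighton Score adds explained variance in MRDcomp beyond physical activity. Group sizes, effect sizes, and variable scales are all assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)
n = 14                                   # participants per group (illustrative)

# Non-parametric group comparison of dynamic knee stiffness (DKS).
dks_gjh = rng.normal(0.050, 0.010, n)
dks_ctl = rng.normal(0.052, 0.010, n)
u_stat, p_val = mannwhitneyu(dks_gjh, dks_ctl)
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_val:.2f}")

# Hierarchical regression: does the Beighton Score add to the MRDcomp
# variance explained beyond physical activity (IPAQ)? Compare nested models.
df = pd.DataFrame({"ipaq": rng.normal(0.0, 1.0, 2 * n),
                   "beighton": rng.integers(0, 10, 2 * n).astype(float)})
df["mrd"] = 90.0 + 2.0 * df["ipaq"] + rng.normal(0.0, 5.0, 2 * n)
m1 = sm.OLS(df["mrd"], sm.add_constant(df[["ipaq"]])).fit()
m2 = sm.OLS(df["mrd"], sm.add_constant(df[["ipaq", "beighton"]])).fit()
print(f"Delta R^2 from adding Beighton = {m2.rsquared - m1.rsquared:.3f}")
```
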
  • Item
    Spatial Service Design for Public Services
    (University of Waterloo, 2024-09-18) Hoveida, Sina
    This study aims to design a service network in an urban area with continuous demand to maximize social welfare. We assume that customers are sensitive to travel and wait times, seeking service at a location that maximizes their utility. First, we demonstrate that the problem of spatial service design and pricing for an urban area with strategic customers is equivalent to a bilevel design problem in which customers can be explicitly assigned to service locations; this guarantees the existence of a service fee mechanism that supports the optimal assignment. We show that the urban area can be divided into a set of connected and disjoint regions such that, within each region, all customers seek service from the same location. We then derive the relationship between the optimal demand rate served at each service location and the optimal capacity (service rate) of the location. Our findings indicate that the optimal service capacity at each location depends on the capacity cost. For instance, the square root rule is optimal for linear capacity costs, but this does not hold for nonlinear costs. Furthermore, we characterize the optimal service fee and observe that when the service capacity is fixed and cannot be adjusted, the optimal service fee is higher at locations with higher optimal demand rates. However, when capacity can be optimally allocated, the optimal service fee depends on the capacity cost. Specifically, if the cost is linear, optimal capacity allocation leads to optimal social welfare, resulting in all locations charging the same service fee. Conversely, if the capacity cost is nonlinear, the optimal fee decreases with the optimal demand rate when the service cost is strictly convex and increases when the cost is strictly concave.
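
The square-root rule mentioned above can be checked numerically in the textbook single-location case: with an M/M/1 queue, waiting cost w per customer per unit time, and linear capacity cost c per unit of service rate, the welfare-maximizing capacity is mu* = lambda + sqrt(lambda * w / c). The snippet below verifies this for arbitrary test values; the queueing form and parameters are illustrative assumptions, not the thesis's full spatial model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

lam, w, c = 4.0, 2.0, 0.5      # demand rate, waiting cost, capacity cost

def neg_welfare(mu):
    # M/M/1 mean sojourn time is 1/(mu - lam), so the total waiting cost
    # is lam * w / (mu - lam); the capacity cost is linear, c * mu.
    return lam * w / (mu - lam) + c * mu

res = minimize_scalar(neg_welfare, bounds=(lam + 1e-6, lam + 100.0),
                      method="bounded")
print(f"numeric optimum:     mu* = {res.x:.4f}")
print(f"square-root formula: mu* = {lam + np.sqrt(lam * w / c):.4f}")
```
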
  • Item
    Understanding Perspectives on Care Manager Competencies: A Multiple-Method Needs Assessment
    (University of Waterloo, 2024-09-18) Romano, Leonardo
    Background: Home-based primary care may be well situated as an alternative to conventional episodic primary care for chronically ill older adults, potentially delaying and reducing long-term care admissions while improving satisfaction with care and quality of life. Care management, which seeks to assist patients and their support systems in managing their illnesses, could play a key role in providing home-based primary care. However, little is known about the attributes and competencies a care manager should have when working in home-based primary care. Objectives: To identify necessary competencies for care managers who work with older adults, verify competencies in a home-based setting, and gain a deeper understanding of why the competencies are important. Methods: A scoping review following Arksey and O’Malley’s framework was conducted. A search string encompassing care integration, care management, and clinical competence was used to find academic literature on PubMed, CINAHL, and Scopus. The academic search string was adapted to custom Google searches and targeted website searches to identify grey literature. Extracted competencies were organized into similar groups. The sixth step of Arksey and O’Malley’s framework, consultation, was conducted using a quantitative survey to verify the literature review’s findings, and qualitative interviews to gain a greater understanding. The survey used the competencies uncovered in the scoping review and asked care managers and healthcare providers to rank the importance of competency groups with respect to one another, and rate the importance of individual competencies from -3 to +3, with -3 being very unimportant and +3 being very important, in a home-based setting. Results: The literature review identified 125 competencies from 65 academic articles and pieces of grey literature. These were categorized into 13 groups in three audience-facing facets. A total of 20 survey participants rated competencies and ranked competency groups, the averages of which were used to ascertain the degree to which they are important for care managers. The highest-rated competencies were patient-facing relationship and rapport building (2.90) and patient-facing confidentiality (2.90); the highest-ranked competency group was patient care management (2.80). Cronbach’s alpha for the survey was 0.93. A follow-up interview was conducted which provided further endorsement and nuance for some competencies, as well as a test for the feasibility of further qualitative investigation. Discussion and Conclusion: The findings provide support for the importance of competencies for care managers who work in home-based settings with older adults. The review identified competencies beyond those suggested by professional organizations; these included personality traits and caregiver support and communication. Further contextualization is needed to gain a deeper understanding of why the identified competencies are important. Practitioners should be aware of the breadth of competencies a care manager may need to possess when working in home-based settings with older adults and should prioritize training according to their importance.
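
The survey's internal-consistency statistic, Cronbach's alpha, is a direct formula: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A self-contained sketch on fabricated ratings follows (the -3 to +3 scale matches the survey; the respondent and item counts are placeholders).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) rating matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total score
    return k / (k - 1) * (1 - item_var / total_var)

# Fabricated ratings on the survey's -3..+3 importance scale: a shared
# per-respondent tendency plus item-level noise yields correlated items.
rng = np.random.default_rng(3)
tendency = rng.integers(0, 3, (20, 1))
ratings = np.clip(tendency + rng.integers(-1, 2, (20, 8)), -3, 3)
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```
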
  • Item
    Spatio-Temporal Analysis of Roundabout Traffic Crashes in the Region of Waterloo
    (University of Waterloo, 2024-09-18) Miyake, Ryoto
    Roundabouts are increasingly implemented as safer alternatives to stop-controlled and signalised intersections, with the goal of reducing the severity and frequency of traffic crashes. The safety performance of roundabouts, however, is influenced by their geometric design, and the effects of geometric design variables on safety can vary across different countries and regions. Despite this, there is limited research on these safety impacts within the Canadian context. This study addresses this gap by using data from the Region of Waterloo, Ontario, to develop a safety performance function (SPF) using a negative binomial regression model. The model identified significant geometric design variables affecting collision frequency, such as inscribed circle diameter (ICD), entry angle, entry lane width and number of entry lanes. The findings suggest that the safety impacts of geometric design in Canada may differ from those observed in other countries, highlighting the need for region-specific SPFs. Additionally, in areas where roundabouts are relatively new, it is expected that the safety performance of roundabouts may fluctuate over time and across different locations. However, spatio-temporal variations in roundabout safety have not been extensively studied. To fill this gap, a spatio-temporal analysis was conducted using Bayesian hierarchical models to capture spatial and temporal variations in collision frequency. The results reveal significant spatial autocorrelation, while no strong temporal patterns or novelty effect were detected within the scope of the data and modelling approach used in this analysis. This research advances the understanding of how geometric design and spatio-temporal factors influence roundabout safety, providing important insights for the planning and design of roundabouts. Moreover, it is pioneering in its application of spatio-temporal interaction effects in road safety analysis, demonstrating the potential for this approach in future studies.
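
A safety performance function of the kind described is typically a negative binomial regression of crash counts on exposure and geometry. The sketch below fits one on simulated placeholder data; the variables, coefficients, and dispersion value are assumptions for illustration, not the Region of Waterloo estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 120

# Simulated roundabout-level data (placeholders, not Waterloo data).
df = pd.DataFrame({
    "ln_aadt": np.log(rng.uniform(2_000, 30_000, n)),  # exposure term
    "icd": rng.uniform(25.0, 60.0, n),                 # inscribed circle diameter, m
    "entry_angle": rng.uniform(10.0, 40.0, n),         # degrees
})
# SPF form: E[crashes] = exp(b0 + b1*ln(AADT) + b2*ICD + b3*entry_angle),
# with gamma heterogeneity to induce overdispersion in the counts.
mu = np.exp(-6.0 + 0.8 * df["ln_aadt"] + 0.01 * df["icd"]
            - 0.005 * df["entry_angle"])
df["crashes"] = rng.poisson(mu * rng.gamma(2.0, 0.5, n))

X = sm.add_constant(df[["ln_aadt", "icd", "entry_angle"]])
nb = sm.GLM(df["crashes"], X,
            family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(nb.params)
```
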
  • Item
    Interpretable Machine Learning (IML) Methods: Classification and Solutions for Transparent Models
    (University of Waterloo, 2024-09-18) Ghaffartehrani, Alireza
    This thesis explores the realm of machine learning (ML), focusing on enhancing model interpretability called interpretable machine learning (IML) techniques. The initial chapter provides a comprehensive overview of various ML models, including supervised, unsupervised, reinforcement, and hybrid learning methods, emphasizing their specific applications across diverse sectors. The second chapter delves into methodologies and the categorization of interpretable models. The research advocates for transparent and understandable IML models, particularly crucial in high-stakes decision-making scenarios. By integrating theoretical insights and practical solutions, this work contributes to the growing field of IML, aiming to bridge the gap between complex IML algorithms and their real-world applications.