Theses
Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/6
The theses in UWSpace are publicly accessible unless restricted due to a pending publication or patent.
This collection includes a subset of theses submitted by graduates of the University of Waterloo as a partial requirement of a degree program at the Master's or PhD level. It includes all electronically submitted theses. (Electronic submission was optional from 1996 through 2006. Electronic submission became the default submission format in October 2006.)
This collection also includes a subset of UW theses that were scanned through the Theses Canada program. (The subset includes UW PhD theses from 1998 - 2002.)
Recent Submissions
Item Assessment of the Proposed Policies for a Carbon Capture and Storage Regulatory Framework in Ontario (University of Waterloo, 2025-02-21) Kim, Duckhoon
Since 2022, Ontario has been investigating the possibility of developing a Carbon Capture and Storage (CCS) framework as it aims to reduce carbon emissions and align with the federal government’s goal of net-zero emissions by 2050. This CCS regulatory framework should focus on hard-to-abate sectors where alternative renewable energy technologies are in their early stages or are difficult to transition to. However, few journal articles and grey-literature documents discuss CCS in Ontario from a policy perspective. The purpose of this thesis is therefore to understand and analyze Ontario’s proposed regulatory framework for CCS and to offer recommendations by comparing it against information gathered from other jurisdictions (Alberta, Saskatchewan, the United States, Europe and Australia). The key research questions are: (1) How can the knowledge gained from other regions regarding CCS help Ontario’s hard-to-abate sectors understand approvals, licensing, and liability? (2) What other necessary policies would Ontario need to expand upon and potentially adopt from various jurisdictions? (3) How did companies and governments in other jurisdictions communicate to the public about the need for this technology? The thesis first presents a literature review that compares and contrasts policies from other jurisdictions by synthesizing peer-reviewed journal articles and grey literature. Semi-structured interviews were then conducted to explore unique perspectives from interviewees with expertise in CCS and to assess whether their responses aligned with the information from the literature review. Following the interviews, the results were analyzed using ‘codes’ and ‘themes’, which allows for a simplified understanding of which information is unique. The interviews produced unique findings such as ensuring the appropriate industries are utilizing CCS, explaining the purpose of CCS, ensuring that the regulatory framework for CCS is properly developed, and the potential for CCS to make use of a carbon market through an Emissions Trading System (ETS). In November 2024, Ontario introduced Bill 228, which contains an Act called the Geologic Carbon Storage Act, 2024. This Act contains the core components of the regulatory framework, such as ownership, liabilities, and approvals and assessments. A description and analysis of this Act was therefore undertaken to understand how it compares with my research findings. In conclusion, to answer the first research question, the findings point to Ontario vesting in the pore space, implementing a unitization statute, implementing a transfer of liabilities once certain pre-conditions are met, and establishing a post-stewardship fund to cover liability costs. As for the second research question, the other necessary policies include expanding upon environmental assessment methods, using a systems analysis approach to understand the outcomes of developing CCS, incorporating CCS into carbon pricing schemes, and clarifying Ontario’s plans for how its CCS should be utilized.
The findings for the final research question recommend that the Ontario government and companies recognize the social demographic backgrounds of Ontario; ensure that Ontario is integrating and engaging with communities closely; explaining the downsides of not developing a CCS project; and respecting a community’s decision if they do not wish to engage with the project. Bill 228 is consistent with these findings, namely the inclusion of a liability transfer; a stewardship fund to cover the liabilities for the Crown; unitization of pore spaces; risk management; monitoring, measurement and verification (MMV); emergency response; and various approvals and assessments. However, the ownership of pore spaces deviates from these findings, as Ontario vests pore ownership to the surface owners but still allows the Crown to vest in the pore space when required.Item Reweighted Eigenvalues: A New Approach to Spectral Theory beyond Undirected Graphs(University of Waterloo, 2025-02-21) Tung, Kam ChuenWe develop a concept called reweighted eigenvalues, to extend spectral graph theory beyond undirected graphs. Our main motivation is to derive Cheeger inequalities and spectral rounding algorithms for a general class of graph expansion problems, including vertex expansion and edge conductance in directed graphs and hypergraphs. The goal is to have a unified approach to achieve the best known results in all these settings. The first main result is an optimal Cheeger inequality for undirected vertex expansion. Our result connects (i) reweighted eigenvalues, (ii) vertex expansion, and (iii) fastest mixing time [BDX04] of graphs, similar to the way the classical theory connects (i) Laplacian eigenvalues, (ii) edge conductance, and (iii) mixing time of graphs. We also obtain close analogues of several interesting generalizations of Cheeger’s inequality [Tre09, LOT12, LRTV12, KLLOT13] using higher reweighted eigenvalues, many of which were previously unknown. The second main result is Cheeger inequalities for directed graphs. The idea of Eulerian reweighting is used to effectively reduce these directed expansion problems to the basic setting of edge conductance in undirected graphs. Our result connects (i) Eulerian reweighted eigenvalues, (ii) directed vertex expansion, and (iii) fastest mixing time of directed graphs. This provides the first combinatorial characterization of fastest mixing time of general (non-reversible) Markov chains. Another application is to use Eulerian reweighted eigenvalues to certify that a directed graph is an expander graph. Several additional results are developed to support this theory. One class of results is to show that adding $\ell_2^2$ triangle inequalities [ARV09] to reweighted eigenvalues provides simpler semidefinite programming relaxations, that achieve or improve upon the previous best approximations for a general class of expansion problems. These include edge expansion and vertex expansion in directed graphs and hypergraphs, as well as multi-way variations of some undirected expansion problems. Another class of results is to prove upper bounds on reweighted eigenvalues for special classes of graphs, including planar, bounded genus, and minor free graphs. 
These provide the best known spectral partitioning algorithm for finding balanced separators, improving upon previous algorithms and analyses [ST96, BLR10, KLPT11] using ordinary Laplacian eigenvalues.Item Imagining Shared Food Futures: honouring Canada's obligations towards Anishinaabek foodways(University of Waterloo, 2025-02-20) Koberinski, JodiSustainability scholars characterize climate breakdown and biodiversity loss as converging crises tied directly to settler colonial ‘resource management’ regimes. Canada gestures toward mitigating these crises by ‘including’ Indigenous knowledges in environmental impact assessments and policy. Canada prioritizes commodity market profitability over mitigating these crises by excluding Indigenous knowledges in resource management decisions when acting on that knowledge would disrupt industry-favoured practices. One such practice is glyphosate use in forest ‘management.’ Glyphosate is a broad-spectrum agricultural herbicide repurposed to ‘manage’ regrowth after clearcutting forests. Banned by Quebec in 2001, Ontario embraced this practice. In 2013, Anishinaabek Elders along the north shore of the Great Lakes formed the Traditional Ecological Knowledge Elders to campaign for a moratorium on glyphosate use, which is counter to Anishinaabek environmental governance. Proponents claim herbicide use speeds stand regeneration, yet that regeneration converts food-bearing forests to pine plantations. Ontario legislators are not seeing the forest for the trees. This dissertation contributes to radical food geographies scholarship by characterizing the cumulative impacts of forestry policies on Indigenous foodways. Foodways include economic, material, linguistic, spiritual, intergenerational, scientific, ceremonial, and social dimensions of a culture’s food governance. This study concludes that efforts to imagine shared food futures in Canada’s settler colonial context require reframing ‘renewable’ resource extraction as Indigenous foodways disruption. Applying case study and participatory action research methods, I offer three manuscripts that together characterize the limitations of settler colonial knowledge in imagining shared food futures that meet settler treaty obligations. These three studies conclude that converting Anishinaabek food-bearing forests to pine plantations undermines the conditions required for Canada to meet treaty obligations to protect Anishinaabek foodways. In the first manuscript, I adapt Vivero Pol’s multi-governance framework to Canada’s settler colonial context to analyze customary and contemporary Indigenous food initiatives through a food commons lens. This study reveals the limitations of settler colonial frameworks for imagining shared food futures. The second manuscript seeks to overcome these limitations by centring an Anishinaabek research paradigm in collaboration with Traditional Ecological Knowledge Elders of the North Shore of Lake Huron. Our case study examining the cumulative impacts of changes to forestry legislation on Anishinaabek foodways centres TEK Elders’ efforts to stop glyphosate use in forestry. Reflecting on Ontario’s Bill 197, we characterize the limitations of settler colonial knowledge systems for understanding the impacts of forest ‘management’ decisions on settler treaty obligations. 
To better understand the limitations raised in the first two manuscripts, I apply participatory action research methods in the third manuscript to analyze transcripts from the Canadian Society of Ecological Economics’ bi-annual conferences I co-organized between 2019 and 2021. I ask what Indigenous knowledge holders have to say about the repackaging of Indigenous concepts by sustainability researchers within colonial knowledge systems. Despite gestures towards ‘inclusion’ of Indigenous knowledge, settler colonial frameworks depoliticize Indigenous resistance and resurgence, often reinforcing colonial narratives of land cessation and dispossession. Without addressing the underlying settler colonial assumptions and structures, sustainability scholars and settler governments relying on their research risk replicating the violence inherent in food policy frameworks built on settler supremacy. Collectively, these manuscripts identify actions settler colonial scholars have the responsibility to take up, beginning with transforming settler colonial narratives.Item On Spoken Confidence: Characteristics of Explicit Metacognition in Reasoning(University of Waterloo, 2025-02-20) Stewart, KaidenIn this thesis, I assess how explicit, subjective evaluations of confidence influence monitoring and control (i.e., metacognitive) processes in reasoning. Metacognitive processes play a crucial role in modern dual-process theories of reasoning and decision-making, the consequences of which have been implicated in numerous significant real-world decisional outcomes. It is tacitly assumed that monitoring one’s reasoning for the purpose of optimal deployment of controlled, deliberative processing functions similarly to monitoring one’s reasoning for the purpose of providing a judgment of confidence, despite evidence from other domains indicating otherwise. This thesis takes a critical step toward evaluating metacognitive theories of reasoning and their broader application by assessing the degree to which standard approaches represent realistic accounts of metacognitive processes. To aid in interpretation of the work directly testing this possibility, I first present six experiments addressing foundational issues with respect to the operation of metacognition in reasoning. Chapter 2 provides evidence for a causal relationship between confidence judgments and controlled behavior (specifically deliberation), a reality often assumed in the absence of direct evidence. I demonstrate across four experiments that processing manipulations affect confidence and influence control behavior, consistent with a causal relationship, but also that it is possible to target control behaviour without mirroring effects on confidence. Chapter 3 develops a simple predictive model of confidence that identifies heretofore unidentified, item-based predictors of confidence. This simple model allows a unique approach to testing the central question in Chapter 4. Chapter 4 investigates whether the relationship between confidence and controlled behavior partly depends on the requirement to make explicit confidence judgments. Using a paradigm adapted from research involving nonhuman primates, I compare implicit and explicit confidence conditions. Results reveal small differences in controlled behavior and substantial differences in monitoring. In the present thesis, I provide evidence of plausibly systematic influences of common measurement approaches on reasoning. 
To this effect, it is likely that the reasoning processes in which individuals engage in day-to-day life are reliably different from those commonly assessed in the lab. This has practical, but also theoretical, implications which I discuss.
Item Wideband Signal Generation at Millimeter-Wave and Sub-THz Frequencies (University of Waterloo, 2025-02-20) Su, Zi Jun
The rise of sixth-generation (6G) wireless technology has created a need for wideband signal generation at high radio frequencies (RF). However, current digital-to-analog converters (DACs) face limitations, offering either wide bandwidth with low resolution or high resolution with limited bandwidth. This thesis proposes two methods that utilize multiple DACs to generate multiple narrowband sub-bands of a wideband signal, which are combined to produce the desired wideband signal. These methods employ distinct digital processing approaches tailored to specific applications, such as instrumentation or real-time Orthogonal Frequency Division Multiplexing (OFDM) signal generation. To address non-idealities in frequency-stitching-based transmitters, a frequency-domain calibration technique using multi-tone signals is introduced. Experiments at X-band (9.6 GHz) and D-band (129.6 GHz) validate these methods, demonstrating up to 8 GHz bandwidth and achieving an error vector magnitude (EVM) as low as 0.3% for a 7.2 GHz 256-QAM OFDM signal. A comparative study of three signal generation approaches, namely direct Arbitrary Waveform Generator (AWG) generation, baseband in-phase and quadrature (IQ) generation with up-conversion, and frequency stitching, shows EVMs of 1.5%, 0.8%, and 1%, respectively, for an 8 GHz OFDM signal. A novel architecture using phase-coherent IQ-DACs and mixers for each sub-band is also presented. Calibration using non-uniformly interleaved tones corrects IQ imbalances and distortions, enabling the generation of a 256-QAM OFDM signal with 12 GHz bandwidth at D-band (149 GHz) and achieving a peak data rate of 96 Gbps. Calibration improves EVM and normalized mean square error (NMSE) from 82.6% and 23.8% to below 2% and 1%, respectively. Additionally, D-band amplifier linearization with a 4 GHz modulation bandwidth improves adjacent channel power ratio (ACPR) from -27.8/-26 dBc to -42.8/-43.1 dBc and EVM from 8.5% to 1.2%. Finally, two architectures for sub-band combination are compared. One generates a wideband signal at intermediate frequency (IF) and up-converts it, while the other up-converts narrowband IF signals and combines them. The second approach demonstrates superior ACPR at high IF power levels, enhancing ACPR by up to 8 dB when generating a 1.2 GHz modulated signal at 142.5 GHz. These results highlight the efficacy of the proposed methods for generating and linearizing high-quality wideband signals, supporting advanced applications in millimeter-wave and sub-THz frequency bands for 6G technologies.
Item A Study of the Opportunities and Challenges of Using Edge Computing to Accelerate Cloud Applications (University of Waterloo, 2025-02-18) Qadi, Hala
I explore the viability of using edge clusters to host latency-sensitive applications and to run services that can improve end-to-end communication performance across both wide area networks (WANs) and 5G environments.
The study examines the viability of using edge clusters in three scenarios: accelerating TCP communications through TCP splitting in 5G deployments, hosting an entire application-level service or the latency-sensitive part of an application on an edge cluster, and deploying a TCP splitting service on edge clusters to support WAN communication. I explore these scenarios while varying packet drop rates, communication stacks, congestion control protocols, and TCP buffer sizes. My findings bring new insights about these deployment scenarios. I show that edge computing, especially through TCP splitting, can significantly improve end-to-end communication performance over the classical communication stack. TCP splitting over the 5G communication stack does not bring any benefit and can reduce throughput. This is because of the unique characteristics of the 5G communication stack. Furthermore, over the classical communication stack, TCP splitting brings higher benefit for flows larger than 64 KB. These findings provide valuable insights into how edge clusters can accelerate TCP communication in different network environments and identify high-impact research ideas for future work.Item Advanced Separator Modifications for Lithium-Sulfur Batteries: Multifunctional Organic Frameworks and Nanostructured Composites to Mitigate the Polysulfide Shuttle Effect(University of Waterloo, 2025-02-18) Fazaeli, RaziehThis thesis explores innovative approaches to addressing critical challenges in lithium-sulfur (Li-S) battery technology through the development of modified separator materials. The escalating concerns surrounding climate change, pollution, and fossil fuel depletion are propelling a global transition toward renewable energy sources like wind, solar, and hydropower. Alongside this shift is an increasing demand for efficient, high-capacity, and cost-effective energy storage systems that support these sustainable energy technologies, especially for applications in electric vehicles. Various rechargeable battery technologies, such as lithium-ion, sodium-ion, potassium-ion, magnesium-ion, zinc-ion, and aluminum-ion batteries, have garnered significant research attention due to their high efficiency, reversibility, light weight, and environmental friendliness. Although lithium-ion batteries have achieved widespread success in portable electronics and electric vehicles, they have limitations when it comes to the growing demand for energy density, long cycle life, and affordability. Consequently, next-generation batteries—particularly those based on sulfur chemistry—are being developed to meet these requirements. This thesis specifically investigates how functional materials for separator modification can address the main issues of polysulfide shuttle and conductivity in Li-S batteries, aiming to make these batteries more feasible for next-generation energy storage applications. The first study in this thesis focuses on designing a series of melamine-based porous organic frameworks (POFs) as efficient polysulfide reservoirs to modify glass fiber (GF) separators in Li-S batteries (LSBs). Despite the promising energy density of Li-S systems, the polysulfide shuttle effect—where lithium polysulfides (LiPSs) dissolve and migrate between electrodes—remains a significant barrier to achieving stable cycling and high capacity retention. To tackle this challenge, we synthesized a series of POF materials (POF-C4, POF-C8, and POF-C12) by reacting melamine with dibromoalkanes of varying chain lengths (C4, C8, and C12). 
The resulting POFs displayed distinct nanoscale pore sizes and solubility properties, which are critical for effective LiPS trapping and utilization. These POFs were then combined with conductive Super P (SP) and polyvinylpyrrolidone (PVP) binder to create a composite layer (POF-Cn/SP/PVP) that was coated onto GF membranes, forming modified separators that enhance the electrochemical performance of Li-S batteries. The batteries incorporating these modified separators were evaluated through various electrochemical tests, and the POF-C8/SP/PVP-modified separator, in particular, demonstrated outstanding performance. It delivered an initial specific capacity of 1392 mAh g⁻¹ at 0.1C and retained 90% capacity over 300 cycles at 0.5C. This enhanced performance can be attributed to the optimal pore structure of POF-C8 and its high nitrogen content, which work in tandem to capture soluble LiPSs and limit their migration toward the lithium anode. Furthermore, the good solubility of POF-C8 ensures uniform dispersion and strong interactions with LiPSs, enabling efficient redox reactions. This study highlights the potential of functional polymer-based separator modifications to mitigate polysulfide migration, improving the stability and longevity of Li-S batteries. The second study investigates the use of Congo Red (CR), a redox-active organic compound, in conjunction with cetyltrimethylammonium bromide (CTAB), a cationic surfactant, to modify GF separators for improved LSB performance. CR has a unique capability of engaging in redox reactions, which aids in suppressing the polysulfide shuttle by capturing LiPSs at the separator interface. The CR-CTAB/SP/PVP-modified GF separators demonstrated enhanced ion transport properties and higher sulfur utilization, addressing core issues that commonly degrade Li-S battery performance. Electrochemical performance tests revealed that LSBs with these CR-CTAB-modified separators achieved an initial specific capacity of 1161.9 mAh g⁻¹ and maintained 994.1 mAh g⁻¹ after 300 cycles at 0.5C, showing significant improvement over the baseline unmodified GF separators. The CR molecules in the separator modification layer serve as efficient adsorbents for polysulfides, while the CTAB molecules aid in stabilizing the structure and enhancing ion transport across the separator. This work emphasizes the importance of incorporating redox-active molecules into separator designs, showing that such molecules can effectively reduce the shuttle effect, enhance performance, and create more durable energy storage systems. The third study delves into the incorporation of a nanocomposite composed of CR and tin dioxide (SnO₂) nanoparticles for further improvement of polysulfide-trapping capability and redox kinetics in GF separators. The CR-SnO₂/SP/PVP-modified separators were synthesized by combining CR, SnO₂ nanoparticles, conductive SP, and PVP binder. This approach resulted in a composite layer with enhanced surface interactions and improved electron transport pathways. Structural characterization using techniques such as scanning electron microscopy (SEM), X-ray diffraction (XRD), and transmission electron microscopy (TEM) confirmed the uniform dispersion of CR and SnO₂, indicating strong cooperative interactions between these components. Electrochemical tests demonstrated that LSBs incorporating the CR-SnO₂-modified separators exhibited exceptional performance, with an initial specific capacity of 1377 mAh g⁻¹ at 0.1C and capacity retention of 91% over 300 cycles at 0.5C. 
The CR-SnO₂ composite material provides dual benefits: CR molecules effectively capture LiPSs, while SnO₂ nanoparticles act as catalysts, promoting redox reactions and enhancing ion transport. This synergy between CR and SnO₂ in the separator layer contributes to stable cycling performance and mitigates capacity loss due to polysulfide migration, making this composite a promising solution for improving Li-S battery stability. The fourth study addresses the shuttle effect challenge by employing cysteine and layered double hydroxides (LDHs) as 2D materials to create an innovative 2D heterostructure (Cys/FeNi-LDH). This heterostructure serves as a robust support for immobilizing V₂O₅ nanoparticles (NPs). Incorporating V₂O₅/Cys/FeNi-LDH (VCFN) into a GF separator ensured stable electron and ion pathways, significantly enhancing long-term cycling capabilities. The use of L-cysteine, a cost-effective and readily available amino acid, played a crucial role in enhancing the Li-S battery performance. The remarkable enhancement in electrochemical performance is attributed to the synergistic effects of VCFN nanoparticles, cysteine, and SP. A Li-S battery featuring the VCFN GF separator demonstrated an impressive initial capacity of 1036.8 mAh g⁻¹ and, after 300 cycles at 0.5C, retained a capacity of 920.1 mAh g⁻¹. This thesis demonstrates that modifying the separator is a highly effective strategy for addressing the primary challenges in Li-S batteries, particularly the polysulfide shuttle effect. By tailoring the physical and chemical properties of the separator layer, significant improvements in capacity retention, cycling stability, and rate performance have been achieved. Each of the materials used for the modification of GF separators demonstrates the potential to enhance battery performance through a unique mechanism. The melamine-based POF-C8-modified separator leverages a nanoscale porous framework to trap polysulfides and improve LiPS utilization. Meanwhile, the CR-CTAB and CR-SnO₂ composites add a redox-active element to the separator, aiding in polysulfide trapping and catalyzing redox reactions at the interface. A novel composite of V₂O₅ nanoparticles on Cys/FeNi-LDH sheets (VCFN) was synthesized and used to modify GF separators, enhancing the electrochemical stability of LSBs. This research contributes to the field of LSBs by providing insights into the design of multifunctional separators that simultaneously address multiple performance issues, including polysulfide retention, ion transport, and redox catalysis.
Item Model Predictive Control for Systems with Partially Unknown Dynamics Under Signal Temporal Logic Specifications (University of Waterloo, 2025-02-18) Dai, Zhao Feng
Autonomous systems are seeing increased deployment in real-world applications such as self-driving vehicles, package delivery drones, and warehouse robots. In these applications, such systems are often required to perform complex tasks that involve multiple, possibly inter-dependent steps that must be completed in a specific order or at specific times. One way of mathematically representing such tasks is using temporal logics. Specifically, Signal Temporal Logic (STL), which evaluates real-valued, continuous-time signals, has been used to formally specify behavioral requirements for autonomous systems. This thesis proposes a design for a Model Predictive Controller (MPC) for systems to satisfy STL specifications when the system dynamics are partially unknown, and only a nominal model and past runtime data are available.
The proposed approach uses Gaussian Process (GP) regression to learn a stochastic, data-driven model of the unknown dynamics, and manages uncertainty in the STL specification resulting from the stochastic model using Probabilistic Signal Temporal Logic (PrSTL). The learned model and PrSTL specification are then used to formulate a chance-constrained MPC. For systems with high control rates, a modification is discussed for improving the solution speed of the control optimization. In simulation case studies, the proposed controller increases the frequency of satisfying the STL specification compared to controllers that use only the nominal dynamics model. An initial design is also proposed that extends the controller to distributed multi-agent systems, which must make individual decisions to complete a cooperative task.Item The Philosophy of Reconstructions of Quantum Theory: Axiomatization, Reformulation, and Explanation(University of Waterloo, 2025-02-18) Oddan, JessicaThe quantum reconstruction programme is a novel research program in theoretical physics aimed at deriving the key features of quantum mechanics from fundamental physical postulates. Unlike standard interpretations of quantum theory, which take the Hilbert space formalism at face value, quantum reconstructions seek to derive this formalism from axiomatic principles. Reconstructions represent a new shift in foundations of physics away from interpreting quantum theory and towards understanding its foundational origins. The reconstruction programme has been a major focus of research in physics, beginning with Hardy (2001)’s “Quantum Theory from Five Reasonable Axioms.” However, the quantum reconstruction programme has been met with very little interest in philosophy. The goal of this project is to situate the quantum reconstruction programme in a broader philosophical context, investigating themes such as scientific methodology, explanation, the applicability of mathematics to physical theories, and theory exploration and development in the philosophy of science. I argue that reconstructions demonstrate a contemporary application of axiomatization with significant points of continuity to historical axiomatizations. I also argue that we should best understand reconstructions as provisional, practical representations of quantum theory that are conducive to theory exploration and development. Further, I contend that reconstructions function as alternative formulations of quantum theory, which is methodologically advantageous. I discuss Bokulich (2019)’s “Losing the Forest for the Ψ: Beyond the Wavefunction Hegemony” which argues that the existence of alternative formulations of quantum theory undermines our ability to literally interpret a single formulation. I argue that Bokulich (2019)’s conclusions further support the reconstructionist’s rejection of the standard interpretative project. I also argue that reconstructionists have gone beyond Bokulich (2019)’s insistence on the consideration of alternative formulations to develop a methodology that systematically constructs alternative formulations of quantum theory. Additionally, I argue that reconstructions of quantum theory are genuinely explanatory as they answer Wheeler (1971)’s “Why the quantum?” question. 
I contend that reconstructions are explanatory in the same spirit as Bokulich (2016)’s account of explanation in “Fiction As a Vehicle for Truth: Moving Beyond the Ontic Conception” which focuses on patterns of counterfactual dependence that correctly capture underlying dynamics. However, in order to accommodate the reconstruction case, I expand Bokulich’s account to consider theories and models as well as representations that are neither fictional nor literal interpretations. Thus, I offer an account of explanation in the reconstruction programme that is noncausal and non–interventionist, utilizing w–questions a la Woodward (2003). I conclude that reconstructions of quantum theory give us genuine insight into the structure of quantum theory via the generalized physical principles which carry physical content.Item The Power of Experimental Approaches to Social Choice(University of Waterloo, 2025-02-14) Armstrong, BenWith increasing connectivity between humans and the rise of autonomous agents, group decision-making scenarios are becoming ever more commonplace. Simultaneously, the requirements placed upon decision-making procedures grow increasingly nuanced as social choices are made in more niche settings. To support these demands, a deeper understanding of the behaviour of social choice procedures is needed. The standard theoretical approach to analyze social choice procedures is limited in the type of question it can answer. Theoretical analyses can be rigid: It may speak to the incompatibility of different properties without also providing a deeper understanding of the properties themselves, or might stop at proving the worst-case outcome of a voting rule without communicating the rule's typical behaviour. In this dissertation, we address these limitations by demonstrating that experimental analysis of social choice domains can provide an understanding of social choice which is both complementary and additional to theoretical findings. In particular, experimental approaches can form a middle ground between theory and practice: more practical than theoretical approaches in a setting more controlled than real-world application. We apply this approach to a new form of delegative voting and to a task of learning existing and novel voting rules. In each area we find results of a type and scale which are infeasible to traditional analysis. We first examine an abstract model of delegative voting -- agents use liquid democracy to transitively delegate their vote -- in a setting where the voters collectively agree on a correct outcome. Through extensive simulations we show the dynamic effects on group accuracy from varying a wide range of parameters that collectively encompass many types of human behaviour. We identify two features of this paradigm which result in improvements to group accuracy and highlight a possible explanation for their effectiveness. Subsequently, we apply this liquid democracy framework to the process of training an ensemble of classifiers. We show that the experimental findings from our simulations are largely maintained on a task involving real-world data and result in further improvements when considering a novel metric of the training cost of ensembles. Additionally, we demonstrate the creation of a robust framework for axiomatic comparison of arbitrary voting rules. Rather than proving whether individual rules satisfy particular axioms, we establish a framework for showing experimentally the degree to which rules general satisfy sets of axioms. 
This enables a new type of question -- degrees of axiom satisfaction -- and provides a clear example of how to compare a wide range of single and multi-winner voting rules. Using this framework, we develop a procedure for training a model to act as a novel voting rule. This results in a trained model which realizes a far lower axiomatic violation rate than most existing rules and demonstrates the possibility for new rules which provide superior axiomatic properties.Item A Multi-Phase Analysis of Gas Dynamics and Perturbations in the Galaxy Cluster Cores(University of Waterloo, 2025-02-14) Li, MuziThis thesis provides a detailed analysis of gas kinematics and their interactions across various phases within galaxy cluster cores. It examines the processes that generate gas perturbations and the factors that contribute to the thermal stability of the intracluster medium (ICM). A focus is placed on exploring the origins of multi-phase gas and the mechanisms—particularly AGN feedback—that either couple or decouple their motions. Radio-mechanical AGN feedback is identified as one of the most promising heating mechanisms that prevent the cooling of gas. However, the debate on the details of the heating transport processes has remained open. The atmospheres of 5 cool-core clusters, Abell 2029, Abell 2107, Abell 2151, RBS0533 and RBS0540, have short central cooling times but little evidence of cold gas, and jet-inflated bubbles. The amplitudes of gas density fluctuations were measured using a new statistical analysis of X-ray surface brightness fluctuations within the cool cores of these ‘spoil’ clusters in Chapter 2. The derived velocities of gas motions, typically around 100 - 200 km/s, are comparable to those in atmospheres around central galaxies experiencing energetic feedback, such as in the Perseus Cluster, and align well with the turbulent velocities expected in the ICM. Regardless of the mechanisms driving these perturbations, turbulent heating appears sufficient to counteract radiative losses in four of the five spoiler cluster cores. We thus suggest that other mechanisms, such as gas sloshing, may be responsible for generating turbulence, offering a plausible solution to suppress cooling in these structureless atmospheres. Multiphase filaments, key byproducts of AGN feedback, are frequently observed near central galaxies, with their morphologies and kinematics closely linked to bubbles. In Chapter 3, we analyzed the velocity structure functions (VSFs) of warm ionized gas and cold molecular gas, identified through [OII] emission and CO emissions observed by the Keck Cosmic Web Imager (KCWI) and the Atacama Large Millimeter/submillimeter Array (ALMA), respectively, in four clusters: Abell 1835, PKS 0745-191, Abell 262, and RXJ0820.9+0752. Excluding Abell 262, where gas forms a circumnuclear disk, the remaining clusters exhibit VSFs steeper than the Kolmogorov slope. The VSFs of CO and [OII] in RXJ0820 and Abell 262 show close alignment, whereas in PKS 0745 and Abell 1835, were differentiated across most scales, likely due to the churning caused by the radio-AGN. The large-scale consistency in Abell 1835 and RXJ0820, together with scale-dependent velocity amplitudes of the hot atmospheres obtained from Chandra X-ray data, may support the idea of cold gas condensation from the hot atmospheres. X-ray observations have previously been constrained by low energy resolution, which has impeded direct measurements of velocity fields in galaxy clusters. 
However, the recent release of initial data from the X-ray Imaging and Spectroscopy Mission (XRISM) provides a non-dispersive energy resolution of about 5 eV, facilitating the measurement of line broadening and shifts. In Chapter 4 of this thesis, I detail my contributions to calibrating the optical blocking filters for XRISM using synchrotron beamlines at the Canadian Light Source (CLS) and Advanced Light Source (ALS) prior to its launch, and I discuss the model-based estimation of the parameters of the calibrated filters. This capability for direct measurement of plasma velocities is expected to greatly improve our understanding of the ICM dynamics with high accuracy.Item From For To With: Towards an Allographic Approach in Architecture(University of Waterloo, 2025-02-13) Fournier, Marc-Although transformations to buildings are inevitable, architecture often aims to achieve idealized, finalized artifacts that refute the passage of time. This professional bias towards temporality – or the problem of permanence – creates and perpetuates non-reciprocal relationships between architects, users, and the built environment that often results in the exploitation and alienation of the people the discipline attempts to serve. By examining architecture's failure to account for diverse temporalities, this research sheds light on the ways in which architects overlook their potential to cultivate meaningful social interactions with the built environment. The architect’s role, therefore, needs to be redefined as a translator of collective desires and needs, as a designer of structures that promote agency and empower individuals to engage with their environments. This paradigm shift implies an inquiry into the architect’s conventional design apparatus and the expansion of its scope to include tools that embrace temporality and contingency as key variables. The thesis proposes a shift in focus from the production of artifacts to the design of architectural scores inspired by allographic arts. Allographic thinking shifts the emphasis from end product to process; forcing a renegotiation of author-designer / performer-user relationships, focusing on affordances and obstacles, favoring user agency, and embracing contingency. The context of the Habitations Jeanne-Mance, a post-war social housing in Montréal, acts as a case study for an exploration of the disciplinary problems of permanence, alienation, and non-reciprocity, as well as the testing ground for a speculative design intervention that integrates allographic thinking into architecture to create a system that promotes user participation, indeterminacy, and reciprocal relationships between residents and their built environment.Item Learning-Based Safety-Critical Control Under Uncertainty with Applications to Mobile Robots(University of Waterloo, 2025-02-13) Aali, MohammadControl theory is one of the key ingredients of the remarkable rise in robotics. Due to technological advancements, the use of automated robots, which was once primarily limited to industrial and manufacturing settings, has now expanded to impact many different parts of everyday life. Various control strategies have been developed to satisfy a wide range of performance criteria arising from recent applications. These strategies have different characteristics depending on the problem they solve. But, they all have to guarantee stability before satisfying any performance-driven criteria. 
However, as robotic technologies become increasingly integrated into everyday life, they introduce safety concerns. For autonomous systems to be trusted by the public, they must guarantee safety. In recent years, the concept of set invariance has been incorporated into modern control strategies to enable systematic safety guarantees. In this thesis, we aim to develop safety-critical control methods that can guarantee safety while satisfying performance-driven requirements. In the proposed strategies, we considered formal safety guarantees, robustness to uncertainty, and computational efficiency to be the highest design priorities. Each of them introduces new challenges which are addressed with theoretical contributions. We selected motion control in mobile robots as a use case for proposed controllers which is an active area of research integrating safety, stability, and performance in various scenarios. In particular, we focused on multi-body mobile robots, an area with limited research on safe operation. We provide a comprehensive survey of the recent methods that formalize safety for the dynamical systems via set invariance. A discussion on the strengths and limitations of each method demonstrates the capabilities of control barrier functions (CBFs) as a systematic tool for safety assurance in motion control. A safety filter module is also introduced as a tool to enforce safety. CBF constraints can be enforced as hard constraints in quadratic programming (QP) optimization, which rectifies the nominal control law based on the set of safe inputs. We propose a multiple CBF scheme that enforces several safety constraints with high relative degrees. Using the multi-input multi-output (MIMO) feedback linearization technique, we derive conditions that ensure all control inputs contribute effectively to safety. This control structure is essential for challenging robotic applications requiring multiple safety criteria to be met simultaneously. To demonstrate the capabilities of our approach, we address reactive obstacle avoidance for a class of multi-body mobile robots, specifically tractor-trailer systems. The lack of fast response due to poor maneuverability makes reactive obstacle avoidance difficult for these systems. We develop a control structure based on a multiple CBFs scheme for a multi-steering tractor-trailer system to ensure a collision-free maneuver for both the tractor and trailer in the presence of several obstacles. Model predictive control serves as the nominal tracking controller, and we validate the proposed strategy in several challenging scenarios. Although the CBF method has demonstrated a great potential for ensuring safety, it is a model-based method and its effectiveness is closely tied to an accurate system model. In practice, model uncertainty compromises safety guarantees and may lead to conservative safety constraints, or conversely, allow the system to operate in unsafe regions. To address this, we explore developing safety-critical controllers that account for model uncertainty. Achieving this requires combining the theoretical guarantees of model-based methods with the adaptability of data-driven techniques. For this study, we selected Gaussian processes (GPs) which bring together required capabilities. It provides bounds on the posterior distribution, enabling theoretical analysis, and producing reliable approximations even with a low amount of training data, which is common in data-driven control. 
The proposed strategy mitigates the adverse effects of uncertainty on high-order CBFs (HOCBFs). A particular structure of the covariance function is designed that enables us to convert the chance constraints of HOCBFs into a second-order cone constraint, which results in a convex constrained optimization as a safety filter. A discussion on the feasibility of the resulting optimization is presented, which provides the necessary and sufficient conditions for feasibility. In addition, we consider an alternative approach that uses matrix variate GP (MVGP) to approximate unknown system dynamics. A comparative analysis is presented which highlights the differences and similarities of both methods. The proposed strategy is validated on adaptive cruise control and active suspension systems, common applications in mobile robots. This study next explores the safety of switching systems, focusing on cases where system stability is assured through control Lyapunov functions (CLFs) and CBFs are applied for safety. We show that the effect of uncertainty on the safety and stability constraints forms piecewise residuals for each switching surface. We introduce a batch multi-output Gaussian process (MOGP) framework to approximate these piecewise residuals, thereby mitigating the adverse effects of uncertainty. We show that, by leveraging a specific covariance function, the chance-constrained safety filter can be converted to a convex optimization that is solvable in real time. We analyze the feasibility of the resulting optimization and provide the necessary and sufficient conditions for feasibility. The effectiveness of the proposed strategy is validated through a simulation of a switching adaptive cruise control system.
Item Long-term biophysical conditions and carbon dynamics of a temperate swamp in Southern Ontario, Canada (University of Waterloo, 2025-02-13) Afolabi, Oluwabamise Lanre
In Canada, wetlands cover a land area of 1.5 × 10⁶ km² and store ~129 Pg C. However, the carbon (C) cycling of swamps has been understudied even though they store a substantial quantity of C in their biomass and can also accumulate peat. In particular, southern Ontario swamps are estimated to hold ~1.1 Pg C under distinct hydroclimatic conditions. Previous studies on temperate swamp C fluxes were mostly based on short-term (<5 years) field measurements that limit our understanding of the multi-decadal dynamics that exist between this ecosystem’s C flux and biophysical conditions. To elucidate the long-term interactions and feedbacks that are important to temperate swamp C dynamics, a process-based model (CoupModel) was used to simulate plant processes, energy, water and C fluxes in one of the most well-preserved swamps in southern Ontario over a 78-year period (1983–2060). CoupModel reasonably simulated the C flux and controlling variables when validated with compiled historic field measurements (1983–2023), with coefficient of determination (R²) values of 0.60, 0.95 and 0.61 for soil respiration, surface soil temperature (0–5 cm) and water table level (WTL). Systematic calibration of the initialized model for Beverly Swamp with the Generalized Likelihood Uncertainty Estimation (GLUE) approach moderately reduced the uncertainty associated with modelling processes and assisted in identifying the important parameters that greatly influence temperate swamp C flux simulations. Plant-related processes and hydrological variables exerted the strongest control on the simulation of carbon dioxide (CO2) efflux through soil respiration.
The forcing of the GLUE calibrated CoupModel with an ensemble of climate projections downscaled from earth system models (ESMs) under shared socio-economic pathway (SSP5) by mid-century (2060) produced a decline in the swamp’s C uptake capacity as net ecosystem exchange (NEE) of CO2. Relative to the reference period of 1983–2002, the projected increase in mean air temperature (4.3 ± 0.8 oC) and precipitation (0.2 ± 0.1 mm) by 2050s triggered increase in 5 cm deep soil temperature, vapor pressure deficit, and evapotranspiration at Beverly Swamp. These changes to the swamp’s thermal and hydrological conditions dropped its WTL and VMC. Consequently, drier and warmer conditions raised the swamp’s CO2 efflux through ecosystem respiration, while its GPP moderately increased. These bidirectional feedbacks contributed to a reduction in the swamp’s net C uptake (NEE) by the 2050s but it mostly still maintained its net C sink role. While uncertainty in future climate projections and model fit limit our confidence in the precise estimate of future carbon exchange, it was clear that seasonal timing of warming and precipitation played an important role in the swamp response, with coincident declines in precipitation and warming temperatures in summer that caused water stress for plants. Results from this long-term study will help improve our understanding of the important ecohydrological interactions and feedbacks that drive the C cycle of temperate swamps, and their contributions to regional terrestrial C and water cycles. This will help inform decision making on the role of swamp peatlands as nature-based climate solutions through improved understanding of their net C exchange with the atmosphere.Item The Architecture of Grief: Representing the Evolution of Shia Mourning Spaces and Contributions to Islamic Architecture(University of Waterloo, 2025-02-13) Rizvi, Inam ZehraIn Shia Islam, commemorative mourning rituals for the martyrdom of Imam Hussain, the grandson of the Islamic Prophet Muhammad, at the Battle of Karbala in 680 CE, have undergone a process of evolution over the past 1,400 years. In documenting the evolution of the typology of the Shia Mosque, we find that its programs are directly related to the mourning rituals and symbolic icons that they house. This evolution is marked by the migration of material and visual forms to new lands, and its resultant replications vary in their scale and in their accuracy, often interacting and absorbing the cultural underpinnings of the region it occupies. This process reflects the spatiotemporal re-imagining of the phenomenology of “parallel pilgrimages” that captivates generations of Muslims. This thesis aims to explore these practices by focusing on ritual architectural events such as craft-making, mosaic arts, processions, and the creation of replica shrines. With the aim to demystify the current Shia practices and their distinctions from universal mosque spaces, a design approach focused on religious and cultural contributions on this form of collective grief and remembrance can have an opportunity to provide a space for clarity and education for what is a heavily stigmatized practice.Item Advancing Causal Representation Learning: Enhancing Robustness and Transferability in Real-World Applications(University of Waterloo, 2025-02-13) Shirahmad Gale Bagi, ShayanConventional supervised learning methods heavily depend on statistical inference, often assuming that data is identically and independently distributed (i.i.d). 
However, this assumption rarely holds in real-world scenarios, where environments or domains frequently shift, posing significant challenges to model robustness and generalization. Moreover, statistical models are typically treated as black boxes, with their learned representations remaining opaque and challenging to interpret. My research addresses these issues through a causal learning perspective, aiming to enhance the interpretability and adaptability of machine learning models in dynamic and uncertain environments. I have developed innovative methods for learning causal models that are applicable to a wide range of machine learning tasks, including transfer learning, out-of-distribution generalization, reinforcement learning, and action classification. The first method introduces a generative model tailored to learn causal variables in scenarios where the causal graph is known, such as Human Trajectory Prediction. By incorporating domain knowledge, this approach models the underlying causal mechanisms, leading to improved performance on both synthetic and real-world datasets. The results demonstrate that this generative model outperforms traditional statistical models, particularly in out-of-distribution contexts. The second method targets the more challenging scenario where the causal structure is unknown. I have explored various conditions and assumptions that facilitate the discovery of causal relationships without prior knowledge of the causal graph. This method combines advanced techniques in causal inference and machine learning to uncover the underlying causal graph and variables from observed data. Evaluations on both real-world and synthetic datasets show that this method not only surpasses existing approaches in causal representation learning but also brings AI systems closer to practical, real-world applications by enhancing reliability and interpretability. Overall, my research contributes significant advancements to the field of causal learning, providing novel solutions that improve model interpretability and robustness. These methods lay a strong foundation for developing AI systems capable of adapting to diverse and evolving real-world conditions, thereby broadening the scope and impact of machine learning across various domains.Item Deployment of Piezoelectric Disks in Sensing Applications(University of Waterloo, 2025-02-12) Abdelrahman, MohamedMicro-electromechanical Systems (MEMS) have revolutionized the way we approach sensing and actuation, offering benefits like low power usage, high sensitivity, and cost efficiency. These systems rely on various sensing mechanisms such as electrostatic, piezoresistive, thermal, electromagnetic, and piezoelectric principles. This thesis focuses on piezoelectric sensors, which stand out due to their ability to generate electrical signals without needing an external power source. Their compact size and remarkable sensitivity make them highly attractive. However, they’re not without challenges—their performance can be affected by temperature changes, and they can’t measure static forces. These limitations call for advanced signal processing and compensation techniques. Piezoelectric sensors, which operate based on the direct and inverse piezoelectric effects, find use in a wide range of applications, from measuring force and acceleration to detecting gases. This research zooms in on two key applications of piezoelectric sensors: force sensing and gas detection. 
For force sensing, the study focuses on developing smart shims that measure forces between mechanical components, which helps prevent structural failures. The experimental setup includes an electrodynamic shaker, a controller, and custom components like a glass wafer read-out circuit and a 3D-printed shim holder. During tests, the system underwent a frequency sweep from 10 Hz to 500 Hz, and a resonance was detected at about 360 Hz, matching the structural resonance. Some inconsistencies in the sensor’s output were traced back to uneven machining of the shim’s holes and variations in circuit attachment. To address these issues, the study suggests improving the machining process and redesigning the shim holder for better circuit alignment. Future work will include testing for bending moments, shear forces, and introducing a universal joint in the design to study moment applications more effectively. On the gas sensing side, the research examines a piezoelectric disk with a Silver- Palladium electrode for detecting methane. Using the inverse piezoelectric effect, the sensor’s natural frequency was found to be around 445 kHz. When coated with a sensitive material—PANI doped with ZnO—the disk exhibited a frequency shift of 2.538 kHz, indicating successful methane detection. The setup for this experiment included a gas chamber with precise control over gas flow and displacement measurements. Interestingly, after methane was replaced with nitrogen, the natural frequency returned to its original value, demonstrating the sensor’s reversible detection capability. Future research will expand to test other gases and sensitive materials, broadening the scope of applications. In summary, this thesis pushes the boundaries of piezoelectric MEMS sensors by tackling key design and performance challenges. Through detailed experimental methods, results, and suggested improvements, it lays a solid foundation for further research aimed at enhancing the reliability and versatility of piezoelectric sensors in real-world applications.Item Gentle Densification: Strategies for Integrating Low-Rise, Medium-Density Housing into Toronto’s Yellowbelt Neighbourhoods(University of Waterloo, 2025-02-12) Mok, LaurenThis thesis explores how alternative housing typologies can serve as viable solutions to increase development in low-rise neighbourhoods. Toronto’s current zoning only permits limited forms of densification in single-family areas, such as laneway suites, garden suites, and multiplexes up to four units, which is insufficient to address the city’s growing housing demand. The limited scope and complexity of these densification efforts highlight the need for more ambitious reforms that streamline processes, reduce costs, and promote a wider range of higher-density housing types. Gentle densification can be implemented in Toronto neighbourhoods in the form of low-rise, medium-density typologies such as multiplexes up to eight units, laneway or garden apartments and townhouses, and mixed-use apartments to increase housing options while making use of existing infrastructure. These new typologies provide suggestions for unintrusive densification by adding multi-unit buildings to single-family properties while utilizing laneways and yard space, reducing the need for the deconstruction of existing houses. The incorporation of additional public, community, and retail programs in neighbourhoods is also proposed. 
To allow for increased densification in single-family areas, zoning by-laws must be amended to enable more efficient medium-density housing typologies and to expand the housing stock in these neighbourhoods. This thesis focuses specifically on integrating gentle densification into three neighbourhoods, East Willowdale, Leaside, and North Riverdale, chosen for their diversity in existing housing types, property sizes, and household characteristics. Feasibility, costs, and development scenarios for low-rise, medium-density housing are also investigated.

Item A High-Order, Flow-Alignment-Based Compartmental Modelling Method (University of Waterloo, 2025-02-11) Alexandru, Vasile
Industrially relevant chemical engineering processes, such as stirred-tank bioreactors in the pharmaceutical sector, inherently operate across multiple scales and involve complex multiphysics and multiphase interactions. Modelling these systems is essential for their design, optimization, control, and operational troubleshooting; the processes are often too intricate for experimental approaches alone, with trial runs proving prohibitively costly or key metrics being difficult or impossible to measure. Traditionally, modelling such systems has relied on simplified design equations or idealized models, such as the continuously stirred tank reactor (CSTR). However, these approaches lack the explanatory power required to capture real-system outcomes, such as the formation of concentration gradients. With advances in computational capabilities, computational fluid dynamics (CFD) simulations have become standard for investigating specific questions within these systems. Nonetheless, certain critical applications, such as extended simulations of microorganism growth or real-time predictive control, remain impractical due to their high computational demands. Reduced Order Models (ROMs) offer a middle ground between simplistic CSTR models and computationally intensive CFD simulations. ROMs trade some of the generality and accuracy of CFD simulations for a substantial reduction in computational cost, often by several orders of magnitude. This work focuses exclusively on a specific type of ROM: Compartmental Models (CMs). They are underpinned by the assumption of one-way coupling between the hydrodynamics and the mass transport of reactive species. CMs are constructed in two steps: first, the domain is divided into non-overlapping compartments using a set of criteria; next, each compartment is represented by one or more simplified models. This network of models decouples mass transport from hydrodynamics and reduces the number of degrees of freedom over which the conservation of mass of the reactive species must be solved. This reduction is particularly important for bioreactors, where hundreds of coupled nonlinear reactions are common. Current compartmental modelling methods exhibit several limitations, such as a disconnect between the criteria used for compartment identification and the subsequent modelling of those compartments, the assumption that each compartment is well mixed, a reliance on manual compartmentalization or manual intervention, and a non-prescriptive framework that is difficult to adapt to new geometries. This work introduces a novel compartmental modelling method based on flow alignment. The velocity field is analyzed and split into compartments within which the flow is unidirectional. Each compartment is then modelled as a series of 1D Plug Flow Reactors (PFRs).
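To make the compartment-network idea concrete, the following minimal Python sketch, illustrative rather than taken from the thesis, approximates each compartment as a chain of well-mixed cells (the tanks-in-series analogue of a discretized 1D PFR) and joins two such compartments in a closed recirculation loop. The cell count, volumes, and flow rate are hypothetical placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative compartment network (not the thesis implementation):
# two compartments, each approximated as a chain of well-mixed cells
# (tanks-in-series ~ a discretized 1D plug-flow reactor), joined in a loop.

N_CELLS = 10                 # cells per compartment (hypothetical)
Q = 2.0e-3                   # circulating volumetric flow rate, m^3/s (hypothetical)
V_COMP = [4.0e-3, 8.0e-3]    # compartment volumes, m^3 (hypothetical)

# Cell volumes: each compartment's volume split evenly over its cells.
cell_volumes = np.concatenate([np.full(N_CELLS, v / N_CELLS) for v in V_COMP])
n = cell_volumes.size

def rhs(t, c):
    """Tracer mass balance: each cell receives flow Q from the previous
    cell in the loop and sends flow Q on to the next one."""
    c_upstream = np.roll(c, 1)          # cell i-1 feeds cell i (loop closure)
    return Q * (c_upstream - c) / cell_volumes

# Pulse of tracer injected into the first cell of compartment 1.
c0 = np.zeros(n)
c0[0] = 1.0

sol = solve_ivp(rhs, (0.0, 60.0), c0, t_eval=np.linspace(0.0, 60.0, 7))

# The tracer spreads through the network and relaxes toward the
# volume-weighted mean concentration (total tracer mass is conserved).
final_mix = np.dot(sol.y[:, -1], cell_volumes) / cell_volumes.sum()
print("final cell concentrations:", np.round(sol.y[:, -1], 4))
print("volume-weighted mean:", round(final_mix, 4))
```

Under these assumptions, an injected tracer pulse circulates around the loop and homogenizes toward the mean concentration, which is the kind of reduced-order species balance a CM solves in place of the full CFD transport equations.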
Benchmarking this method against the current state-of-the-art method demonstrates that it yields more accurate results while achieving computational speeds that are orders of magnitude faster than traditional CFD simulations. Further, many current CM approaches simplify three-dimensional geometries either by modelling two-dimensional cross-sections and relying on rotational symmetry or by using a uniform grid of compartments. The developed method is extended to fully three-dimensional, two-phase stirred-tank systems without these assumptions. It successfully compartmentalizes the distinct recirculatory regions generated by the impellers, eliminating the manual, ad hoc intervention required by past methods. Mixing-time and concentration predictions at probe locations are validated against CFD simulations, other CMs, and experimental data. The proposed general method performs as well as, or better than, past CMs that were tailor-made for the stirred-tank geometry. Further, the model's capability to handle complex, spatially varying reactions is demonstrated by simulating oxygen dissolution into the liquid phase, accurately capturing spatial gradients in dissolved oxygen concentration. Lastly, a significant limitation of previous compartmental modelling work is the reliance on a single velocity snapshot or a time-averaged steady-state velocity field. For instance, in the case of vortex shedding from a cylinder in the laminar flow regime, neither a CM based on the time-averaged velocity field nor an ensemble of CMs based on discrete velocity snapshots accurately captures the impact of the inherently non-stationary flow topology. The non-stationary nature of such flow fields is addressed by employing projection mappings to cycle through a series of compartmental models, allowing their shape, number, location, and connections to be updated dynamically. This approach successfully captures the oscillation period of the flow and shows promise for representing non-stationary flow behaviours accurately. In summary, this work advances the field of compartmental modelling by developing a generalized, flow-alignment-based method that unlocks the application of CMs to complex, industrially relevant systems. The method extends the capability of CMs to handle both time-varying and fully three-dimensional multiphase flows without requiring manual intervention. The approach is validated through benchmarking against CFD simulations, other CM approaches, and experimental data, demonstrating improvements in computational efficiency and accuracy.

Item A Comprehensive Framework Incorporating Hybrid Deep Learning Model, Vi-Net, for Wildfire Spread Prediction and Optimized Safe Path Planning (University of Waterloo, 2025-02-11) Dhindsa, Manavjit Singh
Forest fires are becoming more prevalent than ever, and their intensity and frequency are only expected to increase owing to climate change and environmental degradation. These fires severely threaten the economy, human lives, and infrastructure. Effective management of wildfires is therefore of utmost importance, and accurately predicting wildfire spread lies at its core. Reliable predictions of fire spread not only provide insight into at-risk regions but also support mitigation activities, including resource allocation and evacuation planning.
This thesis introduces Vi-Net, an innovative hybrid deep learning model that integrates the localized precision of U-Net with the global contextual awareness of Vision Transformers (ViT) to predict next-day wildfire spread with unprecedented accuracy. The study uses an extensive multimodal dataset aggregated from multiple sources across the United States from 2012 to 2020, incorporating critical factors such as topographical, meteorological, and anthropogenic (population density) variables, as well as vegetation indices. These elements are vital for modeling the complex dynamics of wildfire spread. A significant challenge in this domain is class imbalance, as fire points are far less frequent than non-fire points; in the dataset used here, fire regions account for less than 5% of the data. To address this issue, advanced loss functions, including the Focal Tversky Loss (FTL), are employed, prioritizing accurate segmentation of fire-prone regions while minimizing false negatives. FTL shifts the focus toward hard-to-predict regions and critical boundaries, enhancing the model's predictive accuracy and reliability in practical scenarios. Vi-Net addresses the complexity of modeling fire dynamics by combining the strengths of U-Net and ViT, enabling a comprehensive analysis that achieves high precision and recall and balances the sensitivity and specificity needed in wildfire prediction. This dual approach allows the model to process detailed local information and extensive contextual data, making it well suited to identifying and predicting fire spread across diverse landscapes. Experimental results highlight the superiority of Vi-Net over traditional models, achieving an F1 score of 97.25% and an Intersection over Union (IoU) of 94.15% on the test dataset. These metrics demonstrate its capability to accurately capture localized fire patches and long-range dependencies while avoiding overprediction, and they validate the model's potential to offer more nuanced predictions that capture the interplay between micro- and macro-level environmental dynamics. In addition to predictive modeling, this research extends its practical applicability by integrating the predicted fire masks into an optimized A* algorithm for safe path planning. This step provides actionable insight for emergency response teams, facilitating efficient evacuation routes and resource allocation while avoiding high-risk fire regions. Qualitative and quantitative analyses confirm the hybrid model's efficacy, with visualizations demonstrating Vi-Net's ability to preserve spatial detail while capturing broad environmental context, and path-planning results illustrating the model's robustness and reliability. This research not only sets a new benchmark for wildfire prediction models but also demonstrates the potential of hybrid deep learning systems in environmental science applications. By providing a robust framework for real-time wildfire management, Vi-Net could significantly influence future strategies in disaster response and resource allocation. Future enhancements could include integrating real-time data feeds to further improve the model's adaptability and predictive capabilities, potentially revolutionizing wildfire management practices globally.
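The abstract names the Focal Tversky Loss but does not give its configuration. The sketch below shows the standard formulation of that loss, in which a soft Tversky index weights false negatives and false positives by alpha and beta and its complement is raised to a focal exponent gamma; the parameter values and toy masks are assumptions chosen for illustration, not the settings used for Vi-Net.

```python
import numpy as np

def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky Loss for binary segmentation (illustrative sketch).

    y_true: ground-truth mask in {0, 1}.
    y_pred: predicted fire probabilities in [0, 1], same shape.
    alpha weights false negatives, beta weights false positives;
    alpha > beta penalizes missed fire pixels more heavily.
    gamma < 1 focuses the loss on hard, poorly segmented examples.
    These parameter values are common defaults, not the thesis's settings.
    """
    y_true = y_true.astype(float).ravel()
    y_pred = y_pred.astype(float).ravel()

    tp = np.sum(y_true * y_pred)               # soft true positives
    fn = np.sum(y_true * (1.0 - y_pred))       # soft false negatives
    fp = np.sum((1.0 - y_true) * y_pred)       # soft false positives

    tversky_index = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky_index) ** gamma

# Toy example: a 4-pixel strip where only one pixel is fire.
y_true = np.array([0, 0, 0, 1])
good = np.array([0.1, 0.1, 0.2, 0.9])   # fire pixel detected
bad = np.array([0.1, 0.1, 0.2, 0.1])    # fire pixel missed
print(focal_tversky_loss(y_true, good))  # small loss
print(focal_tversky_loss(y_true, bad))   # much larger loss
```

With alpha greater than beta, missing the single fire pixel in the toy masks produces a much larger loss than detecting it, which is how this family of losses counteracts the heavy class imbalance described above.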