UWSpace
UWSpace is the University of Waterloo’s institutional repository, providing a free, secure, and long-term home for research produced by faculty, students, and staff.
Depositing Theses/Dissertations or Research to UWSpace
Are you a Graduate Student depositing your thesis to UWSpace? See our Thesis Deposit Help and UWSpace Thesis FAQ pages to learn more.
Are you a Faculty or Staff member depositing research to UWSpace? See our Waterloo Research Deposit Help and Self-Archiving pages to learn more.

Recent Submissions
Optimal Sensor Protection and Measurement Corruption Detection in Safety-Critical Systems
(University of Waterloo, 2025-05-21) Zhang, Ben
This thesis addresses the detection and mitigation of sensor faults in safety-critical systems through secure estimation techniques. Sensor faults, whether accidental or adversarial, pose significant risks in autonomous vehicles, aviation, robotics, and other domains that rely heavily on accurate sensor data for safe operation. Traditional fault-tolerance methods typically rely on hardware redundancy or ad-hoc designs for specific systems, which can be prohibitively costly, or on simplified assumptions about fault conditions that may not hold in practice. Recent advances in secure estimation provide a general framework with provable guarantees against sensor faults; however, their central requirement, sparse observability, reflects a worst-case analysis, limiting their applicability in practical systems.
To address this limitation, this thesis introduces the concept of sensor protection, explicitly modeling selected sensors as immune to faults. This serves as an initial step toward capturing the practical scenario where some sensors are more fault-tolerant than others.
Although prior studies have implicitly assumed sensor protection by restricting potential fault locations, explicit modeling of sensor protection and its theoretical implications for fault tolerance have not been formally explored. This thesis extends the sparse observability framework to include protected sensors, broadening the applicability of secure estimation's theoretical guarantees. Additionally, a metric termed the safety factor is introduced to quantify a system's resilience to sensor faults, enabling targeted enhancements in robustness under practical resource constraints.
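To make the protected-sensor variant of sparse observability concrete, here is a minimal Python sketch (not taken from the thesis; the matrices, function names, and the protected set are illustrative) that checks how many unprotected sensors can fail while the pair (A, C) remains observable, the kind of quantity a safety-factor-style metric is meant to capture.

```python
# Minimal sketch: sparse observability with protected (fault-free) sensors.
# Assumes an LTI pair (A, C) with one row of C per sensor.
import itertools
import numpy as np

def observable(A, C):
    """Rank test on the observability matrix of (A, C)."""
    n = A.shape[0]
    obs = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(obs) == n

def max_tolerable_faults(A, C, protected):
    """Largest s such that (A, C) stays observable after removing ANY s
    unprotected sensor rows; -1 if the full system is already unobservable."""
    if not observable(A, C):
        return -1
    unprotected = [i for i in range(C.shape[0]) if i not in protected]
    s = 0
    while s + 1 <= len(unprotected):
        ok = all(
            observable(A, np.delete(C, list(removed), axis=0))
            for removed in itertools.combinations(unprotected, s + 1)
        )
        if not ok:
            break
        s += 1
    return s

# Toy example: 2-state system with 3 sensors, sensor 0 protected.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(max_tolerable_faults(A, C, protected={0}))
```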
Further, this thesis adapts secure state-reconstruction methods to develop a robust fault detection algorithm suitable for nonlinear systems through linearization. We validate our methods extensively through simulation studies, retrospective analysis of a real-world autonomous vehicle racing incident, and practical implementation on a skid-steer robot. Results demonstrate significant improvements in real-time fault detection and operational safety under diverse fault conditions.
Overall, this work bridges theoretical advances in secure estimation with real-world deployment considerations, providing a structured methodology to enhance the reliability and safety of autonomous systems.
Type Resolution in C∀
(University of Waterloo, 2025-05-21) Yu, Fangren
C∀ (C-for-all) is an evolutionary extension of the C programming language, which introduces many modern programming language features to C.
C∀ has a type system built around parametric polymorphism, and the polymorphic functions are prefixed by a "forall" declaration of type parameters, giving the language its name.
This thesis presents a series of contributions to type resolution in C∀.
Every function, including the built-in C operators, can be overloaded in C∀; therefore, resolving function overloads and generic type parameters is at the heart of C∀ expression analysis.
This thesis focuses on how various C∀ language features, such as reference and generic types, interact in type resolution, analyzes known issues, and presents improvements to the type system that fix several of them.
Ideas for future work are also given for further improving the consistency of the C∀ type system at the language-design level.
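As a rough illustration of what overload resolution involves (this is an illustrative Python sketch only, not C∀'s actual resolver, and the conversion table and cost model are invented for the example), a resolver must pick the "best" viable candidate for a call, typically by minimizing the implicit conversions required.

```python
# Illustrative sketch: rank overload candidates by the number of safe implicit
# conversions needed, mimicking the need to choose a best interpretation.
SAFE_CONVERSIONS = {("int", "long"), ("int", "double"),
                    ("long", "double"), ("float", "double")}

def conversion_cost(arg_types, param_types):
    """Cost of calling a candidate with these argument types; None if not viable."""
    if len(arg_types) != len(param_types):
        return None  # arity mismatch
    cost = 0
    for a, p in zip(arg_types, param_types):
        if a == p:
            continue
        if (a, p) in SAFE_CONVERSIONS:
            cost += 1
        else:
            return None  # no safe conversion available
    return cost

def resolve(call_args, candidates):
    """candidates: list of (name, param_types); return the cheapest viable one."""
    viable = []
    for name, params in candidates:
        cost = conversion_cost(call_args, params)
        if cost is not None:
            viable.append((cost, name, params))
    return min(viable)[1:] if viable else None

overloads = [("f#1", ("int", "int")), ("f#2", ("double", "double"))]
print(resolve(("int", "int"), overloads))     # exact match wins: f#1
print(resolve(("int", "double"), overloads))  # needs one conversion: f#2
```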
OasisDB: An Oblivious and Scalable System for Relational Data
(University of Waterloo, 2025-05-21) Ahmed, Haseeb
We present OasisDB, an oblivious and scalable RDBMS framework designed to securely manage relational data while protecting against access and volume pattern attacks. Inspired by plaintext RDBMSs, OasisDB leverages existing oblivious key-value stores (KV-stores) as storage engines and securely scales them to enhance performance. Its novel multi-tier architecture allows each tier to scale independently while supporting multi-user environments without compromising privacy. We demonstrate OasisDB’s flexibility by deploying it with two distinct oblivious KV-stores, PathORAM and Waffle, and show its capability to execute a variety of SQL queries, including point and range queries, joins, aggregations, and (limited) updates. Experimental evaluations on the Epinions dataset show that OasisDB scales linearly with the number of machines. When deployed with a plaintext KV-store, OasisDB introduces negligible overhead in its multi-tier architecture compared to a plaintext database, CockroachDB. We also compare OasisDB with ObliDB, an oblivious RDBMS, highlighting its advantages in scalability and multi-user support.
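To illustrate one of the ideas involved (the sketch below is hypothetical and not OasisDB's actual API), volume patterns of a point query can be hidden by padding every index lookup to a fixed, public fan-out before fetching rows from an oblivious KV-store such as PathORAM or Waffle, which in turn hides which physical locations are accessed.

```python
# Minimal sketch: fixed fan-out point query over an oblivious KV-store client.
MAX_MATCHES = 8          # public bound on rows fetched per point query
DUMMY_KEY = "row:dummy"  # dummy key used to pad requests

class ObliviousKV:
    """Stand-in for an oblivious KV-store client; get() is assumed to hide
    which physical location is accessed."""
    def __init__(self, data):
        self._data = dict(data)
        self._data[DUMMY_KEY] = None
    def get(self, key):
        return self._data.get(key)

def point_query(index, kv, column_value):
    """Fetch all rows matching column_value, always issuing MAX_MATCHES gets."""
    row_keys = index.get(column_value, [])[:MAX_MATCHES]
    padded = row_keys + [DUMMY_KEY] * (MAX_MATCHES - len(row_keys))
    results = [kv.get(k) for k in padded]          # fixed number of accesses
    return [r for r in results if r is not None]   # drop dummies client-side

kv = ObliviousKV({"row:1": ("alice", 30), "row:2": ("bob", 30)})
index = {30: ["row:1", "row:2"]}                   # secondary index on age
print(point_query(index, kv, 30))
```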
Towards Explainable Neural Networks for Mathematical Programming
(University of Waterloo, 2025-05-21) Beylunioglu, Fuat Can
Mathematical programming has primarily been a study of numerical optimization in which a solution is obtained by applying a procedure recursively until convergence. Recent applications of Neural Networks (NN) as surrogate models have challenged this view by treating optimization problems as approximating unobserved functions that map input parameters to optimal solutions, referred to as solution functions. In this thesis, we investigate properties of the solution function and develop NN-based methods for deriving explicit formulae of this mapping. Drawing from the principles of the Universal Approximation Theorem, we investigate the sources of NN approximation error that arise when training models on datasets consisting of input parameters and optimal solutions for previously solved problems.
Drawing on insights from multiparametric programming (MP), which explores the piecewise-linear (PWL) properties of quadratic programs with linear constraints (QPs), we demonstrate that a NN with ReLU activation functions is a general form of the solution function of QPs. Despite this fact, we show that achieving perfect representations of this function proves difficult when employing black-box models and standard NN training methods. We propose a semi-supervised NN model that learns the explicit parameters of each linear segment of the QP solution function analytically from the problem coefficients, and requires training only a small number of model parameters to construct the solution function. Using IEEE DC optimal power flow test sets, we show that this approach accurately learns to represent the QP solution function with minimal error, ensuring optimal solutions for any right-hand side (RHS) parameters.
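The property being exploited can be seen in a small numpy sketch (illustrative only, not the thesis code, with toy problem data): for a strictly convex QP with equality constraints, the KKT system makes the optimal point an affine function of the right-hand side, and with inequality constraints one such affine map holds per active set, yielding the piecewise-linear solution function that a ReLU network can represent.

```python
# Minimal sketch: for  min 1/2 x'Qx + c'x  s.t.  Ax = b,  the optimizer is
# affine in the right-hand side b, i.e. x*(b) = F b + g.
import numpy as np

Q = np.array([[2.0, 0.0], [0.0, 4.0]])   # positive definite cost
c = np.array([1.0, -1.0])
A = np.array([[1.0, 1.0]])               # single equality constraint

# KKT matrix: [[Q, A'], [A, 0]] @ [x; lam] = [-c; b]
n, m = Q.shape[0], A.shape[0]
KKT = np.block([[Q, A.T], [A, np.zeros((m, m))]])
KKT_inv = np.linalg.inv(KKT)

# Explicit affine map of the solution in b: x*(b) = F b + g
F = KKT_inv[:n, n:]                      # sensitivity of x* to b
g = KKT_inv[:n, :n] @ (-c)               # offset from the linear cost term
print("F =", F, "g =", g)

for b in (np.array([1.0]), np.array([2.0])):
    x_direct = (KKT_inv @ np.concatenate([-c, b]))[:n]
    assert np.allclose(F @ b + g, x_direct)   # affine formula matches KKT solve
```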
Building on the semi-supervised model, we further propose a closed-form NN (CF-NN) model and a learning-via-discovery algorithm that learns an exact representation of the solution function without training, applying algebraic operations to derive all model weights from the problem coefficients without the need for training data. The proposed learning algorithm begins with an optimal solution to initialize the NN model, then discovers each linear segment of the PWL solution function and expands the model parameters iteratively without using solvers. The resulting NN model acts as a closed-form solution to QPs with linear constraints, ensuring that optimal solutions for the discovered regions are both feasible and optimal. The proposed CF-NN model offers significant advantages: it takes seconds to learn the solution function and generalizes seamlessly outside the training distribution. Results on IEEE DC-OPF test cases significantly outperform traditional deep NN training methods, achieving optimal solutions with near-zero calculation errors in all discovered critical regions.
This thesis primarily focuses on methodologies for uncertain parameters added to the right-hand side of equality constraints. Further work is needed to examine the effect of uncertainty on inequality constraints, to improve the scalability of the approach to larger systems by ensuring the learning algorithm visits all critical regions of the feasible domain, to reduce floating-point precision errors, and to address issues related to degeneracy.
A data-centric view of LQR algorithms in continuous time
(University of Waterloo, 2025-05-21) Song, Christopher
Many control theorists are interested in how measurements of input and state trajectories can determine the properties of a control system, in lieu of the differential or difference equation models that usually play this role. In situations of practical interest, such models may be wholly or partially unknown, while input and state data are readily available. In the discrete- and continuous-time linear quadratic regulator (LQR) settings, this area of research has produced a proliferation of algorithms that use input and state data to solve the LQR problem, the system identification problem, or both. In different algorithms, these data and the requirements imposed on them take forms that are not directly comparable, making it difficult to assess the relative efficiency with which different algorithms make use of data.
In [30], the authors show that the LQR and system identification problems are essentially equivalent in the discrete-time setting. In this thesis, we extend this result to the continuous-time setting, showing that, assuming input and state data are collected on intervals, every algorithm that solves the LQR problem requires at least as much data as system identification. From this, we show that the map from the data to the optimal gain defined by these algorithms is continuous, establishing a connection between interval-data and sampled-data algorithms. The possibility of using sampled data in place of interval data leads to a weaker convergence criterion on sampled-data approximations and a natural connection with numerical integration. Numerical experiments show the critical importance of choosing when to make input and state measurements, and emphasize the possibility of doing so without knowledge of the system or its optimal gain.
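For readers unfamiliar with the baseline being compared against, the following sketch (illustrative assumptions throughout: the system matrices, sampling scheme, and weights are invented for the example) shows the "identify, then solve" route: state derivatives are approximated from sampled data, [A B] is fit by least squares, and the optimal gain comes from the continuous-time algebraic Riccati equation.

```python
# Minimal sketch: system identification from sampled data followed by LQR.
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(0)
A_true = np.array([[0.0, 1.0], [-2.0, -0.5]])
B_true = np.array([[0.0], [1.0]])
dt, steps = 1e-3, 5000

# Simulate a trajectory with exciting (random) input, sampled every dt.
x = np.zeros((steps + 1, 2))
u = rng.normal(size=(steps, 1))
for k in range(steps):
    x[k + 1] = x[k] + dt * (A_true @ x[k] + B_true @ u[k])

# Least-squares fit of [A B] from finite-difference derivative estimates.
dx = (x[1:] - x[:-1]) / dt
Z = np.hstack([x[:-1], u])               # regressors [x_k, u_k]
AB = np.linalg.lstsq(Z, dx, rcond=None)[0].T
A_hat, B_hat = AB[:, :2], AB[:, 2:]

# LQR gain from the identified model via the continuous-time ARE.
Qc, Rc = np.eye(2), np.eye(1)
P = solve_continuous_are(A_hat, B_hat, Qc, Rc)
K = np.linalg.solve(Rc, B_hat.T @ P)
print("estimated gain K =", K)
```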