Electrical and Computer Engineering

This is the collection for the University of Waterloo's Department of Electrical and Computer Engineering.

Research outputs are organized by type (e.g., Master's Thesis, Article, Conference Paper).

Waterloo faculty, students, and staff can contact us or visit the UWSpace guide to learn more about depositing their research.


Recent Submissions

  • Item
    Towards Scalable Fully Automatic Program Verification
    (University of Waterloo, 2024-07-25) Vediramana Krishnan, Hari Govind
    Formal verification is a rigorous way to establish software quality. After more than 50 years of research, we are at a point where formal verification is starting to be used in industrial applications. To encourage more adoption, we need to automate formal verification techniques as much as possible so that even users who are not familiar with formal verification can use it to ensure software quality. Many automatic verification tasks are reducible to the problem of satisfiability checking of Constrained Horn Clauses (CHCs) modulo background theories. The list includes safety verification of imperative and functional programs, regression verification, static analysis, refinement type inference, and many more. Motivated by the number of applications, we explore advancements to CHC solving. We make three contributions to improving the state of the art in CHC solving. First, we study the problem of lemma generalization inside CHC solvers. State-of-the-art CHC solvers construct solutions to CHCs by learning several lemmas, each of which blocks part of the search space; together, the lemmas form a complete solution. However, the generalization strategies that CHC solvers employ during lemma learning are local: they are unaware of the solution that the CHC solver is constructing. This causes CHC solvers to diverge and, worse, to be susceptible to syntactic changes in the input. We mitigate the effects of local generalization by identifying patterns in lemmas learned during CHC solving and enforcing course correction during CHC solving via a process called global guidance. We instantiate global guidance inside the Spacer CHC solver and show that it improves Spacer on a variety of benchmarks and makes Spacer agnostic to local generalization strategies. In the second part of the thesis, we look at CHC solving in the context of safety verification of programs manipulating user-defined recursive datatypes like lists and trees. The invariants of such programs typically require recursive functions as well. This domain is particularly hard for CHC solvers because the presence of recursive functions makes even validating solutions to CHCs undecidable. We propose a new approach that sidesteps the undecidability by abstracting recursive functions using uninterpreted functions when validating solutions. We show that the typical approach to overcoming this undecidability, that of eliminating recursive functions by encoding them as a set of CHCs, loses solutions and is therefore infeasible for many CHC solvers. In the third part of the thesis, we go deeper inside CHC solvers and propose algorithmic improvements for two important subproblems encountered during CHC solving: modular solving and approximations of quantifier elimination.
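For readers unfamiliar with CHCs, the following sketch (an illustration added here, not part of the thesis) shows how a simple loop-safety check can be posed as a CHC satisfiability query to Z3's Spacer engine, assuming the z3-solver Python package is available.

```python
# Illustration: safety of "x := 0; while x < 10: x := x + 1; assert x == 10"
# encoded as Constrained Horn Clauses and discharged by Z3's Spacer engine.
from z3 import Fixedpoint, Function, Ints, IntSort, BoolSort, And

fp = Fixedpoint()
fp.set(engine='spacer')

x, xp = Ints('x xp')
Inv = Function('Inv', IntSort(), BoolSort())    # unknown loop invariant
fp.register_relation(Inv)
fp.declare_var(x, xp)

fp.rule(Inv(x), x == 0)                          # initiation: entry state
fp.rule(Inv(xp), [Inv(x), x < 10, xp == x + 1])  # consecution: loop body
# Query: is a state violating the assertion reachable on loop exit?
result = fp.query(And(Inv(x), x >= 10, x != 10))
print(result)   # 'unsat' means the bad state is unreachable, i.e. the CHCs are satisfiable
```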
  • Item
    Cybersecurity in the Hardware Supply Chain
    (University of Waterloo, 2024-07-15) Sankararaman, Ahalya
    Globalization of the hardware supply chain has led to increasingly sophisticated hardware security breaches. Once considered safe and secure in the early phases of the manufacturing life cycle, the semiconductor supply chain has become a convenient target for malicious attackers to take control of the hardware to execute cyber-attacks. Adopting a product integration model with multiple tiers of suppliers, contractors, and distributors spread across the globe optimizes cost and efficiency for original equipment manufacturers (OEMs). Although economically beneficial, this model has increased hardware's exposure to supply chain vulnerabilities and challenges, such as counterfeiting, hardware trojan insertion, backdoors, intentional sabotage, and tampering attacks. After the manufacturing phase, the product is deployed in safety-critical industries such as the automotive, military, healthcare, aerospace, and defense industries. These sectors' entire ecosystem greatly depends on a trust-based supply chain. They are largely unaware of the new cyber threats to hardware that might compromise the product's quality, security, and safety. Compromised hardware with substandard components or malicious modifications deployed in mission-critical sectors is prone to failure before the end of its expected life cycle, leading to loss of or injury to life, disruption of infrastructure, financial losses, damage to the company's reputation, and legal scrutiny. The cycle of trust placed on suppliers and vendors within the supply chain can be disrupted by educating manufacturers about the risks posed by new threat actors and the relevant indicators of compromise, and by implementing the necessary preventive measures to avoid cyber-attacks in the hardware supply chain. Several training programs have been widely adopted to raise awareness of cybersecurity risks. Gamification has proven effective in conveying complex security concepts using innovative and engaging in-game elements to deliver educational content. This thesis aims to educate about the different types of cyberattacks in a hardware supply chain by proposing a gamified tabletop exercise (TTX). This interactive approach is designed to teach the risks of a trust-based supply chain in a way that actively involves the organizations exposed to such risks, allowing them to deploy the necessary countermeasures well in advance to lower the impact of attacks and to ensure business continuity by evaluating and implementing the required policies.
  • Item
    A Novel Framework of Board-Level Failure Localization in Optical Transport Networks
    (University of Waterloo, 2024-06-21) Jiao, Yan
    Optical transport networks (OTNs) play a pivotal role in Internet backbones thanks to their support for multi-tenant and multi-service environments with high reliability and low cost. A failure event may affect one or multiple boards in an OTN and ignite a vast number of alarms, which significantly boosts the complexity of failure localization and alarm analysis. Accordingly, there is an urgent need for a systematic framework that harnesses the known network state and received alarms to achieve effective failure localization. Alarm correlation has been considered a representative approach to identifying the dependencies among alarms, aiming at eliminating as many descendent alarms as possible, thereby fulfilling failure localization with much decreased complexity. Nevertheless, existing methods of alarm correlation are subject to the following issues. Firstly, they ignore the fact that alarm propagation mostly takes place along certain connections and that the network topology and traffic distribution may solidly underpin the required alarm correlation process. Secondly, they necessitate heuristically setting initial parameters but lack a general rule that adjusts their values according to various network characteristics. Lastly, they lack generality across diverse network environments: a result obtained for a specific network state may not transfer to another. Motivated by the significance and stringent requirements of this problem, this thesis proposes a novel framework of board-level failure localization in OTN, called Failure-Alarm Correlation Tree based Failure Localization (FACT-FL). It aims to construct one or multiple FACTs that achieve both failure localization and alarm correlation, where each FACT takes a failed board and its associated alarms as the tree root and leaves, respectively. We have designed three methodologies to obtain viable FACTs. A scheme named FACT-FL-Heuristic is first attempted via a learned binary classifier that intelligently captures the historical correlations in the form of board → alarm and alarm → alarm, followed by heuristically creating the feasible FACT(s). To further improve FACT-FL-Heuristic's performance, a method termed FACT-FL-Chain treats each FACT as a suite of correlation chains with different order values and generates viable FACT(s) by elegantly solving an integer linear programming (ILP) problem. Moreover, to reduce the computational complexity incurred by enumerating all chain candidates with FACT-FL-Chain, an approach dubbed FACT-FL-GNN leverages a graph neural network (GNN) for evaluating the edge weights of potential FACT(s), which facilitates formulating an alternative simplified ILP to yield the most likely FACT(s). The above three methods share the same functional blocks, including feature extraction, binary classifier training, and FACT formation, while each method realizes each functional block with different strategies. Extensive case studies are conducted to unveil the proposed methods' advantage over their counterparts in terms of the metrics assessing the recognized failed boards/root alarms. We also explore their performance under environmental variations such as diverse failure scenarios, network topologies, traffic distributions, and noise alarms.
  • Item
    Image Quality Assessment and Refocusing with Applications in Whole Slide Imaging
    (University of Waterloo, 2024-06-18) Wang, Zhongling
    In the rapidly evolving field of general digital imaging and whole slide imaging, Image Quality Assessment (IQA) plays a crucial role in determining the perceptual quality of images and guiding image restoration. State-of-the-art IQA models are computationally expensive due to the use of complex deep learning architectures. The high computational cost poses a significant challenge in high-throughput Whole Slide Image (WSI) scanning platforms, which are both time-sensitive and power-limited. Moreover, most IQA models, while varied in design, often exhibit biases towards specific types of image content or distortions, a consequence of their underlying design principles or training data. To improve the quality of WSIs, we need to address the defocus problem, which is the most common distortion for a WSI. The transparency and uneven surface of tissue samples further complicate the restoration process for methods that lack an understanding of the 3D tissue radiance. These issues emphasize the limitations and challenges faced by existing IQA and restoration models. This thesis proposes three novel and flexible approaches to mitigate these problems. Addressing the efficiency concerns in whole slide imaging, this thesis presents a highly efficient model for Focus Quality Assessment (FQA). Among the distortions that degrade the quality of digital slides, out-of-focus blur is the most common one. Different from photographic images, WSIs have much bigger dimensions, making most deep-learning-based FQA models computationally infeasible. Based on prior knowledge of the WSI and its imaging process, we developed a lightweight model named FocusLiteNN that is 10,000 times more efficient than SOTA deep learning-based ones without compromising accuracy. Furthermore, we introduce the first open-source, expert-annotated FQA dataset TCGA@Focus, offering a comprehensive platform for developing and evaluating new FQA models. However, most FQA models, or IQA models in general, often exhibit biases towards specific types of image content or distortions due to their different design principles or training data. This poses a challenge for users when choosing the best quality assessment model for their needs. A practical approach is to fuse the results of multiple existing IQA models into a more robust one. Following this idea, we developed a novel framework for IQA score fusion that is able to select the best combination of models according to the uncertainty in each image and the overall uncertainty of each model. This requires the model to be equipped with both fine-grained uncertainty analysis at the content level and coarse-grained uncertainty analysis at the model level. Existing models either lack content-level uncertainty estimation or have limited generalizability due to supervised training. Our method employs an unsupervised approach using deep Maximum a Posteriori (MAP) estimation, which can be trained on a combination of multiple datasets without the need for Mean Opinion Score (MOS). This greatly improves the generalizability of the model. The above two works address different problems in quality assessment. In practice, detected bad-quality images are either rejected or recollected. In digital pathology, recollecting the biosample causes additional suffering for the patient. Consequently, defocus restoration is a possible solution. Deblurring assumes that there exists a sharp image in which all pixels are in focus, which is commonly referred to as an All-In-Focus (AIF) image. Although this assumption is true for natural images, it might not hold for WSIs due to their transparency, uneven surface, and the microscope's shallow Depth of Field (DOF). Since the target does not exist, WSI deblurring becomes an undefined task. We propose an alternative approach to address the defocus problem, which is virtual refocusing. It aims to simulate and surpass the traditional experience of continuously adjusting the focus of a microscope, allowing for a comprehensive examination of tissue structures at varying depths without the need for physical slide presence. By implicitly learning a continuous 3D radiance representation from the sparse inputs, the proposed model can refocus each pixel to any focus plane according to a focus map. As far as we know, this is the first work on WSI virtual refocusing. This thesis makes significant contributions to IQA and image restoration with applications in WSI. The introduction of the FocusLiteNN model boosts computational efficiency while the score fusion model addresses the bias issue. Additionally, the virtual refocusing model extends these improvements by tackling the defocus problem in WSI through precise adjustment of focus on a per-pixel basis.
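For context on what a knowledge-driven, extremely lightweight focus measure can look like, the sketch below uses a classical Laplacian-variance sharpness score over patches; this is a generic baseline assuming NumPy and SciPy are available, not the FocusLiteNN model described in the abstract.

```python
# Illustrative classical baseline for patch-wise focus quality (not FocusLiteNN):
# out-of-focus blur suppresses high frequencies, so the variance of the Laplacian
# response drops on blurry patches.
import numpy as np
from scipy.ndimage import laplace

def patch_focus_scores(image, patch=256):
    """Return a grid of Laplacian-variance sharpness scores for a grayscale image."""
    h, w = image.shape
    scores = []
    for y in range(0, h - patch + 1, patch):
        row = []
        for x in range(0, w - patch + 1, patch):
            region = image[y:y + patch, x:x + patch].astype(np.float64)
            row.append(laplace(region).var())   # higher variance -> sharper patch
        scores.append(row)
    return np.array(scores)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp = rng.random((512, 512))               # texture-rich field
    blurry = np.full((512, 512), 0.5)            # featureless (maximally blurred) field
    print(patch_focus_scores(sharp).mean(), patch_focus_scores(blurry).mean())
```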
  • Item
    High-Q On-chip Passive Components and Low-Phase-Noise CMOS Bridge Oscillator Design
    (University of Waterloo, 2024-05-30) Shuvra, Shaeera Rabbanee
    High-Q passives are the key design components for a proposed resonator circuit. This thesis focuses on designing and optimizing passive components (transmission lines, capacitors, and inductors) implemented in a production 22-nm SOI-CMOS technology. Custom-designed inductors with inductances of 1 nH and 2 nH are employed, achieving quality factors (Q) greater than 30 at 10 GHz. These inductors are realized using BEOL Stack 11 and Stack 19 configurations in the 22-nm technology with varying shielding patterns. Additionally, high-Q capacitors are implemented, exhibiting Q factors surpassing 500 at 10 GHz and an operational range validated by measurements up to 60 GHz. Balanced transmission lines with Q factors greater than 150 at 25 GHz are also designed and implemented. This high-Q value is explained with a new lumped-element equivalent circuit model developed in this work.
 A state-of-the-art, fully-integrated bridge resonator with a quality factor (Q) exceeding 50 at 8.5 GHz is introduced in this thesis. This level of performance is realized through circuit design techniques and the integration of the custom inductor and capacitor designs. Furthermore, the potential for application of this high-Q bridge resonator in the design of a low-phase-noise RF oscillator is demonstrated, with post-layout simulations predicting a phase noise below -131 dBc/Hz at 1 MHz offset for 8-GHz carrier frequency.
  • Item
    Compensation of Impairments in Millimeter-Wave/Sub-THz Power Amplifier and Frequency Multiplier Based Transmitters
    (University of Waterloo, 2024-05-27) BEN AYED, Ahmed
    The ever-growing demand for increased network capacity and higher data rates has instigated, in recent years, a shift towards harnessing millimeter-wave (MMW) and sub-THz frequency bands, offering abundant and untapped spectrum resources. Nevertheless, transmitting wideband signals at these frequencies faces formidable challenges due to significant propagation path loss and the limitations of semiconductor devices. This doctoral thesis adopts a comprehensive approach to tackle these challenges by integrating advanced modeling and analysis methodologies with the development of innovative system-level architectures and leveraging advanced signal processing techniques and algorithms. To mitigate the significant path loss challenge inherent to MMW/sub-THz wireless links, beamforming arrays are commonly employed, where a high count of closely spaced antenna elements is used to achieve high effective radiated power. These arrays are deliberately operated within their optimal operational range to maximize radiated power while concurrently promoting environmentally sustainable communication links. Yet, the deployment of closely spaced antenna elements brings forth new challenges, e.g., antenna mutual coupling, which precipitates a degradation in array linearity. This thesis introduces a seminal closed-form analysis elucidating the nuanced interplay between array nonlinearity and array beam steering. Building upon this theoretical foundation, a novel optimization algorithm is formulated to meticulously design tapering coefficients aimed at minimizing variation in the array nonlinearity, due to antenna mutual coupling, across a wide range of array beam steering angles. Comprehensive numerical and experimental validation is provided, substantiating the capacity of the proposed scheme to enhance the linearizability of beamforming arrays when beam steering. The thesis further addresses the challenge of implementing the transmitter observation receiver (TOR) in beamforming arrays needed for array calibration and linearization training. Traditionally, the TOR feedback mechanisms in single-antenna systems have been reliant on couplers situated at the output of power amplifiers (PAs). However, the integration of such couplers within MMW/sub-THz beamforming transmitters presents formidable technical hurdles. This thesis propounds a pioneering paradigm shift, advocating for the utilization of near-field (NF) probes intricately embedded within the array architecture itself. Different algorithms are presented in this thesis that harness the information received from these NF probes, enabling accurate estimation and compensation of the array's linear and nonlinear impairments. The proposed TOR and enabling algorithms facilitate in-situ calibration and/or digital-predistortion training, thereby engendering a marked enhancement in system performance. Furthermore, the thesis proposes an active array calibration scheme that leverages the signals received from the NF probes to enable simultaneous calibration of the array without resorting to element-wise sequential calibration or a far-field receiver. Extensive experimental validation on two beamforming arrays operating at 28 GHz and 39 GHz is presented, showcasing the capacity of the proposed schemes. The generation of wideband vector-modulated signals in the MMW/sub-THz bands poses a formidable technical challenge, especially at frequencies where PA-based transmitters have limited performance. This thesis endeavors to surmount this obstacle through the exploration of frequency multipliers (FXs), which are characterized by the capacity to generate signals that can surpass the transistor unity-power-gain frequency (fmax). The thesis presents linearization algorithms that can mitigate the inherent nonlinearity associated with FXs, thereby enabling the generation of wideband vector-modulated signals with significantly increased bandwidths compared to existing methods. Furthermore, the thesis delves into the scalability of these linearization algorithms for FX-based array systems, underscoring their inherent resilience to antenna mutual coupling vis-a-vis PA-based arrays. These algorithms and findings are validated in simulation and experimentally on different single-channel FXs and FX-based arrays with different multiplication orders and operation frequencies. In tandem with these innovations, this thesis introduces a novel measurement system tailored for comprehensive testing of components and systems operating at sub-THz frequencies. Leveraging frequency extenders traditionally reserved for continuous-wave (CW) characterization, the proposed measurement system enables the characterization of components and systems under both CW and modulated-signal excitation conditions. The enabling algorithms used by the proposed measurement system are also developed and presented. Specifically, an iterative learning control linearization algorithm and a receiver signal stitching algorithm that enable the linearization and capture of wideband signals using narrowband receivers are presented. The efficacy of the proposed measurement system and associated algorithms is rigorously substantiated through experimental validation. In summary, this thesis represents a seminal contribution toward mitigating hardware limitations endemic to MMW/sub-THz radio frequency systems. By espousing a holistic approach that seamlessly integrates advanced modeling and analysis techniques, novel system-level architectures, and state-of-the-art signal processing techniques, this work lays the groundwork for the development of future-proof, high-capacity, and high-data-rate radio systems poised to thrive within these transformative frequency bands.
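To make the notion of transmitter linearization concrete, the sketch below fits a generic memory-polynomial behavioral model by least squares; it is an illustrative NumPy example with assumed model orders and toy data, not the algorithms developed in the thesis.

```python
# Generic memory-polynomial behavioral model, identified by least squares.
# In indirect-learning DPD the same structure is fit from output to input and
# then applied as a predistorter; here we only show the model identification step.
import numpy as np

def mp_basis(x, K=5, M=3):
    """Build memory-polynomial regressors x[n-m] * |x[n-m]|^(k-1), odd orders only."""
    N = len(x)
    cols = []
    for m in range(M):
        xm = np.concatenate([np.zeros(m, dtype=complex), x[:N - m]])
        for k in range(1, K + 1, 2):
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.column_stack(cols)

def fit_mp(x, y, K=5, M=3):
    """Least-squares fit of memory-polynomial coefficients mapping x -> y."""
    Phi = mp_basis(x, K, M)
    coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)
    y = x - 0.05 * x * np.abs(x) ** 2          # toy third-order nonlinearity
    c = fit_mp(x, y)
    y_hat = mp_basis(x) @ c
    nmse_db = 10 * np.log10(np.mean(np.abs(y - y_hat) ** 2) / np.mean(np.abs(y) ** 2))
    print(f"model NMSE: {nmse_db:.1f} dB")
```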
  • Item
    Optical Method for Antenna Near-Field Sensing
    (University of Waterloo, 2024-05-24) Ding, Shuyu
    This thesis explores the application of Thin Film Lithium Niobate (TFLN) in advancing the fabrication of optical devices for electric field (E-field) sensing within phased-array antenna systems. The research primarily focuses on the deployment of fully dielectric TFLN-based Mach-Zehnder Interferometer (MZI) waveguides to detect E-field variations, representing a shift from conventional metallic-based sensing approaches. By leveraging the electro-optic (EO) effect inherent in TFLN, this study assesses the capability of these waveguides to passively modulate the power of an optical signal in response to E-field alterations, with the goal of contributing to the enhancement of E-field sensing technology. The methodology incorporated a comprehensive phase of design, simulation, fabrication, and experimental validation. Simulations were conducted using HFSS for analyzing antenna performance and Lumerical for modelling optical device behaviour, establishing the fundamental groundwork for device optimization. The fabrication stage was elaborately engineered, utilizing electron beam lithography with HSQ, ZEP, and SiO2 processing techniques, among which the ZEP process method was identified as providing optimal outcomes in producing precise optical device structures with minimal surface defects. The experimental component, conducted in the Advanced Optical Lab at the University of Waterloo, validated the responsiveness of the TFLN waveguide to E-field variations emitted by antenna elements. Notable experimental findings included a recorded 2% reduction in the waveguide's output power following antenna E-field activation, which was slightly lower than the 5% decrease predicted by simulation models. This discrepancy illustrates the challenges associated with applying theoretical models to practical implementations, highlighting the necessity for rigorous experimental validation. Further observations revealed that when the input power of antennas was maintained below 10 dBm, the power variance at the waveguide's output was less than 1%. This sensitivity to E-field fluctuations suggests that TFLN-MZI waveguides could effectively function as detectors for electromagnetic field variations, potentially serving as a valuable tool for synchronous diagnostics and calibration of antenna arrays. In conclusion, this thesis proposes an innovative approach for E-field sensing with TFLN-based optical devices, specifically in the context of phased-array antennas. Through a synthesis of theoretical analysis, fabrication techniques, and empirical validation, the study advances the knowledge of integrating TFLN waveguides into antenna systems for enhanced diagnostics and calibration. The findings suggest paths for further research, particularly in optimizing the sensitivity and integration efficiency of TFLN-based optical devices for telecommunications and sensing applications.
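The reported power modulation can be related to the idealized Mach-Zehnder transfer function; the short calculation below is a simplified illustration with assumed phase-shift and bias values, not the thesis's measured or simulated numbers.

```python
# Idealized MZI intensity response: P_out = P_in * cos^2(delta_phi / 2),
# where delta_phi is the E-field-induced (electro-optic) phase imbalance.
# The bias point and phase shift below are illustrative assumptions only.
import numpy as np

def mzi_power_ratio(delta_phi_rad, bias_rad=np.pi / 2):
    """Output/input power for a phase shift applied around a chosen bias point."""
    return np.cos((bias_rad + delta_phi_rad) / 2) ** 2

p0 = mzi_power_ratio(0.0)     # no applied field
p1 = mzi_power_ratio(0.04)    # small assumed EO phase shift (rad)
print(f"relative power change: {(p1 - p0) / p0 * 100:.1f}%")   # a few-percent drop
```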
  • Item
    Towards Effectively Testing Sequence-to-Sequence models from White-Box Perspectives
    (University of Waterloo, 2024-05-22) Shao, Hanying
    In the field of Natural Language Processing (NLP), which encompasses diverse tasks such as machine translation, question answering, and others, there has been notable advancement in recent years. Despite this progress, NLP systems, including those based on sequence-to-sequence models, confront various challenges. To tackle these, metamorphic testing methods have been employed across different NLP tasks. These methods entail task-specific adjustments at the token or sentence level. For example, in machine translation, this approach might involve replacing a single token in the source sentence to generate variants, whereas in question answering, adjustments might include altering or adding sentences within the question or context. By evaluating the system's responses to these alterations, potential deficiencies in the NLP systems can be identified. Determining the most effective modifications, particularly in terms of which tokens or sentences contribute to system instability, is an essential and ongoing aspect of metamorphic testing research. To tackle this challenge, we introduce two white-box methods to detect sensitive tokens in the source text, alterations to which could potentially trigger errors in sequence-to-sequence models. The initial method, termed GRI, leverages GRadient Information for identifying these sensitive tokens, while the second method, WALI, utilizes Word ALignment Information to pinpoint the unstable tokens. We assess these approaches using a Transformer-based model for translation and question answering tasks, comparing them against datasets used by benchmark methods. When applying white-box approaches to machine translation testing and using them to generate test cases, the results show that both GRI and WALI can effectively improve the efficiency of the black-box testing strategies for revealing translation bugs. Specifically, our approaches can always outperform state-of-the-art automatic testing approaches in two respects: (1) under a certain testing budget (i.e., number of executed test cases), both GRI and WALI can reveal a larger number of bugs than baseline approaches, and (2) when given a predefined testing goal (i.e., number of detected bugs), our approaches always require fewer testing resources (i.e., a reduced number of test cases to execute). Additionally, we explore the application of GRI and WALI in test prioritization and evaluate their performance in QA software testing. The results show that GRI can effectively prioritize test cases that are highly likely to generate bugs and achieve a higher percentage of fault detection given the same execution budget. WALI, on the other hand, exhibits results similar to baseline approaches, suggesting that while it may not enhance prioritization as significantly as GRI, it maintains a comparable level of effectiveness.
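The gradient-based intuition behind an approach like GRI can be illustrated with a toy PyTorch model: rank source tokens by the norm of the loss gradient with respect to their embeddings. The model, vocabulary, and token IDs below are made up for illustration and are not the thesis's implementation.

```python
# Toy illustration of gradient-based token sensitivity (in the spirit of GRI, not
# the actual method): tokens whose embedding gradients have large norms are the
# ones a small perturbation is most likely to destabilize.
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, dim, num_classes = 100, 16, 3
embed = nn.Embedding(vocab, dim)
head = nn.Linear(dim, num_classes)

token_ids = torch.tensor([[5, 17, 42, 8, 99]])       # one hypothetical source sentence
target = torch.tensor([1])

emb = embed(token_ids)                                # (1, seq_len, dim)
emb.retain_grad()                                     # keep gradients on this non-leaf tensor
logits = head(emb.mean(dim=1))                        # crude sentence representation
loss = nn.functional.cross_entropy(logits, target)
loss.backward()

saliency = emb.grad.norm(dim=-1).squeeze(0)           # per-token gradient norm
ranking = saliency.argsort(descending=True)
print("tokens ranked by sensitivity:", token_ids[0, ranking].tolist())
```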
  • Item
    Investigating the Root Causes of the Low Electroluminescence Stability of Upright Blue Quantum Dot Light-Emitting Devices
    (University of Waterloo, 2024-05-21) Ghorbani Koltapeh, Atefeh
    Quantum dot light-emitting devices (QLEDs) are promising candidates for use in next-generation flat-panel displays. QLEDs operate based on the electroluminescence (EL) from quantum dots (QDs) as the emissive layer. QLEDs have gained attention due to the QDs' intriguing properties such as high photoluminescence quantum yield (PLQY) of nearly 100%, narrow emission full-width at half maximum (FWHM < 30 nm) which provides a wide color gamut, the tunability of their peak luminescence wavelengths across the entire visible spectrum, and their solution-processability which makes them compatible with low-cost and flexible fabrication techniques. These features make QLEDs superior to their currently commercialized organic light-emitting device (OLED) rivals, as they provide more natural images. However, the EL stability of QLEDs is not similar for the three primary colors, red, green, and blue. In particular, blue QLEDs (B-QLEDs) are the least stable, which makes them the bottleneck for QLED commercialization. Despite the massive efforts on QLED development and progress in achieving efficient devices, B-QLEDs still suffer from poor EL stability. Developing effective strategies to improve the EL stability of B-QLEDs requires that the underlying degradation mechanisms first be identified. Although high EL stability refers to maintaining the EL level when the device is under electrical bias (electrical stability), it first requires that the materials do not change with time in the absence of bias (high shelf stability). Unfortunately, there is no clear consensus in the field on the fundamental factors limiting the EL stability of B-QLEDs, and the roles of electrical bias versus just the temporal stability of the devices remain entangled. The main focus of this thesis is to pinpoint the underlying reasons governing the EL stability of B-QLEDs and explain the possible mechanisms for their EL loss. This work utilizes the upright QLED structure, in which the light output is through the anode, and which consists of an organic hole transport layer (HTL), CdSe-based QDs, and an inorganic ZnMgO electron transport layer (ETL). The focus of this work is B-QLEDs, although red and green QDs are also used as references for comparison. To systematically study the EL stability of B-QLEDs as the objective of this work, the B-QDs' PLQY stability is monitored in storage (shelf life) and under electrical stress in different scenarios. Firstly, the PLQY stability of blue QDs (B-QDs) as a thin film is studied over time. The B-QDs are placed in contact with either the ETL or the HTL, as in the B-QLED structure, to determine if interactions between the different materials in the device stack influence the shelf life of the B-QDs' PLQY. It is found that the B-QDs' PLQY is stable both intrinsically and in contact with the HTL. However, the ETL is found to negatively affect the B-QDs' PLQY over time, an effect that arises from the diffusion of species from the ETL into the QD layer as well as from morphological changes in the ETL, both of which result in the QDs' PLQY drop. Next, the effect of bias is investigated, first focusing on its effect on the B-QDs' PLQY. Single-carrier devices are fabricated to be able to exclusively study the potential role of positive (holes) and negative (electrons) carriers upon electrical stress. The results show that the HTL undergoes degradation upon hole current flow, leading to the B-QDs' PLQY decay, whereas electron current flow has minimal effect on the B-QDs' PLQY. The organic HTL is replaced with a more robust thermally crosslinked HTL. Although the new HTL sustained the QDs' PLQY, it is not advantageous for improving the B-QLEDs' EL stability, whereas it improves the EL stability of green QLEDs with the same structure by a factor of two. The results indicate that neither HTL degradation nor the B-QDs' PLQY drop is the predominant reason for the B-QLEDs' fast EL loss. The study is further continued by probing the electrical characteristics of the single-carrier devices. The results show that charge injection efficiency in B-QLEDs changes over time due to electrical aging. These changes worsen the initial charge-balance condition during device operation, and the resulting EL quenching leads to the EL loss in B-QLEDs. However, the changes are observed to be partially reversible, such that driving the B-QLEDs under pulsed current instead of constant current doubles their electrical stability. Ultimately, it is found that B-QLEDs suffer from holes leaking into the ZnMgO layer, which causes significant degradation of that layer. The findings indicate that the hole-induced damage to the ZnMgO layer results in a higher density of defect states in the ETL and worsens its electron injection capacity. This serves as another contributing factor to the B-QLEDs' poor EL stability.
  • Item
    Learning While Bidding in Real Time Auctions with Multiple Item Types and Unknown Price Distribution
    (University of Waterloo, 2024-05-14) Boreiri, Amirhossein
    A Real-Time Bidding (RTB) network is a real-time auction market, primarily used for advertising space sales. Within this environment, clients participate by bidding on preferred items and subsequently purchasing them upon winning. This thesis addresses the problem of optimal real-time bidding within a second-price Vickrey auction setting, where the distribution of prices is unknown. Our focus centers on second-price auction mechanisms, which offer unique properties that enable the development of compelling algorithms. We introduce the concept of a demand-side platform (DSP), acting as an intermediary representing clients in the auction market. With no prior knowledge of typical prices, the DSP must determine optimal bidding strategies for each item and distribute won items among clients to fulfill their contracts while minimizing expenses. When the distribution of the prices of items is known, this optimal bidding problem can be solved by classic convex optimization algorithms such as ADMM. However, market properties may vary over time, and access to competitor behavior or bidding information is limited. Consequently, the DSP must continually update its information about the price distribution, while adapting bidding estimations in real time. Our primary contribution lies in devising efficient online optimization algorithms that accurately find the optimal bids. To tackle this, we employ tools from convex optimization analysis, including duality, along with stochastic optimization algorithms, notably stochastic approximation. Moreover, techniques such as projection and penalty-term methods are utilized to enhance algorithm performance.
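The flavour of such an online, dual/stochastic-approximation bidding scheme can be sketched as follows; the price model, target win rate, and step size are toy assumptions, and this is not the algorithm developed in the thesis.

```python
# Toy illustration: in a second-price auction the DSP bids its current shadow
# price (dual variable) for the contract constraint and nudges that price with a
# stochastic-approximation step toward the required win rate.
import numpy as np

rng = np.random.default_rng(2)
target_win_rate = 0.30       # fraction of auctions the contract requires us to win
lam, step = 1.0, 0.01        # shadow price and step size (assumed values)
spend, wins, rounds = 0.0, 0, 50_000

for t in range(rounds):
    price = rng.lognormal(mean=0.0, sigma=0.5)     # unknown competing-bid distribution
    bid = lam                                      # bid the current shadow price
    won = bid > price
    if won:
        wins += 1
        spend += price                             # second price: pay the competing bid
    lam = max(0.0, lam + step * (target_win_rate - won))   # stochastic approximation

print(f"win rate {wins / rounds:.3f}, avg cost per won item {spend / max(wins, 1):.3f}")
```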
  • Item
    Radar-based Human Pulse Sensing
    (University of Waterloo, 2024-05-10) Riad, Michael
    This thesis presents a novel approach where a superstrate is used to re-purpose a commercial off-the-shelf mm-wave 60 GHz radar by Infineon (BGT60TR13C) for near-field human pulse sensing. The superstrate is realized through low-cost printed circuit board (PCB) technology. It consists of couplers, a microstrip line, and a transverse slot in the ground plane. The superstrate was designed using full-wave electromagnetics (EM) simulations, along with a numerical-based model to predict the output of the radar in the presence of the superstrate. The superstrate was fabricated, and the combined sensor was tested on human participants. The preliminary measurement results match the expected theoretical pulse waveform, thereby proving the feasibility of the proposed approach in near-field pulse sensing applications. In comparison to the state-of-the-art pulse radar-based wearable sensors, the proposed sensor is more compact due to the superstrate protecting the radar from the detrimental effect of direct skin contact. Moreover, the sensing mechanism changed from capturing the phase changes for the range bin to monitoring the amplitude of the intermediate frequency (IF) spectrum peak. This was possible due to the enhanced signal-to-noise ratio (SNR). The measured pulse waveforms were compared to a commercial-grade electrocardiogram (ECG) from Frontier. There was an excellent agreement, particularly in the period and timing of the pulses. Therefore, the proposed sensor is feasible for heart rate monitoring and heart rate variability analysis. Moreover, computer-aided-design-software (CAD)-generated digital twins have shown potential in generating datasets that could be fed to machine learning algorithms for detection and classification. This thesis demonstrates the feasibility of using the Shooting and Bouncing Ray method (SBR+) simulation to model pulse sensing using a commercial mm-wave radar. The time-domain reconstructed pulse waveforms had very good agreement with the input waveform, which demonstrates the feasibility of using mm-wave radars for pulse sensing and the potential of using digital twins for generating diverse datasets to train radars on the signatures of various diseases and health conditions.
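The amplitude-based sensing mechanism can be illustrated with a toy NumPy simulation: for each chirp, take the FFT of the IF signal and track the magnitude of the dominant bin over slow time, which recovers a slow pulse-like modulation. All parameters below are assumed values, not those of the BGT60TR13C processing chain.

```python
# Toy illustration of amplitude-based pulse recovery: the pulse modulates the
# strength of the reflection, so the IF spectrum's peak magnitude varies slowly
# from chirp to chirp at the pulse rate.
import numpy as np

fs, n_fast = 2_000_000, 256            # IF sample rate and samples per chirp (assumed)
n_chirps, frame_rate = 1_000, 200      # chirps ("slow time") and chirp repetition rate (Hz)
f_if, pulse_hz = 150_000, 1.2          # beat frequency and simulated pulse rate (assumed)

t_fast = np.arange(n_fast) / fs
t_slow = np.arange(n_chirps) / frame_rate
amplitude = 1.0 + 0.05 * np.sin(2 * np.pi * pulse_hz * t_slow)   # pulse-induced modulation

rng = np.random.default_rng(3)
peak_mag = np.empty(n_chirps)
for i in range(n_chirps):
    chirp_if = amplitude[i] * np.cos(2 * np.pi * f_if * t_fast)
    chirp_if += 0.01 * rng.standard_normal(n_fast)
    spectrum = np.abs(np.fft.rfft(chirp_if))
    peak_mag[i] = spectrum.max()                                  # track the IF peak

# The slow-time peak magnitude now carries the 1.2 Hz pulse-like waveform.
recovered = peak_mag - peak_mag.mean()
print(f"peak-to-peak modulation: {np.ptp(recovered):.3f}")
```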
  • Item
    Monolithic Amorphous-Selenium/CMOS Small-Pixel-Effect-Enhanced Single-Photon-Counting Imagers for Dedicated Breast Computed Tomography
    (University of Waterloo, 2024-05-02) Mohammadi, Reza
    Breast cancer is the most common cause of cancer-related death in women in the world. However, early and reliable detection of breast cancer can reduce its high fatality rate. Diagnosis hinges on medical X-ray imaging techniques like screen-film mammography and digital mammography, which are burdened by false positives, increased radiation exposure, and patient discomfort due to breast compression. Conversely, three-dimensional (3-D) imaging modalities, like magnetic resonance imaging and digital breast tomosynthesis, aim to reduce false positives but have several limitations, including high cost, patient discomfort, and difficulties imaging dense breast tissue. Alternatively, dedicated breast computed tomography (DBCT) with a single-photon-counting (SPC) detector offers 3-D imaging without breast compression and lower radiation exposure, but faces challenges related to the fabrication of the X-ray sensor. Commonly used X-ray-sensor materials for SPC detectors are silicon (Si), cadmium zinc telluride (CZT), and cadmium telluride (CdTe). However, Si has low detection efficiency and polycrystalline CZT and CdTe face yield issues due to the bump-bonding process needed for integration with a complementary metal-oxide-semiconductor (CMOS) readout integrated circuit (ROIC). An alternative to these materials for monolithic X-ray SPC imagers is amorphous selenium (a-Se), a well-established X-ray-sensitive material. While the use of an a-Se sensor is more cost-effective for large-area deposition, the major drawbacks of a-Se are its limited temporal and energy resolution, preventing its use in detectors for DBCT. In this thesis, we tackle the intrinsic limitations of a-Se and demonstrate, for the first time, the potential of a monolithic a-Se/CMOS X-ray imager to meet the demanding DBCT requirements. First, we demonstrate single-photon counting with an a-Se sensor, monolithically integrated on a CMOS ROIC, for the first time. We report the electrical characterization of a previously-designed CMOS SPC imager (Chip 1) and outline the limitations in achieving the electrical performance necessary for photon counting. Subsequently, we present circuit simulations and measurements that identify these limitations and suggest methods for overcoming the associated electrical barriers. Additionally, we discuss the technical challenges hindering the integration of a-Se on Chip 1 and propose our solution to address these issues. Next, we present the first measured transient X-ray response from a monolithic a-Se/CMOS imager and the first measured pulse-height spectroscopy results using our designed CMOS ROIC (Chip 2). We also experimentally demonstrate the small-pixel effect (SPE) with an a-Se/CMOS detector for the first time and show its potential to dramatically improve energy resolution and temporal response. Additionally, with device-level simulations, we analyze the impact of carrier movement in an a-Se sensor on SPE. We then demonstrate, for the first time, photon-counting results using a new a-Se/CMOS pixel (Chip 3), which is able to meet the stringent DBCT requirements. To achieve this, we design and verify a novel large-area-scalable 92 × 92 µm² pixel having a sub-pixel SPE-enhancement technique. We also present a novel area-efficient foreground calibration circuit for SPC pixels, which employs a new area-efficient current-steering calibration digital-to-analog converter.
Finally, we demonstrate a photon-counting pixel (Chip 4) that features a new adaptive common-mode leakage-compensation circuit with offset correction to further improve the maximum pixel count rate for medical imaging.
  • Item
    Development of a Simulation Framework for 2D-Material MOSFETs: Investigating Cryogenic Behaviors and Enhancing Performance Optimization
    (University of Waterloo, 2024-05-01) Zhao, Yiju
    The relentless pursuit of miniaturization in semiconductor technology has ushered in the era of nanotransistors, which operate at the cutting edge of physical limits. This thesis presents significant advancements in nanoelectronics, focusing on the development of a numerical simulation tool for two-dimensional (2D) material-based Metal-Oxide-Semiconductor Field-Effect Transistors (MOSFETs) operating at cryogenic temperatures and exploring the potential of materials like germanene (GeH) and hafnium disulfide (HfS2) for future Complementary Metal-Oxide-Semiconductor (CMOS) technology. Central to this work is an advanced simulation framework capable of accurately predicting and characterizing 2D-based MOSFET behavior under cryogenic conditions, pivotal for cutting-edge applications such as control circuits for quantum computing. Furthermore, my PhD research involves a comprehensive multi-level simulation approach, addressing substantial challenges in cryogenic device operation and laying a foundational framework for the design and optimization of electronic devices in extreme temperature environments while also advancing the understanding of material-device-circuit interplay for the efficient design of next-generation nanoelectronic circuits. The simulation framework in the thesis utilizes the non-equilibrium Green's function (NEGF) formalism, incorporating temperature-dependent electrostatics, scattering, dissipation effects, carrier freeze-out, and band tail phenomena. This methodology provides a detailed understanding of the physical processes governing 2D materials in cryogenic environments. Through a rigorous alignment with experimental data, particularly with HfS2 MOSFETs, the framework validates our model and underscores its potential to enhance cryogenic electronic device performance. This significant achievement marks a stride in using 2D materials for quantum computing applications, offering a sophisticated tool for their design and optimization. In addition, the thesis explores a holistic multi-level simulation approach tailored for 2D material-based nanoelectronics, featuring an in-depth case study on GeH MOSFETs. This segment covers device simulation, physics-based compact modeling, and circuit benchmarking, seamlessly bridging the gap between nanomaterial properties and circuit behavior. By revising the virtual source model to reflect the unique characteristics of 2D-material FETs and conducting HSPICE simulations, we have demonstrated substantial improvements in the energy-delay product by optimizing power supply and threshold voltages. This multi-faceted simulation process deepens the understanding of the synergy between materials, devices, and circuits and paves the way for the efficient design of futuristic nanoelectronic circuits. Overall, this thesis underscores the critical role of cryogenic simulations in leveraging the full potential of 2D materials for advanced electronic applications, setting the stage for a new era in cryo-CMOS and beyond. The insights from this research are expected to spark further investigations and developments, pushing the boundaries of what is achievable in nanoelectronics.
  • Item
    Architectural Design and Test Case Analysis of a Simulator for Failure Localization
    (University of Waterloo, 2024-05-01) Lu, Xiangzhu
    Failure localization involves identifying the suspicious locations of failures by analyzing the reported alarms recorded in the OTN control plane. To expedite the development of failure localization algorithms and reduce costs, a simulator is essential to replicate alarm propagation behaviors across various scenarios. This thesis presents the design and implementation of a simulator comprising the following components: a rule database, a topology generator, a failure generator, and an alarm generator. The topology generator produces network topologies to simulate various network conditions, while the failure generator generates simulated failures. Subsequently, the alarm generator utilizes the rule database to generate corresponding alarm data. The generated data structures include failures/alarms, alarm flows, alarm chains, and alarm correlation trees. Additionally, two post-processing methods are introduced to illustrate the derivation of new data structures from existing data. To validate the accuracy of the simulator, five test cases are introduced, featuring different topology settings, varying numbers of failures, and including a specific scenario involving noisy alarms.
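As a rough, hypothetical illustration of how the described components could fit together (class and field names below are invented, not taken from the thesis), a minimal Python skeleton might look like this:

```python
# Hypothetical skeleton mirroring the described architecture: a rule database,
# a topology generator, a failure generator, and an alarm generator.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import random

@dataclass
class RuleDatabase:
    # maps a root alarm type on a failed board to the alarms it propagates downstream
    propagation: Dict[str, List[str]] = field(default_factory=lambda: {
        "LOS": ["LOF", "AIS"],   # example rule: loss of signal triggers downstream alarms
        "LOF": ["AIS"],
    })

@dataclass
class Topology:
    boards: List[str]
    links: List[Tuple[str, str]]

def generate_topology(n_boards: int, seed: int = 0) -> Topology:
    rng = random.Random(seed)
    boards = [f"board{i}" for i in range(n_boards)]
    links = [(boards[i], boards[rng.randrange(n_boards)]) for i in range(n_boards)]
    return Topology(boards, links)

def generate_failures(topo: Topology, k: int, seed: int = 0) -> List[str]:
    return random.Random(seed).sample(topo.boards, k)

def generate_alarms(failures: List[str], rules: RuleDatabase) -> List[Tuple[str, str]]:
    alarms = []
    for board in failures:
        root = "LOS"                      # assume each failed board raises a root alarm
        alarms.append((board, root))
        alarms += [(board, a) for a in rules.propagation.get(root, [])]
    return alarms

if __name__ == "__main__":
    topo = generate_topology(8)
    failed = generate_failures(topo, 2)
    print(generate_alarms(failed, RuleDatabase()))
```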
  • Item
    Resource Constrained Linear Estimation in Sensor Scheduling and Informative Path Planning
    (University of Waterloo, 2024-04-30) Dutta, Shamak
    This thesis studies problems in resource-constrained linear estimation with a focus on sensor scheduling and informative path planning. Sensor scheduling concerns itself with the selection of the best subsets of sensors to activate in order to accurately monitor a linear dynamical system over a fixed time horizon. We consider two problems in this setting. First, we study the general version of sensor scheduling subject to resource constraints modeled as linear inequalities. This general form captures a variety of well-studied problems including sensor placement and linear quadratic Gaussian (LQG) control and sensing co-design. Second, we study a special case of sensor placement where only k measurements can be taken in a spatial field, which finds applications in precision agriculture and environmental monitoring. In informative path planning, an unknown target phenomenon, modeled as a stochastic process, is estimated using a subset of measurements in a spatial field. We study two problems in this setting. First, we consider constraints on robot operation such as tour length or number of measurements with the goal of producing accurate estimates of the target phenomenon. Second, we consider the dual version where robots must minimize resources used while ensuring the resulting estimates have low uncertainty or expected squared estimation error. Our solution approaches exploit the problem structure at hand to give either exact formulations as integer programs, approximation algorithms, or well-designed heuristics that yield high-quality solutions in practice. We develop algorithms that combine ideas from combinatorial optimization, stochastic processes, and estimation.
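For the k-measurement placement special case, a common baseline is greedy selection under a linear-Gaussian model, picking at each step the candidate measurement that most reduces the trace of the posterior covariance; the sketch below is such a generic heuristic in NumPy, not the thesis's formulations.

```python
# Generic greedy A-optimal sensor placement for a linear-Gaussian model:
# pick k measurement vectors h_i that most reduce the trace of the posterior covariance.
import numpy as np

def greedy_placement(H, prior_cov, noise_var, k):
    """H: (n_candidates, d) measurement vectors; returns chosen indices and posterior cov."""
    cov = prior_cov.copy()
    chosen, remaining = [], list(range(H.shape[0]))
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in remaining:
            h = H[i]
            s = cov @ h
            gain = (s @ s) / (h @ s + noise_var)   # trace reduction from this measurement
            if gain > best_gain:
                best, best_gain = i, gain
        h = H[best]
        s = cov @ h
        cov = cov - np.outer(s, s) / (h @ s + noise_var)   # Kalman-style covariance update
        chosen.append(best)
        remaining.remove(best)
    return chosen, cov

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    H = rng.standard_normal((50, 5))               # 50 candidate measurement locations
    sel, post = greedy_placement(H, np.eye(5), noise_var=0.1, k=3)
    print("selected sensors:", sel, "posterior trace:", np.trace(post).round(3))
```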
  • Item
    PACS: Private and Adaptive Computational Sprinting
    (University of Waterloo, 2024-04-25) Wu, Simon
    Computational sprinting is a class of mechanisms that enables a chip to temporarily exceed its thermal limits to enhance performance. Previous approaches model the optimization of computational-sprinting strategies as a mean-field game and calculate the game's static equilibrium strategies using dynamic programming. This approach has three main limitations: (i) a requirement for a priori knowledge of all system parameters; (ii) inflexibility in equilibrium strategies, necessitating recalculation for any system parameter change; and (iii) the need for users to disclose precise characteristics of their applications without data privacy guarantees. To address these issues, we propose PACS, a private and adaptive mechanism that enables users to independently optimize their sprinting strategies. Our experiments in a simulated environment demonstrate that PACS achieves adaptability, ensures data privacy, and provides comparable performance to state-of-the-art methods. Specifically, PACS outperforms existing approaches for certain applications while incurring at most a 10% performance degradation for others, all without having prior knowledge of system parameters.
  • Item
    Salus: Stackelberg Games for Malware Detection with Microarchitectural Events
    (University of Waterloo, 2024-04-24) Khodaei, Elaheh
    Microarchitectural events have been the subject of previous investigations for malware detection. While some studies assert the effectiveness of utilizing hardware events in detecting malware, others contend that they may not be beneficial for this purpose. We argue and empirically show that the efficacy of using hardware events for malware detection relies on accurately selecting hardware events during detector training. Through rigorous analysis, we demonstrate that the conventional approach of selecting a single subset of hardware events for training a malware detection model is insufficient for creating a robust system capable of effectively handling all types of malware, even when using an ensemble of powerful classifiers. Accordingly, we propose the use of multiple subsets of hardware events, each dedicated to training a distinct malware detection model. Since only a single subset of events can be monitored at any given time, we adopt a game-theoretic approach to determine the optimal strategy for selecting the subset of hardware events to be monitored. In addition to the theoretical analysis of our approach, we empirically demonstrate its effectiveness by comparing it to other baselines.
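The game-theoretic selection step can be illustrated with a standard zero-sum maximin computation: given detection rates of each monitored event subset against each malware family, a linear program yields the defender's optimal randomization over subsets. The SciPy sketch below uses made-up numbers and a simplified zero-sum payoff, not the thesis's Stackelberg formulation.

```python
# Generic maximin mixed strategy over event subsets (zero-sum simplification of the
# detection game): maximize the worst-case expected detection rate across malware types.
import numpy as np
from scipy.optimize import linprog

# detect[i, j]: assumed detection rate of the model trained on event subset i vs malware type j
detect = np.array([
    [0.90, 0.40, 0.55],
    [0.50, 0.85, 0.60],
    [0.60, 0.55, 0.80],
])
n_subsets, n_types = detect.shape

# Variables: p_1..p_n (probability of monitoring each subset) and v (worst-case rate).
# Maximize v  s.t.  detect^T p >= v for every malware type,  sum(p) = 1,  p >= 0.
c = np.concatenate([np.zeros(n_subsets), [-1.0]])             # minimize -v
A_ub = np.hstack([-detect.T, np.ones((n_types, 1))])          # v - detect^T p <= 0
b_ub = np.zeros(n_types)
A_eq = np.hstack([np.ones((1, n_subsets)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, None)] * n_subsets + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p, v = res.x[:n_subsets], res.x[-1]
print("monitoring probabilities:", p.round(3), "worst-case detection:", round(v, 3))
```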
  • Item
    Hydrogel-Based Dye-sensitized Solar Cells
    (University of Waterloo, 2024-04-23) Jamali, Hussain
    Utilizing a PVA-PANI hydrogel as a quasi-solid-state electrolyte in DSSCs enhances device stability and efficiency. The hydrogel matrix provides mechanical robustness and facilitates ion transport, improving charge transfer kinetics. This novel approach holds promise for advancing the performance of dye-sensitized solar cells.
  • Item
    LSTM Based Remaining Useful Life Prediction for Lithium-Ion EV Batteries
    (University of Waterloo, 2024-04-19) Pandey, Sapna
    Lithium-ion batteries are commonly used in electric vehicles (EVs) because of their high energy density, good efficiency, and light weight. Predicting the Remaining Useful Life (RUL) is critical in lithium-ion batteries as it helps optimize efficiency and timely replacement of these batteries. To optimize battery performance, it is critical to predict the RUL and End of Life (EOL) of lithium-ion batteries. There are several approaches for RUL estimation in lithium-ion batteries, such as model-based, data-driven, and hybrid approaches. Out of all the approaches, data-driven approaches such as Recurrent Neural Networks (RNN), Support Vector Machines (SVM), and Long Short-Term Memory (LSTM) have gained popularity due to their lower complexity and adaptability. In this study, we have investigated LSTM networks for RUL estimation in lithium-ion batteries. The first part of the study shows the effect of different parameter changes, such as hidden nodes and window size, in LSTM networks. The investigation reveals that increasing the number of hidden nodes before encountering overfitting improves prediction accuracy and lowers the Root Mean Square Error (RMSE) from 0.2 to 0.03. The experiments to find the ideal window size for the LSTM model used in this study illustrate that the model shows improvement with a higher window size up to a maximum of 14. Moreover, in the second part of this study, we propose an incremental LSTM model for time series forecasting that incorporates the newly available data at each time step and updates itself to make better predictions of future capacity values. The proposed incremental LSTM model improves the RMSE by 17.6% compared to the baseline LSTM model. For further analysis of real-time RUL estimation, where data are limited and the user wants to predict the RUL at any moment, another LSTM model that accounts for these constraints is proposed. The models are trained and tested with the help of two publicly available datasets: the lithium-ion battery aging dataset by the NASA Ames Prognostics Center of Excellence (PCoE) and the battery dataset by the Center for Advanced Life Cycle Engineering (CALCE). This research proposes LSTM models that are useful for accurately estimating RUL in lithium-ion batteries, which is critical for EVs.
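A minimal version of the sliding-window LSTM setup described above might look like the following PyTorch sketch; the synthetic capacity curve, window size, and training settings are placeholders rather than the thesis's configuration or the NASA/CALCE data.

```python
# Minimal sliding-window LSTM for capacity forecasting (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_windows(series, window=14):
    """Turn a 1-D capacity series into (window -> next value) training pairs."""
    xs, ys = [], []
    for i in range(len(series) - window):
        xs.append(series[i:i + window])
        ys.append(series[i + window])
    return torch.tensor(xs).unsqueeze(-1).float(), torch.tensor(ys).unsqueeze(-1).float()

class CapacityLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])          # predict the next capacity value

# Synthetic capacity-fade curve standing in for real cycling data.
cycles = torch.arange(200, dtype=torch.float32)
capacity = 2.0 - 0.004 * cycles + 0.01 * torch.randn(200)

x, y = make_windows(capacity.tolist(), window=14)
model = CapacityLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"final training RMSE: {loss.sqrt().item():.4f}")
```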
  • Item
    Impact of data quality on ML models: Improving data quality with Outlier Detection
    (University of Waterloo, 2024-04-15) Sharma, Rakshit
    In the dynamic landscape of Machine Learning (ML) applications, data quality emerges as an important factor that impacts the performance of ML models. Through this thesis, we present a study that proposes innovative methods for enhancing data quality through an iterative data recapture approach. This research primarily focuses on univariate time-series data where specific patterns can be extracted. We start by discussing existing data capture methods, where the data is collected manually or using some hardware devices. The proposed methods, namely the Sessionized Recapture Strategy (SRS) and the Robust Single Capture Method (RSCM), are meticulously detailed, offering distinct strategies for iterative data recapture. The Single Capture Method (SCM) and the Recapture and Visualize Method (RVM) serve as the two baseline methods, each with an associated data capture time and a consequential False Positive Rate (FPR). SRS is the enhancement of RVM, and RSCM is the enhancement of SCM. This thesis also introduces an outlier detection algorithm named Outlier detection through ParameterlEss Robust Algorithm (OPERA), which, when combined with RVM and SCM, results in SRS and RSCM, respectively. Compared with the baseline methods, the proposed methods show promising results and improvement in the data quality of the captured data. The experiments are performed on two datasets: one dataset is captured in the Embedded Systems Lab on one of the ANVIL products for Future Technology Devices International (FTDI) chips, and the second is a publicly available Electrocardiogram (ECG) dataset provided by PhysioNet. The research concludes by synthesizing key findings and recommendations for practitioners seeking to optimize model performance through enhanced data quality.
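To illustrate the kind of parameter-light, robust outlier flagging that an algorithm such as OPERA targets, the NumPy sketch below applies a generic rolling median/MAD rule to a univariate series; it is a hedged illustration, not OPERA itself.

```python
# Generic robust outlier detector for a univariate series (illustrative, not OPERA):
# flag samples whose deviation from a rolling median exceeds a MAD-based threshold.
import numpy as np

def robust_outliers(series, window=15, threshold=3.5):
    """Return a boolean mask of outlier positions using a rolling median/MAD rule."""
    x = np.asarray(series, dtype=float)
    mask = np.zeros_like(x, dtype=bool)
    half = window // 2
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        local = x[lo:hi]
        med = np.median(local)
        mad = np.median(np.abs(local - med)) or 1e-9       # guard against zero MAD
        mask[i] = np.abs(x[i] - med) / (1.4826 * mad) > threshold
    return mask

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    signal = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.05 * rng.standard_normal(400)
    signal[[60, 200, 350]] += 2.0                           # inject artefacts
    print("flagged indices:", np.flatnonzero(robust_outliers(signal)))
```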