Electrical and Computer Engineering
Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/9908
This is the collection for the University of Waterloo's Department of Electrical and Computer Engineering.
Research outputs are organized by type (e.g., Master Thesis, Article, Conference Paper).
Waterloo faculty, students, and staff can contact us or visit the UWSpace guide to learn more about depositing their research.
Browsing Electrical and Computer Engineering by Title
Now showing 1 - 20 of 2024
Item 0.42 THz Transmitter with Dielectric Resonator Array Antenna (University of Waterloo, 2019-07-23) Holisaz, Hamed
Off-chip antennas do not occupy expensive die area, are not limited in their building material, and can be built in any size and shape to match the system requirements, all in contrast to on-chip antenna solutions. However, integrating off-chip antennas with Monolithic Microwave Integrated Circuits (MMICs) and designing a low-loss signal transmission from the signal source inside the MMIC to the antenna module is a major challenge and trade-off. High-resistivity silicon (HRS) is a low-cost and extremely low-loss material at sub-THz frequencies, and it has become a prevailing material in the fabrication of passive components for THz applications. This work makes use of HRS to build an off-chip Dielectric Resonator Antenna Array Module (DRAAM) to realize a highly efficient transmitter at 420 GHz, and it proposes novel techniques and solutions for the design and integration of the DRAAM with the MMIC that serves as the signal source. A proposed scalable 4×4 antenna structure aligns the DRAAM on top of the MMIC to within 2 μm accuracy through a simple assembly procedure. The DRAAM shows 15.8 dB broadside gain and 0.85 efficiency. The DRAs in the DRAAM are differentially excited through aperture coupling. Differential excitation not only provides an inherent mechanism to deliver more power to the antenna, but also removes the additional loss of extra baluns when the outputs inside the MMIC are differential. In addition, this work proposes a technique to double the radiated power from each DRA: the same radiating mode at 0.42 THz is excited inside every DRA through two separate differential sources, providing an almost lossless power-combining mechanism inside the DRA. Two 140 GHz oscillators followed by triplers drive each DRA in the demonstrated 4×4 antenna array. Each oscillator generates 7.2 dBm output power at 140 GHz with -83 dBc/Hz phase noise at 100 kHz offset and consumes 25 mW of power. Each oscillator is followed by a tripler that generates -8 dBm output power at 420 GHz. The oscillator and tripler circuits use a layer stack-up arrangement for their passive elements in which the top metal layer of the die is grounded, to comply with the planned integration arrangement. This work also shows a novel circuit topology for exciting the antenna element that makes the feed element part of the tuned load for the tripler circuit, thereby eliminating the loss of a transition component and maximizing the output power delivered to the antenna. The final structure, composed of 32 injection-locked oscillators driving a 4×4 DRAAM, achieves 22.8 dBm EIRP.

Item 17-21 GHz Low-Noise Amplifier with Embedded Interference Rejection (University of Waterloo, 2023-01-06) Jodhka, Tejasvi
The ever-growing demand for high-performance wireless connectivity has led to the development of fifth-generation (5G) wireless communication standards as well as satellite communication (Satcom). Both 5G wireless communications and Satcom use higher carrier frequencies than traditional standards such as 4G and WiFi. While the higher carrier frequencies allow for larger bandwidths and faster data rates, they come at the cost of high free-space path loss. This high loss necessitates the use of active phased-array antennas, which can require hundreds of integrated circuits (ICs) designed in Complementary Metal-Oxide-Semiconductor (CMOS) processes.
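The free-space path-loss penalty mentioned above can be made concrete with the standard Friis path-loss formula. The sketch below is illustrative only (the 1 km distance and the two carrier frequencies are example values, not figures from the thesis); it shows roughly 20 dB of additional loss when moving from a 2 GHz carrier up to the K-band frequencies considered here.

```python
# Illustrative only: free-space path loss (Friis), not a result from the thesis.
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3.0e8
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Example: a 1 km link at a K-band Satcom frequency vs. a sub-6 GHz carrier.
print(round(fspl_db(1_000, 19.45e9), 1))  # ~118.2 dB
print(round(fspl_db(1_000, 2.0e9), 1))    # ~98.5 dB -> about 20 dB extra loss at K-band
```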
Furthermore, in a future world with ubiquitous 5G wireless base stations and Satcom users, it is conceivable that Satcom receivers could be jammed by high-power Satcom transmitters and 5G signals. Therefore, Satcom phased arrays must be designed for resilience against these sources of interference while supporting high data rates. One of the key components in a Satcom receiver is the low-noise amplifier (LNA). It is responsible for amplifying the weak, noisy signal received from the satellite into a signal with a sufficiently high signal-to-noise ratio for demodulation. One possible solution for making the phased array resilient to sources of interference is to embed filtering in the LNA. This thesis presents two LNA designs that employ embedded filtering for resiliency to interference from 5G wireless signals and Satcom transmitters. First, the circuit-level specifications of a 17.7-21.2 GHz (K-band) LNA for satellite communication phased-array beamformers are derived from the system requirements. Next, the LNA designs are presented. The first LNA is designed to have out-of-band filtering at 24-30 GHz, which corresponds to the bands containing both 5G and Satcom transmitter interferers. The second LNA is designed to have out-of-band filtering at 27-30 GHz, which addresses a different scenario where the Satcom transmitter is the sole source of interference. Both LNAs are implemented in the GlobalFoundries 130 nm 8XP Silicon-Germanium Bipolar CMOS (SiGe BiCMOS) process. A novel transformer-feedback notch is introduced that enhances the filtering capabilities of the amplifier. Full electromagnetic simulation of the first LNA shows a peak gain of 28.8 dB, a minimum noise figure of 1.85 dB, and an input 1 dB compression point (IP1dB) greater than -17 dBm between 24 and 30 GHz. The second LNA shows a peak gain of 27.9 dB, a minimum noise figure of 1.78 dB, and an IP1dB greater than -15 dBm between 27 and 30 GHz. Both LNAs meet specifications sufficient for a Satcom receiver while providing resiliency to out-of-band interference sources.

Item 22-32 GHz Low-Noise Amplifier Design in 22-nm CMOS-SOI Technology (University of Waterloo, 2019-01-29) Cui, Bolun
This thesis explores the use of a 22-nm CMOS-SOI technology in the design of a two-stage amplifier targeting wide bandwidth, low noise, and modest linearity in the 28 GHz band. A design methodology with a transformer-coupled, noise-matching interstage is presented for minimizing the noise factor of the two-stage amplifier, and the benefits of interstage noise matching are discussed. Next, a transistor layout for minimizing noise while maintaining sufficient electromigration reliability is described, followed by an analysis of transformer configurations and an example transformer layout. To verify the design methodology, two amplifier prototypes with a noise-matching interstage were fabricated. Measurements show that the first design achieves a peak gain of 20.7 dB and better-than-10-dB input and output return losses within a frequency range of 22.5 to 32.2 GHz. The lowest noise figure of 1.81 dB is achieved within this frequency range. An input IP3 of -13.4 dBm is achieved at the cost of 17.3 mW DC power consumption. When the back-gate bias is lowered from 2 V to 0.62 V, the power consumption decreases to 5.6 mW and the peak gain drops to 17.9 dB, while the minimum noise figure increases from 1.81 to 2.13 dB and the input IP3 drops to -14.4 dBm.
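Why interstage noise matching and first-stage gain matter for a two-stage LNA can be seen from Friis' cascade formula for noise factor. The sketch below uses hypothetical stage values, not measurements from this thesis; it simply shows that a low-noise first stage with enough gain hides the noise contribution of the second stage.

```python
# Illustrative only: Friis' cascade formula for a two-stage amplifier.
# The stage numbers below are hypothetical, not results from the thesis.
import math

def db_to_lin(x_db: float) -> float:
    return 10 ** (x_db / 10)

def cascade_nf_db(nf1_db: float, gain1_db: float, nf2_db: float) -> float:
    """Total noise factor F = F1 + (F2 - 1) / G1, returned in dB."""
    f1, g1, f2 = db_to_lin(nf1_db), db_to_lin(gain1_db), db_to_lin(nf2_db)
    return 10 * math.log10(f1 + (f2 - 1) / g1)

print(round(cascade_nf_db(nf1_db=1.5, gain1_db=15, nf2_db=6.0), 2))  # ~1.78 dB
print(round(cascade_nf_db(nf1_db=1.5, gain1_db=8,  nf2_db=6.0), 2))  # ~2.75 dB: less first-stage gain lets stage 2 degrade the total
```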
The folded output stage in the second design improves the input IP3 to -6.7 dBm at the cost of 35 mW total power consumption. The peak gain of the second design is 20.1 dB, and its lowest noise figure is 1.73 dB within a frequency range of 23.8 to 32.4 GHz. Both designs occupy about 0.05 mm² of active area.

Item 2D Digital Filter Implementation on a FPGA (University of Waterloo, 2011-08-31T18:27:59Z) Tsuei, Danny Teng-Hsiang
The use of two-dimensional (2D) digital filters for real-time 2D data processing has found important practical applications in many areas, such as aerial surveillance, satellite imaging, and pattern recognition. In military operations, real-time image processing is extensively used in target acquisition and tracking, automatic target recognition and identification, and guidance of autonomous robots. Similar opportunities exist in civilian industries, such as vacuum-cleaner path recognition and mapping and car collision detection and avoidance. Many of these applications require dedicated hardware for signal processing. It is not efficient to implement 2D digital filters on a single processor for real-time applications because of the large amount of data, so a multiprocessor implementation can be used to reduce processing time. Previous work explored several realizations of 2D denominator-separable digital filters with minimal throughput delay by utilizing parallel processors. It was shown that, regardless of the order of the filter, a throughput delay of one adder and one multiplier can be achieved, and that the proposed realizations have high regularity due to the nature of the processors. In this thesis, all four realizations are implemented on a Field-Programmable Gate Array (FPGA) with floating-point adders, multipliers, and shift registers. The implementation details and design trade-offs are discussed, and simulation results in terms of performance, area, and power are compared. From the experimental results, realization four is the ideal candidate for implementation on an Application-Specific Integrated Circuit (ASIC), since it has the best performance, dissipates the lowest power, and uses the least amount of logic compared to the other realizations of the same filter size. For a filter size of 5 by 5, realization four can produce a throughput of 16.3 million pixels per second, which is comparable to realization one and about a 34% increase in performance compared to realizations one and two. For the given filter size, realization four dissipates the same amount of dynamic power as realization one, roughly 54% less than realization three, and 140% less than realization two. Furthermore, area reduction can be achieved by converting the floating-point arithmetic to fixed-point arithmetic. Alternatively, the denormalization and normalization stages of the floating-point pipeline can be fused together in order to save hardware resources.

Item A 37-40 GHz Dual-Polarized 16-Element Phased-Array Antenna with Near-Field Probes (University of Waterloo, 2022-09-21) He, Ziran
With the development of fifth-generation (5G) communication networks and the growing demand for high-speed, low-latency wireless communication services, channel capacity has become the main driving force for choosing millimeter-wave (mm-wave) bands over the over-crowded sub-6 GHz frequency bands.
Recently, beamforming phased arrays have attracted significant research effort, as they offer a promising solution that is unique in its ability to overcome the high path loss at high frequencies, provide fast beam steering, and deliver a better user experience. However, to alleviate the issues associated with beamforming phased arrays, such as imbalance between array elements and non-linearity caused by the power amplifiers (PAs) in the beamforming channels, far-field (FF) based array calibration and digital pre-distortion (DPD) need to be performed, which is not practical in real-world scenarios. This thesis presents a low-cost, 16-element, dual-polarized mm-wave antenna-on-printed-circuit-board (PCB) transmitter RF beamforming array with embedded near-field probes (NFPs) at 37-40 GHz. The elements are orthogonal, proximity-coupled-feed, dual-polarized patch antennas with a spacing of 0.5λ within each 2x2 subarray and 0.6λ between 2x2 subarrays at 38.5 GHz, resulting in a maximum gain of 17.7 dB with scan angles of +/-50° in azimuth and +/-20° in elevation for vertical polarization, and +/-20° in azimuth and +/-50° in elevation for horizontal polarization. Without affecting phased-array performance, the NFPs achieve flat coupling magnitude and group delay comparable to those of the closest RF chain for both polarizations across the operating frequency range. This ensures that the quality of the signal received from the phased array is sufficient to implement array calibration and DPD. The configuration of embedded NFPs maintains the scalability of the phased array and eliminates the need for an impractical FF reference probe for array calibration and DPD.

Item A 39GHz Balanced Power Amplifier with Enhanced Linearity in 45 nm SOI CMOS (University of Waterloo, 2022-09-20) Ma, Haien
With the high-data-rate communication systems that come with fifth-generation (5G) mobile networks, the shift of operation to millimeter-wave frequencies has become inevitable. The expected data rate in 5G is significantly improved over 4G by utilizing the large available channel bandwidth at millimeter-wave frequencies and complex data modulation schemes. With this increase in operating frequency, many new challenges arise, and research efforts are being made to tackle them. Among them, the phased-array system is one of the most active topics, as it can be used to improve the link budget and overcome the path-loss challenge at these frequencies. As the last circuit component in the transmitter front-end, right before the antenna, the power amplifier (PA) is one of the most crucial components, with significant effects on overall system performance. Many of the traditional challenges of CMOS PA design, such as output power and efficiency, are now compounded with the additional challenges imposed on complementary metal-oxide-semiconductor (CMOS) PAs in millimeter-wave phased-array systems. This thesis presents a balanced power amplifier design with enhanced linearity in GlobalFoundries' 45 nm silicon-on-insulator (SOI) CMOS technology. By using the balanced topology with each stage built around a differential 2-stacked architecture, the PA achieves a saturated output power of over 21 dBm. Each of the two identical sub-PAs in the balanced topology uses a 2-stage topology with a driver and PA co-design approach. The linearity is enhanced through careful choice of the biasing point and a strategic inter-stage matching network design methodology, resulting in amplitude-to-phase distortion below 1 degree up to the output 1 dB compression point of over 19 dBm.
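The next paragraph explains how the balanced topology reduces sensitivity to load mismatch and improves matching. The toy calculation below illustrates the underlying mechanism with an ideal 90-degree quadrature hybrid; it is a generic sketch with a made-up reflection coefficient, not a model of this specific PA.

```python
# Toy calculation (not from the thesis): with an ideal 90-degree hybrid, reflections
# from two identically mismatched sub-PAs cancel at the input port and end up in the
# termination on the isolated port, which is why balanced amplifiers match well.
import numpy as np

s21 = -1j / np.sqrt(2)   # input port -> through port of an ideal quadrature hybrid
s31 = -1.0 / np.sqrt(2)  # input port -> coupled port
s42 = -1.0 / np.sqrt(2)  # through port -> isolated (terminated) port
s43 = -1j / np.sqrt(2)   # coupled port -> isolated port

gamma = 0.4 * np.exp(1j * 0.7)  # hypothetical reflection coefficient of each sub-PA input

b_input    = gamma * (s21 * s21 + s31 * s31)  # reflections recombined at the input port
b_isolated = gamma * (s21 * s42 + s31 * s43)  # reflections recombined at the terminated port

print(abs(b_input))     # ~0   -> the reflections cancel at the input
print(abs(b_isolated))  # ~0.4 -> they add up in the termination instead
```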
The balanced amplifier topology significantly reduces the variation in PA performance over mismatched load impedances at the output, thus improving the PA performance over the different antenna active impedances caused by varying phased-array beam-steering angles. In addition, the balanced topology optimizes the PA input and output return loss, giving matching better than -20 dB at both the input and output and minimizing the risk of potential issues and performance degradation in the system-integration phase. Lastly, the compact transformer-based matching networks and quadrature hybrids reduce the chip area of this PA, resulting in a compact design with competitive performance.

Item 45-nm SOI CMOS Bluetooth Electrochemical Sensor for Continuous Glucose Monitoring (University of Waterloo, 2018-08-21) Muthreja, Aman
Due to increasing rates of diabetes, non-invasive glucose monitoring systems will become critical to improving health outcomes for a growing patient population. Bluetooth integration for such a system has previously been unattainable due to prohibitive energy consumption; however, enabling Bluetooth would allow widespread adoption because of the ubiquity of Bluetooth-enabled mobile devices. The objective of this thesis is to demonstrate the feasibility of a Bluetooth-based, energy-harvesting glucose sensor for contact-lens integration using 45 nm silicon-on-insulator (SOI) complementary metal-oxide-semiconductor (CMOS) technology. The proposed glucose monitoring system includes a Bluetooth transmitter implemented as a two-point closed-loop PLL modulator, a sensor potentiostat, and a first-order incremental delta-sigma analog-to-digital converter (IADC). This work details the complete system design, including the derivation of top-level specifications such as glucose sensing range, Bluetooth protocol timing, and energy consumption, and circuit specifications such as carrier frequency range, output power, phase-noise performance, stability, resolution, signal-to-noise ratio, and power consumption. Three test chips were designed to prototype the system, and two of these were experimentally verified. Chip 1 includes a partial implementation of a phase-locked loop (PLL) comprising a voltage-controlled oscillator (VCO), a frequency divider, and a phase-frequency detector (PFD). Chip 2 includes the design of the sensor potentiostat and the IADC. Finally, Chip 3 combines the circuitry of Chip 1 and Chip 2 with a charge pump, loop filter, and power amplifier to complete the system. The DC power consumption of Chip 1 was measured to be 204.8 µW while oscillating at 2.441 GHz, with an output power Pout of -35.8 dBm, phase noise at 1 MHz offset, L(1 MHz), of -108.5 dBc/Hz, and an oscillator figure of merit (FOM) of 183.44 dB. Chip 2 achieves a total DC power consumption of 5.75 µW. The system has a dynamic range of 0.15 nA to 100 nA at 10-bit resolution. The integral non-linearity (INL) and differential non-linearity (DNL) of the IADC were measured to be -6 LSB and ±0.3 LSB, respectively, with a conversion time of 65.56 ms.
This work achieves the best duty-cycled DC power consumption compared to similar glucose monitoring systems, while providing sufficient performance and range using Bluetooth.

Item 5G Fixed Wireless Access for Bridging the Rural Digital Divide (University of Waterloo, 2021-09-22) Lappalainen, Andrew
Despite the ubiquitous level of mobile and fixed broadband (FB) connectivity that exists for many people today, the availability of high-quality FB services in rural communities is generally much lower than in urban communities, which has led to a digital divide. At the same time, rural communities in Canada have a high level of 4G LTE coverage, and the mobile digital divide between urban and rural communities is much smaller than the FB divide. Traditionally, FB and mobile services were offered over separate technologies by different operators and evolved separately from one another. Recently, however, a convergence between mobile and FB has started to emerge via 4G Fixed Wireless Access (FWA), which has made it possible to take advantage of the high level of cellular coverage in rural communities to offer (limited) FB at lower cost than traditional wired FB. To bridge the digital divide, rural FWA must be able to provide the same end-to-end experience as urban FB. In this regard, 4G FWA has been inadequate; however, the recent emergence of 5G, which brings new spectrum, a more efficient radio interface, and multi-user massive MIMO, can make a difference. In the first half of this thesis, we outline a vision for how 5G could close the rural connectivity gap by truly enabling FWA in rural regions. We examine new and upcoming improvements to each area of the 5G network architecture and how they can benefit rural users. Despite those advancements, 5G operators will face a number of challenges in planning and operating rural FWA networks, so we also draw attention to a number of open research challenges that will need to be addressed. In the latter half of this thesis, we study the planning of a rural 5G multi-user massive MIMO FWA TDD system to offer fixed broadband service to homes. Specifically, we aim to determine the user limit, i.e., the maximum number of homes that can simultaneously receive a target minimum bit rate (MBR) on the downlink (DL) and a target MBR on the uplink (UL), given a set of network resources (e.g., bandwidth, power, antennas) and a given cell radius. To attain that limit, we must understand how resources should be shared between the DL and UL, and how user selection (as well as stream selection, since both the base station (BS) and the homes are multi-antenna), precoding and combining, and power distribution should be performed. To simplify the problem, we use block diagonalization and propose a static user grouping strategy that organizes homes into fixed groups in the DL and UL (we use different groups for the two directions); we then develop a simple process to find the user limit by determining the amount of resources required to give the groups their MBRs. We study the impact of group sizes and show that smaller groups use more streams and enable more homes to receive the MBRs when using a 3.5 GHz band. We then show how the user limit at different cell radii is impacted by the system bandwidth, the number of antennas at the BS and homes, the BS power, and the DL and UL MBRs.
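As a rough illustration of the kind of user-limit arithmetic described above, the back-of-the-envelope sketch below divides the aggregate TDD cell capacity by the per-home MBR targets. It deliberately ignores block diagonalization, user grouping, scheduling, and channel variation, and every number in it is hypothetical rather than taken from the thesis.

```python
# Back-of-the-envelope sketch only; all parameter values below are hypothetical.
import math

def user_limit(bw_hz, dl_fraction, streams, se_bps_per_hz, dl_mbr_bps, ul_mbr_bps):
    """Crude user limit for a TDD multi-user MIMO cell with DL and UL target MBRs."""
    dl_capacity = bw_hz * dl_fraction * streams * se_bps_per_hz
    ul_capacity = bw_hz * (1 - dl_fraction) * streams * se_bps_per_hz
    return min(math.floor(dl_capacity / dl_mbr_bps),
               math.floor(ul_capacity / ul_mbr_bps))

# 100 MHz of spectrum, 70/30 DL/UL split, 8 simultaneous spatial streams,
# 3 bit/s/Hz per stream, 25 Mbit/s down and 5 Mbit/s up per home:
print(user_limit(100e6, dl_fraction=0.7, streams=8,
                 se_bps_per_hz=3.0, dl_mbr_bps=25e6, ul_mbr_bps=5e6))  # 67 homes here
```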
Lastly, we offer insight into how the network could be operated for an arbitrary number of homes.

Item 60 Watts Broadband Push Pull RF Power Amplifier Using LTCC Technology (University of Waterloo, 2013-09-27T15:39:59Z) Jundi, Ayman
The continuous increase in wireless usage places immense pressure on wireless communication systems in terms of increased demand and spectrum scarcity. Service providers have had no choice but to allocate new parts of the spectrum and introduce new communication standards that are more spectrally efficient. Communication is not limited to mobile phones; recently, attention has been given to intelligent transportation systems (ITS), in which cars will be given a significant place in the communication network. Vehicular ad-hoc networks (VANETs) have already been assigned a slice of the spectrum at 5.9 GHz using the IEEE 802.11p standard, also known as Dedicated Short-Range Communication (DSRC); however, this assignment will have limited range and functionality at first, and users are expected to depend on existing mobile wireless channels for some services, such as video streaming and in-car entertainment. Therefore, it is essential to integrate existing mobile wireless communication standards into the skeleton of ITS at launch, and most probably permanently. An investigation of existing communication standards, including wireless local area networks (WLAN), found that frequency bands from 400 MHz up to 6 GHz are being used in various regions around the world. It is also noted that current state-of-the-art transceivers are composed of several transmitter front-ends, each targeting certain bands and standards. However, the more standards that must be supported, the more components must be added and the higher the cost, not to mention the limited space in mobile devices. Multimode multiband (MMMB) transmitters are therefore proposed as a potential solution to the existing redundancy in the number of front-end paths in modern transmitters. Broadband amplifiers are an essential part of any MMMB transmitter, and they are also among the most challenging components, especially for high-power requirements. This work explains why single-ended topologies with efficiencies higher than 50% have a fundamental bandwidth limit such that the highest frequency of operation must be lower than twice the lowest frequency of operation. Hence, the push-pull amplifier topology is proposed, as it was found to have inherent broadband capabilities exceeding those of other topologies with comparable efficiency. The major advantage of push-pull power amplifiers is their ability to isolate the even harmonics present in the even-mode operation of a push-pull amplifier from the less critical odd-mode harmonics and the fundamental frequency. This separation between even and odd signals comes from the inclusion of a balun at the output of push-pull amplifiers, and it makes it possible to operate amplifiers beyond the existing limit of single-ended power amplifiers.
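The octave limit mentioned above follows from simple harmonic bookkeeping: once a band spans more than an octave, the second harmonic of the lowest in-band frequency falls back inside the band, which defeats harmonic-terminated single-ended operation. The minimal check below illustrates the arithmetic; the 0.7-1.3 GHz band is a made-up counterexample, while 0.4-6 GHz echoes the range cited in the abstract.

```python
# Minimal illustration of the octave-bandwidth argument (not code from the thesis).
def second_harmonic_in_band(f_low_hz: float, f_high_hz: float) -> bool:
    """True if 2*f_low falls inside the band, i.e. the band spans an octave or more."""
    return 2 * f_low_hz <= f_high_hz

print(second_harmonic_in_band(0.7e9, 1.3e9))  # False: under an octave, harmonics stay out of band
print(second_harmonic_in_band(0.4e9, 6.0e9))  # True: the 400 MHz - 6 GHz target spans many octaves
```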
To prove the concept, several baluns were designed and tested, and different topologies were compared in terms of balance, bandwidth, and odd- and even-mode performance; moreover, to illustrate the concept, a push-pull power amplifier was implemented using multilayer Low Temperature Co-fired Ceramic (LTCC) technology with a bandwidth ratio of more than 100%.

Item A Novel PLC Front-haul for 5G IoT Indoor Communication using Split C-RAN Architecture (University of Waterloo, 2024-08-23) Ibrahim, Mai
The demand for efficient telecommunications in the era of Fifth Generation (5G) and Internet of Things (IoT) networks necessitates innovative approaches to network architecture and communication technologies. Recently, the split Centralized Radio Access Network (C-RAN) architecture, characterized by a Central Unit (CU), geographically dispersed Distributed Units (DUs), and indoor Radio Units (RUs), has presented opportunities for optimizing communication links in indoor environments. Yet the adaptation of this architecture to enable massive indoor IoT applications is still deemed inefficient due to the associated deployment cost. Accordingly, this research investigates Power-Line Communication (PLC) as a cost-efficient alternative solution for the C-RAN front-haul. Specifically, the focus is on exploring the utilization of indoor low-voltage power lines in the context of 5G New Radio (NR) indoor IoT applications. First, to ensure that standard protocols such as the Common Public Radio Interface (CPRI) and the Enhanced Common Public Radio Interface (eCPRI) can run over PLC, we introduce two novel patented components to the architecture, namely the CPRI-PLC-Gateway (CPG) and the Enhanced CPRI-PLC-Gateway (eCPG). These are plug-and-play components that come in pairs. They are used to create a virtual PLC front-haul link that ensures transparent transportation of unmodified CPRI or eCPRI frames between the DU and the RUs, even under challenging PLC channel conditions. As such, they set the foundation for optimizing the PLC front-haul and help resolve various challenges, including PLC's time-varying nature and its susceptibility to additive white Gaussian noise (AWGN). Furthermore, the investigation is extended to study the impact of the proposed PLC-based split C-RAN system in the context of the radio access network (RAN). For that, an indoor multi-story service building that houses a large number of air-interfaced IoT devices is considered. To ensure that the reported results apply to real-life applications, we consider a PLC network that encompasses typical indoor low-voltage 3-phase power lines and follows the TN-S earthing configuration. Accordingly, it is shown that, through the incorporation of the CPG and eCPG components, the implementation of in-band full-duplex (IBFD) communication over the multiple-input multiple-output (MIMO) PLC channel, and the integration of hybrid-circuit-based isolation, the system can support a considerable number of air-interfaced IoT devices at standardized rates. It is also shown that the self-interference over the power-line segment is mitigated, which ensures robust bidirectional communication in the system. Moreover, a significant aspect of the thesis revolves around conducting a comprehensive performance analysis of the proposed PLC front-haul for indoor IoT communications. Mathematical models, rooted in queuing theory, Markov modelling, and stochastic geometry, are developed to assess the end-to-end delay performance of the indoor front-haul solution.
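As a generic illustration of the kind of queueing-based delay analysis referred to above, the sketch below computes the mean sojourn time of the simplest queueing building block, an M/M/1 queue. It is not one of the thesis's models, and the frame rates are hypothetical.

```python
# Generic M/M/1 illustration only; the rates below are hypothetical.
def mm1_mean_delay(arrival_rate: float, service_rate: float) -> float:
    """Mean time in system W = 1 / (mu - lambda), valid only for lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Front-haul frames arriving at 8,000 frames/s and served at 10,000 frames/s:
print(mm1_mean_delay(8_000, 10_000) * 1e6, "microseconds")  # about 500 microseconds
```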
Analytical expressions are derived for various performance metrics, including radio coverage probability, the number of served devices, and system delay. Wireless spatial models, path-loss models, and interference considerations are analysed in terms of multiple factors, such as the number of wireless IoT devices, the radio and PLC bandwidth, and the transmission technology, with regard to the delay performance of the proposed system. These models are rigorously validated through extensive simulations, demonstrating compliance with stringent 5G, CPRI, and eCPRI bit error rate (BER) and delay requirements. Last, the thesis examines the optimization challenge of maximizing throughput in a split-RAN system that includes a PLC front-haul link within a multi-story building. The goal is to maximize the number of satisfied IoT devices while simultaneously meeting their quality-of-service (QoS) criteria. The optimization problem is defined as a mixed-integer non-convex problem with several objectives: maximizing the number of satisfied devices, minimizing operating cost, minimizing device transmit power, and minimizing PLC access delay. The thesis further explores the application of an Evolutionary Multi-Objective Optimization (EMO) algorithm, specifically the Non-dominated Sorting Genetic Algorithm II (NSGA-II), to address the conflicting objectives. The method operates by systematically generating successive iterations of solutions using tournament selection, single-point binary and simulated binary crossovers, and polynomial mutation operators. The results present a Pareto front of non-dominated solutions for the multi-objective optimization (MOO) problem, showing the trade-offs between the system objectives.

Item A Quantum Repeater Sandbox with Warm Atomic Memories and Quantum Dot Photon Sources (University of Waterloo, 2024-09-23) Li, Michael
Quantum communication is known to offer many advantages, including opportunities for distributed quantum computing and more secure information transfer through quantum key distribution. This thesis provides background on how a quantum communication network can be achieved using quantum repeaters and how these components can be constructed with a hybrid system involving a quantum dot source and a warm atomic memory. It also presents three experimental projects to realize critical components of the repeater: (1) characterization and tuning of the quantum dot photon source; (2) a compact, 3D-printed opto-mechanical laser-locking board; and (3) optical memory observed in room-temperature Cs vapor cells with an EIT memory scheme. These projects have built the basic foundation for creating a repeater node using room-temperature vapor cells and open the door to future investigations of warm-cell experiments.

Item a-Si:H-Silicon Hybrid Low Energy X-ray Detector (University of Waterloo, 2014-09-15) Shin, Kyung-Wook
Low-energy X-ray (< 20 keV) detection is a key technological requirement in applications such as protein crystallography and diffraction imaging. Silicon-based optical cameras built on CCDs or CMOS imaging chips and coupled to X-ray conversion scintillators have become a mainstay in the field. They are attractive because of their fast readout capability and the ease of integrated-circuit implementation afforded by modern semiconductor fabrication technology.
More recently, hydrogenated amorphous silicon (a-Si:H) thin-film technology, which enabled a huge influx of large-area display products into the commercial display market, has been introduced to digital imaging in the form of active-matrix flat-panel imagers (AMFPIs). Although thin-film technology can enable large-area X-ray imaging at potentially lower cost, the existing technology lacks the spatial resolution required for higher-performance crystallography and diffraction imaging applications. This work introduces a high-resolution, direct-conversion silicon X-ray detector integrated with large-area thin-film silicon technology for sub-20 keV X-ray imagers. A prototype pixel was fabricated in-house in the G2N fabrication facility using plasma-enhanced chemical vapor deposition (PECVD), reactive ion etching (RIE), photolithography, and metal sputtering. Unlike in most active-matrix display products, top-gate staggered a-Si:H thin-film transistors (TFTs) were implemented to take advantage of a novel thin-film silicon pixel amplification device architecture. The detector performance was evaluated with an iron-55 isotope gamma-ray source to mimic low-energy X-ray exposure. I-V and C-V measurements indicate that the hybrid pixel functions as expected and is promising for low-cost, high-resolution, large-area X-ray imaging (< 20 keV) applications. We also performed a noise-spectrum investigation to estimate the lowest detectable signal level and proposed a model rooted in device physics for the pixel output and gain.

Item Abstraction and Refinement Techniques for Ternary Symbolic Simulation with Guard-value Encoding (University of Waterloo, 2022-05-20) Yang, Bo
We propose a novel encoding called guard-value encoding for the ternary domain {0, 1, X}. Among its advantages over the more conventional dual-rail encoding, the flexibility of representing X with either <0, 0> or <0, 1> is especially important. We develop data abstraction and memory abstraction techniques based on the guard-value encoding. Our data abstraction reduces much more of the state space than conventional ternary abstraction's approach of over-approximating a set of Boolean values with a smaller set of ternary values. We also show how our data abstraction can enable bit-width reduction, which helps further simplify verification problems. Our memory abstraction is applicable to any array of elements, which makes it much more general than existing memory abstraction techniques; we show how it can effectively reduce an array to just a few elements even when existing approaches are not applicable. We make extensive use of symbolic indexing to construct symbolic ternary values, which are used in symbolic simulation. Lastly, we give a new perspective on refinement for ternary abstraction. Refinement is needed when too much information is lost through the use of the ternary domain, such that the property evaluates to the unknown X. We present a collection of new refinement approaches that distinguish themselves from existing ones by modifying the transition function instead of the initial ternary state and ternary stimulus. In this way, our refinement either preserves the abstraction level or only degrades it slightly. We demonstrate the proposed techniques on a wide range of designs and properties.
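To make the guard-value idea described above concrete, the sketch below encodes each ternary value as a pair (g, v), where g = 1 means the value is the known bit v and g = 0 means X, so both (0, 0) and (0, 1) encode X. The ternary AND shown is a generic Kleene-logic illustration, not code from the thesis.

```python
# Illustrative sketch of the guard-value encoding, not code from the thesis.
ZERO, ONE, X = (1, 0), (1, 1), (0, 0)   # (0, 1) is an equally valid encoding of X

def t_and(a, b):
    """Ternary AND over guard-value pairs (g, v)."""
    ga, va = a
    gb, vb = b
    if (ga and va == 0) or (gb and vb == 0):
        return (1, 0)          # a known 0 on either input forces a known 0 output
    if ga and gb:
        return (1, va & vb)    # both inputs known
    return (0, 0)              # otherwise the result is X (value bit is a don't-care)

print(t_and(ZERO, X))  # (1, 0): still determined even though one input is unknown
print(t_and(ONE, X))   # (0, 0): unknown
```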
With data abstraction, we usually observe at least a 10X improvement in verification time compared to Boolean verification algorithms such as Boolean Bounded Model Checking (BMC), as well as usually at least a 2X, and often a 10X, improvement over conventional ternary abstraction. Our memory abstraction significantly improves how the verification time scales with the design parameters and the depth (the number of cycles) of the verification. Our refinement approaches are also demonstrated to be much better than existing ones most of the time. For example, when verifying a property of a synthetic example based on a superscalar microprocessor's bypass paths, verification with our data abstraction takes 505 seconds, while both conventional ternary abstraction and BMC time out at 1800 seconds. Bit-width reduction can save a further 44 seconds, and our memory abstraction can save 237 seconds. This verification problem requires refinement; if we substitute our refinement with an existing approach, the verification time with data abstraction doubles.

Item Abstraction Mechanism on Neural Machine Translation Models for Automated Program Repair (University of Waterloo, 2019-09-23) Wei, Moshi
Bug fixing is a time-consuming task in software development, and automated bug repair tools are created to fix programs with little or no human effort. Many existing tools are based on the generate-and-validate (G&V) approach, an automated program repair technique that generates a list of repair candidates and then selects the correct candidates as output. Another approach is to learn the repair process with machine learning models and then generate the candidates. One machine-learning-based approach is the end-to-end approach, which passes the input source code directly to the machine learning model and generates the repair candidates as source code. There are several challenges in this approach, such as the large vocabulary size, the high rate of out-of-vocabulary (OOV) tokens, and difficulties in embedding learning. We propose an abstraction-and-reconstruction technique on top of end-to-end approaches that converts the training source code to templates in order to alleviate the problems of the traditional end-to-end approach. We train the machine learning model with abstracted bug-fix pairs from open-source projects; the abstraction process converts the source code to templates before passing it to the model. After training is complete, we use the trained model to predict the fix templates of new bugs, and the output of the model is passed to the reconstruction layer to obtain the source-code patch candidates. We train the machine learning model with 470,085 bug-fix pairs collected from the top 1000 Python projects on GitHub and use the QuixBugs dataset as the test set to evaluate the results; the fix of each bug in QuixBugs is verified by the test cases provided with the dataset. We choose the traditional end-to-end approach as the baseline and compare it with the abstraction model. The accuracy of generating correct bug fixes increases from 25% to 57.5%, while the training time is reduced from 5.7 hours to 1.63 hours. The overhead introduced by the reconstruction model is 218 milliseconds on average, or 23.32%, which is negligible compared to the time saved in training, 4.07 hours or 71.4%. We performed a deep analysis of the results and identified three reasons that may explain why the abstraction model outperforms the baseline.
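The abstraction-and-reconstruction idea described above can be illustrated with a toy identifier-renaming pass: map each identifier to a placeholder before training, keep the mapping, and invert it after prediction. This is a deliberately simplified sketch, not the thesis's pipeline; the placeholder scheme and the tiny keyword list are assumptions for the example.

```python
# Toy abstraction/reconstruction sketch (not the thesis's implementation).
import re

PY_KEYWORDS = {"def", "return", "if", "else", "for", "while", "in",
               "not", "and", "or", "None", "True", "False"}

def abstract_code(src: str):
    """Replace identifiers with VAR_i placeholders; keep the mapping for reconstruction."""
    mapping, out = {}, []
    for tok in re.findall(r"[A-Za-z_]\w*|\d+|\S", src):
        if re.match(r"[A-Za-z_]\w*$", tok) and tok not in PY_KEYWORDS:
            mapping.setdefault(tok, f"VAR_{len(mapping)}")
            out.append(mapping[tok])
        else:
            out.append(tok)
    return " ".join(out), mapping

def reconstruct(template: str, mapping: dict) -> str:
    rev = {v: k for k, v in mapping.items()}
    return " ".join(rev.get(t, t) for t in template.split())

tpl, m = abstract_code("if count < limit: count = count + 1")
print(tpl)                   # if VAR_0 < VAR_1 : VAR_0 = VAR_0 + 1
print(reconstruct(tpl, m))   # if count < limit : count = count + 1
```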
Compared to existing works, our approach includes a complete reconstruction process that converts templates back into source code. It shows that adding a layer of abstraction increases the accuracy and reduces the training time of machine-learning-based automated bug repair tools.

Item Accelerating Mixed-Abstraction SystemC Models on Multi-Core CPUs and GPUs (University of Waterloo, 2014-04-28) Kaushik, Anirudh Mohan
Functional verification is a critical part of the hardware design cycle and accounts for nearly two-thirds of the overall development time. With the increasing complexity of hardware designs and shrinking time-to-market constraints, the time and resources spent on functional verification have increased considerably. To mitigate the increasing cost of functional verification, researchers in industry and academia have proposed techniques for improving the simulation of hardware designs, which is a key technique used in the functional verification process. However, the proposed techniques for accelerating the simulation of hardware designs do not leverage the performance benefits offered by the multiprocessor/multi-core and heterogeneous processors available today. With the growing ubiquity of powerful heterogeneous computing systems, which integrate multi-processor/multi-core systems with heterogeneous processors such as GPUs, it is important to utilize these computing systems to address the functional verification bottleneck. In this thesis, I propose a technique for accelerating SystemC simulations across multi-core CPUs and GPUs. In particular, I focus on accelerating the simulation of SystemC models that are described at both the Register-Transfer Level (RTL) and Transaction Level (TL) abstractions. The main contributions of this thesis are: 1) a methodology for accelerating the simulation of mixed-abstraction SystemC models defined at the RTL and TL abstractions on multi-core CPUs and GPUs, and 2) an open-source static framework for parsing, analyzing, and performing source-to-source translation of identified portions of a SystemC model for execution on multi-core CPUs and GPUs.

Item Access Network Selection in a 4G Networking Environment (University of Waterloo, 2008-01-18T21:08:16Z) Liu, Yang
An all-IP pervasive networking system provides a comprehensive IP solution in which voice, data, and streamed multimedia can be delivered to users anytime and anywhere. Network selection is a key issue in this converged, heterogeneous networking environment. A traditional way to select a target network is based only on the received signal strength (RSS); however, this is not comprehensive enough to meet the various demands of different multimedia applications and different users. Although some existing schemes have considered multiple criteria (e.g., QoS, security, connection cost) for access network selection, several problems remain unsettled or are not solved satisfactorily. In this thesis, we propose a novel model to handle the network selection issue. Firstly, we take advantage of IEEE 802.21 to obtain information about neighboring networks and then classify the information into two categories: 1) compensatory information and 2) non-compensatory information; secondly, we use the non-compensatory information to sort out the capable networks as candidates.
If a neighboring network satisfies all the requirements of the non-compensatory criteria, the checking of the compensatory information is then triggered; thirdly, taking the values of the compensatory information as input, we propose a hybrid ANP and RTOPSIS model to rank the candidate networks. ANP elicits weights for the compensatory criteria and eliminates the impact of interdependence among them, while RTOPSIS resolves the rank-reversal problem that occurs in some multiple-criteria decision-making (MCDM) algorithms such as AHP, TOPSIS, and ELECTRE. The evaluation study verifies the usability and validity of the proposed network selection method. Furthermore, a comparison study with a TOPSIS-based algorithm shows the advantage and superiority of the proposed RTOPSIS-based model.
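For readers unfamiliar with this family of ranking methods, the sketch below shows a plain TOPSIS ranking of candidate networks on compensatory criteria. It is a generic illustration, not the thesis's hybrid ANP/RTOPSIS model, and the candidate values and weights are made up.

```python
# Generic TOPSIS illustration (not the thesis's ANP/RTOPSIS model); data is made up.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: candidates x criteria; benefit[j] is True if larger is better for criterion j."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)          # vector-normalize each criterion
    v = norm * np.asarray(weights, dtype=float)   # apply criterion weights
    best  = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best  = np.linalg.norm(v - best,  axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)           # closeness score: higher is better

# Criteria: bandwidth (benefit), delay (cost), cost per MB (cost)
candidates = [[54, 40, 0.05],    # WLAN
              [14, 60, 0.10],    # cellular network A
              [100, 20, 0.20]]   # cellular network B
scores = topsis(candidates, weights=[0.5, 0.3, 0.2], benefit=[True, False, False])
print(scores.argsort()[::-1])    # candidate indices ranked from best to worst
```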
Item Accommodating a High Penetration of PHEVs and PV Electricity in Residential Distribution Systems (University of Waterloo, 2015-04-22) ElNozahy, Mohamed
Global warming is threatening the world's delicate ecosystems to the point where the extinction of numerous species is becoming increasingly likely. Experts have determined that avoiding such a disaster requires an 80% reduction from the 1990 levels of global greenhouse gas emissions by 2050. The problem has been exacerbated by the booming demand for electrical energy. This situation creates a complex dilemma: on the one hand, energy-sector emissions must be decreased; on the other, electrical energy production must be increased to meet the growing demand. The use of renewable, emission-free sources of electrical energy offers a feasible solution to this dilemma. Solar energy in particular, if properly utilized, would be an effective means of meeting worldwide electricity needs. Another viable component of the solution is to replace gasoline-powered vehicles with plug-in hybrid electric vehicles (PHEVs), because of their potential for significantly reducing greenhouse gas emissions from the transportation sector. It was once believed that integrating solar electricity into distribution systems would be relatively straightforward; however, as the penetration level of photovoltaic (PV) systems began to increase, power utilities faced new and unexpected problems, which arose primarily from the weak chronological coincidence between PV array production and the system peak demand. PV arrays produce their peak output at noon, during low-demand periods, resulting in instances when the net PV production exceeds the net system demand. Power then flows from low-voltage (LV) to medium-voltage (MV) networks. Such reverse power flow results in significant overvoltages along distribution feeders and excessive power losses. For PHEVs, the situation is the direct opposite, because peak demand periods coincide closely with the hours during which the majority of vehicles are parked at residences and are thus probably being charged. This coincidence causes substantial distribution-equipment overloading, hence requiring costly system upgrades. Although extensive research has been conducted with respect to the individual impacts of PV electricity and PHEVs on distribution networks, far too little attention has been paid to the interaction between these two technologies or the resulting aggregated impacts when both operate in parallel. The goal of the research presented in this thesis is to fill this gap by developing a comprehensive benchmark that can be used to analyze the performance of the distribution system under a high penetration of both PV systems and PHEVs.
However, the uncertainties associated with existing electrical loads, the PHEV charging demand, and the PV array output complicate the achievement of this goal and necessitate the development of accurate probabilistic models to express them. The establishment of such models and their use in the development of the proposed benchmark represent core contributions of the research presented in this thesis. Assessing the anticipated impacts of PHEVs and PV electricity on distribution systems is not the only challenge confronting the electricity sector. Another issue that has been tackled by numerous researchers is the formulation of solutions that will facilitate the integration of both technologies into existing networks. The work conducted for this thesis presents two different solutions that address this challenge: a traditional one involving the use of energy storage systems (ESSs), and an innovative one that hinges on a futuristic, novel bilayer (AC-DC) distribution system architecture. In the first solution, the author proposes using ESSs as a possible means of mitigating the aggregated impacts of both PV electricity and PHEVs. This goal can be achieved by storing the PV electricity generated during low-demand periods, when reverse power flow is most likely to occur, in small-scale dispersed ESSs located at secondary distribution transformers. This energy is then reused to meet part of the PHEV charging demand during peak periods, when this demand is most likely to overload distribution equipment. While this solution would kill two birds with one stone, the uncertainties inherent in the system make its implementation difficult. In this respect, a significant contribution of the work presented in this thesis is the use of the previously developed probabilistic benchmark to determine the appropriate sizes, locations, and operating schedules of the proposed ESSs, taking into account the different sources of uncertainty in the system. In the second solution, the author proposes a novel bilayer (AC-DC) architecture for residential distribution systems. With the proposed architecture, the distribution system becomes a bilayer system composed of the traditional AC layer for interfacing with existing system loads, plus an embedded DC layer for interfacing with PV arrays and PHEVs. A centralized bidirectional converter links the two layers and controls the power flow between them. The proposed solution offers a reasonable compromise that enables existing networks to benefit from both AC and DC electricity, thus metaphorically enjoying the best of both worlds. As with the first solution, the uncertainties that characterize the distribution system also create obstacles to the implementation of the proposed architecture. Another important contribution of the research presented in this thesis is the design and validation of the proposed bilayer system, with consideration of these different uncertainties. Finally, the author compares the strengths and weaknesses of both solutions to determine the better alternative.

Item Accommodating a High Penetration of Plug-in Electric Vehicles in Distribution Networks (University of Waterloo, 2014-05-12) Shaaban, Mostafa
The last few decades have seen growing concern about climate change caused by global warming, and it now seems that the very future of humanity depends on saving the environment. With the recognition of CO2 emissions as the primary cause of global warming, their reduction has become critically important.
An effective method of achieving this goal is to focus on the sectors that contribute most to these emissions: electricity generation and transportation. For these reasons, the goal of the work presented in this thesis was to address the challenges associated with accommodating a high penetration of plug-in electric vehicles (PEVs) in combination with renewable energy sources. Every utility must consider how to manage the challenges created by PEVs. The current structure of distribution systems is capable of accommodating low PEV penetration; however, high penetration (20% to 60%) is expected over the next decades due to the accelerated growth in both the PEV market and emission-reduction plans. The energy consumed by such a high penetration of PEVs is expected to add considerable loading to distribution networks, with consequences such as thermal overloading, higher losses, and equipment degradation. A further consideration is that renewable energy resources, which are neither exhaustible nor polluting, currently offer the only clean-energy option and should thus be utilized in place of conventional sources in order to supply the additional transportation-related demand; otherwise, PEV technology would merely transfer emissions from the transportation sector to the electricity generation sector. As a means of facilitating the accommodation of high PEV penetration, this thesis proposes methodologies focused on two main themes: uncontrolled and coordinated charging. For uncontrolled charging, which represents current grid conditions, the proposal is to utilize dispatchable and renewable distributed generation (DG) units to address the high PEV penetration in a way that would not be counterproductive. This objective is achieved through three main steps. First, the benefits of allocating renewable DG in distribution systems are investigated, with different methodologies developed for their evaluation. The benefits are defined as the deferral of system-upgrade investments, the reduction in energy losses, and the improvement in reliability. The research also includes a proposal for applying the developed methodologies to assess the benefits of renewable DG in a planning approach for the optimal allocation of the DG units. The second step involves the development of a novel probabilistic energy-consumption model for uncontrolled PEV charging, which takes into consideration driver behavior and the ambient-temperature effects associated with vehicle usage. The final step integrates the approaches and models developed in the previous two steps: a long-term dynamic planning approach is developed for the optimal allocation of renewable and dispatchable DG units in order to accommodate the rising penetration of uncontrolled PEV charging. The proposed planning approach is multi-objective and takes system emissions and costs into consideration. The second theme addressed in this thesis is coordinated PEV charging, which depends on the ongoing development of a smart-grid communication infrastructure in which vehicle-grid communication is feasible via appropriate communication pathways. This part of the work led to a proposed coordinated-charging architecture that can efficiently improve the performance of real-time coordinated PEV charging in the smart grid. The architecture is composed of two novel units: a prediction unit and an optimization unit.
The prediction unit provides an accurate forecast of future PEV power demand, and the optimization unit generates optimal coordinated charging/discharging decisions that maximize service reliability, minimize operating costs, and satisfy system constraints.
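To give a flavour of what a coordinated-charging optimizer does, the sketch below greedily places each charging increment in the hour where the forecast feeder load is currently lowest, so the PEV demand fills the overnight valley instead of the evening peak. This is a generic valley-filling illustration, not the thesis's prediction/optimization architecture, and the load profile, charger rating, and energy requirement are hypothetical.

```python
# Generic valley-filling illustration only; all numbers below are hypothetical.
def schedule_charging(base_load_kw, energy_needed_kwh, step_kwh=1.0, charger_kw=6.6):
    load = list(base_load_kw)                  # forecast feeder load per hour (kW)
    pev = [0.0] * len(load)                    # PEV charging power added per hour (kW)
    remaining = energy_needed_kwh
    while remaining > 0:
        # hours where the per-hour charging cap has not yet been reached
        hours = [h for h in range(len(load)) if pev[h] + step_kwh <= charger_kw]
        h = min(hours, key=lambda i: load[i])  # lowest-loaded feasible hour
        load[h] += step_kwh                    # 1 kWh over 1 h adds 1 kW in that hour
        pev[h] += step_kwh
        remaining -= step_kwh
    return pev

base = [300, 280, 260, 250, 255, 270, 320, 400, 450, 430, 420, 410,
        415, 420, 430, 450, 500, 560, 580, 550, 500, 430, 380, 330]
print(schedule_charging(base, energy_needed_kwh=40))  # charging lands in the overnight hours
```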
Item Accounting for the Effects of Power System Controllers and Stability on Power Dispatch and Electricity Market Prices (University of Waterloo, 2005) Kodsi, Sameh
Recently, the widespread use of power system controllers, such as PSS and FACTS controllers, has led to analyses of their effect on the overall stability of power systems. Many studies have been conducted to allocate FACTS controllers so that they achieve optimal power flow conditions in the context of Optimal Power Flow (OPF) analysis. However, these studies usually do not examine the effect of these controllers on the voltage and angle stability of the entire system, even though the types of these controllers and their control signals, such as reactive power, current, or voltage, have a significant effect on overall system stability.
Due to the recent transition from government-controlled to deregulated electricity markets, the relationship between power system controllers and electricity markets has gained a new dimension, as the effect of these controllers on overall power system stability must now be seen from an economic point of view. Studying the effect of adding and tuning these controllers on the pricing of electricity within the context of electricity markets is a significant and novel research area. Specifically, the link among stability, FACTS controllers, and electricity pricing should be appropriately studied and modelled.
Consequently, this thesis focuses on proposing and describing a novel OPF technique that includes a new stability constraint. This technique is compared with existing OPF techniques, demonstrating that it provides appropriate modelling of system controllers and thus a better understanding of their effects on system stability and energy pricing. The proposed OPF technique offers a new methodology for pricing the dynamic services provided by the system's controllers. Moreover, the new OPF technique can be used to develop a novel tuning methodology for PSS and FACTS controllers that optimizes power dispatch and price levels while guaranteeing an adequate level of system security. All tests and comparisons are illustrated using 3-bus and 14-bus benchmark systems.

Item Achievable Rate Regions of Two-Way Relay Channels (University of Waterloo, 2017-01-19) Dong, Liang
With the fast development of communication networks, cooperative communication has become widely used in many different fields, such as satellite networks, broadcast networks, and the Internet. Relay channels have therefore played a pivotal role since they were first defined by van der Meulen. However, the general achievable rate region of relay channels is still not fully characterized, which continues to motivate research in this area. Several coding schemes have been proposed since relay channels were introduced; the two most commonly used coding strategies are decode-and-forward and compress-and-forward. In this thesis, we provide a way to obtain the achievable rate region for two-way relay channels using decode-and-forward coding. Building on basic information theory and network information theory, we focus our study on the achievable rates of relay channels. Most previous studies of relay channels aim to find a more general achievable rate region. In this thesis, an intuitive approach is used to study four-terminal relay channels. This method makes good use of known results for three-terminal relay channels by separating a four-terminal relay channel into two parts: (1) a three-terminal relay channel and (2) a common end node. The final achievable rate region is obtained by combining the separate achievable rates of the two parts. Splitting the complex model into two simpler ones may also help in studying more complicated channels. Eliminating interference is another difficulty in the study of relay channels. By comparing with achievable rate regions of two-way two-relay channels that have already been established, we found that it is feasible to separate a two-way two-relay channel into a three-terminal relay channel and a common end node. We therefore apply this method to all two-way four-terminal relay channels, and after fixing two different source nodes, all of the possible transmission schemes are presented in this thesis. However, not all four-terminal channels can be separated into two parts. By studying the schemes that cannot be decomposed into a three-terminal relay channel and a common end node, we found that these schemes are infeasible for message transmission. Thus, our method can still be used to study all feasible two-way relay channels.
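For reference, the classical decode-and-forward achievable rate for the three-terminal relay channel that serves as the building block above is the well-known textbook result of Cover and El Gamal (1979), not a new bound from this thesis:

```latex
% Decode-and-forward rate for the three-terminal relay channel:
% X is the source input, X_1 the relay input, Y_1 the relay observation,
% and Y the destination observation.
R_{\mathrm{DF}} = \max_{p(x,\,x_1)} \; \min\bigl\{\, I(X; Y_1 \mid X_1),\; I(X, X_1; Y) \,\bigr\}
```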