Quantum Compression and Quantum Learning via Information Theory
This thesis consists of two parts: quantum compression and quantum learning theory. A common theme is that we study both problems through the lens of information theory.

We first study visible compression of an ensemble of quantum states with entanglement assistance in the one-shot setting. The protocols achieving the best compression use many more qubits of shared entanglement than the number of qubits in the states of the ensemble, while other protocols, with potentially higher communication cost, have entanglement cost bounded by the number of qubits in the given states. This raises the question of whether entanglement is truly necessary for compression and, if so, how much of it is needed. We show that an ensemble given by Jain, Radhakrishnan, and Sen (ICALP'03) cannot be compressed by more than a constant number of qubits without shared entanglement, whereas in the presence of shared entanglement, the communication cost of compression can be arbitrarily smaller than the entanglement cost.

Next, we study quantum state redistribution, the most general version of compression of quantum states. We design a protocol for this task whose communication cost is expressed in terms of a measure of distance from quantum Markov chains; more precisely, the distance is defined via the quantum max-relative entropy and the quantum hypothesis testing entropy. Our result is the first to connect quantum state redistribution with Markov chains, and it gives an operational interpretation of a possible one-shot analogue of the quantum conditional mutual information. The communication cost of our protocol is lower than that of all previously known protocols and asymptotically achieves the well-known rate given by the quantum conditional mutual information.

In the last part, we focus on quantum algorithms for learning Boolean functions from quantum examples. We consider two commonly studied models of learning, namely quantum PAC learning and quantum agnostic learning. We reproduce the optimal lower bounds of Arunachalam and de Wolf (JMLR'18) on the sample complexity of both models using information theory and spectral analysis. Our proofs are simpler than the previous ones, and the techniques can likely be extended to similar settings.
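For readers unfamiliar with the quantities named above, the standard definitions (textbook conventions, not taken from the thesis itself) are the quantum conditional mutual information and the Markov-chain condition it characterizes:

```latex
% Quantum conditional mutual information of a tripartite state \rho_{ABC},
% written in terms of the von Neumann entropy S(\cdot):
\[
  I(A;B \mid C)_\rho \;=\; S(AC)_\rho + S(BC)_\rho - S(C)_\rho - S(ABC)_\rho .
\]
% A state \rho_{ABC} is a quantum Markov chain A - C - B
% if and only if I(A;B \mid C)_\rho = 0.

% The one-shot quantities mentioned above: the max-relative entropy
\[
  D_{\max}(\rho \,\|\, \sigma) \;=\; \log \min\{\lambda : \rho \le \lambda \sigma\},
\]
% and the hypothesis testing relative entropy (for error parameter \varepsilon)
\[
  D_H^{\varepsilon}(\rho \,\|\, \sigma)
  \;=\; -\log \min\bigl\{\operatorname{Tr}(\Lambda\sigma) :
        0 \le \Lambda \le \mathbb{1},\ \operatorname{Tr}(\Lambda\rho) \ge 1-\varepsilon\bigr\}.
\]
```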
Cite this version of the work
Shima Bab Hadiashar (2020). Quantum Compression and Quantum Learning via Information Theory. UWSpace. http://hdl.handle.net/10012/16588
Showing items related by title, author, creator and subject.
Corona Ugalde, Paulina (University of Waterloo, 2017-09-21) This thesis is concerned with advancing the confrontation between relativistic quantum information (RQI) and experiment. We investigate the lessons that some present-day experiments can teach us about the relationship ...
Topics on the information theoretic limits of quantum information processing and its implementation. Raeisi, Sadegh (University of Waterloo, 2015-02-24) Recent advances in quantum technologies have enabled us to make large quantum states and pushed towards examining quantum theory at the macroscopic level. However, observation of quantum effects at a macroscopic level still ...
Ouyang, Yingkai (University of Waterloo, 2013-05-01) Transmitting quantum information across quantum channels is an important task. However, quantum information is delicate and is easily corrupted. We address the task of protecting quantum information from an information ...