Quantum Compression and Quantum Learning via Information Theory
dc.contributor.author | Bab Hadiashar, Shima | |
dc.date.accessioned | 2020-12-21T20:23:45Z | |
dc.date.available | 2021-04-21T04:50:15Z | |
dc.date.issued | 2020-12-21 | |
dc.date.submitted | 2020-12-21 | |
dc.description.abstract | This thesis consists of two parts: quantum compression and quantum learning theory. A common theme between these problems is that we study them through the lens of information theory. We first study the task of visible compression of an ensemble of quantum states with entanglement assistance in the one-shot setting. The protocols achieving the best compression use far more qubits of shared entanglement than there are qubits in the states of the ensemble. Other compression protocols, with potentially higher communication cost, have entanglement cost bounded by the number of qubits in the given states. This raises the question of whether entanglement is truly necessary for compression and, if so, how much of it is needed. We show that an ensemble given by Jain, Radhakrishnan, and Sen (ICALP'03) cannot be compressed by more than a constant number of qubits without shared entanglement, while in the presence of shared entanglement, the communication cost of compression can be arbitrarily smaller than the entanglement cost. Next, we study the task of quantum state redistribution, the most general version of compression of quantum states. We design a protocol for this task whose communication cost is given in terms of a measure of distance from quantum Markov chains; more precisely, the distance is defined in terms of the quantum max-relative entropy and the quantum hypothesis testing entropy. Our result is the first to connect quantum state redistribution and Markov chains, and it gives an operational interpretation for a possible one-shot analogue of the quantum conditional mutual information. The communication cost of our protocol is lower than that of all previously known protocols and asymptotically achieves the well-known rate given by the quantum conditional mutual information. In the last part, we focus on quantum algorithms for learning Boolean functions from quantum examples. We consider two commonly studied models of learning, namely quantum PAC learning and quantum agnostic learning. We reproduce the optimal lower bounds of Arunachalam and de Wolf (JMLR'18) on the sample complexity of both of these models using information theory and spectral analysis. Our proofs are simpler than the previous ones, and the techniques may extend to similar scenarios. | en
dc.identifier.uri | http://hdl.handle.net/10012/16588 | |
dc.language.iso | en | en |
dc.pending | false | |
dc.publisher | University of Waterloo | en |
dc.subject | quantum information theory | en |
dc.subject | quantum computing | en |
dc.subject | quantum learning theory | en |
dc.subject | quantum compression | en |
dc.subject | quantum state redistribution | en |
dc.subject | quantum Markov chains | en |
dc.subject | quantum conditional mutual information | en |
dc.subject | PAC learning | en |
dc.subject | agnostic learning | en |
dc.subject | sample complexity | en |
dc.subject | entanglement | en |
dc.title | Quantum Compression and Quantum Learning via Information Theory | en |
dc.type | Doctoral Thesis | en |
uws-etd.degree | Doctor of Philosophy | en |
uws-etd.degree.department | Combinatorics and Optimization | en |
uws-etd.degree.discipline | Combinatorics and Optimization (Quantum Information) | en |
uws-etd.degree.grantor | University of Waterloo | en |
uws-etd.embargo.terms | 4 months | en |
uws.contributor.advisor | Nayak, Ashwin | |
uws.contributor.affiliation1 | Faculty of Mathematics | en |
uws.peerReviewStatus | Unreviewed | en |
uws.published.city | Waterloo | en |
uws.published.country | Canada | en |
uws.published.province | Ontario | en |
uws.scholarLevel | Graduate | en |
uws.typeOfResource | Text | en |
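
For reference, a short LaTeX note sketching the standard definitions behind two quantities named in the abstract. The systems A, B, C, R and the state rho are illustrative placeholders, not notation from the thesis; the asymptotic rate shown is the standard Devetak-Yard characterization of quantum state redistribution, which the abstract cites as "the well-known rate".

% Von Neumann entropy of the reduced state on system X:
%   S(X)_\rho = -\mathrm{Tr}(\rho_X \log \rho_X).
% Quantum conditional mutual information of A and C given B:
\[
  I(A;C \mid B)_\rho \;=\; S(AB)_\rho + S(BC)_\rho - S(B)_\rho - S(ABC)_\rho .
\]
% A tripartite state \rho_{ABC} is a quantum Markov chain A--B--C
% exactly when the conditional mutual information vanishes:
\[
  I(A;C \mid B)_\rho \;=\; 0 .
\]
% Asymptotically, redistributing system C (with side information B
% and purifying reference R) is achievable at a quantum communication
% rate of half the conditional mutual information:
\[
  Q \;=\; \tfrac{1}{2}\, I(C;R \mid B)_\rho .
\]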