Show simple item record

dc.contributor.author Bab Hadiashar, Shima
dc.description.abstract This thesis consists of two parts: quantum compression and quantum learning theory. A common theme between these problems is that we study them through the lens of information theory. We first study the task of visible compression of an ensemble of quantum states with entanglement assistance in the one-shot setting. The protocols achieving the best compression use many more qubits of shared entanglement than the number of qubits in the states in the ensemble. Other compression protocols, with potentially higher communication cost, have entanglement cost bounded by the number of qubits in the given states. This motivates the question as to whether entanglement is truly necessary for compression, and if so, how much of it is needed. We show that an ensemble given by Jain, Radhakrishnan, and Sen (ICALP'03) cannot be compressed by more than a constant number of qubits without shared entanglement, while in the presence of shared entanglement, the communication cost of compression can be arbitrarily smaller than the entanglement cost. Next, we study the task of quantum state redistribution, the most general version of compression of quantum states. We design a protocol for this task with communication cost expressed in terms of a measure of distance from quantum Markov chains. More precisely, the distance is defined in terms of quantum max-relative entropy and quantum hypothesis testing entropy. Our result is the first to connect quantum state redistribution and Markov chains, and it gives an operational interpretation for a possible one-shot analogue of quantum conditional mutual information. The communication cost of our protocol is lower than all previously known ones and asymptotically achieves the well-known rate of quantum conditional mutual information. In the last part, we focus on quantum algorithms for learning Boolean functions using quantum examples.
We consider two commonly studied models of learning, namely, quantum PAC learning and quantum agnostic learning. We reproduce the optimal lower bounds of Arunachalam and de Wolf (JMLR'18) for the sample complexity of both models using information theory and spectral analysis. Our proofs are simpler than the previous ones, and the techniques can possibly be extended to similar scenarios.
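As a reader aid (not part of the thesis record itself), the information-theoretic quantities named in the abstract have the following standard definitions, stated here for a tripartite state \(\rho_{ABC}\) and positive semidefinite operators \(\rho, \sigma\):

```latex
% Von Neumann entropy of the reduced state \rho_X:
%   S(X)_\rho = -\operatorname{Tr}(\rho_X \log \rho_X).
% Quantum conditional mutual information of \rho_{ABC}:
\[
  I(A;B \mid C)_\rho \;=\; S(AC)_\rho + S(BC)_\rho - S(ABC)_\rho - S(C)_\rho .
\]
% \rho_{ABC} is a quantum Markov chain (in the order A - C - B)
% precisely when I(A;B \mid C)_\rho = 0.
%
% Quantum max-relative entropy:
\[
  D_{\max}(\rho \,\|\, \sigma) \;=\; \min\bigl\{\lambda \in \mathbb{R} : \rho \le 2^{\lambda}\sigma\bigr\}.
\]
% Quantum hypothesis testing relative entropy, with error parameter \varepsilon:
\[
  D_H^{\varepsilon}(\rho \,\|\, \sigma) \;=\;
  -\log \min\bigl\{\operatorname{Tr}(\Lambda\sigma) :
  0 \le \Lambda \le \mathbb{1},\ \operatorname{Tr}(\Lambda\rho) \ge 1-\varepsilon\bigr\}.
\]
```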
dc.publisher University of Waterloo
dc.subject quantum information theory
dc.subject quantum computing
dc.subject quantum learning theory
dc.subject quantum compression
dc.subject quantum state redistribution
dc.subject quantum Markov chains
dc.subject quantum conditional mutual information
dc.subject PAC learning
dc.subject agnostic learning
dc.subject sample complexity
dc.title Quantum Compression and Quantum Learning via Information Theory
dc.type Doctoral Thesis
dc.pending false
uws-etd.degree.department …and Optimization
uws-etd.degree.discipline …and Optimization (Quantum Information)
uws-etd.degree.grantor University of Waterloo
uws-etd.degree Doctor of Philosophy
uws-etd.embargo.terms 4 months
uws.contributor.advisor Nayak, Ashwin
uws.contributor.affiliation1 Faculty of Mathematics


University of Waterloo Library
200 University Avenue West
Waterloo, Ontario, Canada N2L 3G1
519 888 4883

All items in UWSpace are protected by copyright, with all rights reserved.
