Quantum Compression and Quantum Learning via Information Theory

Date

2020-12-21

Authors

Bab Hadiashar, Shima

Advisor

Nayak, Ashwin

Publisher

University of Waterloo

Abstract

This thesis consists of two parts, quantum compression and quantum learning theory, united by a common theme: we study both through the lens of information theory.

We first study the task of visible compression of an ensemble of quantum states with entanglement assistance in the one-shot setting. The protocols achieving the best known compression use many more qubits of shared entanglement than the number of qubits in the states of the ensemble, while other protocols, with potentially higher communication cost, have entanglement cost bounded by the number of qubits in the given states. This raises the question of whether entanglement is truly necessary for compression and, if so, how much of it is needed. We show that an ensemble given by Jain, Radhakrishnan, and Sen (ICALP'03) cannot be compressed by more than a constant number of qubits without shared entanglement, whereas in the presence of shared entanglement the communication cost of compression can be arbitrarily smaller than the entanglement cost. Next, we study quantum state redistribution, the most general form of compression of quantum states. We design a protocol for this task whose communication cost is expressed in terms of a measure of distance from quantum Markov chains; more precisely, the distance is defined via the quantum max-relative entropy and the quantum hypothesis testing entropy. Our result is the first to connect quantum state redistribution with quantum Markov chains, and it gives an operational interpretation for a possible one-shot analogue of the quantum conditional mutual information. The communication cost of our protocol is lower than that of all previously known protocols and asymptotically achieves the well-known rate given by the quantum conditional mutual information.

In the second part, we focus on quantum algorithms for learning Boolean functions from quantum examples. We consider two commonly studied models of learning, namely quantum PAC learning and quantum agnostic learning. We reproduce the optimal lower bounds of Arunachalam and de Wolf (JMLR'18) on the sample complexity of both models using information theory and spectral analysis. Our proofs are simpler than the previous ones, and the techniques can plausibly be extended to similar scenarios.
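For readers unfamiliar with the entropic quantities named in the abstract, a minimal sketch of their standard definitions follows, in LaTeX; this is textbook material, not reproduced from the thesis. Here \rho_{ABC} denotes a tripartite quantum state and S(\cdot) the von Neumann entropy.

% Quantum conditional mutual information; rho_{ABC} is a quantum
% Markov chain A - B - C exactly when this quantity vanishes.
I(A:C|B)_\rho = S(AB)_\rho + S(BC)_\rho - S(B)_\rho - S(ABC)_\rho

% Quantum max-relative entropy of rho with respect to sigma.
D_{\max}(\rho \,\|\, \sigma) = \log \min \{ \lambda : \rho \le \lambda \sigma \}

% Quantum hypothesis testing relative entropy with error parameter
% epsilon: the optimal type-II error of a test Q that accepts rho
% with probability at least 1 - epsilon.
D_H^{\varepsilon}(\rho \,\|\, \sigma) = -\log \min \{ \mathrm{Tr}(Q\sigma) : 0 \le Q \le \mathbb{1},\ \mathrm{Tr}(Q\rho) \ge 1 - \varepsilon \}

Similarly, the optimal sample-complexity bounds of Arunachalam and de Wolf referenced in the second part are, for a concept class of VC dimension d, accuracy \varepsilon, and confidence 1 - \delta (again the standard statement of their result, not quoted from the thesis):

% Quantum PAC learning (realizable setting).
\Theta\left( \frac{d}{\varepsilon} + \frac{\log(1/\delta)}{\varepsilon} \right)

% Quantum agnostic learning.
\Theta\left( \frac{d}{\varepsilon^{2}} + \frac{\log(1/\delta)}{\varepsilon^{2}} \right)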

Keywords

quantum information theory, quantum computing, quantum learning theory, quantum compression, quantum state redistribution, quantum Markov chains, quantum conditional mutual information, PAC learning, agnostic learning, sample complexity, entanglement
