Less is More: Restricted Representations for Better Interpretability and Generalizability
dc.contributor.author | Jiang, Zhiying | |
dc.date.accessioned | 2023-08-22T14:18:23Z | |
dc.date.available | 2023-08-22T14:18:23Z | |
dc.date.issued | 2023-08-22 | |
dc.date.submitted | 2023-07-18 | |
dc.description.abstract | Deep neural networks are prevalent in supervised learning for a wide range of tasks such as image classification, machine translation, and even scientific discovery. Their success often comes at the cost of interpretability and generalizability. The increasing complexity of models and the involvement of pre-training make the lack of interpretability even more pressing, while strong performance when labeled data are abundant, paired with a tendency to overfit when labeled data are scarce, shows how difficult it is for deep neural networks to generalize across datasets. This thesis aims to improve interpretability and generalizability by restricting representations. We approach interpretability through attribution analysis, identifying which features contribute to BERT's predictions, and generalizability through methods that remain effective in low-data regimes. We consider two strategies for restricting representations: (1) adding a bottleneck, and (2) introducing compression. Given input x, suppose we want to learn y through a latent representation z (i.e., x→z→y). Adding a bottleneck means adding a function R such that L(R(z)) < L(z), and introducing compression means adding a function R such that L(R(y)) < L(y), where L denotes length in bits. In other words, the restriction is imposed either in the middle of the pipeline or at its end. We first show how adding an information bottleneck can help attribution analysis and apply it to investigate BERT's behavior on text classification in Chapter 3. We then extend this attribution method to analyze passage reranking in Chapter 4, where we conduct a detailed analysis of cross-layer and cross-passage behavior. Adding a bottleneck not only provides insight into deep neural networks but can also be used to increase generalizability. In Chapter 5, we demonstrate the equivalence between adding a bottleneck and performing neural compression. We leverage this finding in a framework called Non-Parametric learning by Compression with Latent Variables (NPC-LV) and show how optimizing neural compressors can be used for non-parametric image classification with few labeled data. To further investigate how compression alone helps non-parametric learning without latent variables (NPC), we carry out text classification experiments with the universal compressor gzip in Chapter 6. In Chapter 7, we elucidate methods that adopt the compression perspective without performing actual compression, using T5. With experimental results on passage reranking, we show that our method is highly effective in a low-data regime where only one thousand query-passage pairs are available. Beyond the weakly supervised scenario, we also extend our method to large language models like GPT under almost no supervision, in one-shot and zero-shot settings. The experiments show that, without extra parameters or in-context learning, GPT can be used for semantic similarity, text classification, and text ranking, outperforming strong baselines; these results are presented in Chapter 8. This thesis tackles two major challenges in machine learning, "interpretability" and "generalizability", by restricting representations. We provide both theoretical derivations and empirical results to show the effectiveness of information-theoretic approaches.
We not only design new algorithms but also offer numerous insights into why and how "compression" is so important for understanding deep neural networks and improving generalizability. | en |
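As a concrete illustration of the compressor-based, non-parametric approach the abstract refers to (Chapter 6), the following is a minimal sketch, not the thesis's exact implementation: documents are compared with the Normalized Compression Distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the gzip-compressed length in bytes, and a test document takes the majority label of its k nearest training documents. Function names, the space-joined concatenation, and the choice of k below are illustrative assumptions.

    import gzip

    def compressed_len(text: str) -> int:
        # C(x): length in bytes of the gzip-compressed text.
        return len(gzip.compress(text.encode("utf-8")))

    def ncd(x: str, y: str) -> float:
        # Normalized Compression Distance:
        # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
        cx, cy = compressed_len(x), compressed_len(y)
        cxy = compressed_len(" ".join((x, y)))
        return (cxy - min(cx, cy)) / max(cx, cy)

    def classify(test_doc: str, train_docs: list[str], train_labels: list[str], k: int = 3) -> str:
        # Parameter-free kNN under NCD: no training and no learned parameters.
        nearest = sorted(range(len(train_docs)), key=lambda i: ncd(test_doc, train_docs[i]))[:k]
        votes = [train_labels[i] for i in nearest]
        return max(set(votes), key=votes.count)

    # Toy usage with hypothetical labels:
    print(classify("shares rallied after strong earnings",
                   ["the home team won the championship game",
                    "stock prices rose on upbeat earnings news"],
                   ["sports", "finance"], k=1))

The design appeal is that the compressor supplies the notion of similarity directly from bit lengths, so the classifier needs no embeddings, fine-tuning, or hyperparameters beyond k.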
dc.identifier.uri | http://hdl.handle.net/10012/19736 | |
dc.language.iso | en | en |
dc.pending | false | |
dc.publisher | University of Waterloo | en |
dc.subject | interpretability | en |
dc.subject | generalizability | en |
dc.subject | information bottleneck | en |
dc.subject | information distance | en |
dc.subject | Kolmogorov complexity | en |
dc.subject | compressor-based method | en |
dc.subject | generative model | en |
dc.title | Less is More: Restricted Representations for Better Interpretability and Generalizability | en |
dc.type | Doctoral Thesis | en |
uws-etd.degree | Doctor of Philosophy | en |
uws-etd.degree.department | David R. Cheriton School of Computer Science | en |
uws-etd.degree.discipline | Computer Science | en |
uws-etd.degree.grantor | University of Waterloo | en |
uws-etd.embargo.terms | 0 | en |
uws.contributor.advisor | Lin, Jimmy | |
uws.contributor.affiliation1 | Faculty of Mathematics | en |
uws.peerReviewStatus | Unreviewed | en |
uws.published.city | Waterloo | en |
uws.published.country | Canada | en |
uws.published.province | Ontario | en |
uws.scholarLevel | Graduate | en |
uws.typeOfResource | Text | en |