Expanding the Scope of Random Feature Models: Theory and Applications

Date

2023-12-07

Authors

Saha, Esha

Publisher

University of Waterloo

Abstract

Data, defined as facts and statistics collected together for analysis, is at the core of every inference or decision made by any living organism. From the moment we are born, our brains collect data from everything happening around us and help us make decisions based on past experience. With the advent of technology, humans have sought to develop methods that learn from data and generalize well from past information. While this effort has been greatly successful, as seen in the growth of the machine learning community, a parallel need has developed alongside it: a theoretical understanding of these methods. It is important to understand how these algorithms work so that the cause and nature of the errors they can make can be quantified, allowing informed decisions to be made from their results, especially in sensitive applications such as the medical field. At the heart of these methods lies the mathematical formulation and analysis of such learning algorithms.

One method that has recently caught the attention of researchers is the random feature model (RFM), introduced to reduce the complexity and speed up the computation of kernel methods in large-scale machine learning. This class of methods admits theoretical interpretation and has the potential to perform well numerically, thus being more reliable than black-box methods such as deep neural networks. This thesis aims to explore RFMs by expanding their theory and applications in the machine learning community.

We begin our exploration by developing a fast algorithm for high-dimensional function approximation using a random-feature-based surrogate model. Assuming the target function is a low-order additive function, we incorporate sparsity as side information within our model, obtaining numerical results that are better than (or comparable to) other well-known methods, and we provide risk and error bounds for our model. Extending the idea of learning functions, we build a model to learn and predict the dynamics of an epidemic from incomplete and scarce data; this model combines random feature approximation with Takens' delay embedding theorem applied to the given input data. RFMs have mostly been explored in a form that resembles a shallow neural network with fixed hidden parameters. In our third project, motivated by the idea of multiple layers in an RFM, we propose an interpretable RFM whose architecture is inspired by diffusion models. We make the model interpretable by providing error bounds on the sampled data relative to its true distribution, and we show numerically that the proposed model is capable of both generating images from data and denoising them.
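For readers unfamiliar with random feature models, the following is a minimal Python sketch of the basic idea referred to above: the hidden weights of a shallow network are drawn at random and frozen, and only the output coefficients are fit, here with a sparsity-promoting lasso standing in for the "sparsity as side information" idea. This is an illustrative example under assumed choices (random Fourier features, a toy additive target, an off-the-shelf lasso solver), not the algorithm developed in the thesis.

    # Minimal random feature model sketch (illustrative; all choices are assumptions).
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)

    # Toy target: a low-order additive function of a 10-dimensional input.
    d, n, n_features = 10, 500, 300
    X = rng.uniform(-1.0, 1.0, size=(n, d))
    y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2

    # Random features: fixed Gaussian weights and uniform phases
    # (random Fourier features); these hidden parameters are never trained.
    W = rng.normal(size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    Phi = np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    # Only the output coefficients are learned; the l1 penalty keeps
    # few random features active.
    model = Lasso(alpha=1e-3).fit(Phi, y)
    print("active features:", np.count_nonzero(model.coef_))

Because the hidden layer is fixed, fitting reduces to a (possibly regularized) linear problem in the output coefficients, which is what makes error and risk bounds of the kind described in the abstract tractable.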

Keywords

machine learning, random feature models, compressive sensing, high dimension function approximation, learning dynamical systems from incomplete data, diffusion models
