Learning from Partially Labeled Data: Unsupervised and Semi-supervised Learning on Graphs and Learning with Distribution Shifting
Abstract
This thesis focuses on two fundamental machine learning problems: unsupervised learning, where no label information is available, and semi-supervised learning, where a small amount of labeled data is given in
addition to unlabeled data. These problems arise in many real-world applications, such as Web analysis and bioinformatics, where a large amount of data is available but little or no labeled data exists. Obtaining classification labels in these domains is usually quite difficult because it involves either manual labeling or physical experimentation.
This thesis approaches these problems from two perspectives:
graph-based and distribution-based.
First, I investigate a series of graph-based learning algorithms that are able to exploit information embedded in different types of graph structures. These algorithms allow label information to be shared between nodes
in the graph---ultimately communicating information globally to yield effective unsupervised and semi-supervised learning.
In particular, I extend existing graph-based learning algorithms, currently based on undirected graphs, to more general graph types, including directed graphs, hypergraphs and complex networks. These richer graph representations allow one to more naturally
capture the intrinsic data relationships that exist, for example, in Web data, relational data, bioinformatics and social networks.
For each of these generalized graph structures I show how information propagation can be characterized by distinct random walk models, and then use this characterization
to develop new unsupervised and semi-supervised learning algorithms.
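To make the random-walk view of information propagation concrete, here is a minimal sketch on a hypothetical toy undirected graph (the graph, the damping parameter `alpha`, and the iteration count are illustrative choices, not the thesis's actual algorithms): labels on two seed nodes are diffused along the row-normalized transition matrix of a random walk until every node acquires a label.

```python
import numpy as np

# Toy undirected graph: adjacency matrix over 6 nodes forming
# two loosely connected clusters {0,1,2} and {3,4,5} (hypothetical data).
W = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# One labeled node per class: node 0 -> class 0, node 5 -> class 1.
Y = np.zeros((6, 2))
Y[0, 0] = 1.0
Y[5, 1] = 1.0

# Row-normalized transition matrix of the random walk on the graph.
P = W / W.sum(axis=1, keepdims=True)

# Iteratively propagate: each node averages its neighbors' label
# distributions, while labeled nodes keep injecting their labels.
alpha = 0.9  # probability of continuing the walk vs. restarting at the labels
F = Y.copy()
for _ in range(100):
    F = alpha * P @ F + (1 - alpha) * Y

labels = F.argmax(axis=1)  # predicted class for every node
```

After convergence, the unlabeled nodes in each cluster inherit the label of the seed node they are best connected to, which is the sense in which label information is "communicated globally" through the graph.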
Second, I investigate a more statistically oriented approach that explicitly models a learning scenario where the training and test examples come from different distributions.
This is a difficult situation for standard statistical learning approaches, since they typically incorporate an assumption that the distributions for training and test sets are similar, if not identical. To achieve good performance in this scenario, I utilize unlabeled data to correct the bias between the training and test distributions. A key idea is to produce resampling weights for bias correction by working directly in a feature space and bypassing the problem
of explicit density estimation. The technique can be easily applied to many different supervised learning algorithms, automatically adapting their behavior to cope with distribution shifting between training and test data.
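The idea of producing resampling weights directly in a feature space can be sketched as follows. This is a simplified, unconstrained variant of kernel mean matching with hypothetical synthetic data; the function names and the `sigma` and `ridge` parameters are illustrative, and the full method solves a constrained quadratic program rather than a ridge system.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian RBF kernel matrix between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def reweight(X_train, X_test, sigma=1.0, ridge=1e-3):
    """Resampling weights that align the mean of the reweighted training
    points with the mean of the test points in the kernel feature space,
    bypassing explicit density estimation."""
    n_tr, n_te = len(X_train), len(X_test)
    K = rbf_kernel(X_train, X_train, sigma)
    kappa = (n_tr / n_te) * rbf_kernel(X_train, X_test, sigma).sum(axis=1)
    beta = np.linalg.solve(K + ridge * np.eye(n_tr), kappa)
    beta = np.clip(beta, 0.0, None)   # weights must be nonnegative
    return beta * n_tr / beta.sum()   # normalize to mean 1

# Hypothetical covariate shift: training centered at 0, test shifted right.
rng = np.random.default_rng(0)
X_tr = rng.normal(0.0, 1.0, size=(200, 1))
X_te = rng.normal(1.0, 1.0, size=(200, 1))
w = reweight(X_tr, X_te)
# Training points lying in the region favored by the test distribution
# receive larger weights; a supervised learner trained on the weighted
# examples is thus adapted toward the test distribution.
```

The returned weights can be passed to any learner that accepts per-example weights (e.g. a `sample_weight` argument), which is what makes this style of bias correction easy to combine with many supervised algorithms.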
Cite this version of the work
Jiayuan Huang (2007). Learning from Partially Labeled Data: Unsupervised and Semi-supervised Learning on Graphs and Learning with Distribution Shifting. UWSpace. http://hdl.handle.net/10012/3165
Related items
Showing items related by title, author, creator and subject.
- Asking for Help with a Cost in Reinforcement Learning
  Vandenhof, Colin (University of Waterloo, 2020-05-15): Reinforcement learning (RL) is a powerful tool for developing intelligent agents, and the use of neural networks makes RL techniques more scalable to challenging real-world applications, from task-oriented dialogue systems ...
- Multi-Agent Reinforcement Learning in Large Complex Environments
  Ganapathi Subramanian, Sriram (University of Waterloo, 2022-07-15): Multi-agent reinforcement learning (MARL) has seen much success in the past decade. However, these methods are yet to find wide application in large-scale real world problems due to two important reasons. First, MARL ...
- Optimal Learning Theory and Approximate Optimal Learning Algorithms
  Song, Haobei (University of Waterloo, 2019-09-12): The exploration/exploitation dilemma is a fundamental but often computationally intractable problem in reinforcement learning. The dilemma also impacts data efficiency which can be pivotal when the interactions between the ...