
Optimal Learning Theory and Approximate Optimal Learning Algorithms

Date

2019-09-12

Authors

Song, Haobei

Publisher

University of Waterloo

Abstract

The exploration/exploitation dilemma is a fundamental but often computationally intractable problem in reinforcement learning. The dilemma also affects data efficiency, which can be pivotal when the interactions between the agent and the environment are constrained. Traditional optimal control theory provides objective criteria, such as regret, whose optimization yields optimal exploration and exploitation. This approach has been successful for the multi-armed bandit problem but becomes impractical and largely intractable to compute for multi-state problems. For complex problems with large state spaces, where function approximation is applied, the exploration/exploitation decision at each interaction is in practice made in an ad hoc fashion with heavy parameter tuning, such as ε-greedy. Drawing on ideas from several research communities, optimal learning strives to find the optimal balance between exploration and exploitation by applying principles from optimal control theory.

The contribution of this thesis consists of two parts:

1. Establishing a theoretical framework of optimal learning based on reinforcement learning in a stochastic (non-Markovian) decision process, which, through the lens of optimal learning, unifies Bayesian (model-based) reinforcement learning and partially observable reinforcement learning.

2. Improving existing reinforcement learning algorithms from the optimal learning perspective; the improved algorithms are referred to as approximate optimal learning algorithms.

Three classes of approximate optimal learning algorithms are proposed, each drawing on one of the following principles:

(1) approximate Bayesian inference explicitly, by training a recurrent neural network entangled with a feedforward neural network;

(2) approximate Bayesian inference implicitly, by training and sampling from a pool of prediction neural networks serving as dynamics models;

(3) use a memory-based recurrent neural network to extract features from observations.

Empirical evidence is provided to demonstrate the improvement of the proposed algorithms.
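The abstract names ε-greedy as the ad hoc exploration baseline and, in principle (2), describes sampling from a pool of learned dynamics models. The Python sketch below contrasts the two ideas at a high level under stated assumptions; it is not the thesis's implementation, and the function names (`epsilon_greedy`, `ensemble_thompson_action`) and the toy models are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(q_values, epsilon=0.1):
    """Ad hoc baseline: with probability epsilon take a uniformly
    random action, otherwise act greedily on the value estimates."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))  # explore at random
    return int(np.argmax(q_values))              # exploit best estimate

def ensemble_thompson_action(models, state, actions):
    """Sketch of the idea behind principle (2): keep a pool of learned
    models and act greedily with respect to ONE model sampled from the
    pool, so exploration reflects model uncertainty (Thompson-style).
    `models` is a hypothetical list of callables (state, action) -> value."""
    model = models[rng.integers(len(models))]  # sample a model from the pool
    values = [model(state, a) for a in actions]
    return actions[int(np.argmax(values))]

# Toy usage: three hypothetical models over two actions.
models = [lambda s, a, w=w: w * a for w in (0.5, 1.0, 1.5)]
print(epsilon_greedy(np.array([0.1, 0.5, 0.3])))
print(ensemble_thompson_action(models, state=None, actions=[0, 1]))
```

The contrast is the point: ε-greedy explores at a fixed rate regardless of what the agent knows, while sampling from the model pool explores only where the models disagree.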

Keywords

reinforcement learning, machine learning, exploration, exploitation, optimal learning, Bayesian reinforcement learning, model based reinforcement learning, neural network
