Optimal Learning Theory and Approximate Optimal Learning Algorithms
Abstract
The exploration/exploitation dilemma is a fundamental but often computationally intractable problem in reinforcement learning. The dilemma also affects data efficiency, which can be pivotal when interactions between the agent and the environment are constrained. Traditional optimal control theory offers an objective criterion, such as regret, whose optimization yields an optimal balance of exploration and exploitation. This approach has been successful for the multi-armed bandit problem but becomes impractical, and mostly intractable to compute, for multi-state problems. For complex problems with large state spaces, where function approximation is required, the exploration/exploitation decision at each interaction is in practice typically made in an ad hoc fashion with heavy parameter tuning, for example by an ε-greedy policy (sketched below). Drawing on ideas from several research communities, optimal learning seeks the optimal balance between exploration and exploitation by applying principles from optimal control theory.
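For reference, a minimal sketch of the ε-greedy heuristic mentioned above; the function name and the list of action-value estimates are illustrative assumptions, not part of the thesis:

    import random

    def epsilon_greedy_action(q_values, epsilon=0.1):
        # With probability epsilon, explore: pick a uniformly random action.
        if random.random() < epsilon:
            return random.randrange(len(q_values))
        # Otherwise, exploit: pick the action with the highest estimated value.
        return max(range(len(q_values)), key=q_values.__getitem__)

The single tuning knob epsilon is exactly the kind of hand-set parameter that an optimal learning criterion would replace with a principled exploration decision.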
The contribution of this thesis consists of two parts: (1) to establish a theoretical framework of optimal learning, based on reinforcement learning in a stochastic (non-Markovian) decision process, that unifies Bayesian (model-based) reinforcement learning and partially observable reinforcement learning through the lens of optimal learning; (2) to improve existing reinforcement learning algorithms from the optimal learning perspective; the improved algorithms are referred to as approximate optimal learning algorithms.
Three classes of approximate optimal learning algorithms are proposed, each drawing from one of the following principles (a sketch of the second follows the list):
(1) approximate Bayesian inference explicitly, by training a recurrent neural network entangled with a feedforward neural network;
(2) approximate Bayesian inference implicitly, by training and sampling from a pool of predictive neural networks serving as dynamics models;
(3) use a memory-based recurrent neural network to extract features from observations.
Empirical evidence is provided to show the improvement of the proposed algorithms.
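To make principle (2) concrete, here is a minimal sketch of maintaining and sampling from a pool of dynamics models; the linear models, pool size, and update rule are illustrative assumptions, not the thesis's implementation:

    import numpy as np

    class LinearDynamicsModel:
        """Toy dynamics model: predicts the next state as a linear function
        of (state, action). A stand-in for a prediction neural network."""

        def __init__(self, state_dim, action_dim, lr=0.01, rng=None):
            rng = rng if rng is not None else np.random.default_rng()
            # Random initialization gives each pool member a distinct starting point.
            self.W = rng.normal(scale=0.1, size=(state_dim, state_dim + action_dim))
            self.lr = lr

        def predict(self, state, action):
            return self.W @ np.concatenate([state, action])

        def update(self, state, action, next_state):
            x = np.concatenate([state, action])
            error = self.W @ x - next_state
            self.W -= self.lr * np.outer(error, x)  # one SGD step on squared error

    class ModelPool:
        """Pool of dynamics models; sampling one member approximates drawing
        a model from the posterior (implicit Bayesian inference)."""

        def __init__(self, n_models, state_dim, action_dim, seed=0):
            self.rng = np.random.default_rng(seed)
            self.models = [LinearDynamicsModel(state_dim, action_dim, rng=self.rng)
                           for _ in range(n_models)]

        def sample_model(self):
            # Act or plan against one sampled member rather than the pool average.
            return self.models[self.rng.integers(len(self.models))]

        def update_all(self, state, action, next_state):
            # Every member sees the transition; diversity comes from random init.
            for m in self.models:
                m.update(state, action, next_state)

Sampling a single member per decision, rather than averaging the pool, lets disagreement among the models drive exploration, in the spirit of posterior sampling.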
Cite this version of the work
Haobei Song (2019). Optimal Learning Theory and Approximate Optimal Learning Algorithms. UWSpace. http://hdl.handle.net/10012/15042