Reinforcement Learning for Determining Spread Dynamics of Spatially Spreading Processes with Emphasis on Forest Fires
Machine learning algorithms have grown tremendously in power in recent years but have yet to be fully exploited in many ecology and sustainable resource management domains, such as wildlife reserve design, forest fire management and invasive species spread. These domains share dynamics that can be characterized as a Spatially Spreading Process (SSP), which requires many parameters to be set precisely to model the dynamics, spread rates and directional biases of the spreading elements. We introduce a novel approach for learning in SSP domains such as wildfires using Reinforcement Learning (RL), in which fire is the agent at any cell in the landscape, and the set of actions the fire can take from a location at any point in time consists of spreading into any cell of the 3 $\times$ 3 grid around it (including not spreading at all). This approach inverts the usual RL setup: the dynamics of the corresponding Markov Decision Process (MDP) are a known function for immediate wildfire spread, while the agent policy we learn serves as a predictive model of the dynamics of a complex spatially spreading process. Rewards are provided for correctly classifying which cells are on fire when compared against satellite and other related data. We use three demonstrative domains to evaluate our approach. The first is a popular online wildfire simulator; the second involves a pair of forest fires in Northern Alberta, the Fort McMurray fire of 2016, which led to an unprecedented evacuation of almost 90,000 people, and the Richardson fire of 2011; and the third deals with historical Saskatchewan fires previously compared by others to a physics-based simulator. The standard RL algorithms considered on all domains include Monte Carlo Tree Search, Asynchronous Advantage Actor-Critic (A3C), Deep Q-Learning (DQN) and Deep Q-Learning with prioritized experience replay.
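The inverted formulation above can be sketched in a few lines. This is a minimal illustration, not the thesis implementation: the function names, the hard spread rule, and the simple +1/-1 classification reward are all assumptions made for the example.

```python
import numpy as np

# Illustrative sketch of the inverted-RL setup: the fire itself is the
# agent at a burning cell, and each action spreads (or not) into the
# surrounding 3x3 neighbourhood.

# The 9 actions: offsets into the 3x3 grid, including (0, 0) = no spread.
ACTIONS = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]

def step(fire_map, cell, action):
    """Apply one spread action from a burning cell; return the new fire map."""
    new_map = fire_map.copy()
    r, c = cell[0] + ACTIONS[action][0], cell[1] + ACTIONS[action][1]
    if 0 <= r < new_map.shape[0] and 0 <= c < new_map.shape[1]:
        new_map[r, c] = 1  # the fire spreads into the chosen neighbour
    return new_map

def reward(predicted_map, observed_map):
    """Per-cell classification reward against observed (e.g. satellite) data:
    +1 for each correctly labelled cell, -1 for each mislabelled one."""
    correct = int(np.sum(predicted_map == observed_map))
    wrong = int(np.sum(predicted_map != observed_map))
    return correct - wrong
```

Because the one-step spread dynamics are the known quantity here, the learning problem reduces to finding a policy over these nine actions whose rollouts best reproduce the observed burn maps.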
We also introduce a novel combination of the Monte Carlo Tree Search (MCTS) and A3C algorithms that shows the best performance across the different test domains and testing environments. Additionally, simpler model-based and model-free approaches, namely Value Iteration, Policy Iteration and Q-Learning, are applied to the Alberta fires domain to illustrate their performance. We also compare against a Gaussian-process-based supervised learning approach and discuss the relation to state-of-the-art methods from forest wildfire modelling. The results show that, using RL on readily available datasets such as satellite images, we can learn predictive, agent-based policies as models of spatial dynamics that are at least as good as other methods and offer additional advantages in terms of generalizability and interpretability.
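One generic way such a combination can work is to use a learned actor-critic policy as a prior over actions during tree search, as in PUCT-style node selection. The sketch below illustrates only that generic idea; the exact combination scheme used in the thesis may differ, and the `c_puct` constant and child-node layout are assumptions of this example.

```python
import math

def puct_select(children, c_puct=1.5):
    """Select a child node using a PUCT-style score: exploit the mean
    backed-up value, and explore in proportion to a learned policy prior
    (e.g. the output of an A3C policy network).

    Each child is a dict with keys "visits", "value" (summed returns),
    and "prior" (policy probability for the corresponding action).
    """
    total_visits = sum(ch["visits"] for ch in children) or 1
    def score(ch):
        q = ch["value"] / ch["visits"] if ch["visits"] else 0.0
        u = c_puct * ch["prior"] * math.sqrt(total_visits) / (1 + ch["visits"])
        return q + u
    return max(children, key=score)
```

Early in the search the prior term dominates, so the tree explores where the learned policy expects spread; as visit counts grow, the backed-up values take over.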
Cite this version of the work
Sriram Ganapathi Subramanian (2018). Reinforcement Learning for Determining Spread Dynamics of Spatially Spreading Processes with Emphasis on Forest Fires. UWSpace. http://hdl.handle.net/10012/13148