Fictitious Mean-field Reinforcement Learning for Distributed Load Balancing

Authors

Fardno, Fatemeh

Advisor

Zahedi, Seyed Majid

Publisher

University of Waterloo

Abstract

In this work, we study the application of multi-agent reinforcement learning (RL) in distributed systems. In particular, we consider a setting in which strategic clients compete over a set of heterogeneous servers. Each client receives jobs at a fixed rate and, for each job, chooses a server to run it. The objective of each client is to minimize its average wait time. We model this setting as a Markov game and theoretically prove that, in the limit, the game becomes a Markov potential game (MPG). We further propose a novel mean-field reinforcement learning algorithm that combines mean-field Q-learning and fictitious play. Through rigorous experiments, we show that our algorithm outperforms a naive deployment of single-agent RL and, in some cases, performs comparably to Nash Q-learning while requiring less memory and computation. We also empirically analyze the convergence of our proposed algorithm to a Nash equilibrium and study its performance in four benchmark examples.
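
The abstract only sketches the algorithm at a high level. As a rough illustration of how mean-field Q-learning and fictitious play can be combined in this load-balancing setting, the following Python sketch shows one client that keeps a fictitious-play belief over the other clients' mean server choice and updates a mean-field Q-function against it. All class, method, and parameter names are hypothetical, and the stateless linear-Q simplification is an assumption for illustration, not the thesis's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class MFQFPClient:
    """Hypothetical sketch of one strategic client combining mean-field
    Q-learning with fictitious play (illustrative, not the thesis's code).

    The Q-value of choosing server a while the other clients' empirical
    server-choice distribution (the mean action) is `mean_act` is
    approximated linearly: Q(a, mean_act) = W[a] @ mean_act.
    """

    def __init__(self, n_servers, alpha=0.1, gamma=0.95, temp=0.5):
        self.W = np.zeros((n_servers, n_servers))          # linear Q weights
        self.belief = np.full(n_servers, 1.0 / n_servers)  # fictitious-play belief
        self.t = 0
        self.alpha, self.gamma, self.temp = alpha, gamma, temp

    def q_values(self, mean_act):
        # Q(a, mean_act) for every candidate server a.
        return self.W @ mean_act

    def act(self):
        # Boltzmann (softmax) policy evaluated against the
        # fictitious-play belief about the mean action.
        q = self.q_values(self.belief)
        p = np.exp((q - q.max()) / self.temp)
        p /= p.sum()
        return rng.choice(len(p), p=p)

    def update(self, action, reward, observed_mean_act):
        # Fictitious play: the belief is the running empirical average
        # of the observed mean actions of the other clients.
        self.t += 1
        self.belief += (observed_mean_act - self.belief) / self.t
        # Mean-field Q-learning TD update with a soft (log-sum-exp)
        # value, matching the Boltzmann policy above.
        q_next = self.q_values(self.belief)
        v_next = self.temp * np.log(np.sum(np.exp(q_next / self.temp)))
        td_err = reward + self.gamma * v_next - self.W[action] @ observed_mean_act
        self.W[action] += self.alpha * td_err * observed_mean_act
```

In this setting, `reward` would be the negative wait time of the client's completed job, and `observed_mean_act` the empirical distribution of the other clients' server choices in the last round.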
