Obedience-based Multi-Agent Cooperation for Sequential Social Dilemmas

Date

2020-05-14

Authors

Gupta, Gaurav

Advisor

Hoey, Jesse

Publisher

University of Waterloo

Abstract

We propose a mechanism for achieving cooperation and communication in Multi-Agent Reinforcement Learning (MARL) settings by intrinsically rewarding agents for obeying the commands of other agents. At every timestep, agents exchange commands through a cheap-talk channel. During the following timestep, agents are rewarded both for taking actions that conform to the commands they received and for issuing commands that were obeyed. We refer to this approach as obedience-based learning. We demonstrate the potential of obedience-based approaches to enhance coordination and communication in challenging sequential social dilemmas, where traditional MARL approaches often collapse without centralized training or specialized architectures. We also demonstrate the flexibility of this approach with regard to population heterogeneity and vocabulary size. Obedience-based learning stands out as an intuitive form of cooperation with minimal complexity and overhead that can be applied to heterogeneous populations. In contrast, previous work on sequential social dilemmas is often restricted to homogeneous populations and requires complete knowledge of every player's reward structure. Obedience-based learning is a promising direction for exploration in the field of cooperative MARL.
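
The reward shaping described in the abstract can be sketched compactly. Below is a minimal, hypothetical illustration in Python; the function name, bonus magnitudes, and command representation are assumptions made for exposition and do not reflect the thesis's actual implementation or hyperparameters.

```python
# A minimal sketch of the obedience-based intrinsic reward described in the
# abstract. All names and values here (obedience_bonus, command_bonus, the
# matrix layout of commands) are illustrative assumptions, not the thesis's
# published formulation.

def obedience_rewards(prev_commands, actions, env_rewards,
                      obedience_bonus=0.5, command_bonus=0.5):
    """Shape each agent's environment reward with intrinsic obedience terms.

    prev_commands[i][j] -- the command agent i sent to agent j over the
                           cheap-talk channel on the previous timestep
                           (None if no command was sent).
    actions[j]          -- the action agent j took on the current timestep.
    env_rewards[j]      -- agent j's extrinsic reward from the environment.
    """
    n = len(actions)
    shaped = list(env_rewards)
    for i in range(n):
        for j in range(n):
            if i == j or prev_commands[i][j] is None:
                continue
            if actions[j] == prev_commands[i][j]:
                shaped[j] += obedience_bonus  # agent j obeyed a command
                shaped[i] += command_bonus    # agent i's command was obeyed
    return shaped
```

In use, the shaped rewards would simply replace the raw environment rewards in each agent's standard RL update, which is what keeps the approach decentralized and agnostic to population heterogeneity.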

Keywords

Reinforcement Learning, Cooperation, Multi-Agent Reinforcement Learning, Intrinsic Reward, Cheap-Talk Communication
