Learning to communicate efficiently with multi-agent reinforcement learning for distributed control applications. 01/11/2020 - 25/02/2023

Abstract

In recent years, interest in multi-agent reinforcement learning has grown considerably. For tasks that require cooperation between agents, researchers are developing techniques that allow agents to learn to communicate while simultaneously learning how to act in the environment. Current state-of-the-art techniques often rely on broadcast communication, which does not scale to real-world applications. Therefore, I want to develop methods to make this communication more efficient. The goal of this research project is to reduce the number of messages that are sent while maintaining the same performance. To reach this goal, I will investigate techniques to communicate with a variable number of agents, techniques to limit communication using relevance metrics and signatures, and techniques to encourage hopping behavior in agents. The methods proposed in this project are essential for creating scalable control applications by distributing them in combination with scalable learned communication. The developed methods will be validated in simulations of traffic light control.
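To illustrate the general idea of limiting communication with a relevance metric, the sketch below shows one possible gating scheme: each agent encodes its observation into a message and a scalar relevance score, and only messages whose score exceeds a threshold are broadcast. This is a minimal illustration under assumed design choices (linear encoders, sigmoid gating, mean-pooling aggregation); it is not the project's actual method, and all names and parameters are hypothetical.

```python
import numpy as np

def gated_broadcast(observations, w_msg, w_gate, threshold=0.5):
    """Hypothetical gated-communication sketch: each agent encodes its
    observation into a message and a scalar relevance gate; only messages
    whose gate exceeds `threshold` are broadcast. Returns the receivers'
    aggregated message and the number of messages actually sent."""
    messages = []
    for obs in observations:
        msg = np.tanh(w_msg @ obs)                 # message encoding (illustrative)
        gate = 1.0 / (1.0 + np.exp(-(w_gate @ obs)))  # sigmoid relevance score
        if gate > threshold:                        # send only if relevant enough
            messages.append(msg)
    # receivers aggregate the sent messages (mean-pooling here)
    if messages:
        aggregated = np.mean(messages, axis=0)
    else:
        aggregated = np.zeros(w_msg.shape[0])
    return aggregated, len(messages)

# usage: 4 agents, observation dimension 3, message dimension 2
rng = np.random.default_rng(0)
obs = rng.normal(size=(4, 3))
w_msg = rng.normal(size=(2, 3))
w_gate = rng.normal(size=3)
agg, n_sent = gated_broadcast(obs, w_msg, w_gate)
```

In a learned system, the gating weights would be trained jointly with the agents' policies so that the relevance score reflects the expected value of sending the message, rather than being hand-set as here.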
