To further automate tasks, processes, and utilities across a variety of application domains, more advanced control systems are needed, especially since existing systems are often preprogrammed for a fixed purpose or cannot cope with large, complex, and dynamic contexts. By exploiting the increased computational power and the wide availability of data and sensors, we research and develop new control methods based on Artificial Intelligence and Machine Learning.

In particular, we focus on Reinforcement Learning, an approach in which an AI agent collects its own dataset through trial-and-error learning. The agent explores its environment and uses its observations, together with a feedback (reward) signal, to make better decisions in the future. This approach is most valuable for very complex problems, where manual solutions are hard to design. We apply this technique in different use cases including, among others, autonomous vehicles (e.g., vessels or drones), robot control, industrial process control, building appliance control (e.g., HVAC), and wireless communications.
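As a minimal illustration of this trial-and-error loop (a toy sketch, not one of our systems), the tabular Q-learning agent below learns to walk down a five-state corridor toward a rewarding goal state; the environment, reward, and hyperparameters are all invented for the example:

```python
import random

# Toy corridor environment: states 0..4, start at 0, reward +1 at state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left / right

def step(state, action):
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Tabular Q-learning: the agent collects its own data by trial and error.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2
random.seed(0)

for episode in range(300):
    state, done = 0, False
    while not done:
        # Epsilon-greedy exploration: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[state][i])
        next_state, reward, done = step(state, ACTIONS[a])
        # Update the value estimate from the observed feedback signal.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

# Greedy policy in the non-terminal states after training.
policy = [ACTIONS[max(range(2), key=lambda i: Q[s][i])] for s in range(GOAL)]
print(policy)
```

After training, the greedy policy chooses +1 (move right, toward the goal) in every non-terminal state, which the agent discovered purely from its own interaction data.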

Research examples

  • A first research line uses model-based reinforcement learning (MBRL) for control applications. Rather than learning a policy directly, MBRL learns a model of the environment's dynamics and plans with it, which makes it very sample-efficient for problems with enormous search spaces. At IDLab, we use MBRL in control applications for complex cyber-physical systems ranging from autonomous navigation (e.g., for vessels or drones) to process control (e.g., in HVAC or chemistry applications).
  • Second, to handle more complex tasks and dynamic environments, we research methods that generalize better and adapt quickly to novel (unseen) tasks. Here, we focus in particular on hierarchical reinforcement learning methods. So far, we have successfully combined these model-free methods with, among others, natural language and intrinsic motivation, leading to faster transfer of knowledge across tasks and the learning of task-independent skills.
  • Third, multiple AI systems or agents will often need to work together to solve a problem or to process an entire scene. To enable this collaboration, the different agents must be able to communicate with each other and to jointly take decisions and perform actions. Hence, we investigate, among other things, how agents can learn to communicate and exchange both raw information and learned knowledge.
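To make the model-based idea from the first research line concrete (a self-contained toy sketch, not our actual MBRL stack), the snippet below lets an agent gather transitions from a hypothetical 1-D linear plant, fit a dynamics model by least squares, and then plan over the learned model with random shooting, executing only the first planned action each step (MPC-style):

```python
import random

random.seed(1)

# Hypothetical 1-D plant with unknown (to the agent) dynamics x' = A*x + B*u.
A_TRUE, B_TRUE = 0.9, 0.5

def plant(x, u):
    return A_TRUE * x + B_TRUE * u

# 1) Collect transitions by random interaction: the agent's own dataset.
data, x = [], 1.0
for _ in range(200):
    u = random.uniform(-1, 1)
    x_next = plant(x, u)
    data.append((x, u, x_next))
    x = x_next

# 2) Fit a linear dynamics model x' ~ a*x + b*u via the normal equations.
sxx = sum(x * x for x, u, _ in data)
sxu = sum(x * u for x, u, _ in data)
suu = sum(u * u for x, u, _ in data)
sxy = sum(x * y for x, u, y in data)
suy = sum(u * y for x, u, y in data)
det = sxx * suu - sxu * sxu
a = (sxy * suu - suy * sxu) / det
b = (suy * sxx - sxy * sxu) / det

# 3) Plan with the learned model: random shooting keeps the action sequence
#    whose *simulated* rollout stays closest to the origin.
def plan(x0, horizon=5, samples=200):
    best_u, best_cost = 0.0, float("inf")
    for _ in range(samples):
        us = [random.uniform(-1, 1) for _ in range(horizon)]
        xs, cost = x0, 0.0
        for u in us:
            xs = a * xs + b * u          # simulate with the learned model
            cost += xs * xs
        if cost < best_cost:
            best_u, best_cost = us[0], cost
    return best_u  # execute only the first action (MPC-style)

x = 3.0
for _ in range(10):
    x = plant(x, plan(x))
print(round(abs(x), 3))  # the controller drives the state toward the origin
```

Because the model is cheap to query, the planner can evaluate thousands of simulated futures per real interaction, which is exactly the sample-efficiency argument for MBRL above.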
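Likewise, the value-decomposition idea behind cooperative multi-agent learning (cf. the Vanneste et al. entry in the publication list) can be sketched on a toy two-agent coordination game; the game, tabular agents, and hyperparameters here are illustrative assumptions, not the published setup:

```python
import random

random.seed(2)

# Toy coordination game: two agents each pick action 0 or 1;
# the team is rewarded only if both pick action 1.
def team_reward(a1, a2):
    return 1.0 if (a1, a2) == (1, 1) else 0.0

# Value decomposition (VDN-style): the joint value is modeled as the SUM of
# per-agent utilities, so each agent can still act greedily on its own table.
Q1, Q2 = [0.0, 0.0], [0.0, 0.0]
alpha, epsilon = 0.1, 0.2

def pick(Q):
    if random.random() < epsilon:
        return random.randrange(2)
    return max(range(2), key=lambda a: Q[a])

for _ in range(2000):
    a1, a2 = pick(Q1), pick(Q2)
    r = team_reward(a1, a2)
    # One shared TD error on the summed value, pushed into both tables:
    # credit for the team reward is implicitly split between the agents.
    td = r - (Q1[a1] + Q2[a2])
    Q1[a1] += alpha * td
    Q2[a2] += alpha * td

greedy = (max(range(2), key=lambda a: Q1[a]), max(range(2), key=lambda a: Q2[a]))
print(greedy)  # both agents settle on the coordinated action
```

The single shared error signal is what couples the two otherwise independent learners: neither agent ever observes the other's action, yet both converge to the joint optimum.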

A selection of publications

  • Louis Bagot, Kevin Mets, Steven Latré, Learning Intrinsically Motivated Options to Stimulate Policy Exploration, ICML 2020 4th Lifelong Learning Workshop (LifelongML), 2020.
  • Matthias Hutsebaut-Buysse, Kevin Mets, Steven Latré, Language Grounded Task-Adaptation in Reinforcement Learning, 28th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2020), 2020.
  • Matthias Hutsebaut-Buysse, Kevin Mets, Steven Latré, Pre-trained Word Embeddings for Goal-conditional Transfer Learning in Reinforcement Learning, ICML 2020 Language in Reinforcement Learning Workshop (LaRel), 2020.
  • Jakob Struye, Steven Latré, Hierarchical Temporal Memory and Recurrent Neural Networks for Time Series Prediction: An Empirical Validation and Reduction to Multilayer Perceptrons, Neurocomputing, vol. 396, pp. 291-301, 2020.
  • Jakob Struye, Kevin Mets, Steven Latré, HTMRL: Biologically Plausible Reinforcement Learning with Hierarchical Temporal Memory, arXiv preprint, 2020.
  • Vanneste et al., Learning to Communicate with Multi-Agent Reinforcement Learning Using Value-Decomposition Networks.

A selection of projects

Involved Faculty