In order to further automate tasks, processes and utilities across a variety of application domains, more advanced control systems are needed, especially since existing systems are often preprogrammed for a fixed purpose or cannot cope with large, complex and dynamic contexts. By exploiting the increased computational power and the wide availability of data and sensors, we research and develop such new control methods based on Artificial Intelligence and Machine Learning.

In particular, we focus on Reinforcement Learning, an approach in which an AI agent collects its own data through trial-and-error learning. The agent explores its environment and uses its observations, together with a feedback (reward) signal, to make better decisions in the future. This approach is most valuable for very complex problems where manual solutions are hard to design. We apply this technique in use cases including, among others, autonomous vehicles (e.g., vessels or drones), robot control, industrial process control, building control (e.g., HVAC), and wireless communications.
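
The basic interaction loop behind this trial-and-error learning can be sketched as follows. This is a minimal illustration rather than one of our systems: it assumes the gymnasium library and uses its CartPole-v1 environment as a stand-in for a real control problem, with a random policy where a learned one would go.

```python
import gymnasium as gym

# Minimal agent-environment loop: the agent observes the state, picks an
# action, and receives a reward signal it can learn from. Here the "agent"
# simply samples random actions, standing in for a learned policy.
env = gym.make("CartPole-v1")
observation, info = env.reset(seed=0)

for step in range(1000):
    action = env.action_space.sample()  # replace with policy(observation)
    observation, reward, terminated, truncated, info = env.step(action)
    # A learning agent would store (observation, action, reward) here and
    # update its policy so that future decisions yield higher rewards.
    if terminated or truncated:
        observation, info = env.reset()

env.close()
```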

Research examples

  • A first research line uses model-based reinforcement learning (MBRL) for control applications. MBRL learns a model of the environment's dynamics and uses it to simulate and plan ahead, which makes it very sample-efficient for problems with enormous search spaces (a minimal sketch of this idea follows after this list). At IDLab, we use MBRL in control applications for complex cyber-physical systems ranging from autonomous navigation (e.g., for vessels or drones) to process control (e.g., in HVAC or chemistry applications).
  • Second, in order to handle more complex tasks and dynamic environments, we research methods that generalize better and can quickly adapt to novel (unseen) tasks. Here we focus in particular on hierarchical reinforcement learning, in which a high-level policy selects goals or skills that a low-level policy executes (see the second sketch after this list). So far, we have successfully combined these model-free methods with, among others, natural language and intrinsic motivation, leading to faster transfer of knowledge across tasks and the learning of task-independent skills.
  • Third, multiple AI systems or agents often need to work together to solve a problem or to process an entire scene. To enable this collaboration, the agents must be able to communicate with each other and to jointly take decisions and perform actions. Hence, we investigate, among others, how agents can learn to communicate and exchange both raw information and learned knowledge (see the last sketch after this list).
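
As referenced in the first research line, the sketch below illustrates the core idea of model-based RL under simplified assumptions: the dynamics model, reward function and random-shooting planner are toy stand-ins chosen for illustration, not components of our actual systems.

```python
import numpy as np

# Model-based RL in a nutshell: a learned dynamics model predicts the next
# state for a candidate action, and a simple planner (random shooting)
# evaluates many imagined action sequences and executes the best first action.

rng = np.random.default_rng(0)

def dynamics_model(state, action):
    # Stand-in for a neural network trained on collected (s, a, s') data.
    return state + 0.1 * action

def reward(state):
    # Toy objective: drive the state towards zero.
    return -np.sum(state ** 2)

def plan(state, horizon=10, n_candidates=256):
    """Random-shooting model-predictive control using the learned model."""
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon, state.shape[0]))
    best_return, best_action = -np.inf, None
    for seq in candidates:
        s, total = state.copy(), 0.0
        for a in seq:               # roll out the sequence in imagination
            s = dynamics_model(s, a)
            total += reward(s)
        if total > best_return:
            best_return, best_action = total, seq[0]
    return best_action              # execute only the first planned action

state = np.array([1.0, -0.5])
print("chosen action:", plan(state))
```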
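
For the second research line, the following sketch shows the structure of a hierarchical policy: a high-level policy picks an abstract goal every K steps, and a low-level policy acts conditioned on it. The goal names, dimensions and random stand-in policies are purely illustrative assumptions.

```python
import numpy as np

# Hierarchical control in a nutshell: a high-level policy picks an abstract
# goal (or skill) every K steps, and a low-level policy chooses primitive
# actions conditioned on the observation and that goal. Both policies are
# random stand-ins for learned networks.

rng = np.random.default_rng(0)
GOALS = ["reach_waypoint", "avoid_obstacle", "hold_position"]  # illustrative skills
K = 5  # the high-level policy decides every K environment steps

def high_level_policy(observation):
    return rng.choice(GOALS)

def low_level_policy(observation, goal):
    # A learned low-level policy would map (observation, goal) to an action;
    # here we just sample a random continuous action.
    return rng.uniform(-1.0, 1.0, size=2)

observation = np.zeros(4)
current_goal = None
for step in range(20):
    if step % K == 0:               # temporal abstraction
        current_goal = high_level_policy(observation)
    action = low_level_policy(observation, current_goal)
    # ... apply the action in the environment and update the observation ...
```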
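
Finally, for the third research line, a minimal sketch of learned inter-agent communication, assuming PyTorch: each agent encodes its observation into a message for the other agent and conditions its own action on the message it receives. All sizes, module names and the two-agent setup are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Two cooperating agents: each encodes its own observation into a small
# message vector, exchanges it with the other agent, and selects an action
# from its own observation plus the received message.
OBS_DIM, MSG_DIM, N_ACTIONS = 8, 4, 3

class CommAgent(nn.Module):
    def __init__(self):
        super().__init__()
        self.message_head = nn.Sequential(nn.Linear(OBS_DIM, 32), nn.ReLU(),
                                          nn.Linear(32, MSG_DIM))
        self.policy_head = nn.Sequential(nn.Linear(OBS_DIM + MSG_DIM, 32), nn.ReLU(),
                                         nn.Linear(32, N_ACTIONS))

    def message(self, obs):
        return self.message_head(obs)

    def act(self, obs, received_message):
        logits = self.policy_head(torch.cat([obs, received_message], dim=-1))
        return torch.distributions.Categorical(logits=logits).sample()

agent_a, agent_b = CommAgent(), CommAgent()
obs_a, obs_b = torch.randn(OBS_DIM), torch.randn(OBS_DIM)

# Each agent acts on its own observation plus the other agent's message; in
# training, both message and policy heads would be optimized jointly from a
# shared reward.
action_a = agent_a.act(obs_a, agent_b.message(obs_b))
action_b = agent_b.act(obs_b, agent_a.message(obs_a))
```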

Involved Faculty