Using Model-Based Reinforcement Learning combined with Monte-Carlo Tree Search to optimize Neural Networks for Embedded Devices. 01/11/2020 - 31/10/2022


Currently, most AI systems run in cloud environments. For some applications, such as real-time systems, this can be troublesome, and moving these AI algorithms to the edge can provide a solution. The aim of my research is to use reinforcement learning techniques to design neural networks whose performance rivals that of modern, state-of-the-art systems, while reducing their resource consumption to a level that is manageable for edge devices. To achieve this goal, my work is split into three large components: multi-objective optimization, hardware embeddings, and model-based reinforcement learning (MBRL) using Monte Carlo tree search (MCTS).

The first component of my research deals with the scalarization of a multi-objective reward function into a single scalar reward. This is necessary because reinforcement learning systems take a single reward value as feedback. In the second component, I will try to find a way to represent a given piece of hardware in a neural-network-friendly manner, which our system needs in order to exploit the architectural features of a specific device. Finally, I will introduce MBRL using MCTS to the field of neural architecture search. In this component, I will combine the scalarization techniques and hardware representation developed in the first two components with an MBRL system to generate neural network architectures targeted at specific devices.
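To make the scalarization idea concrete, here is a minimal sketch of the simplest such technique, a weighted sum. The objective names and weights below are illustrative assumptions, not values from the project:

```python
# Hypothetical illustration: collapsing a multi-objective reward
# (accuracy, latency, energy) into the single scalar an RL agent needs.
# Objectives are signed so that "higher is better" for every entry.

def scalarize(objectives, weights):
    """Weighted-sum scalarization of a multi-objective reward."""
    assert len(objectives) == len(weights)
    return sum(w * o for w, o in zip(weights, objectives))

# Example: reward accuracy, penalize latency and energy consumption.
reward = scalarize(
    objectives=[0.92, -0.15, -0.08],  # accuracy, -latency (s), -energy (J)
    weights=[1.0, 0.5, 0.25],         # assumed trade-off preferences
)
```

A fixed weighted sum is only one option; the weights encode the trade-off between accuracy and resource use, and choosing them well is part of what the first research component investigates.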
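The third component can be sketched as MCTS over a space of layer sequences. Everything in the following toy example is an assumption: the layer vocabulary, the fixed depth, and the surrogate score that stands in for the learned reward model of the real MBRL system:

```python
import math

# Toy search space: an architecture is a sequence of MAX_DEPTH layers.
LAYER_CHOICES = ["conv3x3", "conv5x5", "depthwise", "skip"]
MAX_DEPTH = 3

# Stand-in per-layer scores; a real system would query a learned model
# predicting the scalarized (accuracy/latency/energy) reward on a device.
SCORES = {"conv3x3": 0.6, "conv5x5": 0.5, "depthwise": 0.8, "skip": 0.3}

class Node:
    def __init__(self, arch=()):
        self.arch = arch        # layers chosen so far (tuple of names)
        self.children = {}      # layer name -> Node
        self.visits = 0
        self.value = 0.0        # sum of rewards backed up through this node

def surrogate_score(arch):
    """Stand-in for a learned model of the scalarized reward."""
    return sum(SCORES[layer] for layer in arch) / len(arch)

def uct(parent, child, c=1.4):
    """Upper-confidence bound used to balance exploration/exploitation."""
    if child.visits == 0:
        return float("inf")     # always try unvisited children first
    return (child.value / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def search(iterations=200):
    root = Node()
    for _ in range(iterations):
        node, path = root, [root]
        # Selection/expansion: descend until the architecture is complete.
        while len(node.arch) < MAX_DEPTH:
            for choice in LAYER_CHOICES:
                if choice not in node.children:
                    node.children[choice] = Node(node.arch + (choice,))
            parent = node
            node = max(node.children.values(), key=lambda ch: uct(parent, ch))
            path.append(node)
        # Evaluation: score the completed architecture with the surrogate.
        reward = surrogate_score(node.arch)
        # Backpropagation: update statistics along the visited path.
        for n in path:
            n.visits += 1
            n.value += reward
    # Extract the most-visited complete architecture.
    node = root
    while node.children:
        node = max(node.children.values(), key=lambda ch: ch.visits)
    return node.arch
```

In the full system, the surrogate evaluation would be replaced by the learned model of the MBRL agent, conditioned on the hardware embedding from the second component.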


Research team(s)