SAFIR - Stability-aware Adaptable Framework for Industrial Reinforcement Learning. 01/12/2025 - 30/11/2029

Abstract

This project develops a novel framework that bridges classical control theory and reinforcement learning (RL) to ensure stable exploration, training, and inference in highly nonlinear, safety-critical systems. By leveraging data-driven techniques to learn Control Lyapunov Functions (CLFs) directly from closed-loop data, the proposed approach overcomes the challenges of traditional, model-based CLF design and the instability risks inherent in standard RL algorithms. The framework integrates three key components: (1) learning the plant's dynamics to predict and preempt unstable actions; (2) imitating a proven classical controller (e.g., a Sliding Mode Controller) to initialize RL policies; and (3) employing an actor-critic RL scheme in which the learned CLFs act as stability critics, triggering classical controller overrides when necessary; a sketch of this override mechanism is given below. Validation on a benchmark involving a Proton Exchange Membrane (PEM) fuel cell and water electrolyzer system will demonstrate the framework's ability not only to maintain stability but also to enhance overall control performance. The expected outcomes include improved safety and operational efficiency, as well as reduced economic and environmental impact, in industrial chemical processes.
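To illustrate component (3), the following is a minimal sketch of one plausible form of the CLF-based override: the learned dynamics model predicts the next state under the RL action, the learned CLF checks a decrease condition on that prediction, and the classical controller's action is used as a fallback when the condition fails. All names here (`clf_safety_filter`, `dynamics_model`, `clf`, `decrease_margin`) and the toy scalar system are illustrative assumptions, not details taken from the project description.

```python
def clf_safety_filter(state, rl_action, backup_action, dynamics_model, clf,
                      decrease_margin=1e-3):
    """Accept the RL action only if the learned CLF certifies a decrease
    along the predicted next state; otherwise fall back to the classical
    (e.g., sliding-mode) controller's action."""
    v_now = clf(state)
    v_next = clf(dynamics_model(state, rl_action))
    if v_next - v_now <= -decrease_margin:
        return rl_action          # CLF decreases: RL action is certified
    return backup_action          # CLF check fails: classical override

# Toy scalar example (purely illustrative): x' = 0.9*x + u, CLF V(x) = x^2,
# classical backup controller u = -0.9*x.
dynamics = lambda x, u: 0.9 * x + u
clf = lambda x: x ** 2
backup = lambda x: -0.9 * x

x = 2.0
proposed = 0.5                    # destabilizing RL action (pushes |x| up)
u = clf_safety_filter(x, proposed, backup(x), dynamics, clf)
print(u)                          # -1.8: the override is triggered
```

In a full actor-critic setup, the same decrease condition can also penalize the actor during training, so that overrides become rarer as the policy learns to stay inside the region certified by the learned CLF.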

Researcher(s)

Research team(s)

Project type(s)

  • Research Project