Process Systems and Control Engineering

PhD and MSc positions are available for Fall 2024. Please send your resume and transcripts to Prof. Liu for consideration.

Optimal Control

The process industry is currently undergoing a transition from the traditional process operations paradigm to the smart manufacturing paradigm. Smart manufacturing entails the extensive and intensified application of manufacturing intelligence throughout the manufacturing and supply chain enterprise. It involves the tight integration of different function modules through real-time communication, resulting in a coordinated and performance-oriented enterprise that takes into account economic performance, environmental sustainability, and health and safety. However, this transition to smart manufacturing presents significant challenges for process control and operations. Our focus has been on developing advanced process control techniques that facilitate the transition to smart manufacturing.

Economic Model Predictive Control (EMPC). Model Predictive Control (MPC) is a widely used advanced process control technique and a core form of manufacturing intelligence. Traditionally, the optimization of process operations is divided into two separate problems: an upper layer optimizes the operating set-points through real-time optimization (RTO), while a lower layer tracks these set-points using MPC. While this separation of optimization and control has been successful, it does not meet the requirements of smart manufacturing. Our research focuses on economic MPC, which eliminates the separation between optimization and control and directly optimizes economic performance. Our approach unifies regulatory control and economic optimization in the framework of EMPC with zone tracking.
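To illustrate the idea, the sketch below implements a receding-horizon economic MPC for a scalar linear system with a soft zone-tracking penalty. The plant model, economic stage cost, zone bounds, input bounds, and weights are all illustrative assumptions, not the group's actual formulation.

```python
# Minimal sketch of economic MPC with zone tracking for a scalar linear
# system x+ = a*x + b*u. All numbers below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

a, b = 0.9, 0.5          # assumed plant dynamics x+ = a*x + b*u
N = 10                   # prediction horizon
zone = (0.8, 1.2)        # target zone for the state
w_zone = 100.0           # weight on the zone-tracking penalty

def empc_cost(u_seq, x0):
    """Economic stage cost (here: input effort, a stand-in) plus a
    soft penalty on the distance of each predicted state from the zone."""
    x, cost = x0, 0.0
    for u in u_seq:
        x = a * x + b * u
        cost += 0.1 * u**2                        # assumed economic term
        dist = max(zone[0] - x, 0.0) + max(x - zone[1], 0.0)
        cost += w_zone * dist**2                  # zone-tracking term
    return cost

def empc_control(x0):
    """Solve the finite-horizon problem; apply only the first input."""
    res = minimize(empc_cost, np.zeros(N), args=(x0,),
                   bounds=[(-1.0, 1.0)] * N)
    return res.x[0]

# Closed loop: starting outside the zone, the controller steers the
# state toward the zone and balances the economic cost against the
# zone-tracking penalty at the zone boundary.
x = 0.0
for _ in range(30):
    x = a * x + b * empc_control(x)
```

Because the zone penalty is soft and the economic term rewards small inputs, the closed loop settles near the cheaper edge of the zone rather than at a fixed set-point, which is the essential difference from tracking MPC.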

Safe Reinforcement Learning. For large-scale nonlinear systems, MPC can be hindered by its high computational complexity. As an alternative for optimal control, reinforcement learning (RL), a prominent branch of machine learning, offers a promising solution by shifting the burden of complex optimization calculations to offline training based on a model. RL encompasses a class of optimal control algorithms in which an agent learns an optimal policy (closed-loop control law) by maximizing future rewards through iterative interactions with the environment. However, traditional RL approaches do not incorporate safety constraints in their design and do not guarantee closed-loop stability. Our research develops safe RL algorithms that explicitly address operational safety during training and ensure closed-loop stability of the learned policy. Our approach integrates the wealth of stability results from systems and control theory into the design, training, and online implementation of RL. By merging the strengths of RL and control theory, we aim to enhance the safety and stability of RL-based control systems, opening new avenues for reliable and robust applications.

Safe Operating Region Approximation. Accurately identifying the safe operating region of a process is critical for controller design and operational optimization: ensuring stability and efficiency requires delineating the boundaries within which a system can operate safely. Control invariant sets (CIS) are invaluable tools in this endeavor, characterizing the regions of a dynamical system's state space where desired control objectives can be achieved without violating safety constraints. However, computing a CIS can be an arduous task, particularly for nonlinear systems, whose inherent complexity poses significant challenges in establishing the boundaries of safe operation. In our research, we tackle this problem with approaches that combine graph-theoretic methods with adaptive subdivision, parallelization, and system decomposition techniques. By modeling the relationships and dependencies within the system as a graph, we construct an abstract representation of the system's dynamics and explore its behavior under various operating conditions, which aids in the analysis of safe operating regions.
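The graph-theoretic idea can be illustrated on a toy scalar system: grid the constraint set into cells, connect cell i to cell j if some admissible control maps a point of cell i into cell j, and repeatedly delete cells with no surviving successor; the cells that remain approximate the control invariant set. The dynamics, discretization, and centre-point sampling below are illustrative assumptions; a rigorous computation would bound the reachable set of each whole cell rather than sample its centre.

```python
# Minimal sketch of a graph-based CIS approximation for the assumed
# open-loop unstable scalar system x+ = 1.5*x + u, u in [-0.3, 0.3],
# on the constraint set X = [0, 1]. Cells with no successor remaining
# in the set are pruned until a fixed point is reached.
import numpy as np

n_cells = 20
edges = np.linspace(0.0, 1.0, n_cells + 1)    # uniform partition of X
centres = 0.5 * (edges[:-1] + edges[1:])
controls = np.linspace(-0.3, 0.3, 13)         # sampled admissible inputs

def successors(i):
    """Cells reachable from cell i's centre under some sampled control."""
    x_next = 1.5 * centres[i] + controls      # assumed dynamics
    inside = (x_next >= 0.0) & (x_next <= 1.0)
    return set(np.digitize(x_next[inside], edges[1:-1]))

alive = set(range(n_cells))
changed = True
while changed:                                # fixed-point pruning
    changed = False
    for i in list(alive):
        if not (successors(i) & alive):       # no control keeps i in the set
            alive.discard(i)
            changed = True
```

For this unstable system, cells near the upper boundary cannot be kept inside X by any admissible input and are pruned, so the surviving cells cover roughly the lower two-thirds of the interval. Adaptive subdivision, parallelization, and decomposition make the same pruning loop tractable in higher dimensions.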