Book Description
The performance of model predictive control (MPC) largely depends on the accuracy of the prediction model and of the constraints the system is subject to. However, obtaining accurate knowledge of these elements can be costly in terms of money and resources, if possible at all. In this thesis, we develop novel learning-based MPC frameworks that actively incentivize learning of the underlying system dynamics and constraints, while ensuring recursive feasibility, constraint satisfaction, and performance bounds for the closed loop.

In the first part, we focus on the case of inaccurate models and analyze learning-based MPC schemes that include, in addition to the primary cost, a learning cost that aims to generate informative data by inducing excitation in the system. In particular, we first propose a nonlinear MPC framework that ensures desired performance bounds for the resulting closed loop, and we then focus on linear systems subject to uncertain parameters and noisy output measurements. To guarantee that the desired learning phase occurs in closed-loop operation, we then propose an MPC framework that ensures closed-loop learning of the controlled system.

In the last part of the thesis, we investigate the scenario where the system is known but evolves in a partially unknown environment. In this setup, we focus on a learning-based MPC scheme that incentivizes safe exploration if and only if it may yield a performance improvement.