A randomized relaxation method to ensure feasibility in stochastic control of linear systems subject to state and input constraints

Deori L.;Garatti S.;Prandini M.
2020-01-01

Abstract

We consider a linear system affected by an additive stochastic disturbance and address the design of a finite-horizon control policy that is optimal according to some cost criterion and also accounts for probabilistic constraints on both the input and state variables. The resulting policy can be implemented over a receding horizon according to the model predictive control strategy. This possibility, however, is hampered by a feasibility issue that may arise when the policy is recomputed: infeasibility can indeed occur if the disturbance has unbounded support and the state is required to remain in a bounded set. In this paper, we propose a solution to this issue based on a constraint relaxation that becomes effective only when the original problem turns out to be infeasible. This is obtained via a cascade of two probabilistically constrained optimization problems: in the first one, performance is neglected and the policy is designed to fully recover feasibility or, if this is not possible, to determine the minimum level of relaxation needed to recover feasibility; in the second one, this minimum relaxation level is imposed while the control policy parameters are optimally (re-)tuned. Both problems are solved through a computationally tractable scenario-based scheme that uses a finite number of disturbance realizations and provides an approximate solution satisfying the original probabilistic constraints of the cascade with high confidence.
2020
Model predictive control; Randomized methods; Scenario approach; Stochastic constrained control
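The two-step cascade described in the abstract can be illustrated on a toy problem. The sketch below is only a minimal numerical illustration under simplifying assumptions not taken from the paper: a scalar system with open-loop inputs (the paper designs feedback policies and gives probabilistic guarantees via the scenario approach), illustrative system data and bounds, and linear-programming formulations of both steps. Step 1 minimizes the relaxation level of the state constraint over a set of extracted disturbance scenarios; step 2 fixes that minimum level and optimizes a performance cost.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical scalar system x_{t+1} = a*x_t + b*u_t + w_t.
a, b = 1.0, 1.0
T, M = 3, 50                       # horizon, number of scenarios
x0 = 2.0
x_max, u_max = 1.0, 0.4            # state bound (relaxable), hard input bound
W = rng.normal(0.0, 0.3, (M, T))   # extracted disturbance scenarios

# x_t = a^t x0 + sum_{k<t} a^(t-1-k) (b u_k + w_k): affine in u.
G = np.zeros((T, T))               # G[t-1, k] maps u_k to x_t
for t in range(1, T + 1):
    for k in range(t):
        G[t - 1, k] = a ** (t - 1 - k) * b
c_free = np.array([a ** t * x0 for t in range(1, T + 1)])     # noise-free drift
D = np.array([[sum(a ** (t - 1 - k) * W[i, k] for k in range(t))
               for t in range(1, T + 1)] for i in range(M)])  # scenario offsets

# --- Step 1: minimize the relaxation rho in |x_t| <= x_max + rho ---------
# Variables z = [u_0..u_{T-1}, rho]; for every scenario i and time t:
#   +x_t^i - rho <= x_max   and   -x_t^i - rho <= x_max.
rows, rhs = [], []
for i in range(M):
    for t in range(T):
        rows.append(np.append(G[t], -1.0))
        rhs.append(x_max - c_free[t] - D[i, t])
        rows.append(np.append(-G[t], -1.0))
        rhs.append(x_max + c_free[t] + D[i, t])
res1 = linprog(np.append(np.zeros(T), 1.0),
               A_ub=np.array(rows), b_ub=np.array(rhs),
               bounds=[(-u_max, u_max)] * T + [(0, None)])
rho_star = res1.x[-1]
rho = rho_star + 1e-9              # tiny tolerance against solver round-off

# --- Step 2: fix rho and minimize a performance cost ---------------------
# Here the cost is the input effort sum_t |u_t|, modeled with auxiliary
# variables s_t >= |u_t|; variables z = [u_0..u_{T-1}, s_0..s_{T-1}].
rows2, rhs2 = [], []
for i in range(M):
    for t in range(T):
        rows2.append(np.concatenate([G[t], np.zeros(T)]))
        rhs2.append(x_max + rho - c_free[t] - D[i, t])
        rows2.append(np.concatenate([-G[t], np.zeros(T)]))
        rhs2.append(x_max + rho + c_free[t] + D[i, t])
for k in range(T):                 # |u_k| <= s_k as two linear constraints
    e = np.zeros(2 * T); e[k], e[T + k] = 1.0, -1.0
    rows2.append(e); rhs2.append(0.0)
    e = np.zeros(2 * T); e[k], e[T + k] = -1.0, -1.0
    rows2.append(e); rhs2.append(0.0)
res2 = linprog(np.concatenate([np.zeros(T), np.ones(T)]),
               A_ub=np.array(rows2), b_ub=np.array(rhs2),
               bounds=[(-u_max, u_max)] * T + [(0, None)] * T)
u_star = res2.x[:T]
print(f"minimum relaxation rho* = {rho_star:.3f}, inputs = {np.round(u_star, 3)}")
```

With these illustrative numbers the initial state lies outside the reachable safe set, so step 1 returns a strictly positive relaxation level; if the problem had been feasible without relaxation, step 1 would return rho* = 0 and step 2 would reduce to the nominal scenario program, matching the abstract's point that the relaxation becomes effective only when needed.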
Files in this record:

sMPC_relax_h.pdf

open access

Pre-Print (or Pre-Refereeing)
Size: 353.65 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1134112
Citations
  • PMC: n/a
  • Scopus: 4
  • Web of Science (ISI): 3