
Backward SDEs and infinite horizon stochastic optimal control

F. Confortola;
2018-01-01

Abstract

We study an optimal control problem on infinite horizon for a controlled stochastic differential equation driven by Brownian motion, with a discounted reward functional. The equation may have memory or delay effects in the coefficients, both with respect to state and control, and the noise can be degenerate. We prove that the value, i.e. the supremum of the reward functional over all admissible controls, can be represented by the solution of an associated backward stochastic differential equation (BSDE) driven by the Brownian motion and an auxiliary independent Poisson process and having a sign constraint on jumps. In the Markovian case, when the coefficients depend only on the present values of the state and the control, we prove that the BSDE can be used to construct the solution, in the sense of viscosity theory, to the corresponding Hamilton-Jacobi-Bellman partial differential equation of elliptic type on the whole space, so that it provides us with a Feynman-Kac representation in this fully nonlinear context. The method of proof consists in showing that the value of the original problem is the same as the value of an auxiliary optimal control problem (called randomized), where the control process is replaced by a fixed pure jump process and maximization is taken over a class of absolutely continuous changes of measure which affect the stochastic intensity of the jump process but leave the law of the driving Brownian motion unchanged.
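For orientation, the following is a schematic, purely illustrative sketch of the objects mentioned above, written in the Markovian case with generic notation; the symbols $b$, $\sigma$, $f$, the discount rate $\beta$ and the control set $A$ are placeholders and not necessarily the notation used in the paper. The controlled state equation and discounted reward are of the form
\[
dX^{\alpha}_s = b(X^{\alpha}_s,\alpha_s)\,ds + \sigma(X^{\alpha}_s,\alpha_s)\,dW_s, \qquad X^{\alpha}_0 = x,
\]
\[
J(x,\alpha) = \mathbb{E}\!\left[\int_0^{\infty} e^{-\beta s} f(X^{\alpha}_s,\alpha_s)\,ds\right], \qquad V(x) = \sup_{\alpha\ \text{admissible}} J(x,\alpha),
\]
and the associated elliptic Hamilton-Jacobi-Bellman equation on the whole space reads
\[
\beta\, v(x) = \sup_{a\in A}\Big\{ \tfrac{1}{2}\,\mathrm{Tr}\big(\sigma\sigma^{\top}(x,a)\,D^2 v(x)\big) + b(x,a)\cdot Dv(x) + f(x,a)\Big\}, \qquad x\in\mathbb{R}^n.
\]
The representation result described in the abstract identifies $V(x)$ with the initial value of an infinite-horizon BSDE driven by $W$ and an auxiliary independent Poisson process, subject to a sign constraint on its jump component.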
stochastic optimal control, backward SDEs, randomization of controls
Files in this record:

File: orizzonte_infinito_finale_revised-cocv.pdf
Access: open access
Type: Post-Print (DRAFT or Author's Accepted Manuscript, AAM)
Size: 464.77 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1083394
Citations
  • PMC: ND
  • Scopus: 2
  • ISI Web of Science: 2