Publication: Discrete-time control with non-constant discount factor
Date
2020-06-20
Authors
Jasso-Fuentes, H.; Menaldi, J.L.; Prieto-Rumeau, T.
Access rights
info:eu-repo/semantics/openAccess
Journal title
Mathematical Methods of Operations Research
Publisher
Springer Nature
Abstract
This paper deals with discrete-time Markov decision processes (MDPs) with Borel state and action spaces, under the total expected discounted cost optimality criterion. We assume that the discount factor is not constant: it may depend on the state and action; moreover, it can even take the extreme values zero or one. We propose sufficient conditions on the data of the model ensuring the existence of optimal control policies and allowing the characterization of the optimal value function as a solution to the dynamic programming equation. As a particular case of these MDPs with varying discount factor, we study MDPs with stopping, as well as the corresponding optimal stopping times and contact set. We show applications to switching MDP models and, in particular, we study a pollution accumulation problem.
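The abstract characterizes the optimal value function as a solution to a dynamic programming equation in which the discount factor depends on the state and action. As a rough illustrative sketch only (a hypothetical finite toy model, not the paper's Borel-space framework, with the discount capped strictly below one so plain value iteration contracts, whereas the paper also handles the extreme values zero and one), one could iterate the corresponding operator as follows; all names and data below are invented for the example.

```python
import numpy as np

# Hypothetical toy instance: finite state/action spaces standing in for the
# paper's Borel spaces; one-stage costs c[x, a], transition kernel P[x, a, y],
# and a state-action-dependent discount factor alpha[x, a] in [0, 1).
rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
c = rng.uniform(0.0, 1.0, size=(n_states, n_actions))       # one-stage costs
P = rng.uniform(size=(n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)                            # row-stochastic kernel
alpha = rng.uniform(0.0, 0.95, size=(n_states, n_actions))   # non-constant discount

def value_iteration(c, P, alpha, tol=1e-10, max_iter=10_000):
    """Iterate the dynamic programming operator
    (TV)(x) = min_a { c(x, a) + alpha(x, a) * sum_y P(y | x, a) V(y) }."""
    V = np.zeros(c.shape[0])
    for _ in range(max_iter):
        Q = c + alpha * (P @ V)      # Q[x, a] = cost plus discounted expected value
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = (c + alpha * (P @ V)).argmin(axis=1)   # greedy (stationary) policy
    return V, policy

V_star, policy = value_iteration(c, P, alpha)
print("optimal value function:", np.round(V_star, 4))
print("greedy policy:", policy)
```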
Keywords
Markov decision processes, dynamic programming, optimal stopping problems
Citation
Jasso-Fuentes, H., Menaldi, JL. & Prieto-Rumeau, T. Discrete-time control with non-constant discount factor. Math Meth Oper Res 92, 377–399 (2020). https://link.springer.com/article/10.1007/s00186-020-00716-8 https://doi.org/10.1007/s00186-020-00716-8
Center
Faculties and schools::Facultad de Ciencias
Department
Estadística, Investigación Operativa y Cálculo Numérico