Partially Observable Total-Cost Markov Decision Processes with Weakly Continuous Transition Probabilities
dc.contributor.author | Zgurovsky, M. Z. | |
dc.contributor.author | Kasyanov, P. O. | |
dc.contributor.author | Feinberg, E. A. | |
dc.contributor.author | Згуровський, Михайло Захарович | |
dc.contributor.author | Касьянов, Павло Олегович | |
dc.date.accessioned | 2017-06-29T09:16:07Z | |
dc.date.available | 2017-06-29T09:16:07Z | |
dc.date.issued | 2016 | |
dc.description.abstracten | This paper describes sufficient conditions for the existence of optimal policies for partially observable Markov decision processes (POMDPs) with Borel state, observation, and action sets, when the goal is to minimize the expected total costs over finite or infinite horizons. For infinite-horizon problems, one-step costs are either discounted or assumed to be nonnegative. Action sets may be noncompact and one-step cost functions may be unbounded. The introduced conditions are also sufficient for the validity of optimality equations, semicontinuity of value functions, and convergence of value iterations to optimal values. Since POMDPs can be reduced to completely observable Markov decision processes (COMDPs), whose states are posterior state distributions, this paper focuses on the validity of the above-mentioned optimality properties for COMDPs. The central question is whether the transition probabilities for the COMDP are weakly continuous. We introduce sufficient conditions for this and show that the transition probabilities for a COMDP are weakly continuous, if transition probabilities of the underlying Markov decision process are weakly continuous and observation probabilities for the POMDP are continuous in total variation. Moreover, the continuity in total variation of the observation probabilities cannot be weakened to setwise continuity. The results are illustrated with counterexamples and examples. | uk |
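The abstract describes the reduction of a POMDP to a completely observable MDP (COMDP) whose states are posterior state distributions. For finite state and observation sets, that posterior is the standard Bayes update of the belief after an action and an observation. The sketch below is a minimal illustration of this step only; the names `belief_update`, `P`, and `Q` are hypothetical and not taken from the paper, which treats general Borel state, observation, and action sets.

```python
# Minimal sketch (hypothetical finite-state example, not from the paper):
# Bayes update of the posterior state distribution ("belief") underlying the
# reduction of a POMDP to a COMDP on belief states.
import numpy as np

def belief_update(b, a, o, P, Q):
    """Posterior state distribution after taking action a and observing o.

    b : (n,) current belief over states
    P : (n_actions, n, n) transition probabilities, P[a, x, x'] = Pr(x' | x, a)
    Q : (n_actions, n, n_obs) observation probabilities, Q[a, x', o] = Pr(o | x', a)
    """
    predicted = b @ P[a]                    # Pr(x' | b, a): predicted next-state distribution
    unnormalized = predicted * Q[a][:, o]   # Pr(x', o | b, a)
    z = unnormalized.sum()                  # Pr(o | b, a); assumed positive for the observed o
    return unnormalized / z                 # posterior Pr(x' | b, a, o)
```

The COMDP then evolves on these posteriors, which is why the paper's central question is whether its transition probabilities inherit weak continuity from those of the underlying MDP and the observation kernel.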
dc.format.pagerange | P. 656–681 | uk |
dc.identifier.citation | Zgurovsky, M. Z. Partially Observable Total-Cost Markov Decision Processes with Weakly Continuous Transition Probabilities / Michael Z. Zgurovsky, Eugene A. Feinberg, Pavlo O. Kasyanov // Mathematics of Operations Research. – Vol. 41, No. 2. – 2016. – P. 656–681. – DOI: 10.1287/moor.2015.0746 | uk |
dc.identifier.doi | 10.1287/moor.2015.0746 | |
dc.identifier.uri | https://ela.kpi.ua/handle/123456789/19880 | |
dc.language.iso | en | uk |
dc.source.name | Mathematics of Operations Research. – Vol. 41. – No. 2. – 2016. | uk |
dc.status.pub | published | uk |
dc.subject | partially observable Markov decision processes | uk |
dc.subject | total cost | uk |
dc.subject | optimality inequality | uk |
dc.subject | optimal policy | uk |
dc.title | Partially Observable Total-Cost Markov Decision Processes with Weakly Continuous Transition Probabilities | uk |
dc.type | Article | uk |
Files
File container
- Name: feinberg2016.pdf
- Size: 453.97 KB
- Format: Adobe Portable Document Format
License agreement
- Name: license.txt
- Size: 7.8 KB
- Format: Item-specific license agreed upon to submission