Please use this identifier to cite or link to this item: https://ela.kpi.ua/handle/123456789/19880
Full metadata record
DC Field | Value | Language
dc.contributor.author | Zgurovsky, M. Z. | -
dc.contributor.author | Kasyanov, P. O. | -
dc.contributor.author | Feinberg, E. A. | -
dc.contributor.author | Згуровський, Михайло Захарович | -
dc.contributor.author | Касьянов, Павло Олегович | -
dc.date.accessioned | 2017-06-29T09:16:07Z | -
dc.date.available | 2017-06-29T09:16:07Z | -
dc.date.issued | 2016 | -
dc.identifier.citation | Zgurovsky, M. Z. Partially Observable Total-Cost Markov Decision Processes with Weakly Continuous Transition Probabilities / Michael Z. Zgurovsky, Eugene A. Feinberg, Pavlo O. Kasyanov // Mathematics of Operations Research. – Vol. 41, No. 2. – 2016. – P. 656–681. – DOI: 10.1287/moor.2015.0746 | uk
dc.identifier.uri | https://ela.kpi.ua/handle/123456789/19880 | -
dc.language.iso | en | uk
dc.subject | partially observable Markov decision processes | uk
dc.subject | total cost | uk
dc.subject | optimality inequality | uk
dc.subject | optimal policy | uk
dc.title | Partially Observable Total-Cost Markov Decision Processes with Weakly Continuous Transition Probabilities | uk
dc.type | Article | uk
thesis.degree.level | - | uk
dc.format.pagerange | P. 656–681 | uk
dc.status.pub | published | uk
dc.source.name | Mathematics of Operations Research. – Vol. 41. – No. 2. – 2016. | uk
dc.identifier.doi | 10.1287/moor.2015.0746 | -
dc.description.abstracten | This paper describes sufficient conditions for the existence of optimal policies for partially observable Markov decision processes (POMDPs) with Borel state, observation, and action sets, when the goal is to minimize the expected total costs over finite or infinite horizons. For infinite-horizon problems, one-step costs are either discounted or assumed to be nonnegative. Action sets may be noncompact and one-step cost functions may be unbounded. The introduced conditions are also sufficient for the validity of optimality equations, semicontinuity of value functions, and convergence of value iterations to optimal values. Since POMDPs can be reduced to completely observable Markov decision processes (COMDPs), whose states are posterior state distributions, this paper focuses on the validity of the above-mentioned optimality properties for COMDPs. The central question is whether the transition probabilities for the COMDP are weakly continuous. We introduce sufficient conditions for this and show that the transition probabilities for a COMDP are weakly continuous, if transition probabilities of the underlying Markov decision process are weakly continuous and observation probabilities for the POMDP are continuous in total variation. Moreover, the continuity in total variation of the observation probabilities cannot be weakened to setwise continuity. The results are illustrated with counterexamples and examples. | uk
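The POMDP-to-COMDP reduction mentioned in the abstract rests on the standard Bayesian posterior (belief) update. A minimal sketch for the finite state and observation case (the paper itself works with general Borel spaces; the symbols z, p, q, x, y, a below are illustrative and not taken from the paper):

```latex
% Belief update (standard construction; notation is illustrative).
% z_t(x)          : posterior probability of state x after t steps,
% p(x' \mid x, a) : state transition probability of the underlying MDP,
% q(y \mid a, x') : probability of observation y after action a led to state x'.
z_{t+1}(x') \;=\;
  \frac{q(y_{t+1} \mid a_t, x') \sum_{x} p(x' \mid x, a_t)\, z_t(x)}
       {\sum_{x''} q(y_{t+1} \mid a_t, x'') \sum_{x} p(x'' \mid x, a_t)\, z_t(x)}
```

The belief z_{t+1} is the state of the COMDP; the paper's central question is when the induced transition kernel on these posterior distributions is weakly continuous.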
Appears in collections: Articles (MMSA)

Files in this item:
File | Description | Size | Format
feinberg2016.pdf | | 453.97 kB | Adobe PDF


All items in the electronic archive are protected by copyright; all rights reserved.