Journal: Operations Research [Institute for Operations Research and the Management Sciences] · Date: 2025-05-26
Identifiers
DOI:10.1287/opre.2022.0643
Abstract
Controlling stochastic systems often relies on the assumption of Markovian dynamics. However, this assumption frequently breaks down in mission-critical systems subject to failures—such as drones for power grid inspection—where the failure rate increases over time. To enhance system survivability, operators may abort a mission based on noisy condition-monitoring signals. Determining the optimal abort time in such settings, however, leads to an intractable stopping problem under partial observability and non-Markovian dynamics. In “Optimal Abort Policy for Mission-Critical Systems Under Imperfect Condition Monitoring,” Sun, Hu, and Ye introduce a novel Erlang mixture-based approximation that transforms the original non-Markovian process into continuous-time Markov chains. This approximation enables the formulation of partially observable Markov decision processes (POMDPs), whose optimal policies are shown to converge almost surely to the original optimal abort decision rules as the Erlang rate increases. Structural properties of the optimal POMDP policy are established, and a modified point-based value iteration algorithm is proposed to numerically solve the POMDP.
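The approximation idea can be illustrated with a small, self-contained sketch. The classic common-rate Erlang mixture construction (due to Tijms; the paper's exact construction may differ) approximates any continuous lifetime distribution F by mixing Erlang(k, λ) phases with weights F(k/λ) − F((k−1)/λ); as the Erlang rate λ grows, the mixture converges to F. The Weibull target, the function names, and the parameter choices below are illustrative assumptions, not the authors' code:

```python
import math

def weibull_cdf(t, shape=2.0, scale=1.0):
    # Illustrative non-exponential lifetime distribution; shape > 1 gives
    # the increasing failure rate typical of degrading mission systems.
    return 1.0 - math.exp(-((t / scale) ** shape)) if t > 0 else 0.0

def erlang_cdf(t, k, lam):
    # P(Erlang(k, lam) <= t) = P(Poisson(lam * t) >= k), computed by
    # accumulating Poisson terms iteratively for numerical stability.
    if t <= 0:
        return 0.0
    x = lam * t
    term = math.exp(-x)          # P(N = 0)
    poisson_below_k = term
    for n in range(1, k):
        term *= x / n            # P(N = n) from P(N = n - 1)
        poisson_below_k += term
    return 1.0 - poisson_below_k

def erlang_mixture_cdf(t, lam, num_phases):
    # Common-rate Erlang mixture: phase k carries the probability mass
    # of the target distribution on the interval ((k-1)/lam, k/lam].
    total = 0.0
    for k in range(1, num_phases + 1):
        weight = weibull_cdf(k / lam) - weibull_cdf((k - 1) / lam)
        total += weight * erlang_cdf(t, k, lam)
    return total
```

Because each Erlang phase is a chain of exponential sojourns, the mixture is a continuous-time Markov chain, which is what makes the subsequent POMDP formulation tractable; increasing λ (with proportionally more phases) tightens the approximation, mirroring the paper's almost-sure convergence result.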