Minimizing the Age of Incorrect Information for Unknown Markovian Source
Age of Information minimization problems have been extensively studied in the framework of real-time monitoring applications. In this paper, we consider the problem of monitoring the state of an unknown remote source that evolves according to a Markovian process. At each time slot, a central scheduler decides whether or not to schedule the source in order to receive new status updates, so as to minimize the Mean Age of Incorrect Information (MAoII). When the scheduler knows the source parameters, we formulate the minimization problem as a Markov Decision Process (MDP) and prove that the optimal solution is a threshold-based policy. When the source's parameters are unknown, the difficulty lies in finding a strategy that achieves a good trade-off between exploitation and exploration: the scheduler must jointly estimate the unknown parameters (exploration) and minimize the MAoII (exploitation). However, in our system model, the source can be explored only when the monitor decides to schedule it. Under a greedy approach, we therefore risk halting the exploration process permanently: if, at some point, the estimated parameters of the Markovian source yield an optimal policy that never transmits, the estimates can no longer be improved, which may significantly degrade the algorithm's performance. To avoid this problem, we develop a new learning algorithm that strikes a good balance between exploration and exploitation. We then theoretically analyze its performance with respect to a genie solution by proving that the regret up to time T scales as log(T). Finally, we provide numerical results highlighting the performance of our derived policy compared to the greedy approach.
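To illustrate the exploration deadlock described above, the following Python sketch simulates a symmetric two-state Markovian source monitored with a belief-threshold rule plus a deterministic forced-exploration schedule. This is a minimal sketch under stated assumptions, not the paper's algorithm: the two-state source with flip probability p, the threshold on the estimated probability of being incorrect, the power-of-two forced-exploration slots, and all names (simulate, q_incorrect, belief_threshold, p_hat) are illustrative and hypothetical. It only shows the qualitative point that a purely greedy rule can stop transmitting forever (e.g. when the current estimate is p_hat = 0), whereas occasional forced probes keep the estimate improving.

```python
import random


def q_incorrect(p, delta):
    """P(current state differs from the last received state) after delta
    slots, for a symmetric two-state chain with flip probability p."""
    return 0.5 * (1.0 - (1.0 - 2.0 * p) ** delta)


def simulate(horizon=20_000, p=0.1, belief_threshold=0.3, seed=0):
    """Toy simulation (illustrative assumptions only): schedule the source
    when the estimated probability of being incorrect exceeds a threshold,
    and add forced exploration at slots 1, 2, 4, 8, ... so the estimate
    p_hat keeps improving even when the greedy rule would never transmit."""
    rng = random.Random(seed)
    true_state, known_state = 0, 0
    delta = 0                        # slots since the last received update
    aoii, total_aoii = 0, 0          # true AoII, used only to report performance
    flips, pairs = 0, 0              # one-step transition statistics
    last_obs, last_obs_time = None, None
    sample_next = False
    p_hat = 0.0                      # initial estimate: greedy alone would never schedule

    for t in range(1, horizon + 1):
        # Source evolves: symmetric flip with probability p.
        if rng.random() < p:
            true_state ^= 1
        delta += 1

        # True AoII (unknown to the scheduler), tracked for evaluation.
        aoii = aoii + 1 if known_state != true_state else 0
        total_aoii += aoii

        # Threshold decision on the estimated error probability, plus forced
        # exploration at powers of two (two consecutive probes each time, so
        # a genuine one-step transition of the source is observed).
        forced = (t & (t - 1)) == 0
        if q_incorrect(p_hat, delta) >= belief_threshold or forced or sample_next:
            known_state = true_state
            delta, aoii = 0, 0
            if last_obs is not None and last_obs_time == t - 1:
                pairs += 1
                flips += int(last_obs != true_state)
                p_hat = flips / pairs
            last_obs, last_obs_time = true_state, t
            sample_next = forced

    return total_aoii / horizon, p_hat


if __name__ == "__main__":
    mean_aoii, p_hat = simulate()
    print(f"time-average AoII = {mean_aoii:.3f}, estimated p = {p_hat:.3f}")
```

Note that the forced probes above occur only O(log T) times over a horizon T; this is merely one simple way to keep exploration alive and is not claimed to be the mechanism behind the paper's log(T) regret guarantee.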