A Computational Theory of Robust Localization Verifiability in the Presence of Pure Outlier Measurements
The problem of localizing a set of nodes from relative pairwise measurements is at the core of many applications such as Structure from Motion (SfM), sensor networks, and Simultaneous Localization And Mapping (SLAM). In practice, the accuracy of the relative measurements is marred by noise and outliers, raising the problem of quantifying how much we should trust the solution returned by a given localization solver. In this work, we focus on the question of whether an L1-norm robust optimization formulation can recover a solution identical to the ground truth in the scenario where translation-only measurements are corrupted exclusively by outliers (no noise); we call this concept verifiability. On the theoretical side, we prove that the verifiability of a problem depends only on the topology of the measurement graph, the edge support of the outliers, and their signs, and is independent of the ground-truth locations of the nodes and of any positive scaling of the outliers. On the computational side, we present a novel approach based on the dual simplex algorithm that can check the verifiability of a problem, completely characterize the space of equivalent solutions if they exist, and identify verifiable subgraphs. As an application of our theory, we give a procedure to compute the a priori probability of recovering a solution congruent or equivalent to the ground truth, given a measurement graph and the probability of each edge containing an outlier.
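The abstract does not include code; the following is a minimal sketch, not the paper's implementation, of the kind of L1-norm formulation it refers to: translation-only localization on a measurement graph, posed as a linear program and solved with SciPy's HiGHS dual simplex (`method="highs-ds"`). The 1-D setting is illustrative only; with translation-only measurements the coordinates decouple under the L1 norm, so higher dimensions can be handled per coordinate. The graph, the injected outlier, and the `l1_localize` helper are hypothetical examples, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def l1_localize(n, edges, t, anchor=0):
    """Solve min_x sum_{(i,j)} |x_j - x_i - t_ij| with x[anchor] = 0,
    posed as an LP and solved with the HiGHS dual simplex."""
    m = len(edges)
    # Variables: [x_0, ..., x_{n-1}, s_0, ..., s_{m-1}] with s_k >= |residual_k|.
    c = np.concatenate([np.zeros(n), np.ones(m)])   # minimize the sum of slacks
    A_ub = np.zeros((2 * m, n + m))
    b_ub = np.zeros(2 * m)
    for k, (i, j) in enumerate(edges):
        #  (x_j - x_i - t_k) <= s_k
        A_ub[2 * k, j], A_ub[2 * k, i], A_ub[2 * k, n + k] = 1, -1, -1
        b_ub[2 * k] = t[k]
        # -(x_j - x_i - t_k) <= s_k
        A_ub[2 * k + 1, j], A_ub[2 * k + 1, i], A_ub[2 * k + 1, n + k] = -1, 1, -1
        b_ub[2 * k + 1] = -t[k]
    # Pin the anchor node to remove the global-translation ambiguity.
    A_eq = np.zeros((1, n + m))
    A_eq[0, anchor] = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[0.0],
                  bounds=[(None, None)] * n + [(0, None)] * m,
                  method="highs-ds")
    return res.x[:n]

# Toy check: clean measurements everywhere except one gross outlier, no noise.
x_true = np.array([0.0, 1.0, 3.0, 2.5])
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
t = np.array([x_true[j] - x_true[i] for i, j in edges])
t[2] += 10.0                                        # pure outlier on edge (2, 3)
x_hat = l1_localize(len(x_true), edges, t)
print(np.allclose(x_hat - x_hat[0], x_true - x_true[0], atol=1e-6))
```

Fixing one anchor node removes the global-translation gauge, so "identical to the ground truth" is checked up to that common shift. If the printed comparison holds for every outlier configuration of interest, the corresponding instances behave as verifiable in the sense sketched above; whether it holds depends, as the abstract states, on the graph topology and on the edge support and signs of the outliers, not on the node locations or the outlier magnitudes.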