Exploiting Trust for Resilient Hypothesis Testing with Malicious Robots (evolved version)

03/07/2023
by Matthew Cavorsi, et al.

We develop a resilient binary hypothesis testing framework for decision making in adversarial multi-robot crowdsensing tasks. This framework exploits stochastic trust observations between robots to arrive at tractable, resilient decision making at a centralized Fusion Center (FC) even when i) there exist malicious robots in the network and their number may be larger than the number of legitimate robots, and ii) the FC uses one-shot noisy measurements from all robots. We derive two algorithms to achieve this. The first is the Two Stage Approach (2SA) that estimates the legitimacy of robots based on received trust observations, and provably minimizes the probability of detection error in the worst-case malicious attack. Here, the proportion of malicious robots is known but arbitrary. For the case of an unknown proportion of malicious robots, we develop the Adversarial Generalized Likelihood Ratio Test (A-GLRT) that uses both the reported robot measurements and trust observations to estimate the trustworthiness of robots, their reporting strategy, and the correct hypothesis simultaneously. We exploit special problem structure to show that this approach remains computationally tractable despite several unknown problem parameters. We deploy both algorithms in a hardware experiment where a group of robots conducts crowdsensing of traffic conditions on a mock-up road network similar in spirit to Google Maps, subject to a Sybil attack. We extract the trust observations for each robot from actual communication signals which provide statistical information on the uniqueness of the sender. We show that even when the malicious robots are in the majority, the FC can reduce the probability of detection error to 30.5%.
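To make the two-stage idea concrete, below is a minimal Python sketch, not the paper's implementation. It assumes each robot submits a single binary report, that legitimate reports follow a simple Bernoulli sensing model with accuracy `p_correct`, and that a robot is deemed legitimate when its averaged trust observations exceed a fixed threshold. The function name `two_stage_decision`, the threshold of 0.5, and the sensing model are illustrative assumptions; the actual 2SA selects its decision rule to provably minimize the worst-case probability of detection error.

```python
import numpy as np


def two_stage_decision(measurements, trust_obs, trust_threshold=0.5,
                       p_correct=0.8, prior_h1=0.5):
    """Toy two-stage decision in the spirit of the 2SA described above.

    Stage 1: estimate robot legitimacy by thresholding the average of each
    robot's stochastic trust observations.
    Stage 2: run a likelihood ratio test on the binary reports of the robots
    deemed legitimate, assuming each legitimate robot reports the true
    hypothesis correctly with probability `p_correct` (an assumed model).

    measurements : (n,) array of binary reports y_i in {0, 1}
    trust_obs    : (n, T) array of trust observations alpha_{i,t} in [0, 1]
    Returns the decided hypothesis, 0 or 1.
    """
    trust_scores = np.asarray(trust_obs).mean(axis=1)
    legit = trust_scores > trust_threshold            # Stage 1: legitimacy estimate
    y = np.asarray(measurements)[legit]

    if y.size == 0:                                   # no robot trusted: fall back to the prior
        return int(prior_h1 >= 0.5)

    # Stage 2: log-likelihood ratio of H1 vs H0 for i.i.d. Bernoulli reports.
    # Under H1 each trusted robot reports 1 w.p. p_correct; under H0, w.p. 1 - p_correct.
    k, n = y.sum(), y.size
    llr = (k * np.log(p_correct / (1 - p_correct))
           + (n - k) * np.log((1 - p_correct) / p_correct))
    return int(llr >= np.log((1 - prior_h1) / prior_h1))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 4 legitimate robots (one of them mistaken) report on true hypothesis H = 1,
    # while 6 malicious robots all report 0. Trust observations separate the groups.
    reports = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
    trust = np.vstack([rng.uniform(0.6, 1.0, (4, 20)),   # legitimate: high trust values
                       rng.uniform(0.0, 0.4, (6, 20))])  # malicious: low trust values
    print(two_stage_decision(reports, trust))            # prints 1 despite the malicious majority
```

Even with a malicious majority, the trusted subset drives the decision, which is the mechanism the abstract highlights; the A-GLRT extends this by jointly estimating trustworthiness, the adversarial reporting strategy, and the hypothesis when the proportion of malicious robots is unknown.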

Related research

09/25/2022
Exploiting Trust for Resilient Hypothesis Testing with Malicious Robots
We develop a resilient binary hypothesis testing framework for decision ...

04/02/2023
Dynamic Crowd Vetting: Collaborative Detection of Malicious Robots in Dynamic Communication Networks
Coordination in a large number of networked robots is a challenging task...

06/23/2022
Probabilistically Resilient Multi-Robot Informative Path Planning
In this paper, we solve a multi-robot informative path planning (MIPP) t...

12/04/2021
Generalized Likelihood Ratio Test for Adversarially Robust Hypothesis Testing
Machine learning models are known to be susceptible to adversarial attac...

12/11/2020
Crowd Vetting: Rejecting Adversaries via Collaboration–with Application to Multi-Robot Flocking
We characterize the advantage of using a robot's neighborhood to find an...

12/05/2022
Learning Trust Over Directed Graphs in Multiagent Systems (extended version)
We address the problem of learning the legitimacy of other agents in a m...

10/17/2018
Algorithms and Fundamental Limits for Unlabeled Detection using Types
Emerging applications of sensor networks for detection sometimes suggest...