Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments

08/31/2017
by Rohollah Moghadam et al.

An autonomous and resilient controller is proposed for leader-follower multi-agent systems subject to uncertainties and cyber-physical attacks. The leader is assumed to be non-autonomous, with a nonzero control input, which allows the team behavior or mission to be changed in response to environmental conditions. A two-layer, resilient, learning-based control protocol is presented to find optimal solutions to the synchronization problem in the presence of attacks and system dynamic uncertainties. In the first security layer, an observer-based distributed H-infinity controller is designed to prevent the effects of attacks on sensors and actuators from propagating through the network, and to attenuate the effect of these attacks on the compromised agent itself. Non-homogeneous game algebraic Riccati equations are derived to solve the H-infinity optimal synchronization problem, and an off-policy reinforcement learning algorithm is used to learn their solutions without requiring any knowledge of the agents' dynamics. In the second security layer, a trust/confidence-based distributed control protocol is proposed to mitigate attacks that hijack an entire node as well as attacks on communication links. A confidence value is defined for each agent based only on its local evidence. In the proposed RL algorithm, each agent uses its confidence value to indicate the trustworthiness of its own information and broadcasts it to its neighbors, which use it to weight the data they receive from that agent during and after learning. If an agent's confidence value is low, it employs a trust mechanism to identify compromised neighbors and removes the data received from them from the learning process. Simulation results are provided to show the effectiveness of the proposed approach.
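To make the second-layer mechanism concrete, the sketch below illustrates one way a confidence/trust-weighted synchronization update of this kind could look. It is a minimal illustration, not the paper's implementation: the exponential confidence and trust maps, the threshold values, and the function names (local_confidence, neighbor_trust, resilient_update) are assumptions made here for readability.

```python
import numpy as np

# Hypothetical sketch of the trust/confidence-weighted update described above.
# The maps, thresholds, and names are illustrative assumptions, not the
# paper's exact equations.

def local_confidence(residual, kappa=2.0):
    """Map a local attack-detection residual (e.g., an observer innovation norm)
    to a confidence value in (0, 1]; a small residual yields high confidence."""
    return float(np.exp(-kappa * np.linalg.norm(residual)))

def neighbor_trust(own_state, neighbor_state, sigma=1.0):
    """Trust in a neighbor, based on how consistent its broadcast state is
    with the agent's own local estimate."""
    return float(np.exp(-np.linalg.norm(own_state - neighbor_state) / sigma))

def resilient_update(x_i, neighbors, own_conf, conf_threshold=0.5,
                     trust_threshold=0.3, step=0.1):
    """One synchronization step for agent i.

    neighbors: list of (x_j, conf_j) pairs, where conf_j is the confidence
    value broadcast by neighbor j about its own information.
    """
    total = np.zeros_like(x_i, dtype=float)
    weight_sum = 0.0
    for x_j, conf_j in neighbors:
        if own_conf < conf_threshold and neighbor_trust(x_i, x_j) < trust_threshold:
            # Low self-confidence: screen neighbors by trust and drop data
            # from those that appear compromised.
            continue
        w = conf_j  # weight neighbor data by the sender's broadcast confidence
        total += w * (x_j - x_i)
        weight_sum += w
    if weight_sum == 0.0:
        return x_i  # no trustworthy data available: hold the current state
    return x_i + step * total / weight_sum

# Example: agent 0 with two neighbors, one of which broadcasts low confidence.
x0 = np.array([0.0, 0.0])
nbrs = [(np.array([1.0, 1.0]), 0.9), (np.array([10.0, -8.0]), 0.1)]
x0_next = resilient_update(x0, nbrs, own_conf=local_confidence(np.array([0.05])))
```

The point the sketch captures is that each agent weights incoming data by the sender's self-reported confidence, and only when its own confidence drops does it fall back on locally computed trust values to screen out neighbors that appear compromised.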


Related research:

08/31/2017 · Optimal Distributed Control of Multi-agent Systems in Contested Environments via Reinforcement Learning
This paper presents a model-free reinforcement learning (RL) based distr...

07/08/2018 · Resilient Output Synchronization of Heterogeneous Multi-agent Systems under Cyber-Physical Attacks
In this paper, we first describe, supported with analysis, the adverse e...

01/03/2018 · Attack Analysis and Resilient Control Design for Discrete-time Distributed Multi-agent Systems
This work presents a rigorous analysis of the adverse effects of cyber-p...

03/22/2023 · Resilient Output Containment Control of Heterogeneous Multiagent Systems Against Composite Attacks: A Digital Twin Approach
This paper studies the distributed resilient output containment control ...

12/11/2021 · A General Auxiliary Controller for Multi-agent Flocking
We aim to improve the performance of multi-agent flocking behavior by qu...

03/22/2023 · Data-Driven Leader-following Consensus for Nonlinear Multi-Agent Systems against Composite Attacks: A Twins Layer Approach
This paper studies the leader-following consensuses of uncertain and non...

09/28/2017 · Resilient Learning-Based Control for Synchronization of Passive Multi-Agent Systems under Attack
In this paper, we show synchronization for a group of output passive age...
