I Introduction
Consensus protocols [1, 2, 3, 4, 5, 6] have recently found applications in many disciplines, including the control [7] and signal processing [8, 9, 10, 11] literature. In consensus, a group of sensors/estimators/agents reaches an agreement on state values, where the state may represent different quantities and parameters of interest; for example, the state may be the velocity of Unmanned Aerial Vehicles (UAVs) moving as a flock [1], or it may represent the temperature/wind speed in a geographical field to be estimated [9]. In this scenario, a sensor network reaches consensus on measurements and/or state innovations to estimate the underlying system state. Also, in distributed detection, agents share information to reach consensus on a Log-Likelihood Ratio (LLR) [12] or a scalar-valued decision statistic [13]. One possible application is in distributed estimation [9, 10, 11, 14, 15], with further application in spacecraft attitude estimation [16, 17]. Further, in the control literature, consensus finds applications in distributed optimization [18], flight formation [19, 20, 21, 22], and multi-agent rendezvous [23, 24]. Of particular interest in aerospace system applications, along with rendezvous in 3D space, are distributed target tracking [25] and formation control via a group of UAVs. For example, in distance-based formation control the consensus protocol is used to stabilize the UAVs to form a specific geometric shape (see Section V-D for more information).
This paper proposes a single-bit consensus protocol with the ability to converge in finite time. The main feature of this consensus protocol is the single-bit information update. The protocol is based on the sign of the difference between state values. This implies that only a single bit of information (the sign) is required to update the state of the agents, which reduces the processing/computation load and/or network communication load at each agent. It should be emphasized that in applications with real-time data processing, where computation and communication are required on a faster time scale, reducing the computation and/or communication load is a significant merit. Since the protocol is nonlinear, a new Lyapunov function is proposed to prove consensus stability. Further, it is proved that for the proposed single-bit consensus protocol the Lyapunov function vanishes in finite time, implying the finite-time convergence of the consensus protocol. In [4, 5, 6] finite-time consensus protocols are proposed; however, these protocols impose a large computation load on agents and are computationally less efficient than the single-bit protocol proposed in this paper.
Quantized consensus [12, 26, 27] is a related concept, where the agents reach consensus on quantized information with finite quantization levels for possibly unbounded data. In this direction, [27] investigates a deterministic quantization based on the alternating direction method of multipliers (ADMM). In [12] the authors adopt a single-bit quantized consensus method for detection based on the Bayesian criterion and the Neyman-Pearson criterion. In [13] the authors propose two consensus+innovation type distributed detectors based on the Generalized Likelihood-Ratio Test (GLRT) for composite hypothesis testing via a group of sensors. Further, communication in multi-agent systems based on a single bit of information is also adopted in distributed detection [28, 29]. In these works, a group of sensors is spatially distributed over a surveillance field to locally detect the existence of an uncooperative target and then communicate their single-bit decisions to a fusion center. The single-bit decisions are based on either a hybrid combination of GLRT and Bayesian estimation [29] or the Generalized Rao Test [28]. The fusion center combines the received information based on the fusion rules and makes a global decision.
The main contributions of this paper are as follows: (i) the proposed protocol is based on a single bit of information, which makes it practical in real-time system applications; this is the most important feature of our proposed protocol. (ii) A new Lyapunov function is proposed to prove the stability and convergence of this nonlinear consensus protocol under a certain connectivity condition. This Lyapunov function is independent of the consensus protocol dynamics and, therefore, might be used for stability analysis of other nonlinear consensus protocols in the literature [2, 4]. (iii) The protocol converges in finite time while reducing the computation load on agents, in contrast to most asymptotic consensus protocols in the literature [1, 2]. It should be noted that, although finite-time consensus protocols already exist in the literature, to name a few [4, 5, 6], their main drawback is their computational complexity as compared to the protocol proposed in this paper.
The rest of the paper is organized as follows: Section II formulates the new consensus protocol. Section III provides the proof of consensus and convergence based on Lyapunov stability. Section IV provides the convergence condition in the case of time-variant switching network topologies. Section V provides some applications of the proposed protocol. Section VI presents simulations to verify the results, and finally Section VII concludes the paper.
II New Consensus Protocol
Assume a network of agents with the ability to process information and communicate with neighboring agents to share information. The communication network of agents is represented by the graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = \{1, \dots, n\}$ represents the set of graph nodes (agents) and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ represents the set of edges (communication links). Note that $w_{ij} > 0$ is the weight assigned to the communication link from node $j$ to node $i$. Further, the neighborhood of agent $i$ is defined as $\mathcal{N}_i = \{j \in \mathcal{V} : (j,i) \in \mathcal{E}\}$.
The state of each agent $i$ is represented by $x_i \in \mathbb{R}$, and $\mathbf{x} = [x_1, \dots, x_n]^\top$ represents the state of all agents. The following consensus protocol is proposed to update the state of agent $i$:
$\dot{x}_i = \sum_{j \in \mathcal{N}_i} w_{ij}\, \text{sgn}(x_j - x_i) \qquad (1)$
where $\text{sgn}(\cdot)$ is the sign function defined as:
$\text{sgn}(u) = \begin{cases} \dfrac{u}{|u|}, & u \neq 0 \\ 0, & u = 0 \end{cases} \qquad (2)$
where $|u|$ represents the absolute value of $u$. Notice that consensus protocol (1) only requires the sign of $x_j - x_i$, which can be encoded by a single bit of information.
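As a quick numerical illustration of protocol (1), the following sketch simulates the single-bit update with a forward-Euler discretization; the path graph, unit weights, step size, and initial states are illustrative choices, not values from the paper.

```python
import numpy as np

def single_bit_consensus_step(x, W, dt=0.01):
    """One Euler step of protocol (1): agent i moves by the weighted
    sum of sign(x_j - x_i) over its neighbors (W[i, j] > 0 iff j -> i)."""
    sgn = np.sign(x[None, :] - x[:, None])   # sgn[i, j] = sign(x_j - x_i)
    return x + dt * np.sum(W * sgn, axis=1)

# Hypothetical example: 4 agents on an undirected path graph.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([0.0, 3.0, 5.0, 10.0])
for _ in range(2000):
    x = single_bit_consensus_step(x, W)
spread = x.max() - x.min()   # shrinks to a small chattering band
```

Because the weights here are symmetric, the state average is preserved and the agents meet near the average of the initial states; with an asymmetric directed graph the consensus value would generally differ.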
Extending the scalar-state protocol (1) to the vector state $\mathbf{x}_i \in \mathbb{R}^m$, the updating law is as follows:

$\dot{\mathbf{x}}_i = \sum_{j \in \mathcal{N}_i} w_{ij}\, \dfrac{\mathbf{x}_j - \mathbf{x}_i}{\|\mathbf{x}_j - \mathbf{x}_i\|} \qquad (3)$
where $\|\cdot\|$ represents the Euclidean norm of a vector. In this case, each agent uses the weighted summation of the unit vectors of its state relative to its neighbors' states for the control update.
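A minimal sketch of the vector-state protocol (3) under the same Euler discretization; the complete graph, unit weights, and the small-norm guard (standing in for the sgn(0) = 0 convention when two agents coincide) are illustrative assumptions.

```python
import numpy as np

def unit_vector_consensus_step(X, W, dt=0.01, eps=1e-9):
    """One Euler step of protocol (3): agent i moves along the weighted
    sum of unit vectors pointing toward its neighbors."""
    dX = np.zeros_like(X)
    n = X.shape[0]
    for i in range(n):
        for j in range(n):
            if W[i, j] > 0:
                r = X[j] - X[i]
                nrm = np.linalg.norm(r)
                if nrm > eps:          # coincident agents contribute nothing
                    dX[i] += W[i, j] * r / nrm
    return X + dt * dX

# Hypothetical example: 3 agents in 2D on a complete graph.
W = np.ones((3, 3)) - np.eye(3)
X = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
for _ in range(3000):
    X = unit_vector_consensus_step(X, W)
gap = np.max(np.linalg.norm(X[:, None] - X[None, :], axis=-1))
```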
In both the scalar-valued consensus (1) and the vector-state consensus (3), the amount of information exchange and/or computation at the agents is less than for common consensus protocols in the literature [1, 2, 4, 5, 6]. In protocol (1) only the sign of the relative states needs to be exchanged among agents and computed for the state update. Similarly, for protocol (3) only the unit vector in the direction of the relative state vector is needed for computation and communication. This is the key feature reducing the amount of information exchange and/or computational load at each agent and improving the real-time feasibility of the protocol.
Now the question is how the agents exchange information on the sign function or the unit vector. This depends on the nature of the agents' states. Assume the state represents the position or velocity of the agent. For consensus on scalar-valued velocities, for example in flocks or vehicle platooning [30], following protocol (1) each agent only needs to know whether the other agent moves faster or slower, without needing to know the exact velocity of the neighboring agent by communication or sensing. For consensus on position vectors as in (3), each agent uses the unit vector in the direction of the relative positions of neighboring agents and, in contrast to protocols in the literature, there is no need to communicate exact positions. This can be done, for example, by omnidirectional cameras on agents without the need to communicate exact locations (see more explanation in Section V-A). Then, the agents update their state (position) based on the weighted summation of these unit vectors. In the scalar-state case, the state of agent $i$ is updated based on the weighted summation of $+1$s and $-1$s, where $+1$ and $-1$ are respectively assigned to the cases $x_j > x_i$ and $x_j < x_i$. The agents' states evolve in time until all agents have the same state and reach consensus. It should be mentioned that this protocol does not fail for static states. In other words, when the weighted sum of the $+1$s and $-1$s (or of the unit vectors) is zero, the state of agent $i$ does not change. The state remains unchanged until the summation changes due to a change in the state of neighboring agents, or the system reaches consensus and the states of all agents remain unchanged and equal.
One drawback of the given protocol (1), and in general any non-Lipschitz protocol, is sensitivity to time-delay. In case there is time-delay in the information exchange among agents, undesirable oscillations in the agents' states may occur, known as the chattering phenomenon. This is a side-effect of using a non-Lipschitz function and is prevalent in finite-time convergent consensus protocols as in [4, 5, 6] and also in Sliding Mode Control (SMC) [31]. One solution to avoid this phenomenon is to use a smooth Lipschitz function around the equilibrium, for example the saturation function,
$\text{sat}\!\left(\dfrac{u}{\epsilon}\right) = \begin{cases} \text{sgn}(u), & |u| \geq \epsilon \\ \dfrac{u}{\epsilon}, & |u| < \epsilon \end{cases} \qquad (4)$
This is proposed in SMC as described in [31]. In this case the agents' states reach a convergence ball (of radius $\epsilon$) around the equilibrium in finite time; however, the convergence inside this ball is asymptotic. In terms of information exchange, the agents share a single bit of information outside this convergence ball, while inside the ball they need to share full-state information. For example, when the state represents location, as agents' states get closer to each other the agents are able to share more information, while for distant states only a single bit of information is exchanged. It should be mentioned that replacing the non-Lipschitz function with a Lipschitz equivalent only alleviates the effect of time-delay and does not completely remove the chattering phenomenon.
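The boundary-layer idea in (4) can be sketched in one line: replace sgn(u) by a saturation with an illustrative layer width ε, linear inside |u| < ε and ±1 outside.

```python
import numpy as np

def sat(u, eps=0.1):
    """Lipschitz replacement for sgn(u) as in (4): u/eps inside the
    boundary layer |u| < eps, clipped to +/-1 outside it."""
    return np.clip(u / eps, -1.0, 1.0)
```

Substituting this for the sign function in protocol (1) damps chattering, at the price of only asymptotic convergence inside the ε-ball.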
III Proof of Finite-Time Convergence
Here, we answer the following question: under what connectivity condition on the network do the states of all agents reach the same value, i.e., under what condition is $\mathbf{x}^* = \bar{x}\mathbf{1}$ (for some $\bar{x} \in \mathbb{R}$) the stable equilibrium point of protocol (1)? To answer this, we introduce the concept of a spanning tree in directed graphs. Define a directed tree as a directed graph where every node (except the root node) has exactly one incoming edge. The root node (also referred to as the leader node) has no incoming edge. A graph has a spanning tree if it contains a directed tree as a subgraph that spans all nodes.
Theorem 1.
Protocol (1) reaches consensus if and only if the communication network $\mathcal{G}$ has a spanning tree.
Proof.
Contradiction is used for the proof. Sufficiency: if the graph has a spanning tree, we prove that the equilibrium point of (1) is of the form $\mathbf{x}^* = \bar{x}\mathbf{1}$. For contradiction, assume an equilibrium with unequal states and consider the agent $i$ with the maximum (or minimum) state. Since the network has a spanning tree, there is at least one agent $j$ with $x_j \neq x_i$ in the neighborhood of $i$ (or agent $i$ is in the neighborhood of agent $j$) [32]. Therefore, $\dot{x}_i \neq 0$ or $\dot{x}_j \neq 0$, and both cases contradict the definition of an equilibrium point. Necessity: if no spanning tree is contained in the communication graph $\mathcal{G}$, there is no information flow (directed path) between at least two agents. In graph theory, this implies that either the graph has at least two root nodes or the graph contains at least two unconnected components [32]. In the first case, note that a root node receives no incoming information ($\mathcal{N}_{\text{root}} = \emptyset$ [32]). Therefore, the states of the two root agents keep their initial values without updating, and these two agents never reach consensus. In the second case, since there is no information flow (directed path) between the two components, each component reaches a consensus value which in general differs from the consensus value of the other component. Therefore, in both cases consensus may not be reached. ∎
Theorem 2.
In protocol (1), if $\mathcal{G}$ has a spanning tree, the states of all agents converge to the stable consensus equilibrium point $\bar{x}\mathbf{1}$ in finite time.
Proof.
We prove the theorem using the Lyapunov stability theorem. Define the following Lyapunov function:
$V(\mathbf{x}) = x_{\max} - x_{\min} \qquad (5)$
where $x_{\max}$ and $x_{\min}$ are respectively the maximum and the minimum state value over the agents, i.e., $x_{\max} = \max_i x_i$ and $x_{\min} = \min_i x_i$. In fact, $x_{\max}$ and $x_{\min}$ are time-dependent, i.e., the agent possessing the max/min value may differ at every time instant. Notice that $V = 0$ implies that the max value and min value over all agents are equal and therefore the consensus equilibrium point is reached. Note that the Lyapunov function is continuous, regular, and Lipschitz. Also, $V$ is globally positive definite with respect to the consensus set, i.e., $V > 0$ for $\mathbf{x} \neq \bar{x}\mathbf{1}$ and $V = 0$ for $\mathbf{x} = \bar{x}\mathbf{1}$. Further, the Lyapunov function is radially unbounded in the disagreement, i.e., $V \to \infty$ as $\|\mathbf{x} - \bar{x}\mathbf{1}\| \to \infty$. For convergence and stability we prove that $\dot{V}$ is negative definite.
$\dot{V} = \dot{x}_{\max} - \dot{x}_{\min} \qquad (6)$

$\dot{x}_{\max} = \sum_{j \in \mathcal{N}_{\max}} w_{\max,j}\, \text{sgn}(x_j - x_{\max}) \leq 0 \qquad (7)$

$\dot{x}_{\min} = \sum_{j \in \mathcal{N}_{\min}} w_{\min,j}\, \text{sgn}(x_j - x_{\min}) \geq 0 \qquad (8)$
Define $\alpha$ as the minimum positive consensus weight of the agents in the weight matrix $W = [w_{ij}]$. Since the weight matrix might be time-variant, the minimum weight $\alpha$ might be attained by different agents over time. We have,
$\dot{x}_{\max} \leq -\alpha \qquad (9)$

$\dot{x}_{\min} \geq \alpha \qquad (10)$
Therefore, $\dot{V} \leq -2\alpha < 0$. This implies that $\dot{V}$ is negative definite, i.e., $\dot{V} < 0$ for $\mathbf{x} \neq \bar{x}\mathbf{1}$ and $\dot{V} = 0$ for $\mathbf{x} = \bar{x}\mathbf{1}$. Therefore, based on the Lyapunov stability theorem [31], the consensus point $\bar{x}\mathbf{1}$ is a globally stable equilibrium of protocol (1).
Further, let $T$ be the convergence time of the consensus protocol (1). Integrating $\dot{V} \leq -2\alpha$ from $0$ to $T$ with $V(T) = 0$ gives
$V(T) = V(0) + \int_0^T \dot{V}\, dt \qquad (11)$

$0 \leq V(0) - 2\alpha T \qquad (12)$

$T \leq \dfrac{V(0)}{2\alpha} = \dfrac{x_{\max}(0) - x_{\min}(0)}{2\alpha} \qquad (13)$
representing a finite-time upper bound on convergence. ∎
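The finite-time behavior can be sanity-checked numerically. The sketch below integrates protocol (1) on a hypothetical 3-agent path graph and compares the observed convergence time with the bound V(0)/(2α) suggested by the proof; note that discretization and sliding (chattering) effects make this comparison only approximate.

```python
import numpy as np

# Hypothetical 3-agent path graph with all weights 0.5, so alpha = 0.5.
W = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
alpha = 0.5
x = np.array([-2.0, 0.0, 4.0])
dt = 0.001
V = x.max() - x.min()               # V(0) = 6
T_bound = V / (2 * alpha)           # worst-case time suggested by (13)
t = 0.0
while V > 0.05 and t < 4 * T_bound: # generous cap for discretization effects
    x = x + dt * np.sum(W * np.sign(x[None, :] - x[:, None]), axis=1)
    V = x.max() - x.min()
    t += dt
```

In this run V shrinks to (nearly) zero in a time of the same order as T_bound; the 0.05 tolerance absorbs the chattering band of the discretized sign function.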
One point to be noted in the proof of Theorem 2 concerns the notation $x_{\max}$, $x_{\min}$, $\mathcal{N}_{\max}$, $\mathcal{N}_{\min}$, and $\alpha$. For these terms, the max/min values do not necessarily concern a single agent over time, but rather all agents. In other words, the agent possessing the min/max value, its neighbors, and the associated weights change over time, and therefore the time-evolution of the Lyapunov function (5) is not necessarily smooth.
It should be noted that the proof of stability and convergence for the vector-state protocol (3) follows a similar Lyapunov analysis. In the vector-state problem, the Lyapunov function can be taken as the perimeter of the convex hull containing the agents' state vectors, or the circumference of the smallest ball/circle enclosing the state vectors. Following a similar analysis as above, it can be proved that this Lyapunov function is always decreasing under protocol (3).
IV Time-Variant Network Topologies
Note that the consensus network of agents may change in time due to failure or addition of new links among agents. This may particularly happen in networks of mobile agents where the communication range of agents is limited, or in real-world applications due to obstacles. The objective of this section is to determine the conditions on the changing network topology under which consensus can be reached. The main point of this section is that our proposed Lyapunov function does not depend on the graph topology $\mathcal{G}$.
Theorem 3.
Consider the network topology of the agents to be selected from the finite set of graphs $\{\mathcal{G}_1, \mathcal{G}_2, \dots, \mathcal{G}_N\}$. Agents reach consensus under protocol (1) if, for a sufficient sequence of bounded non-overlapping time-intervals $[t_k, t_{k+1})$, the combination of the network topologies across each time-interval contains a spanning tree.
Proof.
Again consider the proposed positive definite Lyapunov function (5), which is independent of the network topology. The proof is similar to the proof of Theorem 2. Note that in every time-interval $[t_k, t_{k+1})$ the combination of graph topologies contains a spanning tree. Therefore, the agent with $x_{\max}$ (or $x_{\min}$) has at least one neighbor, or is a neighbor of another agent, in a subdomain of the interval (not necessarily the entire time-interval). This implies that over this time domain $\dot{x}_{\max} \leq -\alpha$ (or $\dot{x}_{\min} \geq \alpha$). Consequently, following the proof of Theorem 2, over this time domain $\dot{V}$ is negative definite. This implies that after a sufficient (finite) number of time-intervals the Lyapunov function reaches $V = 0$ and consensus is achieved. ∎
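Theorem 3 can be illustrated with two hypothetical switching graphs, neither of which has a spanning tree on its own (each activates a single edge), while their union over one switching cycle is a path spanning all three agents; the weights and switching period below are illustrative.

```python
import numpy as np

# Two switching topologies over 3 agents: G_a activates edge {0,1},
# G_b activates edge {1,2}. Each alone is disconnected; their union
# over one cycle contains a spanning tree.
Wa = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
Wb = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=float)
x = np.array([0.0, 5.0, 10.0])
dt, steps_per_interval = 0.01, 50
for cycle in range(60):
    for W in (Wa, Wb):              # alternate topologies each interval
        for _ in range(steps_per_interval):
            x = x + dt * np.sum(W * np.sign(x[None, :] - x[:, None]), axis=1)
spread = x.max() - x.min()
```

Since the edges are undirected, the state average stays fixed and the agents meet near the average of the initial states.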
V Applications
V-A Rendezvous in 2D/3D space
In the rendezvous problem [23, 24], the goal is to devise control strategies for a group of mobile agents to eventually move them to a single location. The state of each agent is its position in 2D/3D space, and the aim is to reach consensus on the position. In other words, each agent applies the weighted summation of the unit vectors relative to its neighbors' positions to update its own location. In fact, using protocol (3), every agent only needs to be informed of the direction of the neighboring agent's relative position vector, but not its magnitude. This is significant since, by using for example omnidirectional cameras [33], each agent finds the relative direction toward its neighbor's position, and there is no need to communicate the exact location of the agents. This approach can be implemented, for example, to improve the experimental results in [34] in terms of real-time communication and computation; each robot only needs to find the direction in which the neighboring robots are located using an omnidirectional camera, and there is no need to communicate its position to the neighboring robots. This, further, can be extended to the 3D case to implement the rendezvous task over a network of UAVs.
V-B Distributed estimation
In single time-scale distributed estimation [9, 10, 11, 14], the idea is to track the state of a dynamical system via a network of agents. Consider a noisy system monitored by noise-corrupted measurements,
$\mathbf{x}(k+1) = A\mathbf{x}(k) + \boldsymbol{\nu}(k) \qquad (14)$

$y_i(k) = H_i\mathbf{x}(k) + \eta_i(k) \qquad (15)$
In the above formulation, $A$ is the dynamical system matrix, $k$ is the time-step, $y_i(k)$ is the measurement of agent $i$ at time-step $k$, and $\boldsymbol{\nu}(k)$ and $\eta_i(k)$ are the noise terms. Following the distributed estimation protocol in [10, 35, 36, 37, 38], the following protocol based on the consensus-update law (3) is considered:
(16) 
where $\mathcal{N}_\alpha$ and $\mathcal{N}_\beta$ represent some specific neighborhoods of agent $i$, and $K_i$ is the estimation gain at agent $i$ (see the previous works by the authors [10, 35, 36, 37, 38] for more information). One interesting extension of the distributed estimation protocol (16) is the case of sensor failure and countermeasures for recovery (see [37] for more information).
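Since the exact form of (16) relies on gain designs from [10, 35, 36, 37, 38], the following is only a schematic "consensus + innovations" estimator combining a local innovation term with a single-bit consensus term; the path graph, the gains `k_meas` and `k_cons`, and the noise-free measurements are assumptions made here for reproducibility, not values from the paper.

```python
import numpy as np

# Schematic consensus + innovations estimator for a static scalar state:
# each agent fuses its own measurement (innovation) with single-bit
# consensus on neighboring estimates.
x_true = 2.0
W = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph
xh = np.array([0.0, 1.0, 5.0])        # initial local estimates
k_meas, k_cons = 0.2, 0.01            # illustrative gains
for _ in range(300):
    y = np.full(3, x_true)            # each agent measures the static state
    cons = np.sum(W * np.sign(xh[None, :] - xh[:, None]), axis=1)
    xh = xh + k_meas * (y - xh) + k_cons * cons
err = np.max(np.abs(xh - x_true))
```

The innovation term drives each estimate toward the measurement while the sign-based consensus term keeps neighboring estimates close, at single-bit communication cost.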
One application of the above distributed estimation scenario is target tracking based on time-difference-of-arrival (TDOA) via a group of UAVs [25]. In this framework a group of UAVs estimates the location of a mobile target based on the time-difference-of-arrival of a beacon signal received by the UAVs. Each UAV shares the TDOA-based information with its neighboring UAVs, and also shares the estimated position of the target. Then, by consensus averaging of the position estimates and information fusion on the state predictions, each UAV in the network can localize the source, and the group tracks the location of the mobile target.
V-C Distributed optimization
In the distributed optimization problem [18], the objective is to solve the following optimization problem distributively via a multi-agent network,
$\min_{x} \; F(x) = \sum_{i=1}^n f_i(x) \qquad (17)$
where $F$ is the continuous convex objective function and $f_i$ is the local objective function known only by agent $i$. The distributed gradient-descent-based solution for this problem, applying the consensus protocol (1), is as follows,
$x_i(k+1) = x_i(k) + \sum_{j \in \mathcal{N}_i} W_{ij}\, \text{sgn}\big(x_j(k) - x_i(k)\big) - \alpha_k \nabla f_i\big(x_i(k)\big) \qquad (18)$
where $W_{ij}$ is a scalar weight and $\alpha_k$ is the optimization step-size at time $k$.
Protocol (18) reduces the amount of information processing and computation load on agents as compared to [18], particularly in large-scale applications. Note that each agent only relies on the sign of the relative state of neighboring agents. In the same line of research, [26] considers the case where the communicated decision information among agents is quantized in order to alleviate the communication bottleneck in distributed optimization. The authors propose a Quantized Decentralized Gradient Descent (QDGD) and prove the convergence of their protocol for strongly convex and smooth local cost functions.
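A toy instance of sign-based distributed gradient descent in the spirit of (18), on quadratic local costs f_i(x) = ½(x − c_i)² whose global minimizer is the mean of the c_i; the complete graph, consensus weights, and diminishing step size are illustrative choices, not values from the paper.

```python
import numpy as np

# Toy distributed optimization: f_i(x) = 0.5*(x - c_i)^2, so the global
# minimizer of sum_i f_i is mean(c). Agents mix a sign-based consensus
# nudge with a diminishing local gradient step.
c = np.array([0.0, 3.0, 6.0])              # local cost minimizers
W = 0.05 * (np.ones((3, 3)) - np.eye(3))   # complete graph, small weights
x = c.copy()                               # each agent starts at its own minimizer
for k in range(5000):
    cons = np.sum(W * np.sign(x[None, :] - x[:, None]), axis=1)
    grad = x - c                           # gradient of each local cost
    x = x + cons - (0.5 / (k + 1)) * grad
```

Because the consensus weights are symmetric, the network average is steered only by the average gradient, so the agents cluster around the global minimizer (here, 3) up to a small chattering band.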
V-D Formation control of UAVs
One application within aerospace systems is cooperative control of UAV formations [19, 20, 21, 22]. A group of autonomous UAVs forms a pre-specified formation setup (e.g., a star-shaped formation) in which two neighboring UAVs $i$ and $j$ keep a distance $d_{ij}$ based on the formation shape. The control inputs to the group of UAVs are designed such that they can move along in this formation without collision in order to, for example, track a mobile target or avoid an obstacle. One example of such a formation control scenario is given as follows. Define $\mathbf{x}_i \in \mathbb{R}^3$ as the position of UAV (or agent) $i$ in 3D space, evolving in time as:

$\dot{\mathbf{x}}_i = \sum_{j \in \mathcal{N}_i} w_{ij}(t)\, \dfrac{\mathbf{x}_j - \mathbf{x}_i}{\|\mathbf{x}_j - \mathbf{x}_i\|} \qquad (19)$

where the time-varying weights are defined by the distance mismatch,

$w_{ij}(t) = \|\mathbf{x}_j(t) - \mathbf{x}_i(t)\| - d_{ij} \qquad (20)$

with $d_{ij} = d_{ji}$ and $d_{ij} > 0$. Note that the above formation protocol is a distance-based approach, where the final formation of the agents depends on the rigidity of the neighboring graph, and is also based on the distance $d_{ij}$ of every two agents $i$ and $j$ in forming the geometric shape.
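A minimal sketch of a distance-based formation law in the spirit of Section V-D: each agent moves along unit vectors toward its neighbors, scaled by the distance error. The 2D setting, complete graph, common desired distance, and initial positions are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Distance-based formation sketch: 3 agents converging to an
# equilateral triangle with side length d.
d = 1.0                                 # desired pairwise distance
X = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
dt = 0.02
for _ in range(3000):
    dX = np.zeros_like(X)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = X[j] - X[i]
                nrm = np.linalg.norm(r)
                if nrm > 1e-9:
                    dX[i] += (nrm - d) * r / nrm   # attract/repel toward distance d
    X = X + dt * dX
dists = [np.linalg.norm(X[i] - X[j]) for i in range(3) for j in range(i + 1, 3)]
```

This update is a gradient flow on the sum of squared distance errors, so from this generic initial configuration the three agents settle into an equilateral triangle of side d.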
VI Simulation
For the simulation we consider a network of agents with random initial states. Assume the network is time-variant and switches periodically between the graph topologies shown in Fig. 1.
As can be seen from the figure, graph $\mathcal{G}_1$ is connected, undirected, and contains a spanning tree. Graph $\mathcal{G}_2$ is connected with no spanning tree. $\mathcal{G}_3$ represents a strongly connected graph having a spanning tree. Finally, $\mathcal{G}_4$ contains no spanning tree as a subgraph. To check the conditions of Theorem 3 for consensus convergence, note that the combination of the network topologies over each switching cycle contains a spanning tree. This implies that consensus can be reached, as shown in Fig. 2. This figure shows that the difference of state values ($x_{\max} - x_{\min}$) decreases over time and all agents reach a consensus value.
To compare our results, the time-evolution of the Lyapunov function (5) for our proposed consensus protocol, along with six other consensus protocols from [1, 2, 4, 5, 6], is shown in Fig. 3. In this simulation the tuning parameters of the protocols in [4], [5], [6], and [2] are fixed across all runs. All protocols are evaluated over the switching network topologies (Fig. 1) with the same initial state values of the agents. As can be seen, the linear average consensus [1], the geometric consensus [2], and the harmonic consensus [2] all reach asymptotic stability, while the convergence of the other four protocols is in finite time. The Lyapunov function for our proposed protocol (and the six other protocols) is decreasing over time ($\dot{V} < 0$), which implies Lyapunov stability.
VII Conclusions
Note that the protocol is based on computation and communication of a single bit of information (or a unit vector) for the state update. This makes the protocol more feasible for real-time applications, since it requires less computational load and information exchange among agents. The known protocols in the literature require the calculation of the exact relative state $x_j - x_i$ or a function of it; however, this protocol only requires single-bit information on the sign of $x_j - x_i$. In other words, agent $i$ only needs to know, for a neighboring agent $j$, whether $x_j > x_i$ or $x_j < x_i$. Similarly, for vector-state consensus the agent only needs to know the unit vector in the direction of the relative state vector, e.g., by use of omnidirectional cameras in the rendezvous problem as discussed in Section V-A. This is more feasible for real-time operation in terms of computational complexity and communication load on the agents.
The consensus protocol in this paper may be used to decentralize the information fusion in [28, 29]. In this case the information on detecting the existence or absence of the target may be shared by agents in their neighborhood via an undirected communication network, and the agents eventually average the received information via the proposed protocol and reach a consensus on detecting the target. However, this approach may result in performance degradation as compared to advanced GLRT or Generalized Rao fusion methods.
References
 [1] R. Olfati-Saber, J. A. Fax, and R. M. Murray, “Consensus and cooperation in networked multi-agent systems,” Proceedings of the IEEE, vol. 95, no. 1, pp. 215–233, Jan. 2007.
 [2] D. Bauso, L. Giarré, and R. Pesenti, “Nonlinear protocols for optimal distributed consensus in networks of dynamic agents,” Systems & Control Letters, vol. 55, no. 11, pp. 918–928, 2006.
 [3] H. Sayyaadi and M. Doostmohammadian, “Finite-time consensus in directed switching network topologies and time-delayed communications,” Scientia Iranica, vol. 18, no. 1, pp. 75–85, 2011.
 [4] L. Wang and F. Xiao, “Finite-time consensus problems for networks of dynamic agents,” IEEE Transactions on Automatic Control, vol. 55, no. 4, pp. 950–955, 2010.

 [5] X. Liu, J. Lam, W. Yu, and G. Chen, “Finite-time consensus of multi-agent systems with a switching protocol,” IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 4, pp. 853–862, 2015.
 [6] Z. Zuo and L. Tie, “A new class of finite-time nonlinear consensus protocols for multi-agent systems,” International Journal of Control, vol. 87, no. 2, pp. 363–370, 2014.
 [7] S. Park and N. Martins, “Necessary and sufficient conditions for the stabilizability of a class of LTI distributed observers,” in 51st IEEE Conference on Decision and Control, 2012, pp. 7431–7436.
 [8] M. O. Sayin and S. S. Kozat, “Single bit and reduced dimension diffusion strategies over distributed networks,” IEEE Signal Processing Letters, vol. 20, no. 10, pp. 976–979, 2013.

 [9] S. Das and J. M. F. Moura, “Consensus+innovations distributed Kalman filter with optimized gains,” IEEE Transactions on Signal Processing, vol. 65, no. 2, pp. 467–481, 2017.
 [10] M. Doostmohammadian and U. Khan, “On the genericity properties in distributed estimation: Topology design and sensor placement,” IEEE Journal of Selected Topics in Signal Processing, vol. 7, no. 2, pp. 195–204, 2013.
 [11] M. Doostmohammadian and U. A. Khan, “On the distributed estimation of rank-deficient dynamical systems: A generic approach,” in 38th International Conference on Acoustics, Speech, and Signal Processing, Vancouver, CA, May 2013, pp. 4618–4622.
 [12] S. Zhu and B. Chen, “Distributed detection in ad-hoc networks through quantized consensus,” IEEE Transactions on Information Theory, vol. 64, no. 11, pp. 7017–7030, 2018.
 [13] A. K. Sahu and S. Kar, “Recursive distributed detection for composite hypothesis testing: Nonlinear observation models in additive Gaussian noise,” IEEE Transactions on Information Theory, vol. 63, no. 8, pp. 4797–4828, 2017.
 [14] M. Doostmohammadian, H. R. Rabiee, and U. A. Khan, “Cybersocial systems: modeling, inference, and optimal design,” IEEE Systems Journal, 2019.
 [15] M. Doostmohammadian and U. A. Khan, “On the characterization of distributed observability from first principles,” in 2nd IEEE Global Conference on Signal and Information Processing, 2014, pp. 914–917.
 [16] J. L. Crassidis and F. L. Markley, “Unscented filtering for spacecraft attitude estimation,” Journal of Guidance, Control, and Dynamics, vol. 26, no. 4, pp. 536–542, 2003.
 [17] S. Roy and R. A. Iltis, “Decentralized linear estimation in correlated measurement noise,” IEEE Transactions on Aerospace and Electronic Systems, vol. 27, no. 6, pp. 939–941, 1991.
 [18] A. Nedić and A. Olshevsky, “Distributed optimization over time-varying directed graphs,” IEEE Transactions on Automatic Control, vol. 60, no. 3, pp. 601–615, 2014.
 [19] S. S. Kia, B. Van Scoy, J. Cortes, R. A. Freeman, K. M. Lynch, and S. Martinez, “Tutorial on dynamic average consensus: The problem, its applications, and the algorithms,” IEEE Control Systems Magazine, vol. 39, no. 3, pp. 40–72, 2019.
 [20] R. Olfati-Saber and R. M. Murray, “Distributed cooperative control of multiple vehicle formations using structural potential functions,” IFAC Proceedings Volumes, vol. 35, no. 1, pp. 495–500, 2002.
 [21] R. Padhi, P. R. Rakesh, and R. Venkataraman, “Formation flying with nonlinear partial integrated guidance and control,” IEEE Transactions on Aerospace and Electronic Systems, vol. 50, no. 4, pp. 2847–2859, 2014.
 [22] A. Zou and K. Kumar, “Distributed attitude coordination control for spacecraft formation flying,” IEEE Transactions on Aerospace and Electronic Systems, vol. 48, no. 2, pp. 1329–1346, 2012.
 [23] J. Cortés, S. Martínez, and F. Bullo, “Robust rendezvous for mobile autonomous agents via proximity graphs in arbitrary dimensions,” IEEE Transactions on Automatic Control, vol. 51, no. 8, pp. 1289–1298, 2006.
 [24] W. Ren, R. W. Beard, and E. M. Atkins, “Information consensus in multi-vehicle cooperative control,” IEEE Control Systems Magazine, vol. 27, no. 2, pp. 71–82, 2007.
 [25] O. Ennasr, G. Xing, and X. Tan, “Distributed time-difference-of-arrival (TDOA)-based localization of a moving target,” in IEEE 55th Conference on Decision and Control. IEEE, 2016, pp. 2652–2658.
 [26] A. Reisizadeh, A. Mokhtari, H. Hassani, and R. Pedarsani, “Quantized decentralized consensus optimization,” in 2018 IEEE Conference on Decision and Control (CDC). IEEE, 2018, pp. 5838–5843.
 [27] S. Zhu and B. Chen, “Quantized consensus by the ADMM: Probabilistic versus deterministic quantizers,” IEEE Transactions on Signal Processing, vol. 64, no. 7, pp. 1700–1713, 2015.
 [28] D. Ciuonzo, P. S. Rossi, and P. Willett, “Generalized Rao test for decentralized detection of an uncooperative target,” IEEE Signal Processing Letters, vol. 24, no. 5, pp. 678–682, 2017.
 [29] D. Ciuonzo and P. S. Rossi, “Distributed detection of a non-cooperative target via generalized locally-optimum approaches,” Information Fusion, vol. 36, pp. 261–274, 2017.
 [30] M. Pirani, E. Hashemi, A. Khajepour, B. Fidan, B. Litkouhi, S. Chen, and S. Sundaram, “Cooperative vehicle speed fault diagnosis and correction,” IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 2, pp. 783–789, 2018.
 [31] J.-J. Slotine and W. Li, Applied Nonlinear Control, Prentice-Hall, 1991.
 [32] R. Diestel, Graph theory, Springer Publishing Company, Incorporated, 2017.
 [33] T. Kato, M. Nagata, H. Nakashima, and K. Matsuo, “Localization of mobile robots with omnidirectional camera,” World Academy of Science, Engineering and Technology International Journal of Computer, Control, Quantum and Information Engineering, vol. 8, no. 7, 2014.
 [34] W. Ren, H. Chao, W. Bourgeous, N. Sorensen, and Y. Chen, “Experimental validation of consensus algorithms for multi-vehicle cooperative control,” IEEE Transactions on Control Systems Technology, vol. 16, no. 4, pp. 745–752, 2008.
 [35] M. Doostmohammadian and U. Khan, “Graph-theoretic distributed inference in social networks,” IEEE Journal of Selected Topics in Signal Processing, vol. 8, no. 4, pp. 613–623, Aug. 2014.
 [36] M. Doostmohammadian and U. A. Khan, “Topology design in network estimation: a generic approach,” in American Control Conference, Washington, DC, Jun. 2013, pp. 4140–4145.
 [37] M. Doostmohammadian, H. R. Rabiee, H. Zarrabi, and U. A. Khan, “Distributed estimation recovery under sensor failure,” IEEE Signal Processing Letters, vol. 24, no. 10, pp. 1532–1536, 2017.
 [38] M. Doostmohammadian and U. A. Khan, “Communication strategies to ensure generic networked observability in multiagent systems,” in 45th Annual Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, Nov. 2011, pp. 1865–1868.