Multi-agent systems involve interacting elements with computing capabilities, also called agents or nodes, which communicate with each other to achieve a collective control task that is more difficult, or sometimes even impossible, for an individual agent to perform. This makes multi-agent systems well suited to modeling and solving problems in many fields of application, including sensor networks [1, 2, 3], computer networks [4, 5] and social science [6, 7, 8]. A common problem studied in these applications is the consensus of multi-agent systems on aggregate functions such as MIN, MAX, SUM and AVERAGE. For instance, in a group of distributed sensors, one may need to compute the average temperature of a specific region, or to elect the sensor with the maximum power resource in order to preserve communication over a costly link or to reduce the energy consumption of a wireless sensor network, see, e.g., [9, 10, 11].
Most existing results in the literature rely on the assumption that the system composition is static, i.e., the set of agents present in the system does not change after the initial time, see, e.g., [12, 10, 13, 14] and the references therein. However, this requirement can be difficult to satisfy in implementation scenarios where new agents can join and/or existing agents can leave the network at any time instant. This phenomenon is known in the literature as “network churn” [15, 16], “dynamic networks” [17, 18, 19] or “open multi-agent systems” [20, 21, 22]. In this setting, the consensus problem on aggregate estimates becomes more challenging than in static networks. Consider, for instance, the paradigmatic problem of MAX-consensus with distributed communications, and suppose that the agent with the largest state value leaves the network after all agents have converged to that value. All the remaining nodes then hold outdated information. This scenario cannot occur in static networks, which highlights one of the inherent challenges of open multi-agent systems.
In this paper, we investigate the problem of MAX-consensus in open multi-agent systems with distributed communications. The agents are assumed to be anonymous, have no global identifiers, and all run the same algorithm. We further assume that interactions only occur via pairwise “gossip” exchanges, in the sense that, at any (discrete) time instant, only two randomly selected agents exchange their information, update their MAX estimates and possibly other variables. To cope with the dynamic nature of the network, two different solutions are proposed, depending on whether or not leaving agents can announce their departure.
In the case in which announcements are made, our algorithm relies on a variable that describes how “up-to-date” agents are with respect to recent departures, and priority is given to information coming from the most “up-to-date” agents. In the case where agents disappear without sending a last message, our algorithm maintains an estimate of the age of the information, and estimates corresponding to information deemed too old are discarded.
We will show that our two approaches ensure that outdated information can be forgotten, and that the consensus on the MAX value can be achieved (with high probability) if the system composition stops evolving.
The problem of MAX-consensus in multi-agent systems has been studied in, e.g., [23, 10, 24, 11]. Among existing techniques, the work of  considered MAX-consensus with random gossip interactions between agents on a static network. In contrast with existing works in the literature, our result addresses MAX-consensus in multi-agent systems when the network is open, a setting not considered in the works mentioned above. Our proposed approach recovers the pairwise gossip interaction on static networks of  as a particular case.
The remainder of the paper is organized as follows. Notation is given in Section II. The problem is formulated in Section III. In Section IV, we treat the case where leaving agents send a last message, and in Section V, the case where they do not. Numerical simulations are given in Section VI. Conclusions and discussions are provided in Section VII.
III Problem statement
Consider a connected time-varying graph , where and denote, respectively, the set of existing agents and the set of edges in the graph at time . The graph is dynamic in the sense that new agents can join and/or existing agents can leave at any time . Hence, the cardinality of , denoted by , is not necessarily constant over time. The agents communicate with each other in a pairwise randomized gossip fashion . In other words, at any time instant , there are three possibilities: (i) an agent joins the system and , (ii) an agent leaves the system and , or (iii) two randomly selected agents communicate with each other (these discrete time instants may be interpreted as the sampling of an asynchronous process at the times where an event occurs). Joining agents are assumed to know that they join the system. Leaving agents may or may not be able to send one last message (to one other agent) before leaving; these two cases of interest are discussed in Sections IV and V, respectively.
Every agent has two special states: is its intrinsic value, which is constant and determined arbitrarily when joining the system, and is its estimated answer at time for the MAX value. Our goal is to estimate the maximum intrinsic value of all the agents present in the system, so we would ideally want, when no more agents are joining or leaving the network after time , that there is a time such that , for all and for . Agents may then have other states that they use to reach this goal.
If the network were static, i.e., were time-invariant, the maximum could be computed in finite time by starting from for every agent and setting whenever agents and interact at time , see, e.g., . The main challenge in a dynamic or open network lies in the need for the algorithm to take new agents into account and to eventually discard information related to agents no longer present in the system, so that is eventually recovered once the system composition stops evolving. Classical algorithms such as that of  do not guarantee this: outdated values from agents no longer in the system may never be discarded.
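On a static network, this classical procedure can be sketched as follows (a minimal Python illustration; the function name and data structures are ours, not from the paper, and the uniform pairwise selection is an assumption matching the gossip model described above):

```python
import random

def static_max_gossip(values, rng=None):
    """Classical pairwise MAX gossip on a static complete graph.

    `values` maps agent id -> intrinsic value; each estimate starts at
    the agent's own value, and every interaction replaces both agents'
    estimates by their maximum.  Returns the final estimates and the
    number of interactions until every estimate equals max(values).
    """
    rng = rng or random.Random(0)
    est = dict(values)                      # initial estimate = own value
    target = max(values.values())
    ids = list(values)
    steps = 0
    while any(y != target for y in est.values()):
        i, j = rng.sample(ids, 2)              # two random distinct agents
        est[i] = est[j] = max(est[i], est[j])  # both adopt the larger estimate
        steps += 1
    return est, steps
```

As the paper notes, this sketch has no mechanism to discard an estimate once the agent that supplied it has left: if the maximal agent departed, every other agent would keep the outdated value forever.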
Note that an alternative and perhaps more natural goal would be to have the track sufficiently accurately. This more ambitious goal is left for future studies; see Section VII for further discussion of this issue.
Finally, we chose to make the following assumption for the sake of simplicity of exposition.
The graph is complete for all .
This means that every pair of distinct agents in the network can communicate directly with each other. The algorithms we develop would actually also work on general dynamic graphs under suitable connectivity assumptions, but the analysis would be more complex.
IV Departures are announced
IV-A Algorithm description
If leaving agents announce their departure (to one other agent), then we can benefit from this knowledge to correct outdated information. For that purpose, we introduce an auxiliary variable at each agent, meant to represent the “level of information” available to about the departures up to time . In general it will neither be equal to the actual number of departures nor converge to it. The algorithm is designed to ensure that the agents with the largest value have valid estimates, i.e., their correspond to the of agents present in the system. For this purpose, information coming from agents with higher is given priority over information coming from agents with lower values, and the algorithm ensures that agents with a lower value of never have influenced those with a higher value.
The algorithm is summarized as follows. Initially, every existing agent at sets and , as shown in Algorithm 1.
When a new agent joins the group at time , it initializes its counter and its estimate according to Algorithm 2.
If an agent leaves the system, it sends a last message containing its counter value to a randomly selected agent . The reaction of is governed by Algorithm 3, which can be interpreted as follows: if the counter of the leaving agent is smaller than , then agent ignores this departure, since the information of is deemed less up-to-date than its own and has not influenced it. If, on the other hand, , then may have influenced and possibly agents with values higher than , but no larger than . To ensure that none of the agents with the highest values hold the now outdated value , resets its to , which is by definition a valid value, and sets its to , a value above that of all agents who could have been influenced by .
The gossip communication between agents is performed via Algorithm 4 (values not explicitly updated remain constant between and ). When , agents and either have not been informed about any departure from the group, i.e., , or have the same level of information about past departures. In either case, they can exchange their information to update their MAX estimates. When , agent ’s information about past departures is deemed more up to date. Agent is then not allowed to transfer its estimate, to avoid infecting with possibly outdated information (unless its estimate is actually its own value, which is by definition valid). Agent therefore restarts its estimate to and increments its counter to , alerting future agents that have not yet been informed to restart as well. The case is completely symmetric.
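Our reading of the departure and gossip rules above can be sketched in Python as follows. This is an illustrative sketch only: the formal listings of Algorithms 1–4 are not reproduced here, so the names (`Agent`, `depart`, `gossip`), the newcomer's initialization, and the exact update order are our assumptions based on the textual description:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    x: int      # intrinsic value (constant)
    y: int      # current MAX estimate
    c: int = 0  # departure counter ("level of information")

def join(x):
    # Sketch of Algorithm 2: a newcomer starts from its own (valid) value
    # and a zero counter; it catches up through subsequent gossip.
    return Agent(x=x, y=x, c=0)

def depart(leaver, receiver):
    # Sketch of Algorithm 3: the leaver sends its counter in a last message.
    if leaver.c >= receiver.c:
        receiver.y = receiver.x      # reset to a value that is surely valid
        receiver.c = leaver.c + 1    # outrank everyone the leaver influenced

def gossip(a, b):
    # Sketch of Algorithm 4: priority to the agent with the larger counter.
    if a.c == b.c:
        a.y = b.y = max(a.y, b.y)    # equal information levels: plain gossip
    elif a.c > b.c:
        b.y, b.c = b.x, a.c          # b restarts its estimate and catches up
        a.y = max(a.y, b.x)          # b's own value is by definition valid
    else:
        a.y, a.c = a.x, b.c
        b.y = max(b.y, a.x)
```

For instance, after the agent holding the maximum departs and informs one neighbour, that neighbour's incremented counter propagates through gossip and forces the other agents to abandon the outdated maximum.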
IV-B Eventual Correctness
We now show that the algorithm described in the previous subsection is correct in the sense that, with high probability (and even almost surely), it eventually settles on the correct value if arrivals and departures stop.
Remember that denotes the group of agents present at time , and let be the set of intrinsic values of nodes in . Assume that after some time no agent leaves and no new agent joins the system, so that , and for all . Then, we need to show that all the currently existing agents in the network will successfully reach the correct maximum value. For that purpose, we define the following property.
We say that an algorithm is eventually correct if for any with for all , there exists a such that for all and all .
Denote , MAX and . We state the following result.
For all and any , if then .
Lemma 1 states that, at any time , if the counter value of an agent is equal to the maximum value , then its estimate is equal to an intrinsic value of one of the agents present in the system at this time and whose value is .
Proof. Consider any agent with . We have three scenarios:
(a) Agent has just joined the system at time . Hence, and according to Algorithm 2. Since , this implies that . Hence, for all . Consequently, it holds that .
(b) (and is not a new agent). In this case, since at most one agent can change its counter at any time, there is exactly one agent with . This implies that an agent with has left at time and informed agent about its departure, otherwise or . Consequently, agent restarts according to lines 7-9 in Algorithm 3 and we have that and .
(c) (and is not a new agent). We have two possibilities:
c1) , i.e., agent has increased its counter value at time such that , which can happen by one of the following actions:
an agent with has left the group and informed agent about its departure. Consequently, in view of lines 4-6 in Algorithm 3, agent has incremented its counter to , otherwise and .
no departure occurred but agent has interacted with an agent with . Consequently, in view of lines 4-6 in Algorithm 4, we obtain and .
c2) , i.e., agent did not increase its counter value at time . Then, since , it holds that and we know that for some . There are two different possibilities:
. We know that agent did not leave, since otherwise it would have informed some neighbour about its departure, resulting in , which leads to case (b) rather than case (c). Hence, since , it holds that .
This completes the proof of Lemma 1.
Proof. The proof of Theorem 1 relies on Lemma 1 and the result developed in . Note that an essential difference between our problem and the setup in  is that the gossip interaction between agents (as in Algorithm 4) depends considerably on their counter values, which is not the case in static networks as in . We will therefore invoke their result twice: once on the counter values, to show that all agents eventually hold the maximal counter value , and once on the actual estimates, to show that they eventually reach .
After time , only Algorithm 4 is applied. Ignoring for the moment its effect on the , observe that it performs a classical gossip operation on the , in the sense that an interaction between and results in . Theorems 4 and 5 in , applied to complete graphs in view of Assumption 1, then allow us to guarantee that the counters of all agents converge to the maximum counter value in a finite time with the following properties
where denotes the th harmonic number, i.e., . Moreover, with probability , is bounded by
After , since for all , it follows from Lemma 1 that all correspond to actual values . Moreover, since one can easily verify that at all times, there holds . It is therefore sufficient to show that all eventually settle on the same value.
For this purpose, observe that once all agents have the same , Algorithm 4 reduces to its line 2, , which is again a classical pairwise gossip. We can then re-invoke Theorems 4 and 5 in  to show the existence of a after which , with the same bounds on as on . In particular, , and there is a probability that is at most twice the expression in (2). This completes the proof of Theorem 1.
Note that, since we apply the result of  twice to prove that Algorithms 1–4 are eventually correct, the upper bound we obtain on the time needed to achieve this property is conservative. This is because, in Algorithms 1–4, the agents update their counters and their estimates simultaneously rather than sequentially.
V Departures are not announced
V-A Algorithm description
Leaving agents may not always be able to announce their departure, for example in the case of unforeseen failures or disconnections. The algorithms of Section IV can no longer be applied in this more challenging setting. We therefore now propose an alternative algorithm that does not use messages from departing agents. The idea is to have each agent maintain a variable representing the “age” of its information. This age is kept at 0 when the agent’s estimate of corresponds to (only) its own value , as the validity of its information is then guaranteed. Otherwise, is increased by 1 every time agent interacts with another agent, as the information gets “older”. When an agent changes its estimate by adopting the estimate of an agent , it also sets to the value , which corresponds to the age of the new information it now holds. Finally, when reaches a threshold , the information is considered too old to be reliable and is discarded: is reset to and to 0. We defer the discussion on the value of to Section VII, but already note that it should depend on (bounds of) the system size, or possibly change with time.
Formally, the behavior of an agent joining the system is governed by Algorithm 5, while the update of and the gossip interactions are governed by Algorithms 6 and 7 (where we use to denote intermediate values the variables may take during the computation leading to their values at ). Observe that when and have the same estimate, they update the age of information to the smallest among and . Observe also that the algorithms guarantee that for every at all times, since can never decrease except when it is re-initialized at . Finally, there is no algorithm for departures, since agents are not assumed to be able to take any action when other agents leave, as departures are not announced.
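Our reading of the age-based rules can be sketched as follows. Again, this is an illustrative sketch under stated assumptions: the formal listings of Algorithms 5–7 are not reproduced here, so the names (`AgedAgent`, `tick`, `aged_gossip`) and the exact ordering of the age update relative to the exchange are ours:

```python
from dataclasses import dataclass

@dataclass
class AgedAgent:
    x: int        # intrinsic value (constant)
    y: int        # current MAX estimate
    age: int = 0  # "age" of the information behind y

def tick(a, T):
    # Sketch of the age update (Algorithm 6): an agent's own value stays
    # forever fresh; borrowed information ages at every interaction and
    # is discarded once its age reaches the threshold T.
    if a.y == a.x:
        a.age = 0
    else:
        a.age += 1
        if a.age >= T:
            a.y, a.age = a.x, 0    # information deemed too old: reset

def aged_gossip(a, b, T):
    # Sketch of the gossip step (Algorithm 7): adopt the larger estimate
    # together with its age; equal estimates keep the smaller (fresher) age.
    tick(a, T)
    tick(b, T)
    if a.y == b.y:
        a.age = b.age = min(a.age, b.age)
    elif a.y > b.y:
        b.y, b.age = a.y, a.age
    else:
        a.y, a.age = b.y, b.age
```

With this sketch, agents holding an estimate inherited from a departed agent keep aging it (no agent holds it as its own value, so no one resets its age to 0), until the threshold forces a reset and the gossip reconverges to the maximum among the remaining agents.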
V-B Eventual Correctness
We now discuss the eventual correctness of the algorithm described above. For space reasons, only sketches of proofs will be presented. We use the same conventions as in Section IV-B. We first prove that outdated values are eventually discarded if agents stop leaving or arriving.
If no arrival or departure takes place after time , then almost surely there exists a time after which every estimate corresponds to the value of an agent present in the system, i.e., for , for all there exists a such that . As a consequence, for every and .
Proof. Observe first that agents can only set their to their own or to the value of some other agent. Hence, since the set of values remains unchanged after , any value at a time that is not equal to some must be equal to some , i.e., it must have been held as an estimate at time . We show that these outdated values are eventually discarded.
Let be such an outdated value, that is, for some but for no . Let then be the set of agents holding as estimate at time , and be the minimal age of information at for those holding this outdated value as estimate. As long as is non-empty, there must hold due to the reset in Algorithm 6. We will show that must keep increasing if remains non-empty, leading to a contradiction.
Every time an agent for which interacts with some other agent, it follows from Algorithm 7 and the timer update in Algorithm 6 that it must increase its counter by 1, unless it changes its value and no longer belongs to . In both cases, the number of agents in holding this value with age has decreased by 1. Besides, since is equal to no , the only way an agent can join without having been in is by interacting with an agent , and the rules of the algorithm then imply that . Hence never decreases, and when it is not increasing, the number of agents in for which either remains constant or decreases as soon as one of them is involved in an interaction (once it reaches 0, automatically increases). Since all agents are almost surely repeatedly involved in interactions, will almost surely eventually increase as long as is nonempty, in contradiction with the fact that it cannot exceed . Thus is almost surely eventually empty, which means that any outdated value is almost surely eventually discarded, so that after some time every estimate corresponds to a for .
Let us now prove that the agents’ estimates eventually take the correct value MAX with a high probability.
For all , there exists a (sufficiently large) such that, if no arrival or departure takes place after time , then there exists a time after which holds for every with a probability at least .
Proof. Let be an agent holding the maximal value after time : . It follows from Lemma 2 that holds after some , which implies , since one can verify that holds for all agents at all times. The timer update in Algorithm 6 then implies that at all times after .
Let us now fix some arbitrary time and let be the set of agents such that (i) , and (ii) . The set contains at least agent . Moreover, for , there holds . Indeed, observe first that no agent of “resets”, because the of agents in are by definition smaller than . Moreover, agents in do not change their value either, because it follows from Lemma 2 that no agent has a value , so condition (i) still holds. Besides, the timers increase by at most 1 at each iteration, so condition (ii) also holds. Observe now that whenever an agent interacts with an agent at a time , agent will set to and join . A reasoning similar to the analysis of the classical pairwise gossip algorithm in  then shows that, for every , there exists a given by (2) such that, with probability at least , all agents are in after and at least until (provided ). For all , would then hold. Since this holds for any arbitrary , it follows that for every and , holds with probability at least .
The proofs of eventual correctness show that the value of the threshold is subject to a trade-off: We see from the proof of Lemma 2 that the time needed to discard outdated values increases when is increased. On the other hand, a sufficiently large threshold is needed in Theorem 2. In its proof, we see that larger thresholds allow larger , which imply smaller probabilities of some agent not having the correct value.
Besides, we see in Theorem 2 that must be sufficiently larger than the expression (2), which depends on , the eventual size of the system. This implies that the agents must know at least a bound on this size, unlike in the algorithm of Section IV, where leaving agents could send a last message. One theoretical way to avoid this problem would be to let grow slowly with time, so that it eventually always becomes sufficiently large if the system composition stops changing (this growth should be slow enough for the argument of Lemma 2 to remain valid). However, the system would then also become slower and slower at discarding outdated information.
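To build intuition for this trade-off, one can crudely estimate the chance of a spurious reset by the probability that a fixed agent's next threshold-many partners, drawn uniformly on a complete graph of n agents, all differ from the agent holding the maximum. This is our simplification, not a bound from the paper: it ignores indirect refreshes via the min-age rule, which can only make a reset less likely, so it overestimates the direct-contact failure mode:

```python
def direct_miss_prob(n, T):
    """Probability that T consecutive uniformly chosen partners of a
    fixed agent, on a complete graph of n agents, all avoid the one
    agent holding the maximum (crude upper-bound intuition; indirect
    refreshes of the age via intermediaries are deliberately ignored).
    """
    assert n >= 2 and T >= 1
    return (1.0 - 1.0 / (n - 1)) ** T
```

The expression shrinks as the threshold T grows, illustrating why larger thresholds suppress spurious resets at the cost of a slower reaction to genuine departures.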
VI Numerical simulations
We demonstrate the application of our algorithms on a group of 25 agents. Initially, the intrinsic states of all agents are assigned random integer values between 0 and 1000; the two largest values are found to be and . The estimates of all agents are initialized to , and all counters and ages are initialized to 0. Agent , which holds the highest value, leaves at . A pairwise interaction between two randomly selected agents takes place at every other time instant.
We have simulated the two algorithms, with two thresholds for the algorithm of Section V, and the results are represented in Fig. 1. In all three cases, the agents first converge to the MAX value of in a bit more than 100 time steps, before agent 9 leaves the network. After the departure of agent at , the algorithm of Section IV, which uses messages from departing agents, reconverges to the new maximal value in 137 time steps. The performance of the algorithm of Section V, without messages from leaving agents, is significantly worse. For a threshold , it takes 506 time steps to reconverge to the new maximal value, and the system later suffers from several spurious resets, caused by agents reaching the threshold by chance. The probability of this occurring can be significantly reduced by taking a higher threshold, but this results in an even longer time to react to the departure of , as seen in Fig. 1(c) with : 2439 time steps are then needed to re-obtain the correct value, mostly because it takes very long before the agents abandon their former estimate. This clearly illustrates the trade-off on the threshold value : too small a value results in spurious resets as soon as some agents have not “heard” about the agent with the highest value for too long, while too large a threshold results in a significant delay before agents decide that an agent has probably left the system.
We also compared the convergence times of the two approaches after the agent with the MAX value has left the group, for various numbers of nodes. The results are summarized in Table I; we take for the algorithm of Section V. We observe that, as the number of agents increases, the algorithm of Section IV requires proportionally far fewer iterations to reach consensus, as expected and as already observed in Fig. 1. Moreover, it achieves a stronger version of eventual correctness than the algorithm of Section V, since it avoids spurious resets, as discussed above. It does, however, require the possibility of sending a message when leaving.
VII Discussion and Conclusion
We have investigated the distributed MAX-consensus problem for open multi-agent systems. Two algorithms have been proposed, depending on whether or not the agents who leave the network can inform another existing agent about their departure. Eventual correctness has been proven for both.
Taking a step back, we see two main challenges in the design of algorithms for open multi-agent systems, as also briefly noted in :
Robustness and dynamic information treatment: the algorithms should be robust to departures and arrivals, in the sense that they should keep updating their estimates. Novel information held by arriving agents should be taken into account, and outdated information, for example related to agents no longer in the system, should eventually be discarded.
Performance in open context: the performance of classical multi-agent algorithms is often measured by the rate at which they converge to an exact solution or a desired situation (or the time to reach such a situation). This approach is no longer relevant in a context where agents’ departures and arrivals keep “perturbing” the system, and possibly the algorithm’s goal (as is the case here). Rather, efficient algorithms would be those for which the estimated answer remains “close” to some “instantaneous exact solution”, according to a suitable metric.
The algorithms we have developed here do answer the first issue of robustness and information treatment for the problem of distributed maximum computation. The characterization and optimization of their performance in an open context, however, remains unanswered at present and could be the topic of further works. We note that the behavior of a gossip averaging algorithm in an open multi-agent system was characterized in , but this algorithm was not designed to compute a specific value, as is the case here.
In particular, we observe that both algorithms may suffer from occasional, apparently unnecessary resets. This can happen after the departure of an agent that did not hold the largest value in the algorithm of Section IV, or when an agent has been isolated for too long from the one with the highest value in the algorithm of Section V. We do not know at this stage whether these spurious resets can be entirely avoided, especially when leaving agents cannot send a final message: in that case, it is indeed impossible to know for sure whether the agent with the highest value has left or has simply not communicated for a while. There are, however, several ways to mitigate the damage of these spurious resets and to play on the trade-off between the effect of these perturbations and the speed at which the system reacts. A simple solution would be, for example, to apply an additional filtering layer when the algorithm requires an important decrease of ; a new estimate would then be followed, except that sharp decreases would be replaced by gradual ones. We also observe that our second algorithm will either work only when the system size is not too large with respect to (case of a fixed threshold), or eventually work for any size but become slower and slower to react (case of a growing ). Whether this can be avoided in a context where leaving agents do not warn others about their departure also remains an interesting open question.
-  R. Olfati-Saber and N. F. Sandell, “Distributed tracking in sensor networks with limited sensing range,” In Proceedings of the 2008 American Control Conference, Washington, U.S.A., pp. 3157–3162, 2008.
-  L. Shi, A. Capponi, K. Johansson, and R. Murray, “Resource optimisation in a wireless sensor network with guaranteed estimator performance,” IET Control Theory and Applications, vol. 4, no. 5, pp. 710–723, 2010.
-  O. Demigha, W. Hidouci, and T. Ahmed, “On energy efficiency in collaborative target tracking in wireless sensor network: A review,” IEEE Communications Surveys & Tutorials, vol. 15, no. 3, pp. 1210–1222, 2012.
-  V. Cerf and R. Kahn, “A protocol for packet network inter-communication,” IEEE Transactions on Communications, vol. 22, no. 5, pp. 637–648, 1974.
-  S. Muthukrishnan, B. Ghosh, and M. Schultz, “First and second order diffusive methods for rapid, coarse, distributed load balancing,” Theory of Computing Systems, pp. 331–354, 1998.
-  R. Hegselmann and U. Krause, “Opinion dynamics and bounded confidence models, analysis, and simulation,” Journal of Artifical Societies and Social Simulation, vol. 5, no. 3, pp. 1–33, 2002.
-  V. Blondel, J. Hendrickx, and J. Tsitsiklis, “Continuous-time average-preserving opinion dynamics with opinion-dependent communications,” SIAM Journal on Control and Optimization, vol. 48, no. 8, pp. 5214–5240, 2010.
-  J. Liu, N. Hassanpour, S. Tatikonda, and A. Morse, “Dynamic threshold models of collective action in social networks,” In Proceedings of the 51st IEEE Conference on Decision and Control, Hawaii, U.S.A., pp. 3991–3996, 2012.
-  S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah, “Randomized gossip algorithms,” IEEE Transactions on Information Theory, vol. 52, no. 6, pp. 2508–2530, 2006.
-  F. Iutzeler, P. Ciblat, and J. Jakubowicz, “Analysis of max-consensus algorithms in wireless channels,” IEEE Transactions on Signal Processing, vol. 60, no. 11, pp. 6103–6107, 2012.
-  S. Giannini, A. Petitti, D. D. Paola, and A. Rizzo, “Asynchronous max-consensus protocol with time delays: Convergence results and applications,” IEEE Transactions on Circuits and Systems-I, vol. 63, no. 2, pp. 256–264, 2016.
-  R. Olfati-Saber, J. Fax, and R. Murray, “Consensus and cooperation in networked multi-agent systems,” Proceedings of the IEEE, vol. 95, no. 1, pp. 215–233, 2007.
-  J. Hendrickx, A. Olshevsky, and J. Tsitsiklis, “Distributed anonymous discrete function computation,” IEEE Transactions on Automatic Control, vol. 56, no. 10, pp. 2276–2289, 2011.
-  W. Ren, R. Beard, and E. Atkins, “A survey of consensus problems in multi-agent coordination,” In Proceedings of the 2005 American Control Conference, Portland, U.S.A., pp. 1859–1864, 2005.
-  D. Stutzbach and R. Rejaie, “Understanding churn in peer-to-peer networks,” In Proceedings of the 6th ACM SIGCOMM Conference on Internet Measurement, Rio de Janeiro, Brazil, pp. 189–202, 2006.
-  F. Kuhn, S. Schmid, and R. Wattenhofer, “Towards worst-case churn resistant peer-to-peer systems,” Distributed Computing, vol. 22, no. 4, pp. 249–267, 2010.
-  M. Jelasity, A. Montresor, and O. Babaoglu, “Gossip-based aggregation in large dynamic networks,” ACM Transactions on Computer Systems, vol. 23, no. 3, pp. 219–252, 2005.
-  F. Kuhn, N. Lynch, and R. Oshman, “Distributed computation in dynamic networks,” In Proceedings of the 42nd ACM symposium on Theory of computing, Massachusetts, U.S.A., pp. 513–522, 2010.
-  C. Dutta, G. Pandurangan, R. Rajaraman, Z. Sun, and E. Viola, “On the complexity of information spreading in dynamic networks,” In Proceedings of the 24th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 717–736, 2013.
-  J. Hendrickx and S. Martin, “Open multi-agent systems: Gossiping with deterministic arrivals and departures,” In Proceedings of the 54th Annual Allerton Conference on Communication, Control, and Computing, Monticello, Illinois, U.S.A.
-  T. Huynh, N. Jennings, and N. Shadbolt, “An integrated trust and reputation model for open multi-agent systems,” Autonomous Agents and Multi-Agent Systems, vol. 13, no. 2, pp. 119–154, 2006.
-  I. Pinyol and J. Sabater-Mir, “Computational trust and reputation models for open multi-agent systems: a review,” Artificial Intelligence Review, vol. 40, no. 1, pp. 1–25, 2013.
-  B. Nejad, S. Attia, and J. Raisch, “Max-consensus in a max-plus algebraic setting: The case of fixed communication topologies,” In Proceedings of the International Symposium on Information, Communication and Automation Technologies, Sarajevo, Bosnia and Herzegovina, pp. 1–7, 2009.
-  S. Zhang, C. Tepedelenlioğlu, M. Banavar, and A. Spanias, “Max consensus in sensor networks: Non-linear bounded transmission and additive noise,” IEEE Sensors Journal, vol. 16, no. 24, pp. 9089–9098, 2016.