1 Introduction
Over the last decade, the continuously increasing development and excessive use of energy-hungry mobile devices (like smartphones, tablets, or even electric vehicles; see [13, 3]) in ad hoc networks have given rise to the problem of efficient power management under various objectives. A viable solution to this critical problem, which has been extensively studied in the recent related literature due to its efficiency and wide applicability, is Wireless Power Transfer (WPT) technology using magnetic resonant coupling [12] combined with ultra-fast rechargeable batteries [14]. By exploiting such a technology, it is possible to recharge the network devices as required and prolong their lifetime.
In a rechargeable ad hoc network, there are two main types of entities (with different characteristics) that are distributed in the network area, called chargers and agents, respectively. Usually, a charger is considered a special device that has high energy supplies and acts as a transmitter, while an agent has significantly lower battery capacity and acts as a receiver. The charger is responsible for the energy management in the network, transferring parts of its energy to the agents in an effective manner. In contrast, the agents are the actual network devices that consume energy by performing various communication and sensing tasks (like collecting and routing data) and are, therefore, in need of energy replenishment to sustain their normal operation.
There are several studies that deviate from the above modeling assumptions. In particular, Zhang et al. [24] introduced the notion of collaborative charging, where the chargers are able to transfer energy to each other as well. This feature was extended by Madhja et al. [15] to a hierarchical structure. Furthermore, recent studies do not even use chargers, but assume that the agents themselves are able to both receive and send power wirelessly [16, 19]. Another research direction deals with simultaneous energy transfer and data collection by the charger (e.g. [25]). In this setting, the charger practically acts as an energy transmitter as well as a sink.
There are generally many different assumptions regarding the charging process, such as whether there is a single charger or multiple ones that may or may not be mobile, as well as the information that is available about the energy levels and the locations of the (possibly mobile) agents. As surveying all these different settings is not the main focus of this paper, we refer the interested reader to the book [21].
1.1 Our contribution
In this paper, we consider ad hoc networks that consist of mobile agents and a single static charger. The agents move around following a mobility model and consume energy for communication purposes. The charger is assumed to have initial finite energy that can be used to replenish the battery of the agents that get in its charging range. See Section 2 for a detailed description of our model. As the mobility and energy consumption characteristics of the agents become available online, the charger adapts by changing its transmission power (which, in turn, defines the charging range) in response to the agents' behavior, with the goal of extending the network lifetime. To the best of our knowledge, this is the first paper that systematically studies the setting where the charging range is dynamically selected adaptively to the agents' status.
We theoretically and experimentally showcase the need for adaptiveness. In particular, for every possible fixed range that the charger may have, we identify worst-case scenarios where there is always an adaptive solution that performs better (see Section 3). In addition, we define two simplified offline optimization problems that are closely related to the online multi-objective one, and prove their computational intractability using reductions from the knapsack problem (see Section 4). Furthermore, we design three adaptive algorithms that exploit different knowledge levels regarding the mobility and residual energy of the agents. We compare their performance with respect to various metrics using a non-trivial simulation setup, where we consider probability distributions over randomized mobility and energy consumption scenarios that are designed to test our methods in highly heterogeneous instances (see Section 5).
1.2 Related work
In this section, we briefly discuss some recent papers that are closely related to the current one. Mobility in ad hoc networks has been thoroughly studied and many models have been proposed over the years. Generally, such mobility models assume that the agents perform different kinds of random walks that may depend on many different parameters (e.g. [4, 2]), and even be influenced by social network attributes that attempt to capture human behavior (e.g. [17, 23, 10]). In this work, we adopt a generic mobility model that allows us to construct many different and interesting mobility patterns for the agents.
Recharging in mobile ad hoc networks has been the focus of many research papers. Indicatively, Nikoletseas et al. [20] considered mobile ad hoc networks with multiple static chargers of finite energy supplies. They designed and evaluated (using real devices) two algorithms that decide which chargers must be active during each round, in order to maximize charging efficiency and achieve energy balance, respectively. Angelopoulos et al. [1] also considered mobile ad hoc networks, with the difference that there exists a single mobile charger that has infinite energy and traverses the network in order to recharge the agents as needed. They focused on designing optimal traversal strategies for the mobile charger with the goal of prolonging the network lifetime.
He et al. [9] studied the energy provisioning problem, that is, the problem of minimizing the number of chargers and computing where they should be located in the network area, so that all (possibly mobile) agents are always active (i.e., they have or get enough energy to complete their tasks). By taking into account an agent's velocity and battery capacity, Dai et al. [5] showed that the agent's continuous operation cannot be guaranteed, and introduced the Quality of Energy Provisioning (QoEP) metric to characterize the expected time that the agent is actually active.
Dai et al. [7] studied the safe charging problem with the goal of maximizing the charging utility, while ensuring that there is no point in the network area with electromagnetic radiation (EMR) that exceeds a threshold value. Specifically, they assumed a network consisting of static agents and multiple stationary chargers. They investigated which of the chargers should be active such that the EMR constraint is not violated and proposed algorithms with provable efficiency guarantees. In [6], the authors studied a variation of this problem where the powers of the chargers can be adjusted once at the beginning and are not necessarily equal to each other. Nikoletseas et al. [18] also studied low-radiation efficient wireless charging, but defined a different charging model that takes into account hardware constraints for the chargers and the agents (i.e., the chargers have finite energy supplies and the agents have battery capacity constraints).
The last two papers seem to be the most closely related to ours, in the sense that the power of each charger is adjustable. However, observe that since the agents are static in both models considered in [6, 18], each charger adjusts its power only once, at the beginning of the time horizon. In contrast, the power of the charger in our setting constantly changes over time, adaptively to the behavior of the mobile agents, which is revealed in an online manner. Practically, this means that the problem of computing the power that the charger should have must be solved anew in every round.
2 Model
There are agents that move around in a bounded network area , and a single static charger that is positioned at the center of . For simplicity, we assume that is represented by a rectangle defined by the points and on the Euclidean space. Hence, the position of the charger is given by the coordinates .
We assume that there is a discrete time horizon consisting of a number of distinct rounds each of which runs for a constant period of time . For every agent , we denote by its position at the beginning of round . The positions of the agents are updated as they move around in . For the charger, we denote by its range during round . is decided by the transmission power of the charger and defines a circle of radius around ; let denote this circle on the plane. All agents that pass through during round can get recharged (if they need to).
2.1 Mobility model
At the beginning of each round , every agent randomly selects a speed mode . This aims to model three kinds of movement: slow (like walking), medium (like running), and fast (like travelling in a vehicle). Let be the maximum possible velocity that any agent can have at any time. Then, the speed mode of an agent indicates whether its velocity takes random values in the intervals , , or .
Each agent performs a random walk as follows. At round , it starts from position , and chooses randomly a new direction as well as a new velocity . The direction together with , define a line along which the agent travels with the chosen velocity until it reaches its final position at the end of the round, which is the position at the beginning of the next round. In particular, has coordinates
We remark that if the above equations do not define a point in , then the movement is redefined accordingly. Starting from and the initial agents’ deployment in , the above process is repeated for all rounds .
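To make one round of this mobility model concrete, the following Python sketch implements a single movement step. It is illustrative only: the three speed modes are taken here as equal thirds of the interval up to the maximum velocity, and out-of-area movements are redefined by clamping to the rectangle, both of which are assumptions standing in for the paper's unspecified choices.

```python
import math
import random

def mobility_step(pos, v_max, mode, xmax, ymax, tau=1.0):
    """One round of the random-walk mobility model (illustrative sketch).

    `mode` in {0, 1, 2} selects the slow/medium/fast speed interval,
    here taken as thirds of [0, v_max]. The agent picks a uniformly
    random direction and travels for a round of duration `tau`; if the
    line leaves the network rectangle, the movement is redefined here
    by clamping to the boundary.
    """
    lo, hi = mode * v_max / 3, (mode + 1) * v_max / 3
    v = random.uniform(lo, hi)              # velocity within the mode's interval
    theta = random.uniform(0, 2 * math.pi)  # uniformly random direction
    x = pos[0] + v * tau * math.cos(theta)
    y = pos[1] + v * tau * math.sin(theta)
    return (min(max(x, 0.0), xmax), min(max(y, 0.0), ymax))
```

Repeating this step for all rounds, starting from the initial deployment, reproduces the process described above.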
Notice that this mobility model is general enough to allow us to create many interesting special and extreme scenarios by restricting the movement of the agents as necessary.
2.2 Energy model
Let be the energy of agent at the beginning of round . All agents have the same battery characteristics in the sense that they have the same battery capacity, denoted by . We assume that initially all agents are fully charged, i.e., for every agent .
During round , each agent consumes an amount of energy for communication purposes which depends on random sensing and routing events. Since the thorough study of such events is beyond the scope of this paper, we simply assume that follows a Poisson probability distribution with expected value . The energy of agent at the beginning of the next round (assuming no recharging takes place) is equal to
We remark that the agents are assumed to not consume any energy due to movement as the necessary energy can be supplied by different sources. For example, in any crowdsensing scenario it is supplied by the humans that carry around their smart devices.
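The per-round energy update without recharging can be sketched as follows; the Poisson sampler uses Knuth's classic method, and the function and parameter names are illustrative.

```python
import math
import random

def poisson_sample(lam):
    """Draw from a Poisson distribution with mean `lam` (Knuth's method,
    adequate for the small expected values used here)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def consume(energy, lam):
    """Energy of an agent at the start of the next round, before any
    recharging: consumption is Poisson with expected value `lam`, and
    the residual energy never drops below zero."""
    return max(energy - poisson_sample(lam), 0.0)
```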
2.3 Charging model
Let denote the energy that the charger has at the beginning of round . We assume that the charger initially has some finite amount of energy that can be used to replenish the energy that the agents consume.
In particular, if the charger has the appropriate amount of energy, then all agents that get in its range receive a positive amount of energy. Let and be the first and last position of agent that are in range. These may or may not be defined depending on whether the agent travels or not through ; Figure 1 depicts an example of all possible cases about the relations between , , and . The time that agent spends in the charger’s range is then equal to
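Since an agent travels along a straight line within a round, its first and last in-range positions, and hence its time in range, can be obtained by intersecting the travel segment with the charging circle. The following sketch computes that time under the assumption of constant velocity within the round; the quadratic below solves for the two intersection parameters along the segment.

```python
import math

def time_in_range(a, b, center, r, tau=1.0):
    """Time an agent spends inside the charging circle during one round.

    The agent moves at constant velocity from `a` to `b` over a round
    of duration `tau`; we intersect that segment with the circle of
    radius `r` around `center` and return `tau` times the covered
    fraction (illustrative sketch; degenerate cases are clamped).
    """
    dx, dy = b[0] - a[0], b[1] - a[1]
    fx, fy = a[0] - center[0], a[1] - center[1]
    A = dx * dx + dy * dy
    if A == 0:  # the agent is static during this round
        return tau if fx * fx + fy * fy <= r * r else 0.0
    B = 2 * (fx * dx + fy * dy)
    C = fx * fx + fy * fy - r * r
    disc = B * B - 4 * A * C
    if disc < 0:
        return 0.0  # the travel line misses the circle entirely
    sq = math.sqrt(disc)
    t1 = max((-B - sq) / (2 * A), 0.0)  # first in-range parameter
    t2 = min((-B + sq) / (2 * A), 1.0)  # last in-range parameter
    return tau * max(t2 - t1, 0.0)
```

For instance, an agent crossing a unit circle diametrically along a segment of length four spends half the round in range.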
We assume that agent receives energy according to a simplified version of the wellknown Friis transmission equation. In particular,
(1) 
where and are environmental and technological constants. The energy of agent at the beginning of the next round (taking into account both energy consumption and recharging), is equal to
Observe that the amount of energy that the agent receives must respect its battery limit. Of course, the energy of the charger is also decreased accordingly.
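A minimal sketch of the per-agent charging update follows. The received-power law shown, decaying with the square of the distance, is an assumption standing in for the simplified Friis-style equation (1), with `alpha` and `beta` playing the roles of the environmental and technological constants; the transfer is capped by both the agent's battery limit and the charger's remaining energy, as required above.

```python
def recharge(agent_energy, battery_cap, charger_energy, dist, t_in,
             alpha=1.0, beta=1.0):
    """Energy transferred to one in-range agent during one round.

    `t_in` is the time the agent spends in the charger's range.
    Received power is alpha / (beta + dist)**2, an illustrative
    stand-in for equation (1). The amount given respects the agent's
    battery capacity and decreases the charger's energy accordingly.
    Returns the agent's and the charger's updated energies.
    """
    requested = (alpha / (beta + dist) ** 2) * t_in
    given = min(requested, battery_cap - agent_energy, charger_energy)
    return agent_energy + given, charger_energy - given
```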
3 The need for adaptiveness
In this section, we aim to justify the need for algorithms that can dynamically change the charging range over time in order to adapt to the agents' behavior. The simplest algorithm that we can come up with is to keep the range fixed during the whole period of time. However, observe that there are essentially infinitely many different fixed values. Therefore, finding one that works efficiently (with respect to the various objectives that we could be interested in) for every possible instance is improbable. In fact, in the following we will prove that this is actually impossible.
3.1 Theoretical justification
First, we will show that, for any fixed range value (other than the maximum one), there always exists an instantiation of the agents' movements for which there will be no recharging at all.
Proposition 1.
For any range value , there exists a scenario for which fixing the charger’s range equal to is equivalent to not using a charger at all.
Proof.
Consider the scenario according to which no agent ever passes through the circle . Then, if the range is set to for the whole period of time, no agent will ever get recharged. ∎
Notice that a scenario similar to the one described in the proof of Proposition 1 exists even for the maximum possible range . However, in such a case there exists no algorithm that can do any better. Hence, we need to make the critical assumption that all agents will pass through the circle at least once. Any agent not passing through this area should not be accounted for in our objectives.
Next, we prove a stronger statement that holds true even when we consider the maximum range value. In particular, we claim that there exist multiple scenarios (that are instantiations of the one described in the proof of our next proposition) for which all fixed range values underperform simultaneously.
Proposition 2.
There exists a scenario for which setting the charger’s range equal to any fixed value is not optimal.
Proof.
Consider the scenario according to which the agents get in range only when their energy levels are below a threshold. This scenario captures cases where the agents correspond to humans using smart devices; they recharge their devices only when they need to. Assume that the agents have the following energy consumption characteristics. There are agents with small energy consumption and a single greedy agent that consumes all of its available energy, at every round.
If the charger’s range is fixed to any during the whole time horizon, this single greedy agent can choose its in-range position so that it gets its battery fully recharged. As a result, the charger’s energy can be quickly drained (if the initial energy is small enough) before the other agents have a chance to get recharged.
Now, consider the algorithm that adapts to the behavior of this greedy agent and, in each round, sets the range such that this agent gets a minimum amount of energy. For example, it can set the range equal to the distance between the agent and the charger so that, according to equation (1), it gives to the agent only a small amount of energy every time. This way, the charger conserves energy for the rest of the agents and the network’s lifetime can be extended. ∎
3.2 Experimental justification
We conclude with an experimental demonstration of the phenomenon observed above, implemented in Matlab R2016a. We consider a simulation setup with agents that move around in a network area . The charger is positioned at the center of , has initial energy , and its range can take values in . Each agent has battery , maximum velocity , and its speed mode is redefined with probability in each round. Also, the agents are randomly partitioned into 4 groups, namely , of expected sizes . Then, each agent consumes energy following a Poisson distribution with a randomly chosen expected value such that (2) holds. We remark that the expected values are chosen non-uniformly from the corresponding intervals so that there is heterogeneous energy consumption among the agents.
We compare two fixed-value algorithms and an adaptive one. The first fixed-value algorithm sets the range equal to during the whole period of time, while the second one sets the range equal to ; we will refer to these as the and algorithm, respectively. The adaptive algorithm is simple and oblivious to the agents’ characteristics: at the beginning of each round, it equiprobably sets the range equal to or . Furthermore, we also compare these algorithms to the optimal one when the charger is given infinite energy. Its performance serves as an upper bound that is unreachable by any algorithm when the charger has finite energy.
We present results for two different setups corresponding to two different mobility scenarios. In the first one, all agents randomly move around the whole network area. In the second one, no agent is allowed to pass through the circle . The first scenario aims to capture random movements, while the second one follows Proposition 1 and serves as an extreme case for small range values. Recall that we would like our algorithms to perform efficiently in both scenarios, as the agents’ characteristics are generally unknown and become partially available in an online manner.
Figure 2 depicts the performance of the algorithms with respect to three different objectives:
The third objective (number of agents with adequate energy) is stronger than the second one (number of working agents), and the fact that the corresponding figures are very similar indicates that the quality of the recharges is sufficient.
As expected, in both simulations, the algorithm recharges more agents during the early rounds, essentially simulating the infinite-energy optimal algorithm. However, since the charger’s energy is finite, it is drained quickly. On the other hand, the algorithm consistently recharges fewer agents but over a longer period of time in the first simulation, while it performs poorly and is equivalent to not having a charger (zero charges) in the second simulation. The adaptive algorithm performs sufficiently well in the first simulation, where it strikes a balance between the two fixed-value algorithms, while it outperforms both of them in terms of keeping the network active for a longer time in the second simulation. Notice the difference between the algorithm and the adaptive one, even though the expected range of the latter is exactly equal to .
Of course, keeping the network active for a longer period of time while having too few agents with adequate energy to complete tasks may not be desirable. In fact, one could argue that the performance of the algorithm is more reasonable in these scenarios, since it maintains more agents active simultaneously (but for a shorter period of time). The counterargument would be that the best objective to consider always depends on the application, and there are always agent characteristics that could make the algorithm (or any high fixed-value algorithm) unfair. For example, consider again the scenario presented in the proof of Proposition 2, where there exists a small population of greedy agents that demand all of the charger’s energy for themselves.
4 Optimization problems
In this section, we define two simplified offline optimization problems and prove their computational intractability. These two problems are closely related to the online one that we defined in the previous sections, and each of them focuses on a particular objective: the number of charges that the charger performs during a given time horizon, and the number of rounds during which the network is active, respectively. The hardness of these problems is only indicative of how hard the actual online problem is.
4.1 Maximizing the number of charges
As input, we are given all information about the movement and energy consumption characteristics of the agents during all rounds , where is a given finite time horizon. Moreover, the charger has initial energy and we can choose its charging range from a set of distinct values such that . All non-fully charged agents that are in the specified charging range receive energy from the charger according to equation (1) with and . The goal is to set the range of the charger, for any round , in order to maximize the total number of agents that get recharged until the charger runs out of energy; we explicitly assume that the charger does not recharge the agents if it does not have the requested amount of energy (i.e., we do not give fractions of the requested energy). In the following, we will refer to this simple offline full-information maximization problem as Maximize the Number of Charges (MNC, for short).
Theorem 3.
The MNC problem is NP-hard.
Proof.
We use a reduction from the Knapsack Problem (KP, for short), which is known to be NP-hard [8]. Its formal description is as follows.
KP: Consider a collection of items such that item has value and weight . We are given a knapsack of capacity , and the goal is to select a set of items of total weight at most in order to maximize the total value of these items.
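For intuition about the problem we reduce from, recall that KP admits a classic dynamic program that is pseudo-polynomial in the knapsack capacity (which is consistent with NP-hardness, since the capacity can be exponential in the input size). A minimal sketch, assuming integer weights as in the rescaled instance below:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via dynamic programming over capacities.

    `best[c]` holds the maximum total value achievable with total
    weight at most c. Iterating capacities downward ensures each item
    is used at most once.
    """
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```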
Given an instance of KP, we will design an instance of MNC. First, without loss of generality, we assume that the values of the items as well as the weight of the knapsack in the instance of KP are rescaled so that they are integer numbers (for example, by multiplying them all by some large number). Second, we assume that there are no items with zero value (as such items can be discarded) and no items with zero weight (as such items come for free).
Now, our MNC instance is as follows:

There are agents with battery .

The initial energy of the charger is (the knapsack corresponds to the charger).

There are rounds (every item corresponds to a round) and each of them lasts for a unit of time.

The range of the charger can either be set to or ; essentially, the charger is either inactive or active (and its range is ).

For each round , the movement and energy consumption characteristics of the agents are as follows. At the beginning of the round, all agents are fully charged. There is a set of exactly agents at distance , each of whom travels along the circle and consumes energy equal to in case the charger is active, and otherwise; such an energy consumption may be due to the communication of the agents with the charger itself. All other agents (if there are any) do not have any energy consumption during round and move arbitrarily (but consistently with future positioning requirements).
Now, let us focus on an arbitrary round . If the charger is inactive during this round, then of course no agent gets recharged. However, according to the above specified energy consumption characteristics, all agents remain fully charged in such a case. On the other hand, if the charger is active during round , then according to equation (1) with and , every agent in receives energy equal to
which is exactly its energy consumption during this round. Therefore, the charger needs to spend units of energy in total in order to fully recharge these agents during round . In other words, the number of charges corresponds to the total value of the selected items and the total needed energy corresponds to the total weight of these items. Consequently, any set of items with maximum total value satisfying the knapsack capacity corresponds to a set of rounds during which the charger is active with maximum number of charges satisfying the initial energy of the charger, and vice versa. The proof is complete. ∎
We remark that the MNC instance that is used in the above proof is actually equivalent to an instance of a more complicated variation of KP, known as the Multiple-Choice Knapsack Problem [11, 22] (MCKP, for short). According to this problem, the items are further partitioned into subsets and the goal is to select exactly one item from each subset in order to maximize the total value of the selected items without exceeding the knapsack capacity. To see the equivalence, observe that each item in a subset can correspond to a different charging range, while its value and weight can correspond to a number of agents and the required energy to fully recharge them, respectively. The reduction of KP to MNC that we used in the proof of Theorem 3 can be viewed as an adaptation of the reduction of KP to MCKP (in simple instances of two choices), by taking into account the special features of MNC.
4.2 Maximizing the network lifetime
Here, we formulate another optimization problem with the goal of maximizing the network lifetime. In particular, as input, we are again given all information about the movement and energy consumption characteristics of the agents during a time horizon . The charger has initial energy and its charging range is selected from a set of distinct values such that . All non-fully charged agents that are in the specified charging range receive energy from the charger according to equation (1) with and . The goal is to set the range of the charger, for any round , in order to maximize the total number of rounds during which there exists at least one agent with nonzero (strictly positive) energy. In the following, we will refer to this maximization problem as Maximize Network Lifetime (MNL, for short).
Theorem 4.
The MNL problem is NP-hard.
Proof.
We again use a reduction from KP (see the proof of Theorem 3 for its formal definition). Given an instance of KP, we define the following instance of MNL:

There is a single agent with battery .

The initial energy of the charger is (the knapsack corresponds to the charger).

Every round lasts for a unit of time.

The charger can either be inactive with zero range or active with range .

During the first round , the agent is out of the range of the charger and consumes all of its battery. For each item , there is a time horizon consisting of rounds. During the first of these rounds, the agent is in range at fixed distance (for example, it travels along the circle or is static), while for the remaining rounds, the agent moves out of range and has a total energy consumption of so that during all these rounds it has nonzero energy. These time horizons are consecutive, given a permutation of the items: .
If the charger is inactive during the first round of any time horizon , then the agent does not get recharged and does not have any energy during (a total of rounds). On the other hand, if the charger is active during the first round of , since the agent is at distance from the charger, and using equation (1) with and , the energy that the agent receives by the charger is equal to
which is exactly the energy that it consumes during . Therefore, if the charger is active during the first round of , the agent is active for rounds and the charger spends units of energy. As a result, the number of rounds that the agent is active is equal to the total value of the selected items. Hence, any set of items with maximum total value satisfying the knapsack capacity corresponds to a set of time horizons with maximum number of rounds (during which the agent is active) satisfying the energy capacity of the charger, and vice versa. The proof is complete. ∎
The hardness of MNC and MNL, where there is full information about the agents’ characteristics and the charging range can take a small number of distinct values, is only indicative of the hardness of the online version of the problem, where the movement and energy consumption of the agents are not a priori known, and the range can take an effectively infinite number of different values. Again, recall that we would like to have a solution that performs efficiently under any possible instance, and under multiple objectives (both the number of charges and the network lifetime); in fact, one can combine MNC and MNL, as well as the proofs of Theorems 3 and 4, and prove that simultaneously maximizing the number of charges and the network lifetime is an intractable problem.
5 Comparison of adaptive algorithms
In this section, we propose three adaptive algorithms and compare them to each other. The algorithms are presented in increasing order in terms of the knowledge they require in order to decide the charging range during any round . The first algorithm uses information about the position of every agent for whom it is . The other two algorithms require information about the positions and as well as the energy level of every agent in . Moreover, the third algorithm needs additional information about the energy consumption of the agents. As one can see from their definitions below, the algorithms also differ substantially in their computational complexity.
Least Distant Agent or Maximum Range (LdMax)
The LdMax algorithm uses a parameter and works as follows. At the beginning of each round , it sets
This is a generalization of the randomized algorithm that we considered in Section 3, which sets the range equiprobably to or . The difference here is that there is a probability of setting the range equal to the distance between the charger and its closest agent (if this is a valid range value), in order to capture worst-case scenarios where there are no agents close to the charger.
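A sketch of one LdMax round follows; the parameter name `p` and the exact fallback probability are illustrative stand-ins for the symbols not shown above.

```python
import math
import random

def ldmax_range(agent_positions, charger, r_min, r_max, p=0.5):
    """One round of LdMax (illustrative sketch).

    With probability `p`, set the range to the distance of the least
    distant agent, provided that distance is a valid range value;
    otherwise fall back to the equiprobable min/max rule of Section 3.
    """
    d = min(math.hypot(x - charger[0], y - charger[1])
            for x, y in agent_positions)
    if random.random() < p and r_min <= d <= r_max:
        return d
    return r_min if random.random() < 0.5 else r_max
```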
Maintain Working Agents (MWA)
The MWA algorithm uses a parameter and, during each round , sets the range in an attempt to guarantee that there are at least working agents in the network (i.e. agents that either have positive energy at the beginning of the round or get recharged during it). To find the appropriate range it works as follows. First, it counts the number of agents that are in and have positive energy at the beginning of the round. If , then it sets since the requirement is already satisfied. Otherwise, it counts the number of agents that have zero energy at the beginning of the round and or . If , then it sets since the requirement cannot be satisfied. Otherwise, it searches for the smallest such that the circle covers at least agents, and sets .
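The three cases above can be sketched as follows. This is an illustrative implementation, assuming agents are given as (position, energy) pairs and that the "cannot satisfy" case falls back to the minimum range to conserve energy; the fallback choice is an assumption, as the text only says the requirement cannot be met.

```python
import math

def mwa_range(agents, charger, r_min, r_max, k):
    """One round of MWA (illustrative sketch).

    If at least k agents already work, conserve energy with the minimum
    range; if even the maximum range cannot yield k working agents,
    fall back to the minimum range for this round; otherwise return the
    smallest valid range covering enough zero-energy agents.
    """
    def dist(pos):
        return math.hypot(pos[0] - charger[0], pos[1] - charger[1])

    working = sum(1 for pos, e in agents if e > 0)
    if working >= k:
        return r_min  # requirement already satisfied
    # Zero-energy agents that the maximum range could reach.
    reachable = sorted(dist(pos) for pos, e in agents
                       if e == 0 and dist(pos) <= r_max)
    if working + len(reachable) < k:
        return r_min  # the requirement cannot be met this round
    # Smallest range covering enough currently non-working agents.
    return max(reachable[k - working - 1], r_min)
```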
Maximize Charges over Energy Ratio (MCER)
Let be a set of discrete range values in . Let be the number of agents that get recharged when the charger has range equal to during round , and let be the total given energy in this case. The MCER algorithm uses a parameter and sets
This algorithm tries to strike a balance between the number of charges and the energy that it has to give in order to perform these charges. However, observe that it needs to perform many heavy computations as, in order to choose the best range, it has to simulate the whole recharging process multiple times.
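A sketch of one MCER round follows. The selection rule shown, maximizing the number of charges divided by a power of the given energy, is an assumption: the paper's exact parameterized formula is not reproduced above, and `gamma` stands in for its parameter. The `simulate` callback mirrors the heavy per-range re-simulation the text mentions.

```python
def mcer_range(ranges, simulate, gamma=0.5):
    """One round of MCER (illustrative sketch).

    `simulate(r)` must return the pair (number of charges, total energy
    given) that range r would yield this round. The range maximizing
    charges / energy**gamma is selected; this objective is an assumed
    stand-in for the paper's parameterized ratio.
    """
    best_r, best_score = ranges[0], float("-inf")
    for r in ranges:
        charges, energy = simulate(r)
        score = charges / (energy ** gamma) if energy > 0 else 0.0
        if score > best_score:
            best_r, best_score = r, score
    return best_r
```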
5.1 Simulation setup
We now experimentally compare these adaptive algorithms. We partially reuse the simulation setup presented in Section 3. The network area is of size . The charger has initial energy , minimum range , and maximum range . There are agents with battery capacity , maximum velocity , and probability of redefining the speed mode during each round. Also, the energy consumption of the agents follows the rule defined by equation (2).
For the mobility behavior of the agents we consider three different randomized scenarios:

All agents randomly move around in .

Choose uniformly at random. Then, no agent is allowed to enter circle .

Choose , and uniformly at random. Then, agents live in the ring , while the remaining agents randomly move around in .
We create a probability distribution over these three mobility scenarios by repeating our simulation for times, so that one of the scenarios is chosen equiprobably every time. Observe that there are many different random choices to be made, and these give rise to many different instantiations. The goal is to test our algorithms under a highly heterogeneous setting.
5.2 Results and interpretation
After extensive fine-tuning of the parameters used by our adaptive algorithms, we have concluded that , and are the best values for the particular simulation setup that we consider here. In general, we expect to depend heavily on the density of the network; it should be smaller for sparser networks. On the other hand, seems to nicely balance the ratio considered by MCER, due to the fact that the given energy is of square order according to equation (1). Finally, parameter can be picked by the designer to maintain a sufficient number of agents, depending on the needs of the network, the energy of the charger, etc. Figure 3 depicts the performance of the adaptive algorithms, as well as that of the fixed-value algorithm, over time, with respect to various metrics:
Due to its definition, MWA guarantees a stable number of working agents (as well as agents with adequate energy) for a long period of time. However, MCER seems to outperform the other two algorithms in terms of the total number of charges and the charging frequency of the agents. Essentially, MWA and MCER work in exactly opposite ways, while LdMax lies somewhere in between these two, due to its randomized nature.
To interpret this data, we briefly analyze how MWA and MCER respond to the behavior of the agents by inspecting Figure 2(a), which displays the evolution of the charging range over time for each algorithm. During the early rounds of the simulation, most of the agents are considered working, since they are initially fully charged. Therefore, the requirement of maintaining working agents is trivially satisfied, and MWA starts with the minimum possible range, so that it stores energy for future use (see Figure 2(b)). In contrast, MCER chooses a higher range in order to perform more charges while giving away little energy; since the agents already have energy, they request only a small amount of energy when they get in range, which means that the cost (in energy) per charge is quite small. However, as time progresses, the energy levels of the agents gradually drop, there are fewer working agents, and an agent that gets in range requests more energy. As a result, MWA is forced to increase the range in order to keep satisfying the requirement of maintaining working agents, while MCER decreases its range, as the cost per charge has increased substantially.
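To make the contrast concrete, the two opposite behaviors can be caricatured as simple update rules. These are our simplified readings of MWA and MCER, not their exact definitions; in particular, the step size and the cost estimator are hypothetical:

```python
def mwa_update(range_now, working, target, r_min, r_max, step=1.0):
    """MWA-style rule (simplified): grow the range when the number of
    working agents drops below the target, otherwise shrink it to save
    energy for future use."""
    if working < target:
        return min(range_now + step, r_max)
    return max(range_now - step, r_min)

def mcer_update(candidate_ranges, cost_per_charge):
    """MCER-style rule (simplified): pick the range with the lowest
    estimated energy cost per charge, where `cost_per_charge(r)` is a
    hypothetical estimator of the current ratio at range r."""
    return min(candidate_ranges, key=cost_per_charge)
```

Under these rules, draining batteries push MWA's range up (to keep agents working) while the rising cost per charge pushes MCER's range down, matching the opposite trends observed in Figure 2(a).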
5.3 Scalability issues
We have also experimented with many different values for the number of agents, their battery capacity, and the initial energy of the charger. Our results are scalable in the sense that these parameters seem to affect only the network lifetime (increasing or decreasing it) and not the relative performance of the algorithms. Indicatively, Figure 4 showcases the performance of our adaptive algorithms, in terms of the number of working agents, when there are , and agents, respectively. We remark that, by keeping the network area size fixed and changing the number of agents, we essentially create networks of different densities.
6 Conclusion
In this paper, we studied the problem of dynamically selecting the appropriate charging range of a single static charger to prolong the lifetime of a network of mobile agents. We proved the hardness of the problem, and presented three interesting heuristics that perform fairly well in the simulation setups that we considered. Of course, there are multiple interesting future directions.
Can we design better adaptive algorithms that perform well under any possible scenario regarding the agents' characteristics? An interesting way to tackle this would be a machine-learning-like approach. In particular, given statistical information (a prior probability distribution) about the behavior of the agents, is it possible to learn the "correct" sequence of values for the charging range in order to prolong the network lifetime as much as possible, while maintaining a fair number of working agents? We remark that our algorithms do not exploit such training information and function based only on the online behavior of the agents. Another possible direction is the natural generalization of using multiple chargers that can move around in the network and even charge each other. This couples (in a nontrivial way) our work with that of Angelopoulos et al. [1], and definitely deserves investigation.
Acknowledgments
This work was partially supported by the Greek State Scholarships Foundation (IKY), and by a PhD scholarship from the Onassis Foundation. The third author would like to thank Ioannis Caragiannis for fruitful discussions at early stages of this work.
References
 [1] Constantinos Marios Angelopoulos, Julia Buwaya, Orestis Evangelatos, and José D. P. Rolim. Traversal strategies for wireless power transfer in mobile ad hoc networks. In Proceedings of the 18th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM), pages 31–40, 2015.
 [2] Christian Bettstetter, Giovanni Resta, and Paolo Santi. The node distribution of the random waypoint mobility model for wireless ad hoc networks. IEEE Transactions on Mobile Computing, 2(3):257–269, 2003.
 [3] Zicheng Bi, Tianze Kan, Chunting Chris Mi, Yiming Zhang, Zhengming Zhao, and Gregory A. Keoleian. A review of wireless power transfer for electric vehicles: Prospects to enhance sustainable mobility. Applied Energy, 179:413–425, 2016.
 [4] Tracy Camp, Jeff Boleng, and Vanessa Davies. A survey of mobility models for ad hoc network research. Wireless Communications and Mobile Computing, 2(5):483–502, 2002.
 [5] Haipeng Dai, Guihai Chen, Chonggang Wang, Shaowei Wang, Xiaobing Wu, and Fan Wu. Quality of energy provisioning for wireless power transfer. IEEE Transactions on Parallel and Distributed Systems, 26(2):527–537, 2015.
 [6] Haipeng Dai, Yunhuai Liu, Guihai Chen, Xiaobing Wu, and Tian He. SCAPE: Safe charging with adjustable power. In Proceedings of the IEEE International Conference on Distributed Computing Systems (ICDCS), pages 203–204, 2014.
 [7] Haipeng Dai, Yunhuai Liu, Guihai Chen, Xiaobing Wu, Tian He, Alex X. Liu, and Huizhen Ma. Safe charging for wireless power transfer. IEEE/ACM Transactions on Networking, 25(6):3531–3544, 2017.
 [8] Michael R. Garey and David S. Johnson. Computers and intractability: A guide to the theory of NP-completeness. W. H. Freeman, 1979.
 [9] Shibo He, Jiming Chen, Fachang Jiang, David K. Y. Yau, Guoliang Xing, and Youxian Sun. Energy provisioning in wireless rechargeable sensor networks. IEEE Transactions on Mobile Computing, 12(10):1931–1942, 2013.
 [10] Dávid Hrabcák, Martin Matis, L'ubomír Dobos, and Ján Papaj. Students social based mobility model for MANET-DTN networks. Mobile Information Systems, 2017:2714595:1–2714595:13, 2017.
 [11] Hans Kellerer, Ulrich Pferschy, and David Pisinger. Knapsack problems. Springer, 2004.
 [12] André Kurs, Aristeidis Karalis, Robert Moffatt, J. D. Joannopoulos, Peter Fisher, and Marin Soljačić. Wireless power transfer via strongly coupled magnetic resonances. Science, 317(5834):83–86, 2007.
 [13] Siqi Li and Chris Mi. Wireless power transfer for electric vehicle applications. IEEE Journal of Emerging and Selected Topics in Power Electronics, 3(1):4–17, 2015.
 [14] Meng-Chang Lin, Ming Gong, Bingan Lu, Yingpeng Wu, Di-Yan Wang, Mingyun Guan, Michael Angell, Changxin Chen, Jiang Yang, Bing-Joe Hwang, and Hongjie Dai. An ultrafast rechargeable aluminium-ion battery. Nature, 520(7547):324–328, 2015.
 [15] Adelina Madhja, Sotiris E. Nikoletseas, and Theofanis P. Raptis. Hierarchical, collaborative wireless energy transfer in sensor networks with multiple mobile chargers. Computer Networks, 97:98–112, 2016.
 [16] Adelina Madhja, Sotiris E. Nikoletseas, Christoforos Raptopoulos, and Dimitrios Tsolovos. Energy aware network formation in peer-to-peer wireless power transfer. In Proceedings of the 19th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM), pages 43–50, 2016.
 [17] Mirco Musolesi, Stephen Hailes, and Cecilia Mascolo. An ad hoc mobility model founded on social network theory. In Proceedings of the 7th ACM International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM), pages 20–24, 2004.
 [18] Sotiris E. Nikoletseas, Theofanis P. Raptis, and Christoforos Raptopoulos. Radiationconstrained algorithms for wireless energy transfer in ad hoc networks. Computer Networks, 124:1–10, 2017.
 [19] Sotiris E. Nikoletseas, Theofanis P. Raptis, and Christoforos Raptopoulos. Wireless charging for weighted energy balance in populations of mobile peers. Ad Hoc Networks, 60:1–10, 2017.
 [20] Sotiris E. Nikoletseas, Theofanis P. Raptis, Alexandros Souroulagkas, and Dimitrios Tsolovos. Wireless power transfer protocols in sensor networks: Experiments and simulations. Journal of Sensor and Actuator Networks, 6(2):4, 2017.
 [21] Sotiris E. Nikoletseas, Yuanyuan Yang, and Apostolos Georgiadis, editors. Wireless power transfer algorithms, technologies and applications in ad hoc communication networks. Springer, 2016.
 [22] Prabhakant Sinha and Andris A. Zoltners. The multiplechoice knapsack problem. Operations Research, 27(3):503–515, 1979.
 [23] Nikolaos Vastardis and Kun Yang. An enhanced communitybased mobility model for distributed mobile social networks. Journal of Ambient Intelligence and Humanized Computing, 5(1):65–75, 2014.
 [24] Sheng Zhang, Jie Wu, and Sanglu Lu. Collaborative mobile charging. IEEE Transactions on Computers, 64(3):654–667, 2015.
 [25] Miao Zhao, Ji Li, and Yuanyuan Yang. Joint mobile energy replenishment with wireless power transfer and mobile data gathering in wireless rechargeable sensor networks, pages 667–700. Springer International Publishing, 2016.