The multichannel rendezvous problem, which asks two secondary users (SUs) to find a common available channel (one not used by primary users (PUs)), is one of the fundamental problems in cognitive radio networks (CRNs) (see, e.g., the book  and references therein). In view of possible jamming attacks , the multichannel rendezvous problem is commonly solved by having each SU hop on its available channels over time. When both SUs hop on a common available channel at the same time, it is assumed that a successful rendezvous occurs. For such a rendezvous problem, the objective is to minimize the time-to-rendezvous (TTR), i.e., the first time that the two SUs have a successful rendezvous. In the literature, there are various deterministic channel hopping (CH) sequences that can guarantee a finite maximum time-to-rendezvous (MTTR) under various assumptions for CRNs, e.g., QCH , DRSEQ , Modular Clock , JS , DRDS , FRCH , ARCH , CBH , and Two-prime Modular Clock . As pointed out in , there is one practical factor that is not considered in these CH sequences, i.e., the channel states. Due to channel fading and interference from other SUs, two SUs might not have a successful rendezvous even when they both hop on a common available channel at the same time. As such, it might be more practical to focus on the expected time-to-rendezvous (ETTR), instead of the MTTR.
In , the authors considered a random channel state model, in which each channel has several random states and the probability that two SUs hopping on a common channel have a successful rendezvous is a function of the channel state. Specifically, for a CRN with  channels, the states of the channels are characterized by a stochastic process , where  is the random variable that represents the state of channel  at time . The channel states are assumed to be unobservable by a SU. When two SUs hop on a channel in state , they rendezvous with probability . The authors in  considered the class of blind rendezvous policies in which each user selects a channel independently with probability  in every time slot. They showed that there does not exist a channel selection policy (in terms of the channel selection probabilities ) that is universally optimal for every time-varying channel state model. For a fast time-varying channel model, the optimal policy is the single channel policy that only selects one particular channel. On the other hand, for a slow time-varying channel model, SUs should avoid selecting a single channel, as that channel could be in a bad state for a long period of time.
Even though the channel states are not observable, one question is whether they can be implicitly learned (from either failed attempts or successful rendezvous) so as to speed up the rendezvous process in the future. To address this question, we adopt a reinforcement learning approach to learn the channel selection probabilities of a SU. Reinforcement learning (see, e.g., the book  and the recent survey ) is a field of machine learning that addresses the problem of how to behave in an environment by performing certain actions and observing the rewards from those actions. In these problems, fixed limited resources must be allocated to maximize the expected gain. The reward of a choice is only known at the time of allocation and may become better understood as time passes. Our problem is then to treat the channel selection probabilities as the fixed limited resources and learn how to allocate them so as to minimize the ETTR. Specifically, our approach is to cast the multichannel rendezvous problem as a multi-armed bandit problem, in which each successful rendezvous on a channel yields a reward for that channel. We then use the adversarial bandit algorithm Exp3 in  to learn the channel selection probabilities. When the channels are independent and identically distributed (i.i.d.), our numerical experiments show that Exp3 yields ETTRs comparable to those of various approximation algorithms proposed in . On the other hand, when the channels are not i.i.d., Exp3 is capable of learning the "best" channel. To the best of our knowledge, our paper is the first to study the multichannel rendezvous problem by a reinforcement learning approach.
II System model
II-A The multichannel rendezvous problem
In this paper, we consider a cognitive radio network (CRN) with  channels (with ), indexed from  to , in a discrete-time setting where time is slotted and indexed from . We assume that there are two states for each channel: state 0 for the bad state and state 1 for the good state. Denote by  the rendezvous probability when a channel is in state 0 or 1. Then when two users hop on a channel in state  at the same time, these two users will rendezvous with probability , independent of everything else. Since state 0 is the bad state and state 1 is the good state, we assume that the rendezvous probability in state 0 is not larger than that in state 1.
The states of the channels are characterized by the stochastic process , where  is the random variable that represents the state of channel  at time . The exact state of a channel at any time is not observable by a user. As discussed in , the reason is that it is in general difficult for a user to know the congestion level of a channel (the number of users in the channel).
We consider the class of dynamic blind rendezvous policies, i.e., in each time slot each user selects a channel with probability . Such a channel selection is independent of everything else. Suppose that the channel state of a channel at time  is . Then under the dynamic blind rendezvous policy, the probability that the two users have a successful rendezvous at time  on that channel is simply the product of the probability that both users hop on that channel and the rendezvous probability of its state. As such, the probability that the two users have a successful rendezvous at time  is the sum of these probabilities over all channels. The objective is to learn a dynamic blind rendezvous policy (and the corresponding channel selection probabilities) that minimizes the expected time-to-rendezvous (ETTR).
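The per-slot rendezvous probability described above can be sketched as follows (a minimal illustration in Python; the function and variable names are ours, not the paper's):

```python
# Sketch: per-slot rendezvous probability under a blind policy.
# Both SUs use the same channel selection probabilities p; a channel
# in state s yields a rendezvous with probability q[s] when both SUs pick it.

def rendezvous_prob(p, states, q):
    """p[i]: probability that each SU selects channel i (p sums to 1).
    states[i]: current state of channel i (0 = bad, 1 = good).
    q[s]: rendezvous probability for a channel in state s.
    Returns the probability that the two SUs rendezvous in this slot."""
    # Both SUs must pick the same channel i (probability p[i]**2), and the
    # rendezvous on that channel then succeeds with probability q[states[i]].
    return sum(pi * pi * q[s] for pi, s in zip(p, states))

# Example: 4 channels, uniform selection, all channels in the good state
# with q = (0.0, 1.0): the per-slot rendezvous probability is 4 * 0.25**2.
print(rendezvous_prob([0.25] * 4, [1, 1, 1, 1], (0.0, 1.0)))  # 0.25
```

The reciprocal of this per-slot probability gives the conditional expected TTR when the channel states are frozen, which is the quantity the structural results in Section II-B reason about.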
II-B A Markov channel model with two states
For the model of channel states, we consider the two-state Markov chain in . We assume that the states of the channels are independent. The probability that a channel is in the good (resp. bad) state is  (resp. ) for some . As such, we have the following stationary joint distribution for the channel states:
where  (with value 0 or 1) is the state of channel . The state of each channel is characterized by a Markov chain with the transition probabilities:
where . Clearly, we have
and thus the correlation coefficient between and , denoted by , is
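For concreteness, the standard parameterization of a positively correlated two-state chain can be written as follows (the symbols $a$, $b$, $\pi$, and $\gamma$ below are our notation for the transition probabilities, the stationary probability of the good state, and the one-step correlation coefficient, and are not necessarily the paper's):

```latex
% Two-state Markov chain X_t \in \{0,1\} with transition probabilities
% a = P(X_{t+1} = 1 \mid X_t = 0), \quad b = P(X_{t+1} = 0 \mid X_t = 1).
\[
\pi = \frac{a}{a+b}, \qquad
\gamma := \mathrm{Corr}(X_t, X_{t+1}) = 1 - a - b ,
\]
% so that, conversely,
\[
a = (1-\gamma)\,\pi, \qquad b = (1-\gamma)\,(1-\pi),
\]
% and the chain is positively correlated exactly when a + b < 1.
```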
We say that the Markov chain is positively correlated if . In this paper, we only consider positively correlated two-state Markov chains, and we note that the transition probabilities of the Markov chain can be characterized by the two parameters  and . It is shown in  that the ETTR of a blind rendezvous policy is bounded below when  for all  and bounded above when  for all . The argument used there can also be extended to show that the ETTR of a blind rendezvous policy is in fact increasing in  when . Based on such structural results, various approximation algorithms for choosing the channel selection probabilities of a (fixed) blind rendezvous policy were proposed in . These policies include:
-  Single selection policy:  and  for .
-  Uniform selection policy:  for .
-  -approximation policy: , where , , , and .
-  Harmonic selection policy: , , where  is the normalization constant so that the sum of the 's is 1.
-  Square selection policy: , , where  is the normalization constant so that the sum of the 's is 1.
-  Sqrt selection policy: , , where  is the normalization constant so that the sum of the 's is 1.
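The normalized policies above can be sketched as probability vectors over  channels. In the sketch below, "harmonic", "square", and "sqrt" weight channel  proportionally to 1/i, 1/i², and 1/√i before normalization; these proportionalities, and all function names, are our assumptions, since the paper's formulas are not reproduced here:

```python
import math

# Sketch of the benchmark blind policies as probability vectors over n channels
# (normalization constants computed explicitly).

def single(n):
    """Always select channel 1."""
    return [1.0] + [0.0] * (n - 1)

def uniform(n):
    """Select every channel with equal probability."""
    return [1.0 / n] * n

def harmonic(n):
    """p_i proportional to 1/i (assumed form)."""
    c = sum(1.0 / i for i in range(1, n + 1))
    return [1.0 / (i * c) for i in range(1, n + 1)]

def square(n):
    """p_i proportional to 1/i**2 (assumed form)."""
    c = sum(1.0 / i**2 for i in range(1, n + 1))
    return [1.0 / (i**2 * c) for i in range(1, n + 1)]

def sqrt_policy(n):
    """p_i proportional to 1/sqrt(i) (assumed form)."""
    c = sum(1.0 / math.sqrt(i) for i in range(1, n + 1))
    return [1.0 / (math.sqrt(i) * c) for i in range(1, n + 1)]

# Each policy is a valid probability distribution.
for pol in (single, uniform, harmonic, square, sqrt_policy):
    assert abs(sum(pol(16)) - 1.0) < 1e-9
```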
These 6 blind rendezvous policies will serve as the benchmarks for the comparison with our reinforcement learning approach. In particular, it was shown in  that the -approximation policy achieves an asymptotic -approximation ratio in the setting where either or and for all . In our experiments, we set .
III Reinforcement learning
In this section, we adopt a reinforcement learning approach to learn the channel selection probabilities so as to minimize the ETTR. It is assumed that each user cannot observe the channel states of the (hidden) Markov chain. This is similar to the multi-armed bandit problem, in which a gambler does not know the success probability of a slot machine. For this, we formulate the multichannel rendezvous problem as an adversarial bandit problem  in which there are  possible actions, indexed from , in each time slot. The  action corresponds to the selection of the  channel. When the two SUs rendezvous, one unit of reward is given to both users. Otherwise, there is no reward for the two SUs. For such an adversarial bandit problem, a well-known algorithm for choosing actions is the Exp3 algorithm  (which stands for "Exponential-weight algorithm for Exploration and Exploitation"). In Algorithm 1, we show the detailed steps of the Exp3 algorithm for the multichannel rendezvous problem.
To see the intuition of Algorithm 1, note that there are two terms in the channel selection probabilities in Step 1. These two terms represent two fundamental concepts of reinforcement learning: exploitation and exploration. The first term in  is the "exploitation" term that makes the "best" decision given the current information. The second term in  is the "exploration" term that allows us to gather more information that might lead to better decisions. These two concepts are rather intuitive for the channel selection problem. The exploitation term leads to a "good" channel. On the other hand, as a channel might change its state in the next time slot, the exploration term allows us to find another good channel. The parameter  is the weight of the  channel at time , and all the weights are set to 1 at time 1. When a successful rendezvous occurs, both SUs receive one unit of reward. We do not give a penalty to a channel selected by an SU that does not lead to a successful rendezvous. Therefore, the two SUs have the same weights for all  and thus the same channel selection probabilities  for all . The weight update rules in Steps 4 and 5 are the softmax update  that increases the weight of a channel with a successful rendezvous.
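A minimal Exp3-style sketch of the exploitation/exploration mixture and the weight update is shown below. This follows the generic Exp3 of Auer et al. rather than Algorithm 1 verbatim; in particular, the class and parameter names (`Exp3`, `gamma`) are ours, and in the paper's variant both SUs apply the same update so their weights stay identical:

```python
import math
import random

class Exp3:
    """Generic Exp3 for selecting one of K channels per slot."""

    def __init__(self, K, gamma):
        self.K = K
        self.gamma = gamma          # exploration weight in (0, 1]
        self.w = [1.0] * K          # channel weights, all 1 at time 1

    def probs(self):
        """Mix an exploitation term (proportional to the weights) with a
        uniform exploration term of total mass gamma."""
        total = sum(self.w)
        return [(1 - self.gamma) * wi / total + self.gamma / self.K
                for wi in self.w]

    def select(self):
        """Draw a channel according to the current mixture."""
        return random.choices(range(self.K), weights=self.probs())[0]

    def update(self, i, reward):
        """Softmax (exponential-weight) update for the selected channel i,
        with the importance-weighted reward estimate of Exp3."""
        xhat = reward / self.probs()[i]
        self.w[i] *= math.exp(self.gamma * xhat / self.K)
```

A successful rendezvous on channel `i` corresponds to calling `update(i, 1.0)` at both SUs, which raises that channel's weight and hence its future selection probability.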
The reward in Step 3 of the original Exp3 algorithm in  is assigned by an adversary. As there are two SUs in the multichannel rendezvous problem, SU 2 can be viewed as the adversary of SU 1. Intuitively, one might think that such an adversarial viewpoint could be used to derive an upper bound on the expected weak regret (defined as the difference between the maximum accumulated reward and the accumulated reward of the Exp3 algorithm), as in Theorem 3.1 of . However, as the channel selection probabilities of the two SUs are coupled through Algorithm 1, the rewards of the two SUs are not independent of each other, and the analysis in Theorem 3.1 of  cannot be directly applied.
Another insight of Algorithm 1 is to view it as a stochastic game . If , then it is clear that the single selection policy that selects channel 1 all the time is optimal as both users rendezvous in every time slot. Through the process of exploration and exploitation, one expects that Algorithm 1 converges to the channel selection probabilities with , , . This is exactly the -approximation policy (for some that is a function of ) that achieves the -approximation ratio. Such an intuitive observation will be further verified in our experiments in Section IV.
IV Experimental results
In this section, we report our experimental results. For Algorithm 1, we set . If  is set to be very large, then the probability distribution will be similar to that of the uniform selection policy. On the other hand, if  is very small, then the update of  is very small, and that leads to very slow convergence of the algorithm.
In our first experiment, we consider a system of 16 independent two-state Markov channels with the same parameters, i.e., . The rendezvous probability in state 0 (resp. state 1) is  (resp. ). There are 9 parameter settings for the two-state Markov channel model: the steady-state probability takes the values , 0.5, and 0.9, and the correlation coefficient takes the values 0.1, 0.5, and 0.9.
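The ETTR estimates in this section can be reproduced in spirit by a Monte Carlo sketch like the one below. The mapping from (stationary probability, correlation) to transition probabilities, and all names (`step`, `ttr`, `pi_`, `gamma`), are our assumptions consistent with the two-state model of Section II-B:

```python
import random

def step(states, pi_, gamma):
    """Advance every channel one slot. Assumed parameterization:
    a = P(0 -> 1) = (1 - gamma) * pi_, b = P(1 -> 0) = (1 - gamma) * (1 - pi_)."""
    a = (1 - gamma) * pi_
    b = (1 - gamma) * (1 - pi_)
    return [(1 if random.random() < a else 0) if s == 0
            else (0 if random.random() < b else 1) for s in states]

def ttr(p, pi_, gamma, q):
    """One run: slots until two SUs using selection probabilities p rendezvous.
    q[s] is the rendezvous probability of a channel in state s."""
    states = [1 if random.random() < pi_ else 0 for _ in p]  # stationary start
    t = 0
    while True:
        t += 1
        c1 = random.choices(range(len(p)), weights=p)[0]  # SU 1's channel
        c2 = random.choices(range(len(p)), weights=p)[0]  # SU 2's channel
        if c1 == c2 and random.random() < q[states[c1]]:
            return t
        states = step(states, pi_, gamma)

# ETTR estimate for the uniform policy on 16 channels, averaged over 200 runs.
n = 16
est = sum(ttr([1 / n] * n, 0.5, 0.5, (0.1, 0.9)) for _ in range(200)) / 200
```

Replacing the uniform weights with the probabilities learned by Exp3 at a given time gives the ETTR curves of the kind reported in Figure 2.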
For all the 9 settings, we find that the probability distributions learned by the algorithm upon convergence are the same in every simulation. They all converge to the probability distribution  (after sorting the channel selection probabilities in descending order). This is exactly the -approximation policy in  with . In Figure 1, we plot the channel selection probabilities with  and . Each curve (marked with a different color) in this figure corresponds to the channel selection probability of one channel over time.
To see whether the Exp3 algorithm converges to a good blind rendezvous policy, we measure the ETTR of the blind rendezvous policy with the channel selection probabilities learned with  and , and compare it with the 6 blind rendezvous policies described in Subsection II-B. The ETTRs are obtained by averaging over 1000 independent runs for these blind rendezvous policies. In Table I, we show the comparison results for the 9 settings. For the fast time-varying channel model, i.e., , the optimal policy is the single channel policy . For the slow time-varying channel model, i.e., , the -approximation policy has the asymptotic approximation ratio . These numerical results show that the ETTRs of Exp3 are comparable to those of the best among the 6 blind rendezvous policies described in Subsection II-B for all the 9 settings.
To show the effectiveness of Algorithm 1, we measure the ETTRs for the channel selection probabilities  learned at time  (by averaging over 1000 independent runs). In Figure 2, we plot the ETTRs as a function of . As shown in this figure, all the ETTR curves are decreasing in time, which shows that Algorithm 1 is indeed learning better blind rendezvous policies over time. One notable difference among the 9 settings is the convergence time of the algorithm. When  is small, the probability that a channel is in the good state is also small. Hence, it is difficult to receive a reward on any channel. As such, learning is more difficult when  is small, and that leads to a longer convergence time. Moreover, we note that the fluctuation of the ETTRs is much larger when  (the yellow curves). The intuition behind this is that the channel states change slowly when  is large, and when a channel does change its state, it takes some time for the algorithm to learn the change.
Even though the channel states are not directly observable, they can be implicitly learned; this is an additional advantage of Algorithm 1. Instead of assuming that all the channels are identically distributed as in , we consider a setting with 10 channels that have different probabilities of being in the good state. Specifically, we set . In Figure 3, we show the channel selection probabilities with , respectively. As shown in this figure, Algorithm 1 learns that channel 10 is the best channel in the long run and sticks to that channel with probability 0.982. The channel selection probabilities of all the other channels are 0.002.
In this paper, we proposed a reinforcement learning approach for the multichannel rendezvous problem. When the channel states are not observable, we showed that the Exp3 algorithm is very effective and yields ETTRs comparable to those of various approximation policies in the literature. One direction for future work is to extend the reinforcement learning approach to the setting where the channel states are either observable or partially observable. In that setting, we need to develop effective -learning algorithms.
-  Z. Gu, Y. Wang, Q.-S. Hua, and F. C. M. Lau, Rendezvous in Distributed Systems: Theory, Algorithms and Applications. Springer, 2017.
-  M. J. Abdel-Rahman, H. Rahbari, and M. Krunz, “Multicast rendezvous in fast-varying DSA network,” IEEE Transactions on Mobile Computing, vol. 14, no. 7, pp. 1449–1462, 2015.
-  K. Bian, J.-M. Park, and R. Chen, “A quorum-based framework for establishing control channels in dynamic spectrum access networks,” in Proc. ACM MobiCom, 2009.
-  D. Yang, J. Shin, and C. Kim, “Deterministic rendezvous scheme in multichannel access networks,” Electronics Letters, vol. 46, no. 20, pp. 1402–1404, 2010.
-  N. C. Theis, R. W. Thomas, and L. A. DaSilva, “Rendezvous for cognitive radios,” IEEE Transactions on Mobile Computing, vol. 10, no. 2, pp. 216–227, 2011.
-  Z. Lin, H. Liu, X. Chu, and Y.-W. Leung, “Jump-stay based channel-hopping algorithm with guaranteed rendezvous for cognitive radio networks,” in Proc. IEEE INFOCOM 2011.
-  Z. Gu, Q.-S. Hua, Y. Wang, and F. C. M. Lau, “Nearly optimal asynchronous blind rendezvous algorithm for cognitive radio networks,” in Proc. IEEE SECON, 2013.
-  G.-Y. Chang and J.-F. Huang, “A fast rendezvous channel-hopping algorithm for cognitive radio networks,” IEEE Communications Letters, vol. 17, no. 7, pp. 1475–1478, 2013.
-  G.-Y. Chang, W.-H. Teng, H.-Y. Chen, and J.-P. Sheu, “Novel channel-hopping schemes for cognitive radio networks,” IEEE Transactions on Mobile Computing, vol. 13, pp. 407–421, Feb. 2014.
-  Z. Gu, Q.-S. Hua, and W. Dai, “Fully distributed algorithm for blind rendezvous in cognitive radio networks,” in Proc. ACM MobiHoc, pp. 155–164, 2014.
-  C.-S. Chang, C.-Y. Chen, D.-S. Lee, and W. Liao, “Efficient encoding of user IDs for nearly optimal expected time-to-rendezvous in heterogeneous cognitive radio networks,” IEEE/ACM Transactions on Networking, vol. 25, no. 6, pp. 3323–3337, 2017.
-  C.-S. Chang, D.-S. Lee, Y.-L. Lin, and J.-H. Wang, “ETTR Bounds and Approximation Solutions of Blind Rendezvous Policies in Cognitive Radio Networks with Random Channel States,” arXiv e-prints, p. arXiv:1906.10424, Jun 2019.
-  R. S. Sutton and A. G. Barto, Reinforcement learning: an introduction. Massachusetts: The MIT Press, 2012.
-  N. C. Luong, D. T. Hoang, S. Gong, D. Niyato, P. Wang, Y. C. Liang, and D. I. Kim, “Applications of deep reinforcement learning in communications and networking: A survey,” IEEE Communications Surveys & Tutorials, 2019.
-  J. C. Gittins, K. D. Glazebrook, and R. Weber, Multi-armed Bandit Allocation Indices. Wiley Online Library, 1989, vol. 25.
-  P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire, “The nonstochastic multiarmed bandit problem,” SIAM Journal on Computing, vol. 32, no. 1, pp. 48–77, 2002.
-  Journal of Artificial Neural Networks, vol. 2, no. 4, pp. 381–399, 1996.
-  L. S. Shapley, “Stochastic games,” Proceedings of the National Academy of Sciences, vol. 39, no. 10, pp.1095–1100, 1953.