Learning End-User Behavior for Optimized Bidding in HetNets: Impact on User/Network Association

09/11/2019 · Mohammad Yousefvand et al.

We study the impact of end-user behavior on service provider (SP) bidding and user/network association in a HetNet with multiple SPs while considering the uncertainty in the service guarantees offered by the SPs. Using Prospect Theory (PT) to model end-user decision making that deviates from expected utility theory (EUT), we formulate user association with SPs as a multiple leader Stackelberg game where each SP offers a bid to each user that includes a data rate with a certain probabilistic service guarantee and at a given price, while the user chooses the best offer among multiple such bids. We show that when users underweight the advertised service guarantees of the SPs (a behavior observed under uncertainty), the rejection rate of the bids increases dramatically, which in turn decreases the SPs' utilities and service rates. To overcome this, we propose a two-stage learning-based optimized bidding framework for SPs. In the first stage, we use a support vector machine (SVM) learning algorithm to predict users' binary decisions (accept/reject bids), and then in the second stage we cast the utility-optimized bidding problem as a Markov Decision Problem (MDP) and propose a reinforcement learning-based dynamic programming algorithm to efficiently solve it. Simulation results and computational complexity analysis validate the efficiency of our proposed model.


I Introduction

Ultra-dense heterogeneous networks (HetNets) with overlaid macro and small cells have emerged as a design solution in 5G networks to cope with the increasing demand from mobile users for more capacity and higher data rates. In such HetNets, users may have the option of choosing, from among multiple service providers (SPs), the one that offers them the best data service for their current demands and application requirements. In such a scenario, SPs have to dynamically compete with each other to get users to connect to their networks by offering attractive service guarantees and prices. Therefore, user/network association and smart data pricing are extremely important from both the user and SP perspectives [1, 2, 3]. Earlier work in the literature on both user/network association and pricing (discussed in section II) has primarily relied on models based on Expected Utility Theory (EUT). When an SP controls access to end-users via differentiated and hierarchical monetary pricing, the performance of the network is directly subject to end-user decision making, which has been shown to deviate from EUT in many cases and is better captured by models based on the Nobel Prize-winning Prospect Theory (PT) [4].

Due to uncertainties in channel and traffic conditions, the advertised data rates offered by SPs can only be met with probabilistic guarantees, and these probabilities are not necessarily the same from the user's and the SPs' perspectives owing to the subjective biases of human decision making. This disparity can result in the rejection of SP offers, as shown in our previous work [5]. In fact, it was shown in [5] that when users underweight the advertised service guarantees of the SPs (a behavior observed under uncertainty), the rejection rate of the bids increases dramatically, which in turn decreases the SPs' utilities and service rates. In this paper, to overcome this, we propose a two-stage learning-based optimized bidding framework for SPs. In the first stage, we use a support vector machine learning algorithm to predict users' binary decisions (accept/reject bids), and then in the second stage we cast the utility-optimized bidding problem as a Markov Decision Problem (MDP) and propose a reinforcement learning-based dynamic programming algorithm to efficiently solve it. Simulation results and computational complexity analysis validate the efficiency of our proposed model.

The rest of this paper is organized as follows. In section II, we review related work. In section III, we present the HetNet model used, along with the optimization problems from the user and SP perspectives. In section IV, we introduce the proposed two-stage learning-based optimized bidding method for SPs to learn end-user behavior and optimize their bids accordingly. We validate the efficiency of the proposed model in section V and conclude in section VI.

II Related Work

Numerous studies have been conducted in recent years to address user association in HetNets [1]. Due to the inherent interdependencies between the user/cell association and resource allocation problems in HetNets and their direct impacts on each other, these two problems are usually jointly addressed and optimized with respect to a given performance parameter [6, 7]. The diverse range of performance criteria considered in the user/network association schemes proposed for HetNets spans load balancing [8, 9, 10], energy efficiency [11, 12, 13, 14], interference management [15], coverage maximization [16], and latency minimization [17]. Also, in terms of methodology, the range of approaches proposed for user/network association in HetNets spans optimization methods [12], game theoretic solutions [14, 17, 18, 19, 20, 21], evolutionary algorithms [22], estimation methods [23], and learning algorithms [24].

Moreover, several works have investigated the role of network economics in user/network association and the allocation of network resources via supply- and demand-based smart pricing models [25, 26, 27, 28, 29]. For example, in [25] an iterative double-auction mechanism is proposed to offload traffic from macro BSs to third-party small cell access points. A cost-aware adaptive bandwidth management mechanism for empowering users to make traffic offloading decisions is proposed in [26]. The effects of users' social learning on a service provider's dynamic pricing policies are investigated in [27]. In [28] the duopoly competition between mobile network operators offering mobile data plans with time flexibility is characterized using a three-stage game model. In [29] a framework for realizing time-dependent pricing for multimedia data traffic is proposed to modify users' behavior and prevent congestion. The role of PT in wireless data pricing has been explored in [30, 31], where users apply their subjective biases in evaluating objective probabilistic parameters, and has been observed via human subject studies in [32].

In contrast to these earlier works, in this paper we use PT to address both user/network association and data pricing with an emphasis on end-user behavior and decision making under uncertainty. We model the HetNet with probabilistic service guarantees and PT-based decision making using the approach in [5], where it was observed that the underweighting of the service guarantees by the users results in an increased rejection of the SP bids. To overcome this, a heuristic solution based on expanding the bandwidth available to the SPs' bids was proposed in [5]. However, it required the SPs to have knowledge of how the users perceive the uncertainty in the service guarantees, while in reality SPs do not have access to such information. The focus of the current paper is to overcome the bid rejection problem via a completely different approach, namely a two-stage learning-based optimized bidding framework for SPs. In the first stage, we use a support vector machine learning algorithm to predict users' binary decisions, and then in the second stage we cast the utility-optimized bidding problem as a Markov Decision Problem and propose a reinforcement learning-based dynamic programming algorithm to efficiently solve it.

III System Model and Problem Formulation

III-A Network Model

To study user association in HetNets, we develop a two-tier HetNet scenario in which wireless users are randomly distributed within the coverage area of the base stations. As shown in Fig. 1, our HetNet model contains one macrocell LTE BS located in the center of the area and overlaid small-cell WiFi access points that compete with each other to serve the users in the HetNet. We assume each user in the HetNet receives several bids from service providers (SPs) in both the cellular and WiFi tiers, where each bid includes a data rate with a certain probabilistic service guarantee, offered at a given price. Upon receiving such bids, the user makes a binary decision to accept or reject each of the received bids.


Fig. 1: Heterogeneous Network (HetNet) Model.

III-B Stackelberg Game for User Association in HetNets

In our HetNet model, each user receives different bids from the base stations: one bid from the cellular BS and one bid from each WiFi BS. To enable multihoming, we assume each user can be simultaneously connected to both a cellular and a WiFi SP to receive the data service. Specifically, we assume that each user can only be associated with the cellular SP and the best serving WiFi SP for that specific user, namely the WiFi SP whose bid has the highest utility among all WiFi SPs. Hence, we model the user association problem in this work as a Stackelberg game with two leaders and one follower, where the SPs act as the leaders who make service offers to the user, and the user serves as the follower who accepts or rejects the received bids. Since there are multiple users in our HetNet model, to solve the user/network association problem in a distributed manner we need to solve one such Stackelberg game per user, each with three players: the user's preferred WiFi SP and the cellular SP as the leaders, and the user as the follower. Assuming the BS's maximum bandwidth budget per user is fixed and the same for all users, as shown in Eq. 16, these Stackelberg games are independent, and hence we consider and solve only one of them without loss of generality. Note that finding the optimal bandwidth/power allocation for SPs is an NP-hard problem [33] and not the focus of this paper. In the remainder of this paper, we focus on one of these games. Using separate indices for the user's preferred WiFi SP and the cellular SP, we denote the bids of the WiFi and cellular SPs as triples in which the first term is the advertised data rate, the second term is the proposed price for the offered data rate, and the third term is the amount of bandwidth that will be allocated to the user by that SP. User decisions are binary: the user either accepts a bid or rejects it, and there is no probabilistic decision by the user. We denote the user's decisions on the cellular and WiFi SPs' bids with two binary variables, so the pair of these variables represents the user's strategy with regard to the received bids; hence, the user has four possible strategies (accept both bids, accept only the cellular bid, accept only the WiFi bid, or reject both). Fig. 2 illustrates the Stackelberg game model between the cellular and WiFi SPs and the mobile user.


Fig. 2: Stackelberg Game Model.

All three players have a cost function and a benefit function, and their utility functions are simply the difference between their benefit and cost functions. We represent the user's utility function as

(1)

where the first term is the user's benefit function, a logarithmic (concave) function of the user's expected aggregate data rate, and the second term is the user's cost function, i.e., the aggregate price that must be paid by the user to the SPs under the chosen strategy. The user's expected aggregate data rate is defined by

(2)

where the service guarantee of the bid received from the cellular SP denotes the probability that the actual data rate delivered to the user by the cellular SP, which is a random variable, is equal to or higher than the data rate advertised by the cellular SP, and is given by

(3)

where, for a fixed advertised data rate, a larger allocated bandwidth yields a higher service guarantee. A similar definition holds for the service guarantee of the bid received from the WiFi SP.
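As a concrete illustration (not taken from the paper), the snippet below computes such a service guarantee, i.e., the probability that the delivered rate meets or exceeds the advertised rate, under an assumed exponential model of the instantaneous rate; the distribution and all numerical values are hypothetical.

```python
import math

def service_guarantee(advertised_rate, mean_rate):
    """Probability that the delivered rate meets or exceeds the advertised rate,
    assuming (for illustration only) an exponentially distributed actual rate."""
    # P(R >= r) = exp(-r / mean) for an exponential model with the given mean
    return math.exp(-advertised_rate / mean_rate)

# Example: a bid advertising 10 Mb/s when the expected deliverable rate is 25 Mb/s
q = service_guarantee(10.0, 25.0)
print(f"service guarantee q = {q:.2f}")  # ~0.67
```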

Once the user chooses its best response strategy, the SPs respond with their own best response strategies to maximize their utilities based on the received user decision. The utility of the WiFi SP is defined as

(4)

and the utility of the cellular SP is defined as

(5)

where the first term in both of these equations is the SP's expected payoff from the user, and the second term is its incurred service cost. An SP's payoff from the user equals the price offered in its bid if the user accepts the bid, and is zero otherwise. In this work, we assume both SPs use convex pricing functions, each parameterized by a pair of payoff parameters (one pair for the WiFi SP and one for the cellular SP), and linear cost functions, each parameterized by the corresponding SP's cost coefficients. To satisfy the user's minimum data rate constraint, each SP must ensure that its offered data rate is no lower than the minimum data rate required by the user. Thus, the data rate constraints for the WiFi and the cellular SPs are defined as below, respectively:

(6)
(7)
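Since the exact pricing and cost expressions appear only symbolically in the paper, the sketch below uses illustrative placeholders: a quadratic (convex) pricing function of the advertised rate and a linear cost in the allocated bandwidth. The function names and parameter values are assumptions, intended only to show how an SP's expected utility (payoff minus cost) would be evaluated.

```python
def wifi_price(rate, a=0.02, b=0.5):
    """Illustrative convex pricing: the price grows quadratically with the advertised rate."""
    return a * rate**2 + b * rate

def wifi_cost(bandwidth, c=1.5):
    """Illustrative linear service cost in the allocated bandwidth."""
    return c * bandwidth

def wifi_sp_utility(bid, user_accepts):
    """SP utility: payoff (the offered price, if the bid is accepted) minus incurred cost."""
    rate, price, bandwidth = bid
    payoff = price if user_accepts else 0.0
    return payoff - wifi_cost(bandwidth)

bid = (10.0, wifi_price(10.0), 4.0)      # (rate, price, bandwidth)
print(wifi_sp_utility(bid, user_accepts=True))
```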

III-C User Optimization Problem

Upon receiving the bids from the SPs, the user runs an optimization problem to find its best strategy with regard to the received bids. We assume the user's payoff function from the received data is a concave function, as defined in Eq. 1, in which the user's utility does not increase linearly with the data rate. That is, as long as the minimum data rate constraint is satisfied, the user is not willing to pay a price that grows linearly with any extra data rate offered by the SPs. To find its best response strategy, the user solves the following optimization problem (denoted as Max1):

Max1 Problem: User’s Utility Maximization.
—————————————————————————

(8)
subject to
(9)
(10)
(11)

As shown above, the user has two major constraints for bid selection. The first constraint, shown in Eq. 9, is the user's data rate constraint, which ensures that the expected data rate for the user is higher than its minimum required data rate. The second constraint, defined in Eq. 10, is the user's utility constraint, which guarantees a positive utility for the user from its chosen strategy.
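A minimal sketch of this user-side optimization is given below: it enumerates the four binary strategies and keeps the feasible one with the highest utility. The logarithmic benefit follows the concave payoff described above, but the benefit weight k and the exact functional form are illustrative assumptions, as are all numerical values.

```python
import math

def user_best_response(cell_bid, wifi_bid, q_cell, q_wifi, r_min, k=5.0):
    """Enumerate the four binary strategies (d_cell, d_wifi) and return the one that
    maximizes a logarithmic benefit of the expected aggregate rate minus the total
    price paid, subject to the minimum-rate and positive-utility constraints."""
    best_strategy, best_utility = (0, 0), 0.0
    for d_cell in (0, 1):
        for d_wifi in (0, 1):
            exp_rate = d_cell * q_cell * cell_bid["rate"] + d_wifi * q_wifi * wifi_bid["rate"]
            price = d_cell * cell_bid["price"] + d_wifi * wifi_bid["price"]
            utility = k * math.log(1.0 + exp_rate) - price
            feasible = exp_rate >= r_min * max(d_cell, d_wifi)   # rate constraint if any bid accepted
            if feasible and utility > best_utility:              # positive-utility constraint
                best_strategy, best_utility = (d_cell, d_wifi), utility
    return best_strategy, best_utility

cell = {"rate": 12.0, "price": 6.0}
wifi = {"rate": 20.0, "price": 4.0}
print(user_best_response(cell, wifi, q_cell=0.9, q_wifi=0.8, r_min=5.0))   # -> ((0, 1), ...)
```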

III-D SPs' Optimization Problems

When the SPs receive the user's decision with regard to their bids, they choose their best response strategies. The best response strategy for the WiFi SP is obtained by solving the optimization problem below (denoted as Max2):

Max2 Problem: WiFi SP’s Utility Maximization.
—————————————————————————

(12)
subject to
(13)
(14)
(15)

in which the bandwidth bound is the maximum amount of bandwidth that can be allocated to the user by the WiFi SP, and the rate bound is the maximum data rate achievable by the user from the WiFi SP given that bandwidth and the gain of the channel between the WiFi SP and the user. We assume the SPs use a proportionally fair bandwidth allocation algorithm to determine the maximum amount of bandwidth that can be allocated to each of their users. Given the total amount of bandwidth available at an SP and the total number of users in our HetNet, the maximum amount of bandwidth that the SP can allocate to each user is given as

(16)

in which the bandwidth allocation gain is less than one owing to the guard bands inserted between channels to prevent interference, a binary activity variable equals one if a user is active and has data demand and zero otherwise, and a binary coverage variable equals one if the SINR of the link between the user and the BS is higher than the threshold required for reliable transmission and zero otherwise. Given this per-user bandwidth budget, the maximum data rate achievable by the user is given by:

(17)

where the arguments are the transmit power of the BS, the channel gain between the user and the SP, and the noise variance over the transmission channel. Similarly, the cellular SP solves its own optimization problem, defined exactly as Max2 with the WiFi SP's index and parameters replaced by those of the cellular SP. In Theorem 1, we prove that for both the WiFi and the cellular SP, the best response strategies derived from these optimization problems satisfy the minimum data rate constraint with equality.
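Since the symbols of Eqs. (16) and (17) are not reproduced here, the following sketch simply follows the verbal description above: a proportionally fair per-user share of the SP's usable bandwidth, and a Shannon-capacity bound computed from the transmit power, channel gain, and noise variance. Function names and numerical values are assumptions.

```python
import math

def max_bandwidth_per_user(total_bw_hz, beta, active, covered):
    """Per-user bandwidth budget at one SP: the usable fraction beta of its total
    bandwidth shared equally among the users that are both active and covered."""
    n_served = sum(a * c for a, c in zip(active, covered))
    return beta * total_bw_hz / max(n_served, 1)

def max_achievable_rate(bw_hz, tx_power_w, channel_gain, noise_var_w):
    """Shannon-capacity bound on the rate the user can obtain from this SP."""
    snr = tx_power_w * channel_gain / noise_var_w
    return bw_hz * math.log2(1.0 + snr)

bw = max_bandwidth_per_user(20e6, beta=0.9, active=[1, 1, 0, 1], covered=[1, 0, 1, 1])
print(bw, max_achievable_rate(bw, tx_power_w=1.0, channel_gain=1e-7, noise_var_w=4e-8))
```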

Theorem 1.

The SPs' best response strategies, derived from their respective utility maximization problems, always satisfy the minimum data rate constraint on the boundary of its feasibility region, i.e., the offered data rate equals the user's minimum required data rate for both the WiFi and the cellular SP.

Proof.

We prove this by contradiction for the WiFi SP. Assume an optimal solution of the WiFi SP's problem in which the allocated bandwidth is not marginal, i.e., is strictly larger than needed to support the advertised data rate. Considering Eq. 15, there then exists a smaller bandwidth allocation that still supports that data rate. Considering the direct relation between the allocated bandwidth and the service guarantee, and the inverse relation of the WiFi SP's utility function, defined in Eq. (4), with the allocated bandwidth, offering this smaller bandwidth yields a strictly higher utility for the SP, which contradicts the initial assumption that the original solution is optimal. So, the proof is complete. The same proof is valid for the cellular SP. ∎

A characterization of the Nash Equilibria under the EUT and the PT models follows directly from the above theorem, and can be found in [5]. To model the effects of PT on user decision making, we assume that users apply their subjective biases in evaluating the bids made by the SPs. Specifically, we use the probability weighting effect (PWE) of PT to model how the user evaluates the probabilistic service guarantee that is part of the SPs' bids. We use the Prelec function [34] to model the PWE under PT, which is given by

(18) $w(p) = \exp\left(-(-\ln p)^{\alpha}\right)$

where $w(p)$ is the weighted version of the probability $p$ that end-users use while making decisions based on probabilistic parameters, and $\alpha$ is the Prelec function parameter that describes the deviation of $w(p)$ from $p$. This function is regressive and S-shaped: it is concave for $p < 1/e$ and convex for $p > 1/e$, with $w(p) > p$ in the former region and $w(p) < p$ in the latter. Under this PWE model, the user overestimates the service guarantees of the received offers if the advertised service guarantees are less than $1/e$, and underestimates them if the advertised service guarantees are higher than $1/e$. Assuming that SP networks are well designed to offer service guarantees higher than $1/e$, we focus on the underestimation of service guarantees by the user under PT. This is also justified by the fact that end-users in real-world wireless networks typically perceive the quality of their service to be lower than that advertised by the SPs [35, 36].
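A short sketch of the Prelec weighting in Eq. (18) is given below; the value of alpha is an arbitrary example, not one of the values used in the paper.

```python
import math

def prelec_weight(p, alpha):
    """One-parameter Prelec probability weighting function w(p) = exp(-(-ln p)^alpha).
    For 0 < alpha < 1 it is inverse-S shaped with its fixed point at p = 1/e."""
    return math.exp(-((-math.log(p)) ** alpha))

for p in (0.2, 1 / math.e, 0.9):
    print(p, prelec_weight(p, alpha=0.5))   # overweighted, unchanged, underweighted
```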

The results in [5] reveal that when users underweight the SPs' advertised service guarantees, the rejection rate of the SP bids increases dramatically, and consequently the resulting utility and revenue for the SPs decrease. To overcome this, we discuss next a learning-based optimized bidding mechanism for SPs.

IV Learning-based Optimized Bidding Method for SPs

To find utility-optimized bids, we propose a two-stage learning method for SPs, where in the first stage the SPs learn a binary user-decision classifier function via a support vector machine learning algorithm. Then, in the second stage, the SPs find the utility-optimized bids using a reinforcement learning-based bidding algorithm in which the classifier function obtained in the first stage is used to evaluate the value of each potential bid by predicting the user decision with regard to that bid. A heuristic solution based on expanding the offered bandwidth in the SPs' bids was proposed in [5]. However, it requires the SPs to have knowledge of how the users perceive the uncertainty in the service guarantees, while in reality SPs do not have access to such information.

IV-A SVM-based Classifier for Predicting User Decisions

Using a record of previous user decisions in response to different offered services, and noting the binary nature of user decisions in our work, we can efficiently train an SVM classifier to predict user decisions with regard to any future bid. The set of training data, here denoted as D, is given by

(19)

which consists of labeled data samples in which each feature vector represents a bid's features, namely its advertised data rate, offered price, and offered bandwidth, and each label records the user's previous binary decision in response to that bid. Using the gathered training data, the SVM classifier parameters, the weight vector $\mathbf{w}$ and the bias $b$, can be derived by solving the optimization problem given in (20)-(22) using the SVM learning algorithm.

SVM Learning Optimization.
—————————————————————————

(20) $\min_{\mathbf{w},\,b,\,\boldsymbol{\xi}} \ \tfrac{1}{2}\|\mathbf{w}\|^{2} + C\sum_{i}\xi_{i}$
subject to:
(21) $y_{i}\left(\mathbf{w}^{\mathsf{T}}\mathbf{x}_{i} + b\right) \geq 1 - \xi_{i}, \ \forall i$
(22) $\xi_{i} \geq 0, \ \forall i$

In the SVM optimization objective function given in Eq. (20), $2/\|\mathbf{w}\|$ is the width of the separating margin between the two groups of accepted and rejected bids, which is maximized by minimizing its inverse; $\xi_{i}$ is the classification error (slack) of data point $i$, which is to be minimized; and $C$ is a regularization parameter that defines the trade-off between these two objectives. The inequality constraint in Eq. (21) enforces $\mathbf{w}^{\mathsf{T}}\mathbf{x}_{i} + b \geq 1 - \xi_{i}$ for all accepted bids and $\mathbf{w}^{\mathsf{T}}\mathbf{x}_{i} + b \leq -(1 - \xi_{i})$ for all rejected bids, as it combines these two constraints. The constraint in Eq. (22) ensures that the classification error $\xi_{i}$ of each data point is either zero for correctly classified points or a positive value for misclassified points. After finding the SVM classifier parameters $\mathbf{w}$ and $b$ by solving the SVM optimization, we can define the SVM classifier function as

(23) $f(\mathbf{x}) = \operatorname{sign}\left(\mathbf{w}^{\mathsf{T}}\mathbf{x} + b\right)$

to predict the user’s binary decision with regard to any given bid , denoted as .

IV-B Reinforcement Learning-based Optimized Bidding

To find utility-optimized bids for the SPs that are highly likely to be accepted by the user, we formulate the SPs' optimized bidding problem as a Markov Decision Problem (MDP) defined by a tuple consisting of the set of states, the set of actions available in each state, the state-action transition probabilities, the discount factor, and the reward function. In this work, the set of states is defined as

(24)

in which each state is associated with a tuple that represents a potential bid and specifies how much data rate and bandwidth, respectively, will be advertised to the user if the SP makes that bid. Note that to cast the SPs' bidding problem as an MDP, we need a discrete set of states; to this end, we quantize both of the originally continuous bid variables, bandwidth and data rate, into a finite number of levels each. Since one state is defined for each possible combination of a data rate level and a bandwidth level, the total number of states in our MDP model is the product of the two numbers of levels.
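The following sketch illustrates this quantization step; the maximum rate and bandwidth and the numbers of levels are placeholder choices.

```python
import numpy as np
from itertools import product

def build_state_space(rate_max, bw_max, n_rate_levels=10, n_bw_levels=10):
    """Quantize the continuous bid variables (advertised rate, allocated bandwidth)
    into a finite grid; each grid point (r, B) is one MDP state / candidate bid."""
    rates = np.linspace(rate_max / n_rate_levels, rate_max, n_rate_levels)
    bws = np.linspace(bw_max / n_bw_levels, bw_max, n_bw_levels)
    return [(r, b) for r, b in product(rates, bws)]

states = build_state_space(rate_max=30.0, bw_max=10.0)
print(len(states))   # n_rate_levels * n_bw_levels states
```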

The action set contains the actions available in each state. Taking an action in a given state results in a transition to a new state with new data rate and bandwidth coordinates. Hence, each action can be defined by how it changes the current data rate and bandwidth coordinates of the system, and the action set for any given state is given by

(25)

in which one unique action is defined for each possible transition from any state to any other state. For example, taking an action in a given state changes the state of the MDP by adding the action's data rate and bandwidth increments to the coordinates of the current state. Equation (26) describes how this transition between the two states of the MDP takes place.

(26)

The transition probabilities for all state-action pairs are deterministic in our model and are given as

(27)

where

(28)

The state-action reward function defines the immediate reward obtained by taking a given action in a given state. In our model, it is defined as

(29)

where the first factor is the user's predicted decision with regard to the bid, given by the classifier defined in Eq. (23), and the remaining terms are the advertised price and the expected cost associated with the bid, as defined in section III-B. In our model, the reward of taking any action in a given state to reach a new state equals the expected utility of offering the bid associated with the destination state.
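Building on the hypothetical pricing, cost, and classifier helpers sketched earlier, the reward of Eq. (29) can be evaluated roughly as follows.

```python
def reward(state, price_fn, cost_fn, predict_decision):
    """Immediate reward of landing in a state, i.e. of offering the bid associated with
    that state: the expected SP utility, gated by the SVM-predicted accept/reject
    decision (Eq. (23)). The helper functions are the illustrative ones defined above."""
    rate, bw = state
    price = price_fn(rate)
    accepted = predict_decision([rate, price, bw]) == 1
    return (price if accepted else 0.0) - cost_fn(bw)
```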
The goal of each SP is to find a bid that maximizes its utility, which in our MDP model is equivalent to finding the state whose associated bid yields the highest reward, or expected utility. Such a state is accessible from any initial state of the MDP by taking a set of actions prescribed by the optimal policy. A policy is any function mapping states to actions, and we say that we are executing some policy if, whenever we are in a state, we take the action the policy prescribes for it. The value function for evaluating each policy in each state is defined as

(30)

Under this definition, and since the transition probabilities in our MDP model are binary and known, the value function can be simplified to

(31)

given the deterministic transitions in our model. Also, since the optimal state is accessible from any initial state of the MDP, we set the discount factor to zero so as to focus on the current action only and find the optimal action that takes us from the current state to the optimal state. Such an action is obtained by executing the optimal policy in any given state, as

(32)

To find the optimal policy, which defines the best action for each state, we would in principle have to exhaust all possible actions in each state, which is computationally intractable for MDPs with a large number of states and actions. To solve this issue, we propose an efficient dynamic programming-based algorithm that finds the optimal action for each state in the formulated MDP with a logarithmic convergence rate.

IV-C Dynamic Programming-based Optimized Bidding (DPOB) Algorithm

In the formulated MDP for optimized bidding, if the user rejects any bid, it will also reject any other bid that results in a lower expected utility for that user, considering the rationality of users and their objective of maximizing their expected utility. In fact, if the user rejects a bid, it will reject any other bid whose advertised data rate is equal or higher and whose allocated bandwidth is equal or lower, since the rejected bid Pareto dominates such a bid from the user's perspective. Likewise, when the user accepts a bid, the offering SP is not willing to offer any new bid that results in a lower utility for that SP. In fact, if the user accepts a bid, the offering SP will not offer any new bid whose advertised data rate is equal or lower and whose allocated bandwidth is equal or higher, since the accepted bid Pareto dominates such a bid from the SP's perspective. Note that, based on the utility functions of the user and the SPs defined in section III-B, when other parameters remain the same, increasing the advertised data rate increases the utility of the SPs and decreases the utility of the users, while increasing the allocated bandwidth increases the utility of the users and decreases the utility of the SPs. This Pareto dominance relation between different states of the MDP and their associated bids lets the SPs update the set of feasible actions after taking any random action, by removing the infeasible actions from the list of actions in the newly visited state while executing the reinforcement learning algorithm. The proposed dynamic programming-based reinforcement learning algorithm for optimized bidding is given in Algorithm 1. Note that in Algorithm 1, instead of initializing the MDP with a random state/bid, we initialize it with the SP's previous bid derived from the Max2 optimization under the EUT model, to increase the efficiency of the algorithm. The iterative pruning of infeasible actions from the list of actions in the proposed dynamic programming-based optimized bidding algorithm leads it to converge to the optimal solution and find the utility-optimized bids with a logarithmic convergence rate, according to Theorem 2.

Algorithm 1: Dynamic Programming-based Optimized Bidding (DPOB) Algorithm
——————————————————————–

Input: {Set of states S, initial state s0 (the SP's previous bid under EUT), initial set of actions A}
Initialize SP_Utility = 0, best bid = bid of s0
while A is not empty do
       choose a random action a from the set A, leading to a new state s'
       calculate the reward R of the bid associated with s' using Eq. (29)
       if R > SP_Utility then
             Set SP_Utility = R
             Set best bid = bid of s'
       end if
      Update set of actions: remove a from A
       Calculate the predicted user decision on the bid of s' using the SVM classifier in Eq. (23)
       if the bid of s' is predicted to be rejected then
            Update A: remove all actions leading to states whose bids are Pareto dominated by the bid of s' from the user's perspective
       end if
      if the bid of s' is predicted to be accepted then
            Update A: remove all actions leading to states whose bids are Pareto dominated by the bid of s' from the SP's perspective
       end if
end while
Output: best bid, SP_Utility
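As a companion to Algorithm 1, the sketch below implements the same pruning loop in Python, reusing the hypothetical price_fn, cost_fn, and predict_decision helpers from the earlier snippets; it illustrates the Pareto-dominance pruning rather than reproducing the authors' exact implementation.

```python
import random

def dpob(states, price_fn, cost_fn, predict_decision):
    """Sketch of the DPOB loop: evaluate candidate bids in random order and, after each
    predicted user decision, prune every remaining bid that the decision renders
    uninteresting (the Pareto-dominance arguments of Section IV-C)."""
    candidates = set(states)
    best_bid, best_utility = None, 0.0
    while candidates:
        r, b = random.choice(tuple(candidates))
        candidates.discard((r, b))
        price = price_fn(r)
        accepted = predict_decision([r, price, b]) == 1
        utility = (price if accepted else 0.0) - cost_fn(b)     # reward of this bid, Eq. (29)
        if utility > best_utility:
            best_bid, best_utility = (r, b), utility
        if not accepted:
            # The user would also reject any bid with an equal-or-higher rate (hence price)
            # and an equal-or-lower bandwidth, since such a bid is worse for the user.
            candidates = {(r2, b2) for (r2, b2) in candidates if not (r2 >= r and b2 <= b)}
        else:
            # The SP has no incentive to offer a bid with an equal-or-lower rate and an
            # equal-or-higher bandwidth, since such a bid is worse for the SP.
            candidates = {(r2, b2) for (r2, b2) in candidates if not (r2 <= r and b2 >= b)}
    return best_bid, best_utility
```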
Theorem 2.

The proposed DPOB algorithm converges to the optimal solution with a logarithmic convergence rate of O(log|S|), where |S| is the number of states in the MDP.

Proof.

We first show that the DPOB algorithm converges to the optimal solution. We prove this by contradiction. Assume there is an optimal state in the MDP whose associated bid maximizes the SP's utility among all states reachable from the initial state. Since the DPOB algorithm exhausts all possible transition actions from the initial state to all other states, the optimal state will be visited by the algorithm unless the action leading to it is removed from the set of actions during the iterative updates of this set. In that case, there must exist another state that Pareto dominates it and results in a higher utility for the SP, which contradicts the initial assumption that it is the optimal state in the MDP. Next, we show that the DPOB algorithm converges with a logarithmic rate. Consider the states whose bids yield the lowest and the highest expected utility for the SP, respectively. If the SP offers the lowest-utility bid and the user accepts it, all other actions remain feasible, i.e., no action is removed from the action set after its update, whereas if the SP offers the highest-utility bid and the user accepts it, all other actions become infeasible, since that state Pareto dominates all other states, and the list of actions becomes empty immediately. In general, however, after visiting any new state, on average half of the remaining feasible states either Pareto dominate it or are Pareto dominated by it, and, depending on the user's response to its bid, one of these two groups becomes infeasible and the associated transition actions are removed from the set of actions. Hence, on average the set of actions shrinks by one fourth after each iteration of the algorithm and becomes empty after a logarithmic number of iterations, which proves that the algorithm converges to the optimal solution with rate O(log|S|), where |S| denotes the total number of states in the MDP. ∎

According to Theorem 2, the complexity of the proposed DPOB algorithm is O(log|S|), while the complexity of model-free reinforcement learning algorithms such as Q-learning grows with both the number of states and the number of actions [37]. For the bidding problem considered here, each state has an action leading to every other state, so the number of actions grows with the number of states and the complexity of a Q-learning algorithm used to solve it would be polynomial in |S|. Thus, the computational gain of DPOB compared to model-free reinforcement learning algorithms is noteworthy, since it reduces the network access delay for users, which is attractive for SPs aiming to enable and support delay-critical applications in 5G HetNets.

V Simulation Results

In this section, we provide simulation results to validate the efficiency of the proposed learning-based bidding algorithm. As shown in Fig. 3, we consider a HetNet scenario with randomly distributed mobile users covered by 9 BSs: one cellular BS (CBS) located in the center of a macro-cell and 8 overlaid small-cell long-range WiFi BSs (WBSs). Each WBS competes with the CBS to offer data service to each mobile user located in a coverage area common to them. We use the Hata propagation model for urban environments [38] to capture the effects of path loss on each of the radio links. Table I details the parameters used in the simulation of the HetNet environment modeled here.
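For reference, a sketch of the Hata median path-loss formula for urban areas (the small/medium-city variant of the model in [38]) is given below; the example frequency, antenna heights, and distance are placeholders, not the values of Table I.

```python
import math

def hata_urban_path_loss_db(f_mhz, h_bs_m, h_ue_m, d_km):
    """Median path loss (dB) from the Hata model for urban areas [38].
    Valid roughly for 150-1500 MHz, BS height 30-200 m, UE height 1-10 m, 1-20 km.
    The correction term a_hm below is the small/medium-city variant."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_ue_m - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_bs_m)
            - a_hm + (44.9 - 6.55 * math.log10(h_bs_m)) * math.log10(d_km))

print(hata_urban_path_loss_db(900.0, 50.0, 1.5, 2.0))   # example values only
```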


Fig. 3: Simulated HetNet Scenario.


TABLE I: Simulation Parameters.

Fig. 4 compares the sum utility of the SPs under three scenarios: (i) EUT, where there is no difference between the users' objective and subjective perceptions of the SPs' service guarantees; (ii) deviation from EUT, where the users subjectively weight the SPs' service guarantees with the Prelec probability weighting function; and (iii) DPOB, where the learning algorithm is used to predict the users' responses to bids. In each case, the total number of users (load) in the HetNet is varied across low and high load regimes. As we can see, when the number of users is lower than 200, the advertised data rates are high and the SPs advertise low service guarantees to satisfy the users' data rate constraints, so the users overestimate the value of the SPs' bids due to the PWE of PT described in Eq. (18). When the total number of users is more than 200, the advertised data rates are low and the SPs' bids include high service guarantees, which results in the underestimation of such bids by the users and their subsequent rejection. Thus the deviation from EUT affects the network adversely in the region corresponding to underweighting of the service guarantees. It can also be seen that the proposed DPOB algorithm uniformly improves performance in both low and high load situations, by (a) learning the end users' decision making and (b) optimizing the SPs' bids based on the learned information to maximize the users' acceptance rate of the offered bids. In fact, as seen in Fig. 4, the sum utility of the SPs under DPOB increases substantially on average as compared to the EUT model.

Further, the gap between DPOB and EUT is bigger in low load situations. The reason is that when the total number of users is low, the SPs allocate more bandwidth to each user and hence their advertised data rates are much higher, which in turn results in a very high price for their bids given the convex pricing functions they use. This results in a rejection of bids by the users, since their payoff function is a concave function of the data rate, making the SPs' offers unattractive. In the case of DPOB, the ability to learn user actions (accept/reject) by evaluating all feasible bids allows the SPs to make only those bids that are still affordable to the users.

Fig. 5 compares the sum utility of the users under the same three scenarios. As we can see, the proposed DPOB algorithm again uniformly outperforms the EUT model in terms of the users' sum utility, because it offers more affordable bids to users, which increases their acceptance rates and utilities accordingly. Moreover, we observe that the users' sum utility under all three models increases with the number of users until the maximum network capacity under our setting is reached at 400 users. The reason is that when the load is low, users receive bids with advertised data rates much higher than their required minimum data rate, and, due to the SPs' convex pricing and the users' concave payoff functions, such bids result in low utility for users; as the load increases, the gap between the advertised data rates of the SPs and the required data rates of the users shrinks, which increases the users' utility accordingly. As seen in Fig. 5, the sum utility of the users under DPOB increases substantially on average as compared to the EUT model.

We can also see in both Fig. 4 and Fig. 5 that when the number of users is more than 400, which is the maximum network capacity under our setting, most of the SPs are not capable of satisfying the users' data rate constraints, since their bandwidth budget per user decreases with increasing load. Therefore, the number of users who accept bids and are connected to BSs decreases, which in turn reduces both the SPs' and the users' sum utilities.


Fig. 4: Sum Utility of SPs under EUT, Deviation from EUT and DPOB.


Fig. 5: Sum Utility of Users under EUT, Deviation from EUT and DPOB.

Fig. 6 and Fig. 7 compare the sum utility of the SPs and the sum utility of the users for the three scenarios, respectively, for a smaller value of the Prelec parameter. This corresponds to a situation where the PWE of PT is more intense, i.e., the deviation of end-user behavior from EUT is greater. The results in the figures show that the impact of the deviation from EUT is more pronounced and the performance improvements in sum utilities achieved by DPOB are also more significant.


Fig. 6: Sum Utility of SPs under EUT, Deviation from EUT and DPOB.


Fig. 7: Sum Utility of Users under EUT, Deviation from EUT and DPOB.

VI Conclusion

In this paper, we studied the impact of end-user behavior on SP bidding and user/network association in a HetNet with multiple SPs while considering the uncertainty in the service guarantees offered by the SPs. We formulated user association with SPs as a multiple leader Stackelberg game where each SP offers a bid to each user that includes a data rate with a certain probabilistic service guarantee and at a given price, while the user chooses the best offer among multiple such bids. Using PT to model end-user decision making that deviates from EUT, we showed that when users underweight the advertised service guarantees of the SPs, the rejection rate of the bids increases dramatically, which in turn decreases the SPs' utilities and service rates. To overcome this, we proposed a two-stage learning-based optimized bidding framework for SPs. In the first stage, we used a support vector machine learning algorithm to predict users' binary decisions, and then in the second stage we cast the utility-optimized bidding problem as an MDP and proposed the DPOB algorithm to efficiently solve it. Simulation results and computational complexity analysis validated the efficiency of the proposed learning-based bidding algorithm.

References

  • [1] D. Liu et al., “User Association in 5G Networks: A Survey and an Outlook,” IEEE Commun. Surveys Tuts., vol. 18, no. 2, pp. 1018-1044, 2016.
  • [2] S. Sen, C. Joe-Wong, S. Ha, M. Chiang, “A survey of smart data pricing: Past proposals, current plans, and future trends,” ACM Comput. Surv., vol. 46, no. 2, pp. 15-37, Nov. 2013.
  • [3] S. Sen, C. Joe-Wong, S. Ha, M. Chiang, “Smart data pricing: using economics to manage network congestion,”Commun. ACM, vol. 58, no. 12, pp. 86-93, Dec. 2015.
  • [4] D. Kahneman and A. Tversky, “Prospect theory: An analysis of decision under risk,” Econometrica: J. Econom. Soc., Vol. 47, No. 2, pp. 263–291, 1979.
  • [5] M. Yousefvand, M. Hajimirsadeghi and N. B. Mandayam, “Impact of End-User Behavior on User/Network Association in HetNets,” IEEE Int. Conf. Commun. (ICC), Kansas City, MO, 2018.
  • [6] D. Fooladivanda and C. Rosenberg, “Joint Resource Allocation and User Association for Heterogeneous Wireless Cellular Networks,” IEEE Trans. Wireless Commun., vol. 12, no. 1, pp. 248-257, January 2013.
  • [7] Q. Kuang, W. Utschick and A. Dotzler, “Optimal Joint User Association and Multi-Pattern Resource Allocation in Heterogeneous Networks,” IEEE Trans. Signal Process., vol. 64, no. 13, pp. 3388-3401, July 2016.
  • [8] A. T. Hirata, E. C. Xavier and J. F. Borin, “Optimal and Heuristic Decision Strategies for Load Balancing and User Association on HetNets,” IEEE Symp. Comput. and Commun. (ISCC), Natal, 2018.
  • [9] X. Huang, W. Xu, G. Xie, S. Jin and X. You, “Learning Oriented Cross-Entropy Approach to User Association in Load-Balanced HetNet,” IEEE Wireless Commun. Lett., vol. 7, no. 6, pp. 1014-1017, Dec. 2018.
  • [10] Y. L. Lee, T. C. Chuah, A. A. El-Saleh and J. Loo, “User Association for Backhaul Load Balancing With Quality of Service Provisioning for Heterogeneous Networks,” IEEE Commun. Lett., vol. 22, no. 11, pp. 2338-2341, Nov. 2018.
  • [11] H. Ding, H. Zhang, J. Tian, S. Xu and D. Yuan, “Energy Efficient User Association and Power Control for Dense Heterogeneous Networks,” Int. Conf. Comput., Netw. Commun. (ICNC), Maui, HI, 2018, pp. 741-746.
  • [12] Q. Li, Q. Yang, M. Qin and K. Kwak, “Energy efficient user association and resource allocation in active array aided HetNets,” IET Commun., vol. 12, no. 6, pp. 672-679, Apr. 2018.
  • [13] U. Bin Farooq, U. Sajid Hashmi, J. Qadir, A. Imran and A. N. Mian, “User Transmit Power Minimization through Uplink Resource Allocation and User Association in HetNets,” IEEE Global Commun. Conf. (GLOBECOM), Abu Dhabi, United Arab Emirates, 2018.
  • [14] F. Yin, A. Wang, D. Liu and Z. Zhang, “Energy-Aware Joint User Association and Resource Allocation for Coded Cache-Enabled HetNets,” IEEE Access, vol. 7, pp. 94128-94142, 2019.
  • [15] M. Feng and S. Mao, “Interference Management and User Association for Nested Array-Based Massive MIMO HetNets,” IEEE Trans. Veh. Technol., vol. 67, no. 1, pp. 454-466, Jan. 2018.
  • [16] G. Hattab and D. Cabric, “Coverage and Rate Maximization via User Association in Multi-Antenna HetNets,” IEEE Trans. Wireless Commun., vol. 17, no. 11, pp. 7441-7455, Nov. 2018.
  • [17] M. Elkourdi, A. Mazin and R. D. Gitlin, “Towards Low Latency in 5G HetNets: A Bayesian Cell Selection / User Association Approach,” IEEE 5G World Forum (5GWF), Silicon Valley, CA, 2018.
  • [18] Y. Sun, G. Feng, S. Qin and S. Sun, “Cell Association With User Behavior Awareness in Heterogeneous Cellular Networks,” IEEE Trans. Veh. Technol., vol. 67, no. 5, pp. 4589-4601, May 2018.
  • [19] S. Brihi, A. Walid, J. Ben-Othman and M. El Koutbi, “A New Profile-QoS Aware User Association Scheme for 5G Ultra-Dense mmW-W HetNets,” IEEE Global Commun. Conf. (GLOBECOM), Abu Dhabi, United Arab Emirates, 2018.
  • [20] Y. M. Waheidi, M. Jubran and M. Hussein, “User Driven Multiclass Cell Association in 5G HetNets for Mobile & IoT Devices,” IEEE Access, vol. 7, pp. 82991-83000, 2019.
  • [21] A. Keshavarz-Haddad, E. Aryafar, M. Wang and M. Chiang, “HetNets Selection by Clients: Convergence, Efficiency, and Practicality,” IEEE/ACM Trans. Netw., vol. 25, no. 1, pp. 406-419, Feb. 2017.
  • [22] M. Amine, A. Walid, A. Kobbane and J. Ben-Othman, “New User Association Scheme Based on Multi-Objective Optimization for 5G Ultra-Dense Multi-RAT HetNets,” IEEE Int. Conf. Commun. (ICC), Kansas City, MO, 2018.
  • [23] G. Hattab and D. Cabric, “Long-Term Rate-Based User-Centric Association for Downlink Multi-Antenna HetNets,” IEEE Int. Conf. Commun. (ICC), Kansas City, MO, 2018.
  • [24] F. Pervez, M. Jaber, J. Qadir, S. Younis and M. A. Imran, “Memory-Based User-Centric Backhaul-Aware User Cell Association Scheme,” IEEE Access, vol. 6, pp. 39595-39605, 2018.
  • [25] G. Iosifidis, L. Gao, J. Huang and L. Tassiulas, “A Double-Auction Mechanism for Mobile Data-Offloading Markets,” IEEE/ACM Trans. Netw., vol. 23, no. 5, pp. 1634-1647, Oct. 2015.
  • [26] Y. Im, C. Joe-Wong, S. Ha, S. Sen, T. “Taekyoung” Kwon and M. Chiang, “AMUSE: Empowering Users for Cost-Aware Offloading with Throughput-Delay Tradeoffs,” IEEE Trans. Mobile Comput., vol. 15, no. 5, pp. 1062-1076, May 2016.
  • [27] Q. Ma, B. Shou, J. Huang, T. Başar, “Dynamic Pricing in the Presence of Participation-Dependent Social Learning,” Proc. ACM Mobihoc, Los Angeles, CA, USA, 2018
  • [28] Z. Wang, L. Gao and J. Huang, “Duopoly Competition for Mobile Data Plans with Time Flexibility,” IEEE Trans. Mobile Comput., Apr. 2019
  • [29] S. Sen, C. Joe-Wong, S. Ha and M. Chiang, “Time-Dependent Pricing for Multimedia Data Traffic: Analysis, Systems, and Trials,” IEEE JSAC, vol. 37, no. 7, pp. 1504-1517, July 2019.
  • [30] T. Li and N. Mandayam, “When users interfere with protocols: Prospect theory in wireless networks using random access and data pricing as an example,” IEEE Trans. Wireless Commun., vol. 13, no. 4, April 2014, pp. 1888–1907.
  • [31] Y. Yang and N. B. Mandayam, “Impact of end-user decisions on pricing in wireless networks,” IEEE Conf. Inf. Sci. Sys. (CISS), 2014, pp. 1–6.
  • [32] Y. Yang, L. T. Park, N. B. Mandayam, I. Seskar, A. L. Glass and N. Sinha, “Prospect Pricing in Cognitive Radio Networks,” IEEE Trans. Cogn. Commun. Netw., vol. 1, no. 1, March 2015, pp. 56-70.
  • [33] M. Yousefvand, T. Han, N. Ansari, A. Khreishah, “Distributed Energy-Spectrum Trading in Green Cognitive Radio Cellular Networks,” IEEE Trans. Green Commun. Netw., vol. 1, no. 3, Sept. 2017, pp. 253-263.
  • [34] D. Prelec, “The probability weighting function,” Econometrica, 1998, pp. 497-527.
  • [35] Measuring Mobile Broadband America Mobile. FCC Website [online]. Available: http://www.fcc.gov/measuring-broadband-america/mobile
  • [36] Measuring Broadband America. FCC Website [online]. Available: http://www.fcc.gov/measuring-broadband-america/
  • [37] S. Koenig, R. G. Simmons, “ Complexity analysis of real-time reinforcement learning applied to finding shortest paths in deterministic domains,” School of Computer Science, Carnegie Mellon University, Dec. 1992.
  • [38] T. S. Rappaport, Wireless Communications: Principles and Practice, 2nd ed., Prentice Hall, 2002, pp. 153-154.