Mobile and Internet-of-Things (IoT) devices, including mobile phones, wearables, and sensors, have become extremely popular in modern society. The rapid growth of those devices has increased the variety and sophistication of software applications and services such as facial recognition, interactive gaming, and real-time, large-scale warehouse management. Those applications usually require intensive processing power and consume significant energy. Due to the limited computing capabilities and battery power of mobile and IoT devices, many computing tasks are offloaded to app vendors' servers in the cloud. However, as the number of connected devices skyrockets and network traffic and computational workloads continue to grow, app vendors face the challenge of maintaining low-latency connections to their users.
Edge computing – often referred to as fog computing – has been introduced to address the latency issue that often arises in the cloud computing environment. A typical edge computing deployment involves numerous edge servers deployed in a distributed manner, normally near cellular base stations. This network architecture significantly reduces end-to-end latency thanks to the close proximity of edge servers to end-users. The coverage areas of nearby edge servers usually partially overlap to avoid non-serviceable areas – areas in which users cannot offload tasks to any edge server. A user located in an overlapping area can connect to one of the edge servers covering them (proximity constraint) that has sufficient computing resources (resource constraint), such as CPU, storage, bandwidth, or memory. Compared to a cloud data-center server, a typical edge server has very limited computing resources, hence the need for an effective and efficient resource allocation strategy.
Naturally, edge computing is immensely dynamic and heterogeneous. Users using the same service have various computing needs and thus require different levels of quality of service (QoS), or computational requirements, ranging from low to high. Tasks with high complexity, e.g. high-definition graphic rendering, naturally consume more computing resources on an edge server. A user's satisfaction, or quality of experience (QoE), varies along with the QoS level. Many researchers have found that there is a quantitative correlation between QoS and QoE, as visualized in Fig. 1 [8, 2, 15]. Beyond a certain point, user satisfaction tends to converge so that the QoE remains virtually unchanged at the highest level regardless of how much higher the QoS level is.
Consider a typical game streaming service as an example: gaming video frames are rendered on the game vendor's servers and then streamed to players' devices. For the majority of players, there is no perceptible difference between 1080p and 1440p video resolution on a mobile device, or even between 1080p and UHD from a viewing distance beyond a certain multiple of the screen height, regardless of the screen size. Serving a 1440p or UHD video certainly consumes more resources (bandwidth, processing power), which might be unnecessary since most players are likely to be satisfied with 1080p in those cases. Instead, those resources can be utilized to serve players who are currently unhappy with the service, e.g. those experiencing poor 240p or 360p graphics, or those not able to play at all because all nearby servers are overloaded. Therefore, the app vendor can lower the QoS requirements of highly demanding users, potentially without any noticeable degradation in their QoE, in order to better serve users experiencing low QoS levels. This way, app vendors can maximize users' overall satisfaction, measured by their overall QoE. In this context, our research aims to allocate app users to edge servers so that their overall QoE is maximized.
We refer to the above problem as a dynamic QoS edge user allocation (EUA) problem. Despite being critical in edge computing, this problem has not been extensively studied. Our main contributions are as follows:
We define and model the dynamic QoS EUA problem, and prove its NP-hardness.
We propose an optimal approach based on integer linear programming (ILP) for solving the dynamic QoS EUA and develop a heuristic approach for finding sub-optimal solutions to large-scale instances of the problem efficiently.
Extensive evaluations based on a real-world dataset are carried out to demonstrate the effectiveness and efficiency of our approaches against a baseline approach and the state of the art.
The remainder of the paper is organized as follows. Section 2 provides a motivating example for this research. Section 3.1 defines the dynamic QoS problem and proves that it is NP-hard. We then propose an optimal approach based on ILP and an efficient sub-optimal heuristic approach in Sect. 4. Section 5 evaluates the proposed approaches. Section 6 reviews the related work. Finally, we conclude the paper in Sect. 7.
2 Motivating Example
Each edge server has a particular amount of different types of available resources ready to fulfill users' requests. A server's resource capacity and a player's resource demand are denoted as vectors. The game vendor can allocate its users to nearby edge servers and assign a QoS level to each of them. In this example, there are three QoS levels for the game vendor to choose from (Fig. 1), each consuming a different amount of resources. Players' corresponding QoE values are measured based on Eq. 3. If the server's available resources were unlimited, all players would be able to enjoy the highest QoS level. However, a typical edge server has relatively limited resources, so not everyone can be assigned the highest level. The game provider needs to find a player-server-QoS allocation so that the overall user satisfaction, i.e. QoE, is maximized.
Let us assume one server has already reached its maximum capacity and cannot serve any more players. As a result, two players need to be allocated together to a nearby server. If one of them is assigned the highest QoS level, the remaining resources on that server suffice to serve the other only at the lowest QoS level. However, the resources released by downgrading the first player by one QoS level allow the second player to be upgraded by one level. If both players receive the intermediate QoS level instead, their overall QoE is greater than in the previous solution.
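The trade-off above can be checked with a small back-of-the-envelope calculation. The per-level resource demands and QoE values below are hypothetical stand-ins for the figures in the example, chosen only so that the QoE curve is concave:

```python
# Hypothetical per-level resource demands and QoE values (the example's
# concrete figures are placeholders here; the QoE values grow concavely).
demand = {"W1": 1, "W2": 2, "W3": 3}     # resource units consumed per player
qoe = {"W1": 1.6, "W2": 3.4, "W3": 4.5}  # QoE per QoS level

# A server with 4 free resource units must serve two players.
option_a = qoe["W3"] + qoe["W1"]   # one player at W3, the other at W1 (3 + 1 units)
option_b = 2 * qoe["W2"]           # both players at W2 (2 + 2 units)

# Downgrading the W3 player funds an upgrade for the W1 player and
# raises the total QoE, even though total resource usage is unchanged.
assert option_b > option_a
```

The effect relies entirely on the concavity of the QoS-QoE curve: the QoE lost by the downgrade is smaller than the QoE gained by the upgrade.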
The scale of the dynamic QoS EUA problem in real-world scenarios can of course be significantly larger than this example. Therefore, it is not always possible to find an optimal solution in a timely manner, hence the need for an effective approach that can find near-optimal solutions efficiently.
3 Problem Formulation
3.1 Problem Definition
This section defines the dynamic QoS EUA problem. Table 1 summarizes the notations and definitions used in this paper. Given a finite set of edge servers and a set of users in a particular area, we aim to allocate users to edge servers so that the total user satisfaction, i.e. QoE, is maximized. In the EUA problem, every user covered by edge servers must be allocated to an edge server unless all the servers accessible to the user have reached their maximum resource capacity. If a user cannot be allocated to any edge server, or is not within the coverage of any edge server, they are connected directly to the app vendor's central cloud server.
|finite set of edge servers|
|set of computing resource dimensions|
|vector with each dimension being a resource type, such as CPU or storage, representing the available resources of an edge server|
|finite set of users|
|set of predefined resource levels; a higher resource level requires more resources than a lower one. We also refer to a resource level as a QoS level|
|vector representing the resource amount demanded by a user, with each component being a resource type; each user is assigned one resource level|
|set of users allocated to a server|
|set of a user's candidate servers – the edge servers that cover the user|
|edge server assigned to serve a user|
|coverage radius of a server|
A user can only be allocated to an edge server if they are located within that server's coverage area. We denote the set of a user's candidate edge servers as those that cover the user. Take Fig. 2 for example: a user in an overlapping area can be served by any of the servers covering them, and a server can serve any of the users within its coverage as long as it has adequate resources.
If a user is allocated to an edge server, they are assigned a specific amount of computing resources, where each dimension represents a type of resource, e.g. CPU, RAM, storage, or bandwidth. The assigned amount is selected from a predetermined set of resource levels, ranging from low to high, each corresponding to a QoS level. The total resources assigned to all users allocated to an edge server must not exceed the available resources on that edge server. In Fig. 2, the users covered by a single server cannot all receive the highest QoS level if their total required resources would exceed that server's available resources.
Each user's assigned resource corresponds to a QoS level that results in a different QoE level. As stated in [8, 2, 15], QoS is non-linearly correlated with QoE. Once the QoS reaches a certain level, a user's QoE improves only marginally regardless of any noticeable increase in QoS. For example, in the model in Fig. 1, the QoE gained from an upgrade near the top of the curve is nearly 1, whereas the QoE gained from an upgrade near the mid-point is approximately 3 at the cost of little extra resource. Several works model the correlation between QoE and QoS using the sigmoid function [12, 20, 10]. In this research, we use a logistic function (Eq. 3), a generalized version of the sigmoid function, to model the QoS-QoE correlation. This gives us more control over the QoE model, including the QoE growth rate, making the model more generalizable to different domains.
$$E_u = \frac{L}{1 + e^{-\alpha(W_u - \beta)}} \qquad (3)$$

where $L$ is the maximum value of QoE, $\beta$ controls where the QoE growth should be, or the mid-point of the QoE function, $\alpha$ controls the growth rate of the QoE level (how steep the change from the minimum to the maximum QoE level is), and $E_u$ represents the QoE level given user $u$'s QoS level $W_u$. We let $E_u = 0$ if user $u$ is unallocated.
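The diminishing-returns property that motivates the whole problem can be sketched directly from this logistic model. The parameter values below are illustrative, not the paper's experimental settings:

```python
import math

def qoe_logistic(w, L=5.0, alpha=1.5, beta=1.5):
    """Generalized logistic mapping from QoS level w to QoE (Eq. 3).
    L: maximum QoE; beta: mid-point of the curve; alpha: growth rate.
    Parameter values here are illustrative, not the paper's settings."""
    return L / (1.0 + math.exp(-alpha * (w - beta)))

# Diminishing returns: an upgrade low on the curve gains far more QoE
# than the same-sized upgrade nearer the top.
gain_low = qoe_logistic(2) - qoe_logistic(1)    # upgrade from a low QoS level
gain_high = qoe_logistic(3) - qoe_logistic(2)   # upgrade from a higher QoS level
assert gain_low > gain_high
```

Past the mid-point β, each extra unit of QoS (and hence of server resources) buys less and less QoE, which is exactly why reallocating resources from well-served users to poorly-served ones can raise the total.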
Our objective is to find a user-server assignment, together with each user's QoS level, that maximizes the overall QoE of all users:

$$\text{maximize} \sum_{u} E_u$$
3.2 Problem Hardness
We can prove that the dynamic QoS EUA problem defined above is NP-hard by proving that its associated decision version is NP-complete. The decision version of dynamic QoS EUA is defined as follows:
Given a set of demand workloads and a set of server resource capacities, determine, for each positive number, whether there exists a partition of the users among the servers with aggregate QoE greater than that number, such that the demand allocated to each server sums to at most its capacity and constraint (1) is satisfied. By repeatedly answering the decision problem for all feasible bounds, it is possible to find the allocation that produces the maximum overall QoE.
The dynamic QoS EUA problem is NP-hard.
Given a solution with servers and users, we can easily verify its validity in polynomial time – ensuring each user is allocated to at most one server, and each server meets the condition that its users' total workload is less than or equal to its available resources. Dynamic QoS EUA is thus in class NP.
We can prove that the dynamic QoS EUA problem is NP-hard by reducing the Partition problem, which is NP-complete, to a specialization of the dynamic QoS EUA decision problem; since Partition reduces to dynamic QoS EUA, the latter is NP-hard.
(Partition) Given a finite sequence of non-negative integers $a_1, \dots, a_n$, determine whether there exists a subset $S \subseteq \{1, \dots, n\}$ such that $\sum_{i \in S} a_i = \sum_{i \notin S} a_i$.
Each user can either be left unallocated, or be allocated to an edge server with an assigned QoS level. For any instance of Partition, construct the following instance of the dynamic QoS problem: there are as many users as integers in the sequence, each with two 2-dimensional QoS level options, and a number of identical servers of an appropriately chosen capacity. Assume that all users can be served by any of those servers. Clearly, there is a solution to dynamic QoS EUA that allocates the users across two servers if and only if there is a solution to the Partition problem. Because this special case is NP-hard, and the decision problem is in NP, the general decision problem of dynamic QoS EUA is NP-complete. Since the optimization problem is at least as hard as its decision version, the dynamic QoS EUA problem is NP-hard, which completes the proof.
4 Our Approach
We first formulate the dynamic QoS EUA problem as an integer linear programming (ILP) problem to find its optimal solutions. After that, we propose a heuristic approach to efficiently solve the problem in large-scale scenarios.
4.1 Integer Linear Programming Model
From the app vendor’s perspective, the optimal solution to the dynamic QoS problem must achieve the greatest QoE over all users while satisfying a number of constraints. The ILP model of the dynamic QoS problem can be formulated as follows:
Let $x_{ijk}$ be the binary indicator variable such that $x_{ijk} = 1$ if user $i$ is allocated to edge server $j$ with QoS level $k$, and $x_{ijk} = 0$ otherwise.
The objective (5) maximizes the total QoE of all allocated users. In (5), the QoE level can be pre-calculated based on the predefined set of QoS levels. Constraint (6) enforces the proximity constraint: users not located within a server's coverage area will not be allocated to that server. A user may be located within the overlapping coverage areas of multiple edge servers. Resource constraint (7) ensures that the aggregate resource demand of all users allocated to an edge server does not exceed that server's remaining resources. Constraint family (8) ensures that every user is allocated to at most one edge server with one QoS level; in other words, a user can only be allocated to either an edge server or the app vendor's cloud server.
By solving this ILP problem with an integer programming solver, e.g. IBM ILOG CPLEX (www.ibm.com/analytics/cplex-optimizer/) or Gurobi (www.gurobi.com/), an optimal solution to the dynamic QoS EUA problem can be found.
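Before handing the model to a commercial solver, the formulation can be sanity-checked on a toy instance by brute-force enumeration of the allocation variables. The sketch below uses illustrative data (not the paper's) and directly encodes the objective (5) and constraints (6)-(8):

```python
from itertools import product

# Toy instance with illustrative (not the paper's) data.
levels = [0, 1, 2]                      # QoS levels, low to high
demand = {0: 1, 1: 2, 2: 3}             # resources consumed per QoS level (1-D)
qoe = {0: 1.6, 1: 3.4, 2: 4.5}          # pre-computed QoE per QoS level
capacity = {0: 4, 1: 3}                 # servers' available resources
covers = {0: [0, 1], 1: [0], 2: [1]}    # candidate servers per user (proximity)
users = list(covers)

# Per-user options: unallocated (None), or a (server, level) pair with a
# covering server -- this encodes constraints (6) and (8).
options = [[None] + [(j, k) for j in covers[i] for k in levels] for i in users]

best_assign, best_qoe = None, -1.0
for assign in product(*options):
    load = {j: 0 for j in capacity}
    for choice in assign:
        if choice is not None:
            server, level = choice
            load[server] += demand[level]
    if any(load[j] > capacity[j] for j in capacity):  # resource constraint (7)
        continue
    total = sum(qoe[c[1]] for c in assign if c is not None)  # objective (5)
    if total > best_qoe:
        best_assign, best_qoe = assign, total
```

A production implementation would pass the same objective and constraints to an ILP solver such as CPLEX or Gurobi; the enumeration above is exponential in the number of users and only viable for tiny instances.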
4.2 Heuristic Approach
However, due to the exponential complexity of the problem, computing an optimal solution is extremely inefficient in large-scale scenarios, as demonstrated by our experimental results in Sect. 5. Approximate methods are a prevalent technique for dealing with this type of intractable problem. In this section, we propose an effective and efficient heuristic approach for finding sub-optimal solutions to the dynamic QoS problem.
The heuristic approach allocates users one by one (line 2). For each user, we obtain the set of all candidate edge servers covering that user (line 3). If the set is not empty, i.e. the user is covered by one or more edge servers, the user is allocated to the candidate server with the most remaining resources (line 5), so that the chosen server remains the most likely to have enough resources to accommodate other users. The user is then assigned the highest QoS level that the selected edge server can accommodate (line 6).
The running time of this greedy heuristic consists of (1) iterating through all n users, which costs O(n), and (2) for each user, sorting at most m candidate edge servers by remaining resources, which costs O(m log m). Thus, the overall time complexity of this heuristic approach is O(nm log m).
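The steps above can be sketched as follows. Data names, the toy instance, and the single-dimensional resource model are illustrative; the line comments map to the pseudocode lines referenced in the text:

```python
def greedy_allocate(users, covers, remaining, demand, levels_desc):
    """Greedy heuristic sketch (Sect. 4.2).
    covers[i]: candidate servers of user i; remaining[j]: free resources
    of server j; demand[k]: resources needed at QoS level k; levels_desc:
    QoS levels sorted from highest to lowest."""
    allocation = {}
    for i in users:                                   # line 2: one user at a time
        candidates = covers.get(i, [])                # line 3: candidate servers
        if not candidates:
            continue                                  # unallocated -> cloud server
        # line 5: pick the candidate with the most remaining resources
        j = max(candidates, key=lambda s: remaining[s])
        # line 6: highest QoS level the chosen server can still accommodate
        for k in levels_desc:
            if demand[k] <= remaining[j]:
                allocation[i] = (j, k)
                remaining[j] -= demand[k]
                break
    return allocation

# Toy usage with illustrative data.
alloc = greedy_allocate(
    users=[0, 1, 2],
    covers={0: [0, 1], 1: [0], 2: [1]},
    remaining={0: 4, 1: 3},
    demand={0: 1, 1: 2, 2: 3},
    levels_desc=[2, 1, 0],
)
```

Note how greedily giving the first user the highest feasible level can leave a later user on the same server with only the lowest level, which is exactly the resource-scarcity behavior discussed in Sect. 5.3.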
5 Experimental Evaluation
In this section, we evaluate the proposed approaches through an experimental study. All experiments were conducted on a Windows machine equipped with an Intel Core i5-7400T processor (4 CPUs, 2.4 GHz) and 8 GB RAM. The ILP model in Sect. 4.1 was solved with IBM ILOG CPLEX Optimizer.
5.1 Baseline Approaches
Our optimal approach and sub-optimal heuristic approach are compared with two other approaches: a random baseline and a state-of-the-art approach for solving the EUA problem:
Random: Each user is allocated to a random edge server, provided that server covers the user and has sufficient remaining resources to accommodate them. The QoS level assigned to the user is randomly chosen from among the levels the server's remaining resources can support.
VSVBP: This approach models the EUA problem as a variable sized vector bin packing (VSVBP) problem and maximizes the number of allocated users while minimizing the number of edge servers needed. Since VSVBP does not consider dynamic QoS, we randomly preset users' QoS levels, i.e., resource demands.
5.2 Experiment Settings
Our experiments were conducted on the widely-used EUA dataset , which includes data of base stations and end-users within the Melbourne central business district area in Australia. In order to simulate different dynamic QoS EUA scenarios, we vary the following three parameters:
Number of end-users: We randomly select a set of users. Each experiment is repeated 100 times with 100 different user distributions so that extreme cases, such as overly sparse or dense distributions, are neutralized.
Number of edge servers: Assuming the users selected above are covered by a certain number of servers, we make a percentage of those servers available to accommodate them.
Servers' available resources: Each server's available computing resources are generated following a normal distribution, with the mean set to the average resource capacity of each server in each dimension.
Table 2 summarizes the settings of our three sets of experiments. The possible QoS levels for each user, as well as the QoE model parameters, are preset. We employ two metrics to evaluate our approaches: (1) the overall QoE achieved over all users, for effectiveness, and (2) the execution time (CPU time), for efficiency.
|Number of users|Number of servers|Servers' available resources|
5.3 Experimental Results and Discussion
1) Effectiveness: Figures 5(a), 8(a), and 11(a) demonstrate the effectiveness of all approaches in experiment sets 1, 2, and 3, measured by the overall QoE of all users in the experiment. In general, Optimal obviously outperforms the other approaches across all experiment sets and parameters. The performance of Heuristic largely depends on computing resource availability, which is analyzed below.
In experiment set 1 (Fig. 5(a)), we vary the number of users from 100 to 1,000 in steps of 100. From 100 to 600 users, Heuristic results in higher total QoE than Random and VSVBP. Especially in the first three steps (100, 200, and 300 users), Heuristic achieves a QoE almost as high as Optimal. This occurs because in those scenarios resources are abundant, so almost all users receive the highest QoS level. However, as the number of users continues to increase while the amount of available resources is fixed, the computing resources per user become more scarce, making Heuristic no longer suitable. In fact, from 700 users onwards, Heuristic starts being outperformed by Random and VSVBP. Being greedy, Heuristic always tries to exhaust the edge servers' resources by allocating the highest possible QoS level to users, which is not an effective use of resources. For example, one user assigned the highest QoS level achieves a high QoE but consumes a large amount of resources; those same resources would suffice to serve two users at intermediate QoS levels, resulting in a greater overall QoE. Since a user's QoS level is randomly assigned by Random and VSVBP, these two methods use resources more effectively than Heuristic in those specific scenarios.
A similar trend can be observed in experiment sets 2 and 3. In resource-scarce situations, i.e. with the number of servers ranging from 10%-40% (Fig. 8(a)) and servers' available resources ranging from 5-25 (Fig. 11(a)), Heuristic shows nearly similar performance to Random and VSVBP (slightly worse in a few cases) for the same reason discussed previously. In those situations, the performance difference between Heuristic and Random/VSVBP is not as significant as in experiment set 1 (Fig. 5(a)). Nevertheless, the difference might be greater if resources were more limited, e.g. 1,000 users in both experiment sets 2 and 3, an average server resource capacity of 20 in set 2, and 50% of servers in set 3.
As discussed above, while suitable for resource-abundant scenarios, Heuristic has not proven superior when computing resources are limited. This calls for a more effective approach to the dynamic QoS problem under resource-scarce circumstances.
2) Efficiency: Figures 5(b), 8(b), and 11(b) illustrate the efficiency of all approaches in the study, measured by elapsed CPU time. The execution time of Optimal follows a similar pattern in all three experiment sets. As the experimental parameters increase from the starting point to a point somewhere in the middle – 600 users in set 1, 70% of servers in set 2, and 30 average server resource capacity in set 3 – the time quickly increases until it reaches a cap of around 3 seconds, a consequence of the problem being NP-hard. The rationale is that the complexity of the problem increases as we add more users, servers, and available resources, generating more possible solutions for Optimal to select from. After passing that mid-point, the time gradually decreases at a slower rate and then tends to converge. We notice that this convergence reflects the convergence of the total QoE produced by Optimal in each corresponding experiment set. After the experimental parameters pass the point mentioned above, the available resources steadily become more abundant, so more users can obtain the highest QoS level without competing with each other, leaving fewer possible options for Optimal and hence faster runs.
In experiment sets 1 and 2, the execution time of Heuristic grows gradually up to just 1 millisecond. However, it does not grow in experiment set 3 and instead stabilizes at around 0.5-0.6 milliseconds. This is because the amount of available resources does not impact the complexity of Heuristic, whose running time depends only on the numbers of users and servers.
5.4 Threats to Validity
Threat to construct validity. The main threat to construct validity lies in the bias in our experimental design. To minimize this potential bias, we conducted experiments with different changing parameters that have a direct impact on the experimental results, including the number of servers, the number of users, and the available resources. The result of each experiment set is the average of 100 executions, each with a different user distribution, to eliminate the bias caused by special cases such as over-dense or over-sparse user distributions.
Threat to external validity. A threat to the external validity is the generalizability of our findings in other specific domains. We mitigate this threat by experimenting with different numbers of users and edge servers in the same geographical area to simulate various distributions and density levels of users and edge servers that might be observed in different real-world scenarios.
Threat to internal validity. A threat to internal validity is whether an experimental condition makes a difference or not. To minimize this threat, we fix the other experimental parameters at neutral values while varying one parameter. For more sophisticated scenarios where two or more parameters change simultaneously, the results can generally be extrapolated from the obtained results, as discussed in Sect. 5.3.
Threat to conclusion validity. The lack of statistical tests is the biggest threat to our conclusion validity. This has been compensated for by comprehensive experiments that cover different scenarios varying in both size and complexity. For each set of experiments, the result is averaged over 100 runs of the experiment.
6 Related Work
Cisco coined the term fog computing, or edge computing, in 2012 to overcome one major drawback of cloud computing – latency. Edge computing has many unique characteristics, namely location awareness, widespread geographical distribution, mobility, a substantial number of nodes, a predominant role of wireless access, a strong presence of streaming and real-time applications, and heterogeneity. Those characteristics allow edge computing to deliver a very broad range of new services and applications at the edge of the network, further extending the existing cloud computing architecture.
QoE management and QoE-aware resource allocation have long been a challenge, dating back to the cloud computing era and before. Su et al. propose a game theoretic framework for resource allocation among media clouds, brokers, and mobile social users that aims at maximizing users' QoE and the media cloud's profit. While their work has some similarities to ours, e.g. the brokers can be seen as edge servers, there are several fundamental architectural differences. The broker in their work is just a proxy for transferring tasks between mobile users and the cloud, whereas our edge server is where the tasks are processed. In addition, the price for using the broker's or media cloud's resources varies over time and across brokers in their work, whereas we target a scenario where there is no price difference within a single service provider. Other work investigates the cost-QoE trade-off in the virtual machine provisioning problem in a centralized cloud, specific to the video streaming domain; there, QoE is measured by the processing, playback, or downloading rate.
QoE-focused architecture and resource allocation have started gaining attention in the edge computing area as well. One line of work proposes a novel architecture that integrates resource-intensive computing with mobile applications while leveraging mobile cloud computing, aiming to provide a new breed of personalized, QoE-aware services. Other studies tackle application placement in edge computing environments, measuring users' QoE based on three levels (low, medium, and high) of access rate, required resources, and processing time. The problem we address, user allocation, can be seen as the step after application placement. Another study focuses on the computation offloading scheduling problem in mobile clouds from a networking perspective, where energy and latency must be considered in most cases, and proposes QoE-aware optimal and near-optimal scheduling schemes for time-slotted scenarios that take into account the trade-off between users' mobile energy consumption and latency.
Apart from the aforementioned literature, there are a number of works on the computation offloading or virtual machine placement problems. However, they do not consider QoE, which is important in an edge computing environment where humans play a prominent role. Here, we seek to provide an empirically grounded foundation for the dynamic QoS/QoE edge user allocation problem, forming a solid basis for further developments.
7 Conclusion
App users' quality of experience is of great importance for app vendors who take user satisfaction seriously. Despite its significance, very limited work has considered this aspect in edge computing. We have therefore identified and formally formulated the dynamic QoS edge user allocation problem, with the goal of maximizing users' overall QoE, as the first step in tackling the QoE-aware user allocation problem. The problem has been proven NP-hard, and our experiments illustrate that the optimal approach is not efficient once the problem scales up. We therefore proposed a heuristic approach for solving the problem more efficiently. We have also conducted extensive experiments on a real-world dataset to evaluate the effectiveness and efficiency of the proposed approaches against a baseline approach and the state of the art.
Given this foundation, we have identified a number of possible directions for future work with respect to QoE, such as dynamic QoS user allocation in resource-scarce or time-varying situations, user mobility, service migration, and service recommendation, to name a few. In addition, a finer-grained QoE model incorporating various types of costs or network conditions could be studied next.
Acknowledgments. This research is funded by Australian Research Council Discovery Projects (DP170101932 and DP18010021).
-  Aazam, M., St-Hilaire, M., Lung, C.H., Lambadaris, I.: MeFoRE: QoE based resource estimation at fog to enhance QoS in IoT. In: 2016 23rd International Conference on Telecommunications (ICT). pp. 1–5. IEEE (2016)
-  Alreshoodi, M., Woods, J.: Survey on QoE/QoS correlation models for multimedia services. arXiv preprint arXiv:1306.0221 (2013)
-  Bonomi, F., Milito, R., Zhu, J., Addepalli, S.: Fog computing and its role in the internet of things. In: Proceedings of the first edition of the MCC workshop on Mobile cloud computing. pp. 13–16. ACM (2012)
-  Cerwall, P., Lundvall, A., Jonsson, P., Carson, S., Möller, R., Jonsson, P., Carson, S., Lindberg, P., Öhman, K., Sorlie, I., Queirós, R., Muller, F., Englund, L., Arvedson, M., Carlsson, A.: Ericsson mobility report. Ericsson, Stockholm (2018), https://www.ericsson.com/en/mobility-report/reports/november-2018
-  Chen, M., Zhang, Y., Li, Y., Mao, S., Leung, V.C.: EMC: Emotion-aware mobile cloud computing in 5G. IEEE Network 29(2), 32–38 (2015)
-  Chen, X.: Decentralized computation offloading game for mobile cloud computing. IEEE Transactions on Parallel and Distributed Systems 26(4), 974–983 (2015)
-  Ding, B., Chen, L., Chen, D., Yuan, H.: Application of RTLS in warehouse management based on RFID and Wi-Fi. In: 2008 4th International Conference on Wireless Communications, Networking and Mobile Computing. pp. 1–5. IEEE (2008)
-  Fiedler, M., Hossfeld, T., Tran-Gia, P.: A generic quantitative relationship between quality of experience and quality of service. IEEE Network 24(2), 36–41 (2010)
-  Garey, M.R., Johnson, D.S.: Computers and Intractability, vol. 29. W.H. Freeman, New York (2002)
-  Hande, P., Zhang, S., Chiang, M.: Distributed rate allocation for inelastic flows. IEEE/ACM Transactions on Networking (TON) 15(6), 1240–1253 (2007)
-  He, J., Wen, Y., Huang, J., Wu, D.: On the cost–QoE tradeoff for cloud-based video streaming under Amazon EC2's pricing models. IEEE Transactions on Circuits and Systems for Video Technology 24(4), 669–680 (2013)
-  Hemmati, M., McCormick, B., Shirmohammadi, S.: QoE-aware bandwidth allocation for video traffic using sigmoidal programming. IEEE MultiMedia 24(4), 80–90 (2017)
-  Hobfeld, T., Schatz, R., Varela, M., Timmerer, C.: Challenges of QoE management for cloud applications. IEEE Communications Magazine 50(4), 28–36 (2012)
-  Hong, S.T., Kim, H.: QoE-aware computation offloading scheduling to capture energy-latency tradeoff in mobile clouds. In: 2016 13th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON). pp. 1–9. IEEE (2016)
-  Hoßfeld, T., Seufert, M., Hirth, M., Zinner, T., Tran-Gia, P., Schatz, R.: Quantification of YouTube QoE via crowdsourcing. In: 2011 IEEE International Symposium on Multimedia. pp. 494–499. IEEE (2011)
-  Hu, Y.C., Patel, M., Sabella, D., Sprecher, N., Young, V.: Mobile edge computing—a key technology towards 5G. ETSI white paper 11(11), 1–16 (2015)
-  Lachat, A., Gicquel, J.C., Fournier, J.: How perception of ultra-high definition is modified by viewing distance and screen size. In: Image Quality and System Performance XII. vol. 9396, p. 93960Y. International Society for Optics and Photonics (2015)
-  Lai, P., He, Q., Abdelrazek, M., Chen, F., Hosking, J., Grundy, J., Yang, Y.: Optimal edge user allocation in edge computing with variable sized vector bin packing. In: International Conference on Service-Oriented Computing. pp. 230–245. Springer (2018)
-  Mahmud, R., Srirama, S.N., Ramamohanarao, K., Buyya, R.: Quality of experience (QoE)-aware placement of applications in fog computing environments. Journal of Parallel and Distributed Computing (2018)
-  Shenker, S.: Fundamental design issues for the future internet. IEEE Journal on selected areas in communications 13(7), 1176–1188 (1995)
-  Soyata, T., Muraleedharan, R., Funai, C., Kwon, M., Heinzelman, W.: Cloud-vision: Real-time face recognition using a mobile-cloudlet-cloud acceleration architecture. In: Computers and communications (ISCC), 2012 IEEE symposium on. pp. 59–66. IEEE (2012)
-  Su, Z., Xu, Q., Fei, M., Dong, M.: Game theoretic resource allocation in media cloud with mobile social users. IEEE Transactions on Multimedia 18(8), 1650–1660 (2016)