FlowBazaar: A Market-Mediated Software Defined Communications Ecosystem at the Wireless Edge

Wireless Internet access has brought legions of heterogeneous apps all sharing the same resources. However, current wireless edge networks with tightly coupled PHY/MAC that cater to worst or average case performance lack the agility to best serve these diverse sessions. Simultaneously, software reconfigurable infrastructure has become increasingly mainstream to the point that per packet and per flow decisions can be dynamically controlled at multiple layers of the communications stack. The goal of this work is to design, develop and demonstrate FlowBazaar, an open market-based approach to create a value chain from the end-user of an application on one side, to algorithms operating over reconfigurable infrastructure on the other, so as to enable an ecosystem wherein disparate applications are able to obtain necessary resources for optimal performance.

1. Introduction

Growth in wireless networks is being fueled by a multitude of new applications that require a diverse set of link characteristics for optimal operation. However, current wireless infrastructure, and the algorithms operating over it, are geared towards average or worst case performance and are ill-equipped to deal with such heterogeneity. Particularly in wireless edge (last-hop) networks, the tight coupling of PHY/MAC designs has created an ossified architecture under which there is little room for dynamic selection of mechanisms that best serve ongoing sessions. Furthermore, applications have no means of declaring what kind of network resources they actually require, leading to proposals for deep packet inspection with rigid classification of app types such as “Web” and “video” to decide what the applications’ needs are. Apart from privacy and Net Neutrality concerns, such rigid classification leaves no room for innovative applications that do not fit existing bins. This disconnect raises the question of whether it is possible to design a dynamic ecosystem that translates end-user value seamlessly into appropriate per-packet PHY/MAC mechanisms, and thereby enables a rich set of applications to evolve and flourish.

Our focus is on resource allocation at the final wireless hop to/from wireless devices. Our vision for FlowBazaar is to create an agile communications paradigm at this last-hop wireless edge that accounts for the entire chain of value progression illustrated in Figure 1. Per-packet mechanisms employed at the millisecond timescale at the MAC layer result in a certain Quality of Service (QoS), defined as a vector of statistical connection properties such as throughput, round-trip time (RTT), packet loss rate, and signal strength. Selection between available mechanisms on a per-flow basis is enabled by using a QoS Policy at a larger timescale (seconds). Hence, the QoS policy determines which flows or packet aggregates receive which available QoS. The end-user perception of a particular QoS depends on the application being used. For instance, the impact of latency and loss rate on YouTube is unlike that of Web browsing. The relation between QoS and perceived performance is the application Quality of Experience (QoE), defined as an application-specific map between the received QoS vector and an element of the set {1, ..., 5}, with 1 indicating the lowest and 5 indicating the highest satisfaction. Yet, end-users might have contrasting priorities for applications, and an application that one user finds important might be irrelevant to another. Thus, there must be a means for articulation of end-user value on a per-application basis. Is it feasible to construct such a framework and develop a prototypical implementation?

Figure 1. Value chain of data communication.

Instantiating Mechanisms via Reconfigurable Hardware: Software defined networking (SDN), a method for enabling dynamic per-packet rules (mechanisms) at the network layer, has received much recent publicity, and there is a growing realization that the concept is equally applicable to the physical and data link layers with programmable hardware. Indeed, in this work we will use software controlled antenna technologies to finely control radiation patterns beyond existing signal processing techniques. We will also leverage differentiated queueing mechanisms currently available in off-the-shelf (OTS) WiFi routers running OpenWRT (a stripped-down Linux version).

Instantiating Policy Decisions via OpenFlow Extensions: While per-packet decisions are best accomplished using a curated set of mechanisms on reconfigurable hardware, the combination of mechanisms across layers is what results in a particular QoS. For instance, jointly using a horizontally polarized antenna beam that aligns well with a receiver, coupled with priority queueing could result in a QoS vector that is better suited to a particular application. Since it is possible for QoS policy decisions to be made exclusively in software, they can be made at timescales corresponding to the aggregation of packets. Indeed, we present results on modifying an off-the-shelf SDN controller implementation to embrace statistical sampling of PHY/MAC to inform QoS policy decision making with the objective of extending the well established OpenFlow protocol to the PHY/MAC layers. Thus, the second step towards our goal will be focused on software-based policy decisions taken at the timescale of seconds pertaining to reconfigurable antennas and MAC operation.

Identifying QoE and Value: A particular QoS vector can be perceived quite differently by different applications. Whether a WiFi router or cellular data link can support a Skype call, a YouTube stream and a software update simultaneously, and with what frequency of re-buffering and dropped conversations, is a critical question. Measurements on the relationship between QoS and QoE are key to bridging subscriber value and resource allocation. Given this relationship, a device can compute the value of a wireless connection as a function of its open applications and their relative priorities. The most efficient way of allowing expression of this value is via an auction.

2. Related Work

Our work brings together several different areas, spanning SDN, QoS, QoE, auctions, and value determination. As such, we only cite work in the context of wireless networks that is directly relevant to the problem that we consider.

There has been much recent interest in extending the SDN idea to other layers. For example, CrossFlow (Shome et al., 2015, 2017) uses SDN OpenFlow principles to control networks of Software Defined Radios. In ÆtherFlow (Yan et al., 2015), the SDN/OpenFlow framework is used to bring programmability to the Wireless LAN setting. They show that this type of system can handle hand-offs better than the traditional 802.11 protocol. These SD-X extensions (X being the MAC layer in this case) focus on centralized configuration of the hardware and do not provide the sampled performance statistics that we desire.

Closer to our theme, systems such as AeroFlux (Schulz-Zander et al., 2014) and OpenSDWN (Schulz-Zander et al., 2015) develop a wireless SDN framework for enabling prioritization rules for flows that are identified as belonging to selected applications (such as video streaming) via middle-boxes using packet inspection. However, they do not tie such prioritization to the impact on application QoE or end-user value across competing applications from multiple clients. Nor do they use measured QoS statistics as feedback for reconfiguration.

There has been significant work on QoS as a function of the scheduling policy, e.g., a sequence of work starting with (Tassiulas and Ephremides, 1992), and follow-on work in the wireless context that resulted in algorithms such as backpressure-based scheduling and routing in wireless networks (Eryilmaz et al., 2005) and, more recently, scheduling that ensures that strict delay guarantees are met (Hou et al., 2009). Most of these works aim at maximizing throughput or minimizing loss rate, but they do not consider all the elements of QoS together. Also, they do not map received QoS to application QoE.

The map between QoS and QoE has been studied recently, particularly in wired networks. The work in this space attempts to determine the QoS value of a network, and then, based on data obtained directly from an app, match the observed QoS to an appropriate QoE. For example, Mok et al. (Mok et al., 2016) describe a method of determining the QoE for HTTP Streaming. Other work focuses on different applications, such as Skype QoE (Spetebroot et al., 2015) or general Web services (Spetebroot et al., 2015). Although the approach of the above work is feasible, there are few results in the space of wireless edge networks.

There has also been work on using price or auction-based resource allocation in the wireless context. On the analytical side, (Sun et al., 2006) considered the problem of auction-based wireless resource allocation. Here, users participate in a second-price auction and bid for a channel. It was shown that with a finite number of users, a Nash Equilibrium exists and the solution is Pareto optimal. In (Manjrekar et al., 2014), an auction framework is presented in which queues (representing apps on mobile devices) repeatedly bid for service in a second-price auction that determines which set of queues will be selected for service. They show that under a large system scaling (called the mean field game regime), the result of the auction is the same as that of the longest-queue-first algorithm, hence ensuring fair service for all. Our design of auction-based scheduling algorithms is motivated by these ideas.

In the context of experiments, a recent trial of a price-based system is described in (Ha et al., 2012). Here, day-ahead prices are announced in advance to users, who can choose to use their cellular data connection based on the current price. Thus, the decision makers are the human end-users, who essentially have an on/off control. Furthermore, the prices are not dynamic and have to be determined offline based on historical usage.

3. Overview of FlowBazaar

The FlowBazaar architecture is illustrated in Figure 2, in which we have shown the different elements of the architecture in a color coded manner. The three main units of our system are: (i) an off-the-shelf WiFi access point running the OpenWRT operating system and equipped with custom antennas, (ii) a centralized controller hosted on a Linux workstation, and (iii) multiple wireless stations (Windows/Linux/Android) enabled with our Client Middleware and running four applications – Skype, YouTube, Web Browser, and (large) File Download. The units have functionalities pertaining to packet mechanisms, QoS policy, application QoE, and end-user value, which we overview below. Tying together the units are a Controller Database in which we log all events, and a smaller Client Database at each station that obtains a subset of the data that it needs for decision making, both shown as yellow tiles.

Figure 2. FlowBazaar Architecture.

Per-Packet Mechanisms (blue tiles): At the level of data packets, we utilize two layers of software defined infrastructure, namely, (i) reconfigurable antennas, and (ii) reconfigurable queueing. Antennas can be oriented in a horizontally or vertically polarized configuration. Multiple Layer 2 queues can be created, and different per-packet scheduling mechanisms can be applied over them. When such mechanisms are applied to aggregates of packets or flows, the resulting QoS statistics at the queue level can be varied, with higher priority queues that match up to clients with higher signal strengths getting improved performance.

QoS Policy (orange tiles): Policy decisions that result in different QoS vectors via mechanism selection are made at a centralized controller that communicates using the OpenFlow protocol. We use a custom set of messages meant for Layer 1 and Layer 2 functionalities (labeled L1 Flow and L2 Flow, respectively). The Access Point runs SoftStack, an application that interprets OpenFlow messages and instantiates the mechanisms selected by the controller. Statistics relating to QoS are collected periodically by the Access Point including signal strengths, throughput, and RTT and the resulting sample statistics are returned by SoftStack back to the controller using the OpenFlow protocol.

Application QoE (beige tiles): A smart middleware layer at clients is used to interface with our system such that there is no need for applications (such as YouTube or Skype) to be aware of its existence. Machine learning tools are used offline to determine the map between QoS and QoE on a per-application basis to create lookup tables. The client middleware determines the foreground application on a device, contacts the Controller Database to receive information on possible QoS options, and translates it into the impact on QoE of the foreground application. This layer ensures that all of the above steps are transparent to the end user.

End-user Value (pink tiles): Clients are offered feasible QoS vectors under a market framework. The decision on which flows to admit to the high-priority queue is taken via a price auction using a local currency (a token allowance), which is conducted every 10 seconds. The resultant policy decisions in turn lead to a realization of the offered QoS. End-users set up priorities for different applications (at the timescale of weeks or months), and the Controller Database provides statistics of current market conditions (the bid distribution), using which a Value Engine in the client middleware determines what the value of winning and losing would be. Finally, a Bid Generator places a bid for service. Auction results translate into QoS policies that remain in place for 10 seconds.

Interactions: The chronological order of the interactions among the three functional units described in this section is shown in Figure 3. The Client Middleware app at each wireless station is responsible for recording the application preferences of the end users. This is the only input required from the end users, and can be considered as an initialization step carried out at the timescale of weeks or months. The Client Middleware, after determining the foreground application on the wireless station, requests the list of available QoS vectors from the Controller. Once it receives the list, it calculates the corresponding QoE values specific to the foreground application. Based on these QoE values, the Middleware determines the value of winning and losing using the Value Engine and places a bid accordingly. Bids from all participating clients are sent to the Controller, which conducts the auction. The results of the auction, which are the policy decisions, are sent to the Access Point using OpenFlow Experimenter messages. SoftStack interprets and implements these policy decisions, which leads to the update of the QoS vectors. These steps are executed once every 10 seconds by the Client Middleware, the Controller and SoftStack. The (updated) list of QoS vectors is then sent back to the Controller every second.
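To make the timing concrete, the following Python sketch mirrors this 10-second cycle; the function names (estimate_qoe, middleware_bid, controller_cycle) and the simple bid rule are illustrative placeholders rather than the actual FlowBazaar interfaces.

```python
import time

AUCTION_PERIOD_S = 10    # policy decisions are revisited every 10 seconds
NUM_PRIORITY_SLOTS = 2   # occupancy of the high-priority queue in our experiments

def estimate_qoe(app, qos_vector):
    """Placeholder for the per-application QoS-to-QoE lookup of Section 7."""
    return 3  # a neutral score; the real map returns a value in {1, ..., 5}

def middleware_bid(foreground_app, qos_options, preferences):
    """Client side: translate offered QoS vectors into QoE for the foreground
    app, weight by the stored user preference, and return a token bid."""
    best_qoe = max(estimate_qoe(foreground_app, q) for q in qos_options)
    return preferences.get(foreground_app, 1.0) * best_qoe  # toy bid rule

def controller_cycle(clients, qos_options):
    """Controller side: gather bids, pick winners, push the policy to SoftStack."""
    bids = {cid: middleware_bid(app, qos_options, prefs)
            for cid, (app, prefs) in clients.items()}
    winners = sorted(bids, key=bids.get, reverse=True)[:NUM_PRIORITY_SLOTS]
    # In the real system the result is packed into OpenFlow Experimenter
    # messages and instantiated by SoftStack at the Access Point.
    print("prioritized clients for the next 10 s:", winners)
    time.sleep(AUCTION_PERIOD_S)
```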

Figure 3. Chronological order of interactions

4. Main Results

Our main results are as follows:

  • SD-X: We develop OpenFlow extensions that can reconfigure antennas, create queues, set up queueing mechanisms, sample statistics and return values back to an OpenFlow controller. Named SoftStack, this extension is novel in that it enables a platform for experimentation with custom policies and configurations at the MAC and PHY (electromagnetic) layers, using the sample statistics as feedback to inform reconfiguration decisions. We create high and low priority queues, and enable the setting of filters to direct packets from a selected flow into a particular queue.

  • QoS-QoE: We develop maps between QoS statistics sampled by SoftStack and performance for four applications, namely, (i) YouTube, (ii) Skype, (iii) Web browsing and (iv) file download, which represent broad classes of current-day applications. Our contribution here is to explicitly account for buffered video while determining what the QoE of a particular QoS vector is likely to be for YouTube, which enables the QoE estimate to dynamically adjust to the buffer state.

  • Auction: We design a version of a third-price auction under the mean field game regime with participants whose budgets change dynamically. In doing so, we develop lightweight algorithms for determining the utility of obtaining service from either one of the queues. We use agent surplus as a means of tracking the state of each application, with a high QoE increasing surplus and a low QoE decreasing it. We consider three methods of resource allocation, namely (i) no differentiation, (ii) FlowBazaar using an auction with forward-looking value determination, and (iii) FlowBazaar with greedy utility maximization (which is the best case of an auction with myopic agents).

  • YouTube: Using experiments conducted in an anechoic chamber (to prevent extraneous interference), we show that in a configuration of YouTube flows alone, high QoE matches up with low re-buffering rate, and both the auction system and utility maximization outperform the default vanilla case in choosing the correct sessions to prioritize at each time. Thus, we observe the phenomenon that a session that needs priority service to fill its buffer gets appropriate resources, while those that do not are relegated to low priority.

  • Mixed Traffic: We conduct experiments with a mix of heterogeneous applications, in proportions roughly corresponding to their popularity on the current Internet. We show how, in a high load situation with different application priorities, FlowBazaar with a forward-looking value function can improve the average QoE for all sessions (not just the high priority ones), while greedy utility maximization attains a lower QoE. Finally, we show that in a low load situation, bids drop and “free” service follows, showing that the market is able to adapt to the offered traffic.

  • Reconfigurable Antennas: We conduct experiments with reconfigurable antennas that use analog beam forming to show how SoftStack can be used to change radiation patterns as desired. We show how reconfiguring antennas can increase the received signal strength at stations and can improve QoE at selected stations.

5. Discussion

Our implementation described above is over WiFi, and could directly be applied to congested wireless environments such as public access hotspots and enterprise networks. However, the same framework can be used in future small-cell cellular data networks. One option for implementing our market could be via a token-based auction system (as we do in our prototype) in which users get token allowances as opposed to byte allowances as they do in current data plans. Thus, today’s 3 GB-per-month data plan would be replaced with a 3 kilo-tokens-per-month plan (or 100 tokens-per-day). Since the smart middleware makes bid decisions and the auction implicitly tracks demand, we do not need to set explicit time-of-day prices such as “free nights and weekends.” Finally, all the end-user need do is set up their relative priorities for different applications, and no further user input is needed to see improved performance.

6. Policies and QoS

In this section, we describe the design of SoftStack. The OpenFlow protocol enables a centralized controller to communicate with network switches and modify flow table entries without knowledge of internal (vendor specific) details. We exploit this separation of control and data planes to implement policy decisions using SoftStack. Using experimenter messages to send SoftStack commands ensures that we do not require implementation of specific changes at the controller. We use an off-the-shelf TP-Link WR1043ND v3 router with OpenWRT Chaos Calmer as the firmware for our implementation. We made the choice of OpenWRT because of its support for Linux based utilities like tc (traffic control) for implementing per packet mechanisms. Since OpenWRT does not natively support SDN, we used CPqD SoftSwitch (CPqD, 2015), an OpenFlow 1.3 compatible user-space software switch implementation.

We then made modifications to SoftSwitch to include SoftStack capabilities. Such capabilities include the ability to modify mechanisms across different layers of the network stack. SoftStack empowers us to make configuration changes at both the data link and the physical layers at the Access Point, in addition to the collection of statistics related to the implemented per packet mechanisms and the connected clients. We define two types of SoftStack commands for implementing the described capabilities, Policy commands and Statistics commands. The rationale behind this separation is to differentiate policy decisions from statistics collection. Experimenter messages are used to communicate these commands to the Access Point using OpenFlow.

Figure 4. Policy Command Packet
Figure 5. Queue Statistics Packet
Figure 6. Client-specific Statistics Packet

6.1. Policy Commands

Policy commands allow us to choose between available mechanisms at different layers. Every time a Policy command is sent, it is paired with a Solicited response that is generated by the receiver and sent to the controller using an experimenter message. A Solicited response message thus provides us with the means of retransmission of a failed Policy command, thereby guaranteeing reliability.

We define the format of the policy experimenter messages, which is shown in Figure 4. The Controller packs a policy command in the decided format and sends it to the Access Point using OpenFlow. On receiving the message, SoftStack unpacks it, identifies the specific policy command using the type field, and performs the corresponding operation. Using this framework, we implemented two specific policy commands for the MAC and PHY layers.
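As a concrete, deliberately simplified illustration, the sketch below packs a hypothetical queue-creation Policy command into the payload of an OpenFlow Experimenter message using Python's struct module; the experimenter ID, command codes, and field layout are stand-ins rather than the actual format of Figure 4.

```python
import struct

# Assumed field layout for illustration only; the real format is in Figure 4.
SOFTSTACK_EXPERIMENTER_ID = 0x00ABCDEF  # hypothetical experimenter ID
CMD_QUEUE_CREATE = 1                    # hypothetical Policy command types
CMD_ANTENNA_CONFIG = 2

def pack_queue_policy(queue_id: int, rate_kbps: int, priority: int) -> bytes:
    """Pack a queue-creation Policy command as an Experimenter payload."""
    # '!' = network byte order; I = uint32, H = uint16, B = uint8
    return struct.pack("!IHIIB", SOFTSTACK_EXPERIMENTER_ID,
                       CMD_QUEUE_CREATE, queue_id, rate_kbps, priority)

def unpack_policy(payload: bytes):
    """SoftStack side: recover the command type and its arguments."""
    exp_id, cmd_type, queue_id, rate_kbps, priority = struct.unpack(
        "!IHIIB", payload)
    return cmd_type, queue_id, rate_kbps, priority

if __name__ == "__main__":
    msg = pack_queue_policy(queue_id=10, rate_kbps=6000, priority=0)
    print(unpack_policy(msg))
```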

1. MAC Layer Queue Creation Command: At the data link layer, we need a means of providing variable queueing schemes. These can be implemented on OpenWRT using the Linux utility tc (traffic control), with which we can create multiple queues with different queuing disciplines and priorities. Decisions at the data link layer involve choosing the optimal queue from the list of available queues, changing the throughput caps on the individual queues and enabling or disabling sharing of excess (unused) throughput between them.

In our experiments, we create two (high and low priority) queues with different token rates using a hierarchical token bucket scheme. Tokens may be borrowed between queues, meaning that a queue with no traffic lends its unused tokens to the other. We also create a default queue that handles any background traffic. We also create several bins of queues, each corresponding to a particular range of signal strengths. For instance, we can conduct experiments in which stations whose RSSI (in dBm) lies in one range are eligible for one set of queues, while those in another range are eligible for a different set of queues. We thus maintain fair throughput allocation and ensure that stations with poor signal strength do not negatively affect the throughput of stations that have a higher signal strength.
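For reference, a queue setup of this kind can be scripted over tc roughly as follows; the interface name, rates, and class IDs below are illustrative choices rather than the exact values used in our deployment.

```python
import subprocess

IFACE = "wlan0"  # assumed wireless interface name on the OpenWRT AP

def sh(cmd: str) -> None:
    """Run a tc command, raising if it fails."""
    subprocess.run(cmd.split(), check=True)

def setup_queues(high_kbit=6000, low_kbit=3000, ceil_kbit=10000):
    """Two priority queues plus a default queue using HTB; rates are
    illustrative, and the ceil values let an idle queue lend its tokens."""
    sh(f"tc qdisc add dev {IFACE} root handle 1: htb default 30")
    sh(f"tc class add dev {IFACE} parent 1: classid 1:1 htb rate {ceil_kbit}kbit")
    sh(f"tc class add dev {IFACE} parent 1:1 classid 1:10 "
       f"htb rate {high_kbit}kbit ceil {ceil_kbit}kbit prio 0")   # high priority
    sh(f"tc class add dev {IFACE} parent 1:1 classid 1:20 "
       f"htb rate {low_kbit}kbit ceil {ceil_kbit}kbit prio 1")    # low priority
    sh(f"tc class add dev {IFACE} parent 1:1 classid 1:30 "
       f"htb rate 1000kbit ceil {ceil_kbit}kbit prio 2")          # default/background

def direct_flow_to_queue(client_ip: str, classid: str = "1:10") -> None:
    """Filter that steers a selected client's downlink packets into a queue."""
    sh(f"tc filter add dev {IFACE} parent 1: protocol ip prio 1 "
       f"u32 match ip dst {client_ip}/32 flowid {classid}")
```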

2. PHY Layer Antenna Configuration Command: At the physical layer, we integrate reconfigurable antennas fabricated at our laboratory. These antennas may be configured for horizontal or vertical polarization. The value of these antennas is that, as nodes move and change orientation, aligning the polarization of the radiation patterns at the transmitter and receiver enhances the received signal strength by an order of magnitude.

We replace the generic omnidirectional antennas of our WiFi access point with two such antennas, and use one each at the mobile stations. Reconfiguration is achieved by means of applying an appropriate bias to a diode-based controller. The bias signal is provided via an Arduino microcontroller, which in turn is connected via USB to an off-the-shelf access point or mobile station (an Intel NUC in our case). We also created a custom application antenna-config that runs on OpenWRT (or Linux on a NUC), which is controlled via the Antenna Configuration Command.
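A minimal sketch of the client-side control path follows, assuming the Arduino applying the diode bias accepts single-character polarization commands over its USB serial port; both the port name and the 'H'/'V' protocol are illustrative, not the actual antenna-config interface.

```python
import serial  # pyserial

def set_polarization(mode: str, port: str = "/dev/ttyUSB0") -> None:
    """Switch the reconfigurable antenna to horizontal ('H') or vertical ('V')
    polarization by sending a one-character command to the bias controller."""
    assert mode in ("H", "V")
    with serial.Serial(port, baudrate=9600, timeout=1) as arduino:
        arduino.write(mode.encode("ascii"))
```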

6.2. Statistics Commands

Policy changes result in changes to the QoS statistics of the queues and the signal strengths of connected devices. We define Statistics commands to collect these results and send them back to the controller for analysis. Queue statistics include cumulative counts of downlink packets, bytes and dropped packets. Client-specific statistics consist of average Round Trip Times (RTT; which includes both the RTT from the base station to the client as well as the RTT from the base station to the wide-area destinations with which the client communicates) and signal strength (RSSI). Since statistics collection is performed periodically (once every second) and sent to the controller using experimenter messages, we label such messages as Unsolicited response messages.

Similar to Policy commands, we define the structure of both Queue and Client-specific Statistics messages. After collecting the respective statistics, SoftStack packs the data in the desired format and sends them to the Controller using OpenFlow. On receiving these messages, the Controller unpacks them, identifies the type from the header information and then saves the extracted data to the database. The packet formats of the two types of Statistics messages are shown in Figure 5 and Figure 6.

The client-specific statistics, together with the statistics of the queue it is placed in, constitute the Quality of Service (QoS) vector for a client, containing throughput, packet loss rate, RTT and RSSI. This QoS is then used to estimate the Quality of Experience (QoE) for the client depending on the current application. The details of this mapping are explained in the next section.
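For illustration, the per-client QoS vector can be assembled from two successive statistics reports roughly as follows; the field names are placeholders that mirror the statistics listed above.

```python
def qos_from_reports(prev, curr, interval_s=1.0):
    """Turn two successive cumulative queue reports plus the latest client
    report into a QoS vector (throughput, loss rate, RTT, RSSI)."""
    tx_bytes = curr["bytes"] - prev["bytes"]
    tx_pkts = curr["packets"] - prev["packets"]
    dropped = curr["dropped"] - prev["dropped"]
    return {
        "throughput_kbps": 8 * tx_bytes / 1000.0 / interval_s,
        "loss_rate": dropped / max(tx_pkts + dropped, 1),
        "rtt_ms": curr["rtt_ms"],      # wireless plus wide-area RTT, averaged
        "rssi_dbm": curr["rssi_dbm"],
    }

# Example with hypothetical reports taken one second apart:
prev = {"bytes": 1_000_000, "packets": 900, "dropped": 5}
curr = {"bytes": 1_400_000, "packets": 1250, "dropped": 7,
        "rtt_ms": 42.0, "rssi_dbm": -48}
print(qos_from_reports(prev, curr))
```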

7. Mapping QoS to QoE

Given a set of QoS parameters, we associate them with discrete QoE scores from the set {1, ..., 5} by leveraging the relationship between QoS and QoE for different applications. We consider four applications—YouTube, File Download, Web Browsing, and Skype—for which we discuss the QoS to QoE mapping next.

YouTube: In (Mok et al., 2016), a decision tree classifier is developed to predict the relation between the Best Initial Bit Rate (BIBR) for streaming YouTube videos with minimal re-buffering and network-level QoS parameters. Using testbed experiments, the authors produced a decision tree that predicts the BIBR based on network level performance parameters. For example, according to (Mok et al., 2016), one way to ensure high QoE (1080p video with essentially no re-buffering) for YouTube is to provide it with at least 3364 kbps of throughput, together with correspondingly low RTT (mean and median), RTT jitter, and packet loss rate. We map the video bitrate and the resulting video quality to our discrete 1-5 scale based on their results.

In addition to the initial condition, the YouTube videos that we consider for our experiments play for about five minutes on average, and YouTube progressively downloads and buffers several minutes of a video on the end-device if the network conditions permit. Thus, the results in (Mok et al., 2016) (which were geared toward estimating the best initial bit rate) are not quite sufficient for the purpose of tracking QoE in a dynamic fashion, since if some part of a YouTube video is already buffered, then the dependence of the QoE on immediate QoS parameters decreases as the video is played out of the buffer. Hence, we make the assumption that if video is partly buffered, then the user’s desire for better QoS (and by extension, QoE) is reduced proportionally. We use the following parameterization to account for lower QoS and QoE requirements when video is buffered as compared to the “initial” state when the playback buffer is almost empty: if the end-user has a video buffer of 20 seconds or less, we consider this as the initial condition, and the previously described rules for assigning QoE scores based on the bitrate apply. If the end-user has between 20 and 50 seconds of available video buffer, we assign a multiplier of 0.8 to all QoS parameters. If the video buffer is between 50 and 100 seconds or between 100 and 150 seconds, we use multipliers of 0.5 or 0.3, respectively, for all QoS parameters.
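The buffer-dependent rule above can be encoded compactly as follows; the thresholds and multipliers come directly from the text, while treating buffers beyond 150 seconds like the 100-150 second bin (and applying the multiplier to the QoS requirements) is our own simplification.

```python
def youtube_qos_multiplier(buffered_s: float) -> float:
    """Scale YouTube's QoS requirements according to the playback buffer;
    20 s or less of buffer is treated as the initial condition."""
    if buffered_s <= 20:
        return 1.0
    if buffered_s <= 50:
        return 0.8
    if buffered_s <= 100:
        return 0.5
    return 0.3  # 100-150 s of buffer (and, as a simplification, beyond)

# e.g. with 3364 kbps required initially, a 60-second buffer halves the
# effective throughput requirement:
print(3364 * youtube_qos_multiplier(60))  # -> 1682.0
```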

To emulate YouTube sessions, we play the videos on the “Trending” list on YouTube.com one at a time at 1080p resolution. We measure the RTT to YouTube.com from the AP every second and add it to the RTT values measured on the wireless edge link. YouTube also allows us to capture the buffer state (seconds buffered), and we use that to determine the QoS to QoE map for that session every 10 seconds. This also allows us to correlate QoE with rebuffering events, which we discuss in Section 9.

File Transfer: In (Khan and Toseef, 2011), the relationship between QoS and QoE for large file transfers is described. The results show that the Mean Opinion Score (MOS; which we relate to QoE in our case) for file transfer is a non-decreasing concave function of throughput and is almost constant when throughput is above 800kbps. Moreover, they find that the RTT has minimal impact on the QoE of file download, provided RTT is less than 200ms, while it has a large impact when RTT is greater than 200ms.

We use 100 MB files on a local server that is connected to the Internet via Gigabit ethernet, and emulate a file download by downloading a file on a wireless station and then deleting it. We add the wireless and wired RTT to obtain the total RTT. The RTT to the local server is always less than 200 ms in our system, so we disregard its impact on QoE and only consider QoE to be a function of throughput. Hence, we divide our QoS bins into [800, 600, 400, 256, 128] kbps; we assign a QoE score of 5 when throughput is above 800 kbps, a QoE score of 4 when throughput is between 600 and 800 kbps, and so on.

Web Browsing: To emulate a Web browsing session, we selected a list of the most popular websites worldwide from Wikipedia, and pick a website uniformly at random to visit, one at a time. After each download, we flush the browser cache. We measure the RTT for the website, and add the wireless RTT to it. Following the procedure developed in (Egger et al., 2012), we divided RTT into the bins [0, 20, 40, 60, 100] ms. We then assign a QoE score of 5 if RTT is between 0 and 20 ms, a QoE score of 4 if RTT is between 20 and 40 ms, and so on.
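Both of these maps reduce to simple threshold lookups, sketched below; the handling of values that fall exactly on a bin boundary is our own choice.

```python
def file_download_qoe(throughput_kbps: float) -> int:
    """Throughput bins [800, 600, 400, 256, 128] kbps mapped to QoE 5..1."""
    for score, floor in [(5, 800), (4, 600), (3, 400), (2, 256), (1, 128)]:
        if throughput_kbps >= floor:
            return score
    return 1  # below 128 kbps

def web_browsing_qoe(rtt_ms: float) -> int:
    """RTT bins [0, 20, 40, 60, 100] ms mapped to QoE 5..1 (lower is better)."""
    for score, ceiling in [(5, 20), (4, 40), (3, 60), (2, 100)]:
        if rtt_ms <= ceiling:
            return score
    return 1  # above 100 ms

print(file_download_qoe(700), web_browsing_qoe(35))  # -> 4 4
```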

Skype: The behavior of Skype is much like that of YouTube, except that the video resolution is typically 720p and rebuffering does not happen. Microsoft provides guidelines on Skype requirements (Microsoft, 2017), and we use them to scale the YouTube initial-condition decision tree to obtain an equivalent one for Skype. For example, Skype only requires 1577 kbps of throughput for smooth video at 720p. To emulate a Skype session, we use the Web version of Skype on a mobile station that contacts a twin station in our lab that is connected to the Internet using Gigabit ethernet. Each station is provided with an HD camera, so that each call has both voice and video. As before, the RTT is measured as the sum of the wireless and wired parts.

In the experiments that are described in Section 9, we generated traces of the four types of applications based on their relative popularities (time that typical users spend using them per (Ericsson, 2015)). In practice, with a load of 10 sessions this corresponds to 4 YouTube, 2 Skype, 3 Web browsing, and 1 file download session(s).

8. Value and Auctions

We develop an analytical model based on game-theoretical ideas that will enable us to understand how to map QoE to end-user value. The framework that we develop lies in the class of games called Mean Field Games (MFG) (Lasry and Lions, 2007; Iyer et al., 2014; Manjrekar et al., 2014) that apply to strategic behavior in large scale systems. We will first develop the general framework below, while our parameter selections for implementation are presented in Section 8.3.

8.1. Agent Model

We consider a market wherein a large number of agents periodically purchase network data services by using an internal currency (tokens). These agents represent the Bid Generator in each device in the FlowBazaar system, while the tokens are assumed to be issued by the Service Provider at a fixed rate and can be accumulated by the agent. The entire agent set is randomly divided into many subsets, a division naturally induced by geographic constraints with mobile agents. The number of agents in each subset is assumed to be identical and is denoted by $N$. Agents move across different subsets, i.e., they move between access points randomly. Time is slotted, and each discrete time unit models the duration between successive auctions in the real system, which we choose to be 10 seconds.

Now, at each discrete time instant and in each subset, assume that “high-quality” service is provided to some fixed number of agents, $k$. We assume that $k < N$. Thus, $k$ corresponds to the number of sessions that can simultaneously be provided a high QoS in the real system, while $N$ is the total number of competing sessions.

Agents in a particular subset participate in a $(k+1)$-th price auction (a generalization of the second-price auction) to compete for the $k$ units of high-quality service. In such an auction, the $k$ highest bidding agents are selected as winners, and each has to pay a number of tokens equal to the $(k+1)$-th highest bid made across the $N$ agents. It is well known that such an auction promotes truth telling (or in this case, bidding one’s true value). The rest of the agents will be served via nominally “low” quality (free).
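A minimal sketch of this auction rule follows; with k = 2 it reduces to the third-price auction used in our experiments, and the bid values shown are hypothetical.

```python
def kplus1_price_auction(bids: dict, k: int):
    """Top-k bidders win high-quality service; each pays the (k+1)-th
    highest bid (with k = 2 this is a third-price auction)."""
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winners = [agent for agent, _ in ranked[:k]]
    price = ranked[k][1] if len(ranked) > k else 0  # free if fewer than k+1 bids
    return winners, price

# Hypothetical bids from six competing sessions:
bids = {"yt1": 4, "yt2": 3, "sk1": 5, "wb1": 1, "fd1": 0, "yt3": 2}
print(kplus1_price_auction(bids, k=2))  # -> (['sk1', 'yt1'], 3)
```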

The end-user’s state of happiness is modelled in the game theoretic setting by the so-called surplus of the agent, which can be thought of as the net happiness accrued by the agent thus far. Hence, some surplus change will occur at the end of each auction based on the outcome seen by that agent. In other words, winners (who obtain a larger QoE) would potentially see a larger increase in surplus than losers (who obtain a smaller QoE). Obtaining a poor QoE can also cause a decrease in surplus.

Agents in such a market can be specified in terms of the currently active app, the surplus, and the number of tokens that they possess. Each agent encounters a repeated decision making problem, and must place its bid based on the perceived value (accounting for the current surplus and a prediction of the future) as well as its belief about what its competitors are likely to bid. This is precisely the setting of the MFG, and our model is as follows.

State: Agent $i$ has a private state $x_i = (a_i, s_i, b_i)$. Here, $a_i$ stands for the type of the application, $s_i$ is the cumulative surplus of the agent, which may increase or decrease due to the realization of the service qualities, and $b_i$ is the token budget of the agent.

Bid Distribution: As mentioned above, the agents must place their bids under some beliefs about their competitors. We denote the assumed bid distribution (common across all agents) in the market as $\rho$. This belief distribution is obtained from the auction server (Controller), which collects the bids made over intervals of time and provides the empirical distribution back to all agents.

Payment: The number of tokens paid by agent $i$ after each auction, which must lie in $[0, b_i]$, is denoted by $p_i$.

Income: During each time instant, each agent in the system receives a fixed number of tokens as income, denoted by $I$.

Utility Function: Based on the surplus of an agent, concave utility functions $U_a(\cdot)$ are defined for the different applications $a$. Further, we also consider heterogeneous agent preferences, since different agents may value the same application differently. We capture this heterogeneity by using different scalings of the utility function across different agent types.

Regeneration Factor: Since the agents are free to join or leave the market (or disconnect from the current AP), we define a regeneration factor $\kappa$, such that after each time instant, an agent may quit the system with probability $\kappa$. Further, we assume that once an agent leaves the system, a new active agent joins immediately to ensure that there are $N$ agents in each subset.

State Transition: After each auction, the surplus and budget of agent $i$ are updated as

$s_i \leftarrow s_i + \delta_{a_i}$ and $b_i \leftarrow b_i + I - p_i$,

where $\delta_{a_i}$ represents the expected amount of surplus change based on the auction result and the application type (a win yields a larger change than a loss), and $p_i$ is zero for agents that lose the auction.

A particular end user might arbitrarily decide to change from one app to another. The transition of the application type between successive auctions can be modeled via a Markov Chain (MC), whose stationary distribution reflects the popularity of different apps. Since we consider four applications, there are four possible transitions from any given application. The cumulative surplus gained by a particular application is not carried over to a new application. Thus, if an application switch occurs, the surplus is reset to some initial value $s_0$.
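One step of this agent dynamics can be simulated as below; the surplus changes and app-transition rows used here are those later listed in Table 1 (Section 8.3), while the income and reset-surplus values are placeholders, since the actual parameters appear only there.

```python
import random

# Surplus change on win/lose and app-transition rows, following Table 1.
DELTA = {"YT": (4, -1), "WB": (3, 0), "FD": (2, -1), "SP": (3, -2)}
TRANSITION = {
    "YT": {"YT": 0.80, "WB": 0.10, "FD": 0.05, "SP": 0.05},
    "WB": {"YT": 0.10, "WB": 0.70, "FD": 0.15, "SP": 0.05},
    "FD": {"YT": 0.05, "WB": 0.05, "FD": 0.85, "SP": 0.05},
    "SP": {"YT": 0.01, "WB": 0.10, "FD": 0.05, "SP": 0.84},
}

def step(app, surplus, budget, won, payment, income=1, reset_surplus=0):
    """One auction epoch: update surplus and budget, then possibly switch app."""
    delta_win, delta_lose = DELTA[app]
    surplus += delta_win if won else delta_lose
    budget += income - (payment if won else 0)  # only winners pay
    next_app = random.choices(list(TRANSITION[app]),
                              weights=list(TRANSITION[app].values()))[0]
    if next_app != app:
        surplus = reset_surplus  # surplus is not carried across applications
    return next_app, surplus, budget
```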

8.2. Value Computation and Bid Declaration

We study two methods of computing value, which have different levels of complexity and accuracy.

Forward-Looking Value Function: A fully rational agent needs to account for the future evolution of its state while determining the value of obtaining high or low quality service. Since our system is Markovian, this computation can be performed via a Markov Decision Process (MDP). The agent’s decision problem is:

$V(a, s, b) = \max_{\nu \in \mathcal{B}(b)} \mathbb{E}\left[ U_a(s') + (1-\kappa)\, V(a', s', b') \right]$,     (1)

where $\mathcal{B}(b)$ is the set of all bids consistent with the budget available to the agent, the expectation is taken over the auction outcome under the bid distribution $\rho$ and the application transition, $(a', s', b')$ is the next state under the transition described above, and $s' = s_0$ when an application switch occurs. In our implementation, our first approach is to solve the above MDP to find the optimal bid (which in turn leads to the selection of flows for prioritization).

Greedy Value Function: While computing the solution of an MDP yields the optimal bid from an agent’s perspective, it is more complex than myopically using the marginal utility as the value, i.e., simply setting

$\nu = U_a(s + \delta_a^{\mathrm{win}}) - U_a(s + \delta_a^{\mathrm{lose}})$,     (2)

where $\delta_a^{\mathrm{win}}$ and $\delta_a^{\mathrm{lose}}$ are the surplus changes on winning and losing, respectively. Although this approach is not accurate, it can be used to represent boundedly-rational agents, i.e., agents that take sub-optimal actions in the interest of lower complexity.

Computation of the value function provides two results, namely, (i) the map between state and value, and (ii) the map between state and bid. As mentioned above, since the $(k+1)$-th price auction is incentive compatible (bidding one’s true value is the dominant action) (Manjrekar et al., 2014), the bids follow the value determined above.

Now, the maximum possible system-wide utility that can be attained using the greedy approach is simply

(3)

where the sum is taken over all agents. Essentially, the above expression ignores the budget constraints in (2), and provides an upper bound on the overall utility feasible with the greedy approach. Note that the upper bound cannot actually be achieved via a budget constrained auction, and requires the presence of a “genie” that obtains the state of the agents accurately by some means, and then computes the utilities. Our second approach will be to select flows via such a genie-aided scheme in order to compare the best possible greedy scheme with the forward-looking auction system.

8.3. Numerical Evaluation

Given the MDP formulation above, the bid distribution $\rho$, the transition kernel, the best response policy obtained from (1), and the stationary distribution of the state together constitute a Mean Field Equilibrium (MFE). The convergence of the optimal value function and the existence of an MFE follow from a well established procedure (Manjrekar et al., 2014), which, however, is not the emphasis of this paper.

We numerically simulated the system to find viable parameters for conducting testbed experiments. The parameters chosen in the simulations are the income (token) rate, the initial surplus, the initial budget, the surplus reset value after an application switch, the regeneration factor, and a concave utility function of surplus; the total number of agents is 1 million.

We created two agent types (T-1 and T-2) with different utility scaling parameters, shown in Table 1. Also shown in Table 1 are the surplus increase/decrease values on winning and losing, and the application transition probabilities (common across all agents). Note the acronyms for the four apps: YouTube (YT), Web Browsing (WB), File Download (FD) and Skype (SP). The app transition probabilities are chosen so as to generate a stationary distribution consistent with measurements on the relative time spent on each as per (Ericsson, 2015).

App   Utility scaling      Surplus change      Transition probabilities
      T-1     T-2          Win     Lose        YT      WB      FD      SP
YT    1       2.5          4       -1          0.80    0.10    0.05    0.05
WB    2       1.5          3       0           0.10    0.70    0.15    0.05
FD    1.5     2            2       -1          0.05    0.05    0.85    0.05
SP    2.5     1            3       -2          0.01    0.10    0.05    0.84
Table 1. System parameters: utility scaling for the two agent types, surplus change on winning/losing the auction, and application transition probabilities.
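As a quick sanity check that these transition probabilities induce an application mix consistent with the targeted popularity, the stationary distribution of the chain in Table 1 can be computed directly, e.g.:

```python
import numpy as np

# Rows/columns ordered as YT, WB, FD, SP (Table 1).
P = np.array([
    [0.80, 0.10, 0.05, 0.05],
    [0.10, 0.70, 0.15, 0.05],
    [0.05, 0.05, 0.85, 0.05],
    [0.01, 0.10, 0.05, 0.84],
])

# The stationary distribution pi solves pi = pi P; take the left eigenvector
# of P associated with the largest (unit) eigenvalue and normalize it.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()
print(dict(zip(["YT", "WB", "FD", "SP"], pi.round(3))))
```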

Simulations under these parameters yielded fast convergence and bounded bids, thus providing a sanity check to go ahead with implementation. We do not present the simulation results due to space constraints, and will directly present experimental results in the next section.

9. Experimental Results

We ran a series of experiments to demonstrate the benefit of the market-based system over the vanilla system. We used a WiFi router with SoftStack as the Access Point and three Intel NUCs to simulate up to 10 clients (agents) for our experiments. The three NUCs had 5th generation i7 processors with 8 GB of memory and ran the Ubuntu Operating System. As such, they were powerful enough to run three traffic intensive sessions like Skype and YouTube simultaneously and four less intensive sessions like Web Browsing and Download. It is relatively easy to measure relevant session information such as ports used by an application, time taken to complete a download, page load time for a browsing session and play/load progress for YouTube sessions. All such information is collected every second and written to the database, so that it can be easily shared with the other modules of the system. The NUCs were paired with three laptops connected to the Internet via Gigabit ethernet to serve as receivers for Skype sessions.

We compared the performance of three policies—no differentiation (vanilla), FlowBazaar with MDP-based auction and greedy maximization of system utility—in the presence of different types of traffic. To implement the greedy scheme, we created a genie application at the Controller that is provided with all the (true) surplus values, foreground applications and QoS statistics, and maximizes expression (2) every 10 seconds.

We ran the experiments both in an anechoic chamber and in a production environment to verify the results and the stability of the system. For the first set of experiments, we considered only real-time traffic (YouTube), which has both throughput and latency requirements. The next set of experiments had both real-time and non real-time traffic, where the latter served as background traffic. The third set of experiments was conducted to validate the claim that SoftStack can change the radiation pattern to provide better QoS to a subset of clients.

9.1. YouTube Performance

Figure 7. Policies performance comparison
Figure 8. Correlation between QoE and rebuffering
Figure 9. Utility Comparison

The first set of experiments was meant to be a stress test to validate the system presented in the previous sections. This experiment was run in the anechoic chamber to filter out interference from other clients operating in the same channel. Each NUC was running two YouTube sessions to simulate a total of 6 clients. In the case of the MDP-based auction, the Access Point was set up with two downlink queues and a default queue for any background traffic. The number of occupants of the high priority queue was set to two. The throughput limits for the high and the low priority queues were set such that clients in the high priority queue would experience better QoS than those in the low priority queue, thus incentivizing users to bid higher to experience better QoE. For the Greedy approach, we used the same setup as that for the MDP-based auction. For the no differentiation case, we just set up a single queue with the same total throughput limit as that of the two queues in the previous scenarios. We ran scripts at the controller to run the MDP-based auction or the Greedy approach and to calculate the QoE for each client based on the measured QoS values.

In Figure 7, we compare the performance of the three policies. Each group of bars represents the average QoE value, which is on the right y-axis, along with the QoS values on the left y-axis. It is evident from the graph that both the winner of the MDP-based auction and the Greedy approach experience much better QoE than the no differentiation (or vanilla) case. This validates our claim that the proposed auction system is superior to the vanilla case. We also captured the actual performance of YouTube on the client by measuring the number of rebuffering events over the lifetime of the agents. Figure 8 shows that the winners in the MDP-based auction experience far fewer rebuffering events, and that the QoE is correlated with the number (or fraction) of rebuffering events.

9.2. Mixed Traffic Scenario

We consider both real-time and non real-time traffic for the next set of experiments. In addition to six clients running a mixture of YouTube and Skype, we have four more clients that generate background web and file download traffic to simulate a more practical scenario. We statically assigned Web Browsing and Download to the low priority queue as both applications have low requirements to achieve good QoE. The queue setup is the same as that of the previous experiment, with a high priority queue and a low priority queue. The maximum number of occupants of the high priority queue is again set to two. In addition to the normal traffic case, we also compare the performance of the same three policies in a low traffic scenario, where the capacity of the system is much higher than the offered load. We doubled the allocations of the normal traffic case to evaluate the low traffic case.

In Figure 9, we compare the average utilities of the clients for the MDP-based auction and the Greedy approach in the normal load and the low load scenarios. The total utility of all clients in the MDP-based auction is better than that in the Greedy approach, which validates the claim that clients participating in the former approach are more satisfied than those in the latter, and hence the optimality of the MDP-based policy.

Figure 10. Performance comparison of YouTube for different policies
Figure 11. Performance comparison of Skype for different policies

We show the QoE and QoS for the three policies in each traffic scenario for YouTube and Skype in Figure 10 and Figure 11. In each group, QoE is scaled on the right y-axis and the QoS values on the left y-axis. Again, we see that for both applications, the winner of the MDP-based auction experiences better QoE than under the other policies because of the better QoS provided by the winner queue. This validates the fact that the MDP-based policy outperforms the other policies. We observe that although individual QoS parameters such as RTT and Drop Rate improve for low traffic, the average QoE is similar to that of the normal traffic case, suggesting that the system is correctly provisioned for the offered load, and that the higher QoE of the MDP-based policy is obtained through prioritization of the right flows.

Figure 12. Bid Distribution
Figure 13. Budget Distribution
Figure 14. RSSI variation with orientation

We also plotted the bid distributions of the clients in the two load scenarios in Figure 12. When the system is loaded, clients tend to bid higher in order to get into the winner queue and experience better QoE. When the total load is low, everyone experiences good QoE irrespective of the queue they are assigned to, and there is no incentive to bid higher. The mass of the distribution in the low load scenario is located near 0 and 3. This is also reflected in the budget distribution for the corresponding scenarios shown in Figure 13. Since the clients in the low load scenario tend to bid lower, their budgets increase and hence the distribution of budget is shifted towards higher values as compared to the normal load scenario.

9.3. Reconfigurable Antenna

We ran a last set of experiments to demonstrate the ability of SoftStack to reconfigure the polarization of the directional radiation pattern of the Access Point and hence provide adaptive preferential service to a single client or a group of clients. With this antenna capability included, orientation can be added into the auction policy as a natural extension. The implication of this capability is clearer in the presence of multiple RSSI bins. If a client bids high and wins the auction, the Access Point will adapt its radiation pattern to favor the client and, as a result, the client will be moved to a more favorable RSSI bin. Since a client with a low RSSI value usually experiences higher RTT values in the presence of clients with better RSSI values, clients would alter their bidding profile in order to move to a better RSSI bin to receive better QoS (and in turn better QoE).

A proof-of-concept experiment was performed to validate this claim. This demonstration included the use of software controlled polarization-reconfigurable antennas (with directional radiation patterns) on both the Access Point and NUC (representing the client). When the antenna polarization of the client was aligned with that of the Access Point, the client saw favorable RSSI values (shown in Figure 14). But as soon as the Access Point reconfigured its radiation pattern by changing the polarization of its antennas, there was an appreciable drop in RSSI of the client. The RSSI of the client changed back by more than 10 dBm to the original value as soon as the Access Point changed its radiation pattern again to favor it. This change in RSSI can be traced directly to the observed cross-polarization discrimination amongst the two states of the reconfigurable antenna. The effect of RSSI on experienced RTT values was also measured for a single YouTube session running on the client. The capacity of the Access Point was limited to 3 Mbps, which is sufficient to support a 1080p video. When the RSSI of the client was high (-40 dBm), average RTT value observed was 35 msec. This value increased to 630 msec when the RSSI of the client was low (-55 dBm), thus validating our claim that clients with higher RSSI values experience lower average RTT values.

10. Conclusion

In this paper, we considered the design, development and evaluation of FlowBazaar, a wireless edge platform that can be used to tie end-user value on the one hand, with algorithms on software reconfigurable infrastructure on the other. Working with off-the-shelf hardware and open source operating systems and protocols, we showed how to couple queueing, scheduling, learning and markets to develop a system that is able to reconfigure itself to best suit the needs of applications. As our YouTube observations suggest, such a holistic framework that accounts for this entire chain can reveal efficiencies and interactions that a narrow focus on individual components of the system is incapable of achieving. We demonstrated system performance for a set of representative applications, and illustrated how FlowBazaar adaptively promotes the applications that have the greatest value to the end-users. We believe that the application of our system will be in upcoming small cell wireless architectures such as 5G, and our goal will be to extend our ideas to such settings.

References

  • CPqD (2015) CPqD. 2015. OpenFlow Software Switch. http://cpqd.github.io/ofsoftswitch13/. (2015).
  • Egger et al. (2012) S. Egger, T. Hoßfeld, R. Schatz, and M. Fiedler. 2012. Waiting times in quality of experience for Web based services. In Quality of Multimedia Experience (QoMEX), 2012 Fourth International Workshop on. IEEE, 86–96.
  • Ericsson (2015) Ericsson. 2015. Ericsson Mobility Report: On the Pulse of the Networked Society. https://www.ericsson.com/assets/local/mobility-report/documents/2015/ericsson-mobility-report-june-2015.pdf. (2015).
  • Eryilmaz et al. (2005) A. Eryilmaz, R. Srikant, and J. Perkins. 2005. Stable Scheduling Policies for fading wireless channels. IEEE/ACM Trans. Network. 13 (April 2005), 411–424.
  • Ha et al. (2012) S. Ha, S. Sen, C. Joe-Wong, Y. Im, and M. Chiang. 2012. TUBE: time-dependent pricing for mobile data. In SIGCOMM. 247–258.
  • Hou et al. (2009) I.H. Hou, V. Borkar, and P.R. Kumar. 2009. A theory of QoS for wireless. In IEEE INFOCOM 2009. Rio de Janeiro, Brazil.
  • Iyer et al. (2014) K. Iyer, R. Johari, and M. Sundararajan. 2014. Mean field equilibria of dynamic auctions with learning. Management Science 60, 12 (2014), 2949–2970.
  • Khan and Toseef (2011) M.A. Khan and U. Toseef. 2011. User utility function as quality of experience (QoE). In Proceedings of the ICN, Vol. 11. 99–104.
  • Lasry and Lions (2007) J-M. Lasry and P-L. Lions. 2007. Mean field games. Japan Journal of Mathematics (2007).
  • Manjrekar et al. (2014) M. Manjrekar, V. Ramaswamy, and S. Shakkottai. 2014. A mean field game approach to scheduling in cellular systems. In Proceedings of INFOCOM. 1554–1562.
  • Microsoft (2017) Microsoft. 2017. How much bandwidth does Skype need? https://support.skype.com/en/faq/FA1417/how-much-bandwidth-does-skype-need. (2017).
  • Mok et al. (2016) R. K. P. Mok, W. Li, and R. K. C. Chang. 2016. IRate: Initial Video Bitrate Selection System for HTTP Streaming. IEEE Journal on Selected Areas in Communications 34, 6 (June 2016), 1914–1928. https://doi.org/10.1109/JSAC.2016.2559078
  • Schulz-Zander et al. (2015) J. Schulz-Zander, C. Mayer, B. Ciobotaru, S. Schmid, and A. Feldmann. 2015. OpenSDWN: Programmatic control over home and enterprise wifi. In Proceedings of the 1st ACM SIGCOMM Symposium on Software Defined Networking Research.
  • Schulz-Zander et al. (2014) J. Schulz-Zander, N. Sarrar, and S. Schmid. 2014. AeroFlux: A Near-Sighted Controller Architecture for Software-Defined Wireless Networks. In Presented as part of the Open Networking Summit 2014 (ONS 2014). Santa Clara, CA.
  • Shome et al. (2017) P. Shome, J. Modares, N. Mastronarde, and A. Sprintson. 2017. Enabling Dynamic Reconfigurability of SDRs Using SDN Principles. In Ad Hoc Networks. Springer, 369–381.
  • Shome et al. (2015) P. Shome, M. Yan, S. M. Najafabad, N. Mastronarde, and A. Sprintson. 2015. CrossFlow: A cross-layer architecture for SDR using SDN principles. In 2015 IEEE Conference on Network Function Virtualization and Software Defined Network (NFV-SDN). 37–39. https://doi.org/10.1109/NFV-SDN.2015.7387403
  • Spetebroot et al. (2015) T. Spetebroot, S. Afra, N. Aguilera, D. Saucez, and C. Barakat. 2015. From network-level measurements to expected Quality of Experience: The Skype use case. In IEEE International Workshop on Measurements & Networking (M&N), 2015. 1–6.
  • Sun et al. (2006) J. Sun, E. Modiano, and L. Zheng. 2006. Wireless channel allocation using an auction algorithm. IEEE Journal on Selected Areas in Communications 24, 5 (2006), 1085–1096.
  • Tassiulas and Ephremides (1992) L. Tassiulas and A. Ephremides. 1992. Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks. IEEE Trans. Automat. Contr. 37, 12 (1992), 1936–1948.
  • Yan et al. (2015) M. Yan, J. Casey, P. Shome, A. Sprintson, and A. Sutton. 2015. ÆtherFlow: Principled Wireless Support in SDN. In 2015 IEEE 23rd International Conference on Network Protocols (ICNP). 432–437. https://doi.org/10.1109/ICNP.2015.9