Shrewd Selection Speeds Surfing: Use Smart EXP3!

12/08/2017
by Anuja Meetoo Appavoo, et al.

In this paper, we explore the use of multi-armed bandit online learning techniques to solve distributed resource selection problems. As an example, we focus on the problem of network selection. Mobile devices often have several wireless networks at their disposal. While choosing the right network is vital for good performance, a decentralized solution remains a challenge. The impressive theoretical properties of multi-armed bandit algorithms, like EXP3, suggest that they should work well for this type of problem. Yet, their real-world performance lags far behind. The main reasons are the hidden cost of switching networks and the slow rate of convergence. We propose Smart EXP3, a novel bandit-style algorithm that (a) retains the good theoretical properties of EXP3, (b) bounds the number of switches, and (c) yields significantly better performance in practice. We evaluate Smart EXP3 using simulations, controlled experiments, and real-world experiments. Results show that it stabilizes at the optimal state, achieves fairness among devices, and gracefully deals with transient behaviors. In real-world experiments, it can achieve 18% faster downloads over alternate strategies. We conclude that multi-armed bandit algorithms can play an important role in distributed resource selection problems, when practical concerns, such as switching costs and convergence time, are addressed.
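For context, the sketch below shows the standard EXP3 scheme the abstract builds on: exponential weights over arms, uniform exploration mixing, and importance-weighted reward estimates, applied to a toy network-selection setting. This is a minimal illustration, not the paper's Smart EXP3; the number of networks, the reward model, and the gamma value are illustrative assumptions, and Smart EXP3's switch-bounding and faster-convergence refinements are not shown.

    import math
    import random

    def exp3_select(weights, gamma):
        """Mix the weight distribution with uniform exploration and sample an arm."""
        total = sum(weights)
        k = len(weights)
        probs = [(1 - gamma) * w / total + gamma / k for w in weights]
        r, acc = random.random(), 0.0
        for arm, p in enumerate(probs):
            acc += p
            if r <= acc:
                return arm, probs
        return k - 1, probs

    def exp3_update(weights, arm, reward, prob, gamma):
        """Importance-weighted reward estimate, then exponential weight update."""
        k = len(weights)
        estimated = reward / prob  # unbiased estimate of the chosen arm's reward
        weights[arm] *= math.exp(gamma * estimated / k)

    # Toy run: choose among 3 networks with (unknown) hypothetical mean throughputs.
    random.seed(0)
    means = [0.3, 0.8, 0.5]            # assumed normalized rewards in [0, 1]
    weights, gamma = [1.0] * 3, 0.1
    for _ in range(1000):
        arm, probs = exp3_select(weights, gamma)
        reward = min(1.0, max(0.0, random.gauss(means[arm], 0.1)))
        exp3_update(weights, arm, reward, probs[arm], gamma)
    print(weights)                     # the weight of network 1 should dominate

In this plain form, EXP3 may still switch networks frequently while exploring, which is exactly the practical cost the paper's Smart EXP3 variant is designed to bound.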


research
01/23/2019

Cooperation Speeds Surfing: Use Co-Bandit!

In this paper, we explore the benefit of cooperation in adversarial band...
research
05/26/2019

Phase Transitions and Cyclic Phenomena in Bandits with Switching Constraints

We consider the classical stochastic multi-armed bandit problem with a c...
research
12/08/2020

A Multi-Armed Bandit-based Approach to Mobile Network Provider Selection

We argue for giving users the ability to lease bandwidth temporarily fro...
research
07/13/2021

Markov Game with Switching Costs

We study a general Markov game with metric switching costs: in each roun...
research
07/20/2023

Decentralized Smart Charging of Large-Scale EVs using Adaptive Multi-Agent Multi-Armed Bandits

The drastic growth of electric vehicles and photovoltaics can introduce ...
research
08/14/2018

Multi-user Communication Networks: A Coordinated Multi-armed Bandit Approach

Communication networks shared by many users are a widespread challenge n...
research
11/07/2016

Reinforcement-based Simultaneous Algorithm and its Hyperparameters Selection

Many algorithms for data analysis exist, especially for classification p...
