Decentralized AP selection using Multi-Armed Bandits: Opportunistic ε-Greedy with Stickiness

03/01/2019
by Marc Carrascosa et al.

WiFi densification leads to multiple overlapping coverage areas, which allows user stations (STAs) to choose between different Access Points (APs). The standard WiFi association method makes each STA select the AP with the strongest signal, which in many cases leads to the underutilization of some APs while overcrowding others. To mitigate this situation, Reinforcement Learning techniques such as Multi-Armed Bandits can be used to dynamically learn the optimal mapping between APs and STAs, and so redistribute the STAs among the available APs accordingly. This is an especially challenging problem, since the network response observed by a given STA depends on the behavior of the others, and so it is very difficult to predict without a global view of the network. In this paper, we focus on solving this problem in a decentralized way, where STAs independently explore the different APs inside their coverage range and select the one that best satisfies their needs. To this end, we propose a novel approach called Opportunistic ε-greedy with Stickiness that halts the exploration when a suitable AP is found, remains associated with it while the STA is satisfied, and only resumes the exploration after several unsatisfactory association periods. With this approach, we significantly reduce the variability of the network response, improving the ability of the STAs to find a solution faster, as well as achieving a more efficient use of the network resources.
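To make the mechanism concrete, below is a minimal sketch of the ε-greedy-with-stickiness idea as described in the abstract: a STA explores APs ε-greedily, "sticks" to an AP once it is satisfied, and only resumes exploring after several consecutive unsatisfactory periods. The class name, the satisfaction threshold, the patience counter, and the reward model are illustrative assumptions, not the paper's exact formulation.

```python
import random


class StickyEpsilonGreedyStation:
    """STA that explores APs epsilon-greedily, halts exploration while
    it is satisfied with its current AP, and resumes exploring after
    several consecutive unsatisfactory association periods."""

    def __init__(self, n_aps, epsilon=0.1,
                 satisfaction_threshold=0.8,  # assumed: minimum acceptable reward
                 max_unsatisfied=3):          # assumed: periods tolerated before resuming exploration
        self.n_aps = n_aps
        self.epsilon = epsilon
        self.satisfaction_threshold = satisfaction_threshold
        self.max_unsatisfied = max_unsatisfied
        self.counts = [0] * n_aps       # times each AP has been tried
        self.values = [0.0] * n_aps     # running mean reward per AP
        self.sticky_ap = None           # AP the STA is currently stuck to
        self.unsatisfied_periods = 0

    def select_ap(self):
        # While stuck to a satisfactory AP, exploration is halted.
        if self.sticky_ap is not None:
            return self.sticky_ap
        # Otherwise behave like standard epsilon-greedy.
        if random.random() < self.epsilon:
            return random.randrange(self.n_aps)
        return max(range(self.n_aps), key=lambda a: self.values[a])

    def update(self, ap, reward):
        # Incremental mean update of the AP's estimated value.
        self.counts[ap] += 1
        self.values[ap] += (reward - self.values[ap]) / self.counts[ap]
        if reward >= self.satisfaction_threshold:
            # Satisfied: stick to this AP and reset the patience counter.
            self.sticky_ap = ap
            self.unsatisfied_periods = 0
        elif self.sticky_ap == ap:
            # Unsatisfied while stuck: lose patience gradually.
            self.unsatisfied_periods += 1
            if self.unsatisfied_periods >= self.max_unsatisfied:
                self.sticky_ap = None   # give up and resume exploring
                self.unsatisfied_periods = 0


if __name__ == "__main__":
    sta = StickyEpsilonGreedyStation(n_aps=3)
    for _ in range(10):
        ap = sta.select_ap()
        reward = random.random()        # stand-in for observed satisfaction
        sta.update(ap, reward)
```

In this sketch, stickiness acts as a simple hysteresis on top of ε-greedy: as long as the current AP keeps the STA satisfied, the STA stops perturbing the network, which is what reduces the variability of the response observed by the other STAs.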

