Homomorphically Encrypted Linear Contextual Bandit

03/17/2021
by Evrard Garcelon, et al.

Contextual bandit is a general framework for online learning in sequential decision-making problems that has found application in a wide range of domains, including recommendation systems, online advertising, clinical trials, and many more. A critical aspect of bandit methods is that they require observing the contexts – i.e., individual or group-level data – and the rewards in order to solve the sequential problem. Their wide deployment in industrial applications has increased interest in methods that preserve the privacy of the users. In this paper, we introduce a privacy-preserving bandit framework based on asymmetric encryption. The bandit algorithm only observes encrypted information (contexts and rewards) and has no ability to decrypt it. Leveraging homomorphic encryption, we show that despite the complexity of the setting, it is possible to learn over encrypted data. We introduce an algorithm that achieves an O(d√T) regret bound in any linear contextual bandit problem, while keeping data encrypted.
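
For readers unfamiliar with the plaintext problem the paper builds on, the sketch below shows a standard LinUCB-style linear contextual bandit, which is the kind of learner that achieves O(d√T) regret. This is only the unencrypted baseline, with hypothetical function and parameter names (linucb, reward_fn, alpha); the paper's contribution is to carry out updates of this kind on homomorphically encrypted contexts and rewards, which this sketch does not attempt.

```python
import numpy as np

def linucb(contexts, reward_fn, d, T, lam=1.0, alpha=1.0):
    """Minimal LinUCB sketch (plaintext baseline, not the paper's encrypted algorithm).

    contexts:  function t -> list of d-dimensional context vectors (one per arm)
    reward_fn: function (t, chosen_context) -> observed reward
    """
    A = lam * np.eye(d)   # regularized design matrix: lam*I + sum_t x_t x_t^T
    b = np.zeros(d)       # sum_t r_t * x_t
    total_reward = 0.0
    for t in range(T):
        arms = contexts(t)
        A_inv = np.linalg.inv(A)
        theta_hat = A_inv @ b  # ridge-regression estimate of the unknown parameter
        # Optimistic score: estimated reward plus an exploration bonus.
        scores = [x @ theta_hat + alpha * np.sqrt(x @ A_inv @ x) for x in arms]
        x = arms[int(np.argmax(scores))]
        r = reward_fn(t, x)
        A += np.outer(x, x)
        b += r * x
        total_reward += r
    return theta_hat, total_reward
```

In the paper's setting the learner would see only ciphertexts of x_t and r_t, so the accumulation of A and b and the arm selection must be expressed as homomorphic operations rather than the plain NumPy arithmetic shown here.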
