Stackelberg Risk Preference Design

06/26/2022
by Shutian Liu, et al.

Risk measures are commonly used to capture the risk preferences of decision-makers (DMs). The decisions of DMs can be nudged or manipulated when their risk preferences are influenced by factors such as the perception of losses and the availability of information about the uncertainties. In this work, we propose a Stackelberg risk preference design (STRIPE) problem to capture a designer's incentive to influence DMs' risk preferences. STRIPE consists of two levels. At the lower level, individual DMs in a population, known as the followers, respond to uncertainties according to their risk preference types, which specify their risk measures. At the upper level, the leader influences the distribution of these types to induce targeted decisions, steering the followers' preferences toward a target distribution. Our analysis centers on the solution concept of approximate Stackelberg equilibrium, which allows suboptimal behaviors of the players, and we establish its existence. The primitive risk perception gap, defined as the Wasserstein distance between the original and the target type distributions, plays a key role in estimating the optimal design cost. Leveraging Lipschitzian properties of the lower-level solution mapping, we connect the leader's optimality tolerance on the cost with her ambiguity tolerance on the followers' approximate solutions. To obtain the Stackelberg equilibrium, we reformulate STRIPE as a single-level optimization problem using the spectral representations of law-invariant coherent risk measures. We develop a data-driven approach for its computation and study its performance guarantees. We apply STRIPE to contract design problems to mitigate the intensity of moral hazard. Moreover, we connect STRIPE with meta-learning problems and derive adaptation performance estimates for the meta-parameter using the sensitivity of the lower-level optimal value function.
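Two quantities central to the abstract, the Wasserstein distance between type distributions (the risk perception gap) and the followers' law-invariant coherent risk measures, are easy to illustrate numerically. The Python sketch below is illustrative only, not the paper's algorithm: it assumes scalar, one-dimensional type parameters, uses CVaR as a representative coherent risk measure, and draws hypothetical data; the names current_types and target_types are made up for the example.

    import numpy as np
    from scipy.stats import wasserstein_distance

    def cvar(losses, alpha):
        # Empirical CVaR_alpha: mean of the worst (1 - alpha) fraction of losses.
        var = np.quantile(losses, alpha)
        return losses[losses >= var].mean()

    rng = np.random.default_rng(0)
    # Hypothetical empirical type distributions: followers' current risk
    # preference types versus the leader's target type distribution.
    current_types = rng.normal(0.90, 0.05, size=500)
    target_types = rng.normal(0.95, 0.02, size=500)

    # 1-Wasserstein distance between the two empirical distributions:
    # the "risk perception gap" used to estimate the optimal design cost.
    gap = wasserstein_distance(current_types, target_types)

    # A follower of type alpha evaluates a random loss X via CVaR_alpha(X).
    losses = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
    print(f"risk perception gap (W1): {gap:.4f}")
    print(f"CVaR_0.95 of sampled losses: {cvar(losses, 0.95):.4f}")

In the same spirit as the paper's data-driven approach, both quantities are computed here from samples alone, so no closed-form description of the type distributions is required.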
