
Learning Time Dependent Choice

09/10/2018
by Zachary Chase, et al.
California Institute of Technology

We study the learnability of models of choice over time. We present a large class of preference models, defined by a structural criterion, for which we obtain an exponential improvement over previously known learning bounds for more general preference models. In particular, this implies that the three most important discounted utility models of intertemporal choice -- exponential, hyperbolic, and quasi-hyperbolic discounting -- are learnable in the PAC setting with a VC dimension that grows only logarithmically in the number of time periods. We also examine these models in the framework of active learning. We find that the commonly studied stream-based setting is in general difficult to analyze for preference models, but we exhibit a redeeming setting in which the learner can indeed improve upon the guarantees provided by PAC learning. In contrast to the stream-based setting, we show that if the learner is given full control over the data from which it learns -- in the form of learning via membership queries -- then even very naive algorithms significantly outperform the guarantees provided by more sophisticated active learning algorithms.
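For reference, the standard forms of the three discount functions named above are sketched below; the notation and exact parameterization are ours and may differ from the paper's, but in each case a reward x received at time t is valued D(t) * u(x) for the corresponding discount function D:

D_{\mathrm{exp}}(t) = \delta^{t}, \qquad 0 < \delta < 1

D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t}, \qquad k > 0

D_{\mathrm{qh}}(t) = \begin{cases} 1 & t = 0, \\ \beta\,\delta^{t} & t \ge 1, \end{cases} \qquad 0 < \beta \le 1,\ 0 < \delta < 1

Exponential discounting keeps the ratio D(t+1)/D(t) constant across time, whereas hyperbolic and quasi-hyperbolic discounting let that ratio rise toward 1, which is why the latter two are the standard models of present-biased, time-inconsistent choice.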

