
Counterfactual Inference of the Mean Outcome under a Convergence of Average Logging Probability
Adaptive experiments, including efficient average treatment effect estimation and multi-armed bandit algorithms, have garnered attention in applications such as social experiments, clinical trials, and online advertisement optimization. This paper considers estimating the mean outcome of an action from samples obtained in adaptive experiments. The mean outcome of an action plays a crucial role in causal inference, and its estimation is an essential task; average treatment effect estimation and off-policy value estimation are variants of it. In adaptive experiments, the probability of choosing an action (the logging probability) may be updated sequentially based on past observations. Because the logging probability depends on past observations, the samples are generally not independent and identically distributed (i.i.d.), which makes it difficult to develop an asymptotically normal estimator. A typical approach to this problem is to assume that the logging probability converges to a time-invariant function. However, this assumption is restrictive in various applications, for example when the logging probability fluctuates or becomes zero during some periods. To mitigate this limitation, we propose the weaker assumption that the average logging probability converges to a time-invariant function, and we show the asymptotic normality of the doubly robust (DR) estimator under it. Under this assumption, the logging probability itself may fluctuate or be zero for some actions. We also demonstrate the estimator's empirical properties through simulations.
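To illustrate the quantity being estimated, the sketch below computes a doubly robust (AIPW-style) estimate of the mean outcome of one action from adaptively logged data. The toy bandit simulation, the deliberately crude fixed outcome model, and the function names are illustrative assumptions, not the paper's construction; the paper's point is that the DR estimator remains asymptotically normal even when the logging probability itself fluctuates, so long as its average converges.

```python
import numpy as np

rng = np.random.default_rng(0)

def dr_mean_outcome(actions, rewards, logging_probs, outcome_preds, target_action):
    """Doubly robust (AIPW) estimate of the mean outcome of `target_action`.

    actions[t]       -- action chosen at round t
    rewards[t]       -- observed reward at round t
    logging_probs[t] -- probability the logging policy gave target_action at round t
    outcome_preds[t] -- regression prediction of target_action's reward at round t
                        (in practice fitted only on data from rounds before t)
    """
    indicator = (actions == target_action).astype(float)
    correction = indicator / logging_probs * (rewards - outcome_preds)
    return float(np.mean(outcome_preds + correction))

# Toy adaptive experiment: two actions with true mean outcomes 0.3 and 0.7.
true_means = np.array([0.3, 0.7])
T = 50_000
probs, acts, rews = [], [], []
p = 0.5                    # logging probability of action 1, updated adaptively
sums, counts = np.zeros(2), np.ones(2)
for t in range(T):
    a = rng.binomial(1, p)
    y = rng.normal(true_means[a], 0.1)
    probs.append(p); acts.append(a); rews.append(y)
    sums[a] += y; counts[a] += 1
    # Adaptive (non-i.i.d.) update: drift toward the empirically better arm.
    if t > 100:
        p = 0.9 if sums[1] / counts[1] > sums[0] / counts[0] else 0.1

acts, rews, probs = map(np.array, (acts, rews, probs))
# A crude, biased outcome model; the IPW correction term removes its bias
# because each logging probability is measurable with respect to the past.
preds = np.full(T, 0.5)
est = dr_mean_outcome(acts, rews, probs, preds, target_action=1)
print(est)  # close to the true mean outcome 0.7 of action 1
```

Because each `probs[t]` depends only on past observations, the importance-weighted correction term is a martingale difference, which is the structure the paper exploits to obtain asymptotic normality without requiring the logging probability itself to converge.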