Generalization Bounds and Representation Learning for Estimation of Potential Outcomes and Causal Effects
Practitioners in diverse fields such as healthcare, economics and education are eager to apply machine learning to improve decision making. The cost and impracticality of performing experiments, together with a recent monumental increase in electronic record keeping, have brought attention to the problem of evaluating decisions based on non-experimental observational data. This is the setting of this work. In particular, we study estimation of individual-level causal effects, such as a single patient's response to alternative medication, from recorded contexts, decisions and outcomes. We give generalization bounds on the error in estimated effects based on distance measures between groups receiving different treatments, allowing for sample re-weighting. We provide conditions under which our bound is tight and show how it relates to results for unsupervised domain adaptation. Guided by our theoretical results, we devise representation learning algorithms that minimize our bound by regularizing the representation's induced treatment-group distance, and that encourage sharing of information between treatment groups. We extend these algorithms to simultaneously learn a weighted representation that further reduces treatment-group distances. Finally, an experimental evaluation on real and synthetic data shows the value of our proposed representation architecture and regularization scheme.
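The objective described in the abstract — a factual prediction loss plus a penalty on the distance between treated and control groups in representation space — can be illustrated with a minimal sketch. The code below is not the paper's algorithm, only an assumed toy instance: the representation is a fixed linear map, the outcome heads are per-arm least-squares fits, and the group distance is a simple linear-kernel discrepancy (squared distance between group means) standing in for an IPM; all names (`linear_mmd`, `cfr_style_objective`, `alpha`) are illustrative.

```python
import numpy as np

def linear_mmd(a, b):
    # Squared distance between group means in representation space:
    # a crude linear-kernel discrepancy between treated and control groups.
    return float(np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2))

def cfr_style_objective(X, t, y, W, alpha=1.0):
    # Factual error of per-arm outcome heads on the representation
    # Phi(x) = x @ W, plus alpha times the treatment-group distance.
    phi = X @ W
    factual_loss = 0.0
    for arm in (0, 1):
        mask = t == arm
        # Least-squares outcome head for this treatment arm.
        h, *_ = np.linalg.lstsq(phi[mask], y[mask], rcond=None)
        resid = phi[mask] @ h - y[mask]
        factual_loss += float(np.mean(resid ** 2))
    imbalance = linear_mmd(phi[t == 1], phi[t == 0])
    return factual_loss + alpha * imbalance, imbalance

# Hypothetical observational data with confounded treatment assignment.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
t = (X[:, 0] + rng.normal(size=200) > 0).astype(int)
y = X[:, 1] + t * (1.0 + X[:, 2]) + 0.1 * rng.normal(size=200)

W = 0.3 * rng.normal(size=(5, 3))  # stand-in for a learned representation
total, imbalance = cfr_style_objective(X, t, y, W, alpha=0.5)
print(total, imbalance)
```

In the paper's setting the representation and heads are learned jointly so that gradient descent trades factual accuracy against group imbalance; here the representation is fixed only to keep the objective itself visible in a few lines.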