Non-Identifiability in Network Autoregressions

11/22/2020
by Federico Martellosio, et al.

We study identification in autoregressions defined on a general network. Most identification conditions that are available for these models either rely on repeated observations, are only sufficient, or require strong distributional assumptions. We derive conditions that apply even if only one observation of a network is available, are necessary and sufficient for identification, and require weak distributional assumptions. We find that the models are generically identified even without repeated observations, and analyze the combinations of the interaction matrix and the regressor matrix for which identification fails. This is done both in the original model and after certain transformations in the sample space, the latter case being important for some fixed effects specifications.
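The abstract does not spell out the model, but a standard network (spatial) autoregression takes the form y = λWy + Xβ + ε, where W is the interaction matrix built from the network and X is the regressor matrix. The sketch below simulates a single observation of such a model and evaluates one well-known sufficient condition for identifying λ, namely that W(I − λW)⁻¹Xβ does not lie in the column space of X. The model form, the row-normalisation of W, and this particular check are illustrative assumptions; they are not the necessary-and-sufficient conditions derived in the paper.

```python
import numpy as np

# Illustrative sketch (assumed model form): y = lam * W y + X beta + eps,
# observed only once, with W the interaction matrix and X the regressors.
rng = np.random.default_rng(0)

n = 30                                           # number of nodes
A = rng.binomial(1, 0.1, size=(n, n))            # random adjacency matrix
np.fill_diagonal(A, 0)                           # no self-links
row_sums = A.sum(axis=1, keepdims=True)
W = np.divide(A, row_sums,                       # row-normalised interaction matrix
              out=np.zeros((n, n)), where=row_sums > 0)

X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one regressor
beta = np.array([1.0, 0.5])
lam = 0.3                                        # network autoregressive parameter

# Reduced form of the assumed model: y = (I - lam W)^{-1} (X beta + eps)
eps = rng.normal(size=n)
y = np.linalg.solve(np.eye(n) - lam * W, X @ beta + eps)

# One classical *sufficient* condition for identifying lam (see e.g. Lee, 2004):
# G X beta must not lie in the column space of X, where G = W (I - lam W)^{-1}.
G = W @ np.linalg.inv(np.eye(n) - lam * W)
M = np.eye(n) - X @ np.linalg.pinv(X)            # residual-maker (annihilator) of X
residual = M @ (G @ (X @ beta))                  # component of G X beta orthogonal to X
print("norm of network signal orthogonal to X:", np.linalg.norm(residual))
```

A numerically nonzero norm indicates that this sufficient condition holds for the simulated (W, X) pair; the paper instead characterises exactly which combinations of the interaction matrix and the regressor matrix make identification fail, including after sample-space transformations used for fixed effects.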
