On Spatial Lag Models estimated using crowdsourcing, web-scraping or other unconventionally collected data

10/11/2020
by Giuseppe Arbia et al.

The Big Data revolution is challenging state-of-the-art statistical and econometric techniques, not only because of the computational burden of the high volume and speed at which data are generated, but even more because of the variety of sources through which data are collected (Arbia, 2021). This paper concentrates specifically on this last aspect. Common examples of non-traditional Big Data sources are crowdsourcing (data voluntarily collected by individuals) and web scraping (data extracted from websites and reshaped into a structured dataset). A common characteristic of these unconventional data collections is the lack of any precise statistical sampling design, a situation described in statistics as 'convenience sampling'. As is well known, under these conditions no probabilistic inference is possible. To overcome this problem, Arbia et al. (2018) proposed a special form of post-stratification (termed 'post-sampling'), with which data are manipulated prior to their use in an inferential context. In this paper we generalize this approach, using the same idea to estimate a Spatial Lag Model (SLM). We first show, through a Monte Carlo study, that parameter estimates can be biased when data are collected without a proper design. Secondly, we propose a post-sampling strategy to tackle this problem. We show that the proposed strategy indeed achieves a bias reduction, but at the price of a concomitant increase in the variance of the estimators. We therefore suggest an operational MSE-correction strategy. The paper also contains a formal derivation of the increase in variance implied by the post-sampling procedure, and concludes with an empirical application of the method to the estimation of a hedonic price model for the city of Milan using web-scraped data.
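As a concrete illustration of the first step described in the abstract, the sketch below simulates the kind of Monte Carlo experiment alluded to: data are generated from an SLM on a regular lattice, and the model is re-estimated on a convenience sample whose reconstructed neighbourhood structure no longer matches the data-generating process. The abstract does not specify the estimator or the sampling mechanism, so everything here is an assumption rather than the authors' method: the estimator is the standard Kelejian-Prucha spatial two-stage least squares (chosen for brevity; the paper may use maximum likelihood), the selection rule (retention probability increasing in the regressor) is hypothetical, and the helper names grid_weights and s2sls_rho are illustrative. The post-sampling correction itself is not reproduced, since its details are not given in the abstract.

    # Minimal sketch (not the authors' code): bias of SLM estimates
    # under convenience sampling. The SLM is
    #   y = rho * W y + X beta + eps,
    # estimated by spatial 2SLS with instruments [X, WX, W^2 X].
    import numpy as np

    rng = np.random.default_rng(0)

    def grid_weights(k):
        """Row-normalized rook-contiguity matrix for a k x k lattice."""
        n = k * k
        W = np.zeros((n, n))
        for i in range(k):
            for j in range(k):
                u = i * k + j
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < k and 0 <= jj < k:
                        W[u, ii * k + jj] = 1.0
        return W / W.sum(axis=1, keepdims=True)

    def s2sls_rho(y, X, W):
        """Spatial 2SLS estimate of rho, instruments [X, WX, W^2 X]."""
        Wy = W @ y
        H = np.column_stack([X, W @ X, W @ W @ X])
        P = H @ np.linalg.solve(H.T @ H, H.T)     # projection onto instruments
        Z = np.column_stack([Wy, X])              # endogenous lag + exogenous X
        Zh = np.column_stack([P @ Wy, X])         # first-stage fitted Wy
        coef = np.linalg.solve(Zh.T @ Z, Zh.T @ y)
        return coef[0]                            # first coefficient is rho

    k, rho, beta = 20, 0.5, 1.0
    n = k * k
    W = grid_weights(k)
    I = np.eye(n)

    est_full, est_conv = [], []
    for _ in range(200):                          # small Monte Carlo
        X = rng.normal(size=n)
        y = np.linalg.solve(I - rho * W, X * beta + rng.normal(size=n))

        est_full.append(s2sls_rho(y, X, W))       # properly designed (full) sample

        # Convenience sample: keep units with probability increasing in X,
        # then rebuild a row-normalized W on the retained units only.
        keep = rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * X))
        sub, ys, Xs = W[np.ix_(keep, keep)], y[keep], X[keep]
        while True:                               # drop units left with no neighbours
            ok = sub.sum(axis=1) > 0
            if ok.all():
                break
            sub, ys, Xs = sub[np.ix_(ok, ok)], ys[ok], Xs[ok]
        Ws = sub / sub.sum(axis=1, keepdims=True)
        est_conv.append(s2sls_rho(ys, Xs, Ws))

    print(f"true rho = {rho}")
    print(f"full sample : mean rho_hat = {np.mean(est_full):.3f}")
    print(f"convenience : mean rho_hat = {np.mean(est_conv):.3f}")

In this stylized setting the full-sample estimates should center on the true rho, while the convenience-sample estimates can drift away from it because subsetting and re-normalizing W misrepresents the spatial process; how large the bias is depends entirely on the (here hypothetical) selection mechanism.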
