How response designs and class proportions affect the accuracy of validation data

06/27/2019
by   Julien Radoux, et al.

Reference data collected to validate land cover maps are generally considered free of errors. In practice, however, they contain errors despite all efforts to minimise them. These errors then propagate up to the accuracy assessment stage and impact the validation results. For photo-interpreted reference data, the three most widely studied sources of error are systematic mislabelling, vigilance drops, and demographic factors. How internal estimation errors, i.e., errors intrinsic to the response design, affect the accuracy of reference data is far less understood. We analysed the impact of estimation errors for two types of legends as well as for point-based and partition-based response designs with a range of sub-sample sizes. We showed that the accuracy of response designs depends on the class proportions within the sampling units, with complex landscapes being more prone to errors. As a result, response designs where the number of sub-samples is fixed are inefficient, and the labels of reference data sets have inconsistent confidence levels. To control estimation errors, to guarantee high accuracy standards of validation data, and to minimise data collection efforts, we proposed to rely on confidence intervals of the photo-interpreted data to define how many sub-samples should be labelled. In practice, sub-samples are iteratively selected and labelled until the estimated class proportions reach the desired level of confidence. As a result, less effort is spent on labelling obvious cases and the spared effort can be allocated to more complex cases. This approach could reduce the labelling effort by 50% in homogeneous landscapes. We contend that adopting this optimisation approach will not only increase the efficiency of reference data collection but will also help deliver reliable accuracy estimates to the user community.
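The iterative labelling rule described in the abstract can be sketched as follows. This is an illustrative sketch only, not the authors' exact procedure: the Wilson score interval, the 95% confidence level, the interval-width threshold, and the function names `wilson_interval` and `label_until_confident` are all assumptions made for the example.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial class proportion
    (z=1.96 corresponds to a 95% confidence level)."""
    if n == 0:
        return 0.0, 1.0
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, centre - half), min(1.0, centre + half)

def label_until_confident(label_subsample, max_subsamples=100, max_width=0.2):
    """Iteratively label sub-samples of one sampling unit until the
    confidence interval on the class proportion is narrower than
    max_width, or until the labelling budget is exhausted.

    label_subsample() is a callable that labels the next sub-sample and
    returns 1 if it belongs to the class of interest, 0 otherwise.
    Returns the estimated proportion and the number of labels spent.
    """
    positives = 0
    for n in range(1, max_subsamples + 1):
        positives += label_subsample()
        lo, hi = wilson_interval(positives, n)
        if hi - lo <= max_width:
            break
    return positives / n, n
```

Under this stopping rule, a homogeneous sampling unit (all sub-samples agreeing) reaches the target interval width after only a handful of labels, while a mixed unit keeps consuming labels up to the budget, which is exactly the reallocation of effort from obvious to complex cases that the abstract advocates.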


