Adaptive Local Kernels Formulation of Mutual Information with Application to Active Post-Seismic Building Damage Inference

05/24/2021
by   Mohamadreza Sheibani, et al.

The abundance of training data is not guaranteed in various supervised learning applications. One of these situations is post-earthquake regional damage assessment of buildings. Querying the damage label of each building requires a thorough inspection by experts and is therefore an expensive task. A practical approach is to sample the most informative buildings in a sequential learning scheme. Active learning methods recommend the most informative cases, i.e., those able to maximally reduce the generalization error. The information-theoretic measure of mutual information (MI) is one of the most effective criteria for evaluating the informativeness of samples in a pool-based selection scenario. However, the computational complexity of the standard MI algorithm prevents its use on large datasets. A local kernels strategy was previously proposed to reduce the computational cost, but the original formulation did not allow the kernels to adapt to the observed labels. In this article, an adaptive local kernels methodology is developed that allows the kernels to conform to the observed output data while reducing the computational cost of the standard MI algorithm. The proposed algorithm operates within a Gaussian process regression (GPR) framework, where the kernel hyperparameters are updated after each label query using maximum likelihood estimation. In the sequential learning procedure, the updated hyperparameters can be used in the MI kernel matrices to improve sample suggestion performance. The advantages are demonstrated on a simulation of the 2018 Anchorage, AK, earthquake. It is shown that the proposed algorithm enables GPR to reach acceptable performance with less training data, while its computational demands remain lower than those of the standard local kernels strategy.
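The pool-based selection loop described in the abstract can be sketched with a standard greedy MI criterion on a Gaussian process: each candidate is scored by the ratio of its predictive variance given the already-selected points to its predictive variance given the remaining pool. This is a minimal illustration, not the paper's adaptive-local-kernels method; the kernel, its hyperparameters, and the synthetic building coordinates below are placeholder assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, ls=1.0, var=1.0):
    # Squared-exponential kernel; ls and var are placeholder hyperparameters
    # (in the paper's scheme these would be re-estimated by MLE after each query).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls**2)

def posterior_variance(x_idx, cond_idx, K, noise=1e-6):
    # GP predictive variance of point x_idx conditioned on the points in cond_idx.
    if len(cond_idx) == 0:
        return K[x_idx, x_idx]
    k_xA = K[x_idx, cond_idx]
    K_AA = K[np.ix_(cond_idx, cond_idx)] + noise * np.eye(len(cond_idx))
    return K[x_idx, x_idx] - k_xA @ np.linalg.solve(K_AA, k_xA)

def mi_select(K, selected, pool):
    # Greedy MI criterion: maximize variance given selected points divided by
    # variance given the rest of the unselected pool.
    best, best_score = None, -np.inf
    for y in pool:
        rest = [i for i in pool if i != y]
        score = posterior_variance(y, selected, K) / posterior_variance(y, rest, K)
        if score > best_score:
            best, best_score = y, score
    return best

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(30, 2))    # stand-in for building locations/features
K = rbf_kernel(X, X)
selected, pool = [], list(range(len(X)))
for _ in range(5):                       # query five labels sequentially
    y = mi_select(K, selected, pool)
    selected.append(y)
    pool.remove(y)
print(selected)
```

The full standard MI computation is what becomes expensive on large pools; the paper's local kernels restrict the conditioning sets to neighborhoods, and the adaptive variant additionally refreshes the hyperparameters (and hence `K`) from the labels observed so far before each selection.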


