Bayesian Active Learning for Classification and Preference Learning
Information theoretic active learning has been widely studied for probabilistic models. For simple regression an optimal myopic policy is easily tractable. However, for other tasks and with more complex models, such as classification with nonparametric models, the optimal solution is harder to compute. Current approaches make approximations to achieve tractability. We propose an approach that expresses information gain in terms of predictive entropies, and apply this method to the Gaussian Process Classifier (GPC). Our approach makes minimal approximations to the full information theoretic objective. Its experimental performance compares favourably to many popular active learning algorithms, with equal or lower computational complexity. We also compare well to decision-theoretic approaches, which are privy to more information and require much more computational time. Finally, by further developing a reformulation of binary preference learning as a classification problem, we extend our algorithm to Gaussian Process preference learning.
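The central idea of the abstract, expressing information gain as a difference of predictive entropies, can be illustrated with a short sketch. The snippet below is not the paper's exact closed-form approximation; it is a generic Monte Carlo estimate of the acquisition score for a binary GP classifier, assuming a probit likelihood and Gaussian posterior marginals over the latent function. Function names and the estimator itself are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def bald_score(mu, var, n_samples=1000, rng=None):
    """Monte Carlo estimate of the information-gain acquisition score
    I[y; f | x, D] for a binary GP classifier with a probit likelihood.

    mu, var : posterior marginal means and variances of the latent
              function at the candidate inputs (assumed Gaussian).
    """
    rng = np.random.default_rng(rng)
    mu = np.asarray(mu, dtype=float)
    var = np.asarray(var, dtype=float)

    # Draw latent function samples f ~ N(mu, var) for each candidate.
    f = mu[None, :] + np.sqrt(var)[None, :] * rng.standard_normal((n_samples, mu.size))
    p = norm.cdf(f)  # p(y = 1 | f) under the probit link

    def binary_entropy(q):
        q = np.clip(q, 1e-12, 1 - 1e-12)
        return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

    # Entropy of the marginal predictive distribution (total uncertainty about y)
    marginal_entropy = binary_entropy(p.mean(axis=0))
    # minus the expected entropy of p(y | f) (noise that more data cannot remove).
    expected_conditional_entropy = binary_entropy(p).mean(axis=0)
    return marginal_entropy - expected_conditional_entropy

# Hypothetical usage: the score favours inputs where the model is uncertain
# about y because it is uncertain about f, not merely because labels are noisy.
scores = bald_score(mu=np.array([0.0, 2.0, 0.0]), var=np.array([1.0, 1.0, 0.01]))
print(scores)
```

The first candidate (latent mean near zero with high latent variance) receives the highest score, while the third (equally ambiguous labels but a confident latent posterior) does not, which is the behaviour the predictive-entropy formulation is designed to capture.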