Quantifying error contributions of computational steps, algorithms and hyperparameter choices in image classification pipelines

02/25/2019
by   Aritra Chowdhury, et al.

Data science relies on pipelines that are organized as interdependent computational steps. Each step offers various candidate algorithms that may be used to perform a particular function, and each algorithm in turn exposes several hyperparameters. Algorithms and hyperparameters must be optimized as a whole to produce the best performance. Because each step of a typical machine learning pipeline can involve complex algorithms, the selection process is not only combinatorial; it is also important to interpret and understand the resulting pipelines. We propose a method to quantify the importance of the different layers of a pipeline by computing an error contribution relative to an agnostic choice of algorithms in that layer. We demonstrate the methodology on image classification pipelines, where the agnostic approach quantifies the error contributions of the computational steps, algorithms and hyperparameters. We show that algorithm selection and hyperparameter optimization methods can be used to quantify these error contributions, and that random search quantifies them more accurately than Bayesian optimization. Domain experts can use this methodology to understand machine learning and data analysis pipelines in terms of their individual components, which can help in prioritizing work on different components of the pipeline.
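The core idea of an agnostic error contribution can be sketched in a few lines. The following is a minimal illustration, not the paper's actual method: the pipeline steps, candidate algorithms, and toy error function are all hypothetical stand-ins, and the contribution of a step is estimated here as the expected error under a uniform (agnostic) choice at that step, with the other steps held at their best values, minus the best achievable error.

```python
import itertools

# Illustrative search space: three pipeline steps, each with candidate
# algorithms (names are stand-ins, not the paper's actual search space).
SEARCH_SPACE = {
    "preprocessing": ["none", "normalize", "pca"],
    "feature_extraction": ["haralick", "vgg", "inception"],
    "classifier": ["svm", "random_forest", "knn"],
}

def evaluate(config):
    """Stand-in for a real train/evaluate run: returns a deterministic
    toy error for a given pipeline configuration."""
    penalty = {"none": 0.10, "normalize": 0.05, "pca": 0.08,
               "haralick": 0.12, "vgg": 0.04, "inception": 0.06,
               "svm": 0.05, "random_forest": 0.07, "knn": 0.11}
    return sum(penalty[choice] for choice in config.values())

# Enumerate every configuration (random search or Bayesian optimization
# would sample this space instead of exhausting it).
configs = [dict(zip(SEARCH_SPACE, combo))
           for combo in itertools.product(*SEARCH_SPACE.values())]
best_config = min(configs, key=evaluate)
best_error = evaluate(best_config)

def error_contribution(step):
    """Mean error over an agnostic (uniform) choice of algorithm at
    `step`, other steps fixed at their best values, minus best error."""
    agnostic_errors = []
    for algo in SEARCH_SPACE[step]:
        cfg = dict(best_config)
        cfg[step] = algo
        agnostic_errors.append(evaluate(cfg))
    return sum(agnostic_errors) / len(agnostic_errors) - best_error

for step in SEARCH_SPACE:
    print(f"{step}: {error_contribution(step):.4f}")
```

A step whose candidate algorithms all perform similarly receives a contribution near zero, while a step where the agnostic choice is much worse than the best choice receives a large contribution, which is what makes the quantity useful for prioritizing tuning effort.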


