Making a Science of Model Search

09/23/2012
by J. Bergstra et al.

Many computer vision algorithms depend on a variety of parameter choices and settings that are typically hand-tuned in the course of evaluating the algorithm. While such parameter tuning is often presented as being incidental to the algorithm, correctly setting these parameters is frequently critical to realizing a method's full potential. Compounding matters, these parameters often must be re-tuned when the algorithm is applied to a new problem domain, and the tuning process itself often depends on personal experience and intuition in ways that are hard to describe. Since the performance of a given technique depends on both the fundamental quality of the algorithm and the details of its tuning, it can be difficult to determine whether a given technique is genuinely better, or simply better tuned. In this work, we propose a meta-modeling approach to support automated hyperparameter optimization, with the goal of providing practical tools that replace hand-tuning with a reproducible and unbiased optimization process. Our approach is to expose the underlying expression graph of how a performance metric (e.g. classification accuracy on validation examples) is computed from parameters that govern not only how individual processing steps are applied, but even which processing steps are included. A hyperparameter optimization algorithm transforms this graph into a program for optimizing that performance metric. Our approach yields state-of-the-art results on three disparate computer vision problems: a face-matching verification task (LFW), a face identification task (PubFig83), and an object recognition task (CIFAR-10), using a single algorithm. More broadly, we argue that the formalization of a meta-model supports more objective, reproducible, and quantitative evaluation of computer vision algorithms, and that it can serve as a valuable tool for guiding algorithm development.
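The core idea, a search space expressed as a graph in which "choice" nodes decide not just parameter values but which processing steps exist at all, can be sketched with a toy random search. This is an illustrative sketch only, not the paper's actual API: the `space` encoding, `sample`, and the stand-in `score` objective are all invented here for clarity (a real pipeline would train and evaluate a model in place of `score`).

```python
import math
import random

# A search space as a nested expression graph: the "choice" node selects
# which preprocessing step is active, and hence which sub-parameters
# (e.g. "eps") even exist in the sampled configuration.
space = {
    "preproc": ("choice", [
        {"kind": "none"},
        {"kind": "whiten", "eps": ("uniform", 1e-4, 1e-1)},
    ]),
    "C": ("loguniform", 1e-3, 1e3),  # e.g. a classifier regularization weight
}

def sample(node):
    """Recursively draw one concrete configuration from the expression graph."""
    if isinstance(node, dict):
        return {k: sample(v) for k, v in node.items()}
    if isinstance(node, tuple):
        tag = node[0]
        if tag == "choice":
            return sample(random.choice(node[1]))
        if tag == "uniform":
            return random.uniform(node[1], node[2])
        if tag == "loguniform":
            return math.exp(random.uniform(math.log(node[1]), math.log(node[2])))
    return node  # literal leaf (e.g. a string)

def score(cfg):
    # Stand-in for validation accuracy; higher is better.
    return -abs(cfg["C"] - 1.0)

best = max((sample(space) for _ in range(100)), key=score)
```

A smarter optimizer (e.g. one based on sequential model-based search) would replace the blind `sample` loop while consuming the same graph, which is the separation of concerns the abstract describes.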
