
AutoDispNet: Improving Disparity Estimation with AutoML

by Tonmoy Saikia et al.
University of Freiburg

Much research effort in computer vision is spent on optimizing existing network architectures to gain a few more percentage points on benchmarks. Recent AutoML approaches promise to relieve us of this effort. However, they are mainly designed for comparatively small-scale classification tasks. In this work, we show how to use and extend existing AutoML techniques to efficiently optimize large-scale U-Net-like encoder-decoder architectures. In particular, we leverage gradient-based neural architecture search and Bayesian optimization for hyperparameter search. The resulting optimization does not require a large company-scale compute cluster. We show results on disparity estimation that clearly outperform the manually optimized baseline and reach state-of-the-art performance.
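The gradient-based neural architecture search mentioned above follows the DARTS idea of a continuous relaxation: each edge of a searchable cell computes a softmax-weighted mixture of all candidate operations, so the architecture parameters can be trained by gradient descent alongside the network weights. A minimal NumPy sketch (the toy operations and parameter values below are illustrative assumptions, not the operation set used in the paper):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over architecture parameters.
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical candidate operations on one edge of a search cell
# (the paper searches over conv/pooling variants instead).
ops = [
    lambda x: x,                 # skip connection
    lambda x: np.maximum(x, 0),  # ReLU-like op
    lambda x: 0.5 * x,           # scaled op standing in for a conv
]

def mixed_op(x, alpha):
    """DARTS-style relaxation: the edge output is the
    softmax(alpha)-weighted sum of all candidate ops."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

# alpha is learned jointly with the network weights; after the
# search, only the op with the largest weight is retained.
alpha = np.array([0.1, 2.0, -1.0])
x = np.array([-1.0, 2.0])
y = mixed_op(x, alpha)
best = int(np.argmax(softmax(alpha)))  # discretization step -> op 1
```

During search, gradients flow through `softmax(alpha)` into the architecture parameters; at the end, each mixed edge is discretized by keeping its highest-weighted operation.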
