
A Systematic Comparison of Bayesian Deep Learning Robustness in Diabetic Retinopathy Tasks

12/22/2019
by Angelos Filos, et al.

Evaluation of Bayesian deep learning (BDL) methods is challenging. We often seek to evaluate a method's robustness and scalability, assessing whether new tools give 'better' uncertainty estimates than old ones. These evaluations are paramount for practitioners choosing the BDL tools on top of which they build their applications. Current popular evaluations of BDL methods, such as the UCI experiments, are lacking: methods that excel on these experiments often fail in applications such as medical diagnosis or autonomous driving, suggesting a pressing need for new benchmarks in the field. We propose a new BDL benchmark with a diverse set of tasks, inspired by a real-world medical imaging application: diabetic retinopathy diagnosis. The benchmark uses visual inputs (512x512 RGB images of retinas), with model uncertainty used for medical pre-screening, i.e. referring patients to an expert when the model's diagnosis is uncertain. Methods are then ranked according to metrics derived from the expert domain, reflecting the real-world use of model uncertainty in automated diagnosis. We develop multiple tasks under this application, including out-of-distribution detection and robustness to distribution shift. We then perform a systematic comparison of well-tuned BDL techniques on these tasks. From this comparison we conclude that some current techniques which solve benchmarks such as UCI 'overfit' their uncertainty to the dataset: when evaluated on our benchmark, they underperform simpler baselines. The code for the benchmark, its baselines, and a simple API for evaluating new BDL tools is made available at https://github.com/oatml/bdl-benchmarks.
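The referral setup described in the abstract (refer the most uncertain cases to an expert and score the model only on the cases it retains) can be sketched in a few lines. This is an illustrative sketch, not the benchmark's actual API: the use of predictive entropy as the uncertainty measure and the function names `predictive_entropy` and `accuracy_under_referral` are assumptions, not taken from the bdl-benchmarks repository.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the mean predictive distribution.

    probs: array of shape (n_mc_samples, n_examples, n_classes)
    holding softmax outputs from stochastic forward passes
    (e.g. MC dropout samples or deep-ensemble members).
    """
    mean_probs = probs.mean(axis=0)
    return -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=-1)

def accuracy_under_referral(y_true, probs, retain_fraction):
    """Accuracy on retained cases after referring the most uncertain
    (1 - retain_fraction) of examples to a human expert."""
    entropy = predictive_entropy(probs)
    n_retain = int(round(retain_fraction * len(y_true)))
    retained = np.argsort(entropy)[:n_retain]  # most certain first
    preds = probs.mean(axis=0).argmax(axis=-1)
    return float((preds[retained] == y_true[retained]).mean())
```

Sweeping `retain_fraction` from 1.0 down to, say, 0.5 traces the accuracy-versus-referral curve: a method with well-calibrated uncertainty should see accuracy rise monotonically as its least confident cases are handed to the expert.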

Related research

- 07/08/2020, "A Benchmark of Medical Out of Distribution Detection": There is a rise in the use of deep learning for automated medical diagno...
- 06/07/2021, "Uncertainty Baselines: Benchmarks for Uncertainty Robustness in Deep Learning": High-quality estimates of uncertainty and robustness are crucial for num...
- 06/04/2019, "Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision": While Deep Neural Networks (DNNs) have become the go-to approach in comp...
- 11/23/2022, "Benchmarking Bayesian Deep Learning on Diabetic Retinopathy Detection Tasks": Bayesian deep learning seeks to equip deep neural networks with the abil...
- 07/14/2022, "BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen Neural Networks": High-quality calibrated uncertainty estimates are crucial for numerous r...
- 12/08/2022, "Graph Learning Indexer: A Contributor-Friendly and Metadata-Rich Platform for Graph Learning Benchmarks": Establishing open and general benchmarks has been a critical driving for...
- 12/02/2021, "How not to Lie with a Benchmark: Rearranging NLP Leaderboards": Comparison with a human is an essential requirement for a benchmark for ...