
Evaluating saliency methods on artificial data with different background types

by   Céline Budding, et al.
Berlin Institute of Technology (Technische Universität Berlin)
TU Eindhoven

In recent years, many 'explainable artificial intelligence' (xAI) approaches have been developed, but they have not always been objectively evaluated. To assess the quality of heatmaps generated by various saliency methods, we developed a framework that generates artificial data containing synthetic lesions and a known ground truth map. Using this framework, we evaluated two data sets with different backgrounds, Perlin noise and 2D brain MRI slices, and found that the heatmaps vary strongly across both saliency methods and backgrounds. We strongly encourage further evaluation of saliency maps and xAI methods with this framework before they are applied in clinical or other safety-critical settings.
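The evaluation idea above can be sketched in a few lines: generate a noisy background, stamp in a synthetic "lesion" whose mask serves as the ground truth, and score a saliency heatmap against that mask. The sketch below is a minimal illustration, not the authors' implementation: it uses smoothed value noise as a stand-in for Perlin noise, and intersection-over-union as one possible agreement score; all function names and parameters are hypothetical.

```python
import numpy as np

def value_noise(shape, scale=8, seed=None):
    # Smoothed value noise as a simple stand-in for Perlin noise
    # (hypothetical approximation; the paper's exact generator is not shown here).
    rng = np.random.default_rng(seed)
    coarse = rng.random((scale + 1, scale + 1))
    ys = np.linspace(0, scale, shape[0])
    xs = np.linspace(0, scale, shape[1])
    y0 = np.floor(ys).astype(int).clip(0, scale - 1)
    x0 = np.floor(xs).astype(int).clip(0, scale - 1)
    ty = (ys - y0)[:, None]
    tx = (xs - x0)[None, :]
    # Bilinear interpolation of the coarse grid up to the target resolution.
    a = coarse[np.ix_(y0, x0)]
    b = coarse[np.ix_(y0, x0 + 1)]
    c = coarse[np.ix_(y0 + 1, x0)]
    d = coarse[np.ix_(y0 + 1, x0 + 1)]
    return (a * (1 - tx) + b * tx) * (1 - ty) + (c * (1 - tx) + d * tx) * ty

def add_lesion(image, center, radius, intensity=0.8):
    # Paint a circular synthetic lesion; the mask is the known ground truth map.
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    out = image.copy()
    out[mask] = intensity
    return out, mask.astype(float)

def iou(heatmap, truth, thresh=0.5):
    # One simple agreement score between a saliency heatmap and the ground truth.
    pred = heatmap >= thresh
    gt = truth >= 0.5
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

bg = value_noise((64, 64), scale=8, seed=0)
img, truth = add_lesion(bg, center=(32, 32), radius=6)
score = iou(truth, truth)  # a perfect heatmap scores 1.0
```

In a real evaluation, `score` would instead be computed between a method's heatmap for a trained model and `truth`, and the comparison repeated across saliency methods and background types.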
