Jindong Gu

  • Understanding Bias in Machine Learning

    Bias is known to be an impediment to fair decisions in many domains such as human resources, the public sector, and health care. Recently, hope has been expressed that the use of machine learning methods for making such decisions would diminish or even resolve the problem. At the same time, machine learning experts warn that machine learning models can be biased as well. In this article, our goal is to explain the issue of bias in machine learning from a technical perspective and to illustrate the impact that biased data can have on a machine learning model. To this end, we develop interactive plots to visualize the bias learned from synthetic data.

    09/02/2019 ∙ by Jindong Gu, et al.

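    To make this concrete, the sketch below shows how bias in synthetic training data surfaces in a learned model. It is a minimal illustration, not the article's interactive plots: the data-generating process, the `skill`/`group` features, and the use of scikit-learn are assumptions made here for the example.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical synthetic data: a "group" attribute that should be
    # irrelevant, but the historical labels are biased against group 1.
    rng = np.random.default_rng(0)
    n = 10_000
    skill = rng.normal(size=n)            # legitimate feature
    group = rng.integers(0, 2, size=n)    # sensitive attribute
    # Biased labels: group 1 needs a higher skill level to be labeled positive.
    label = (skill > 0.5 * group).astype(int)

    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, label)

    # A clearly nonzero weight on `group` shows the model has absorbed the bias
    # even though group membership carries no legitimate information.
    print("weight on skill:", model.coef_[0, 0])
    print("weight on group:", model.coef_[0, 1])
    ```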

  • Understanding Individual Decisions of CNNs via Contrastive Backpropagation

    A number of backpropagation-based approaches, such as DeConvNets, vanilla Gradient Visualization, and Guided Backpropagation, have been proposed to better understand individual decisions of deep convolutional neural networks. However, the saliency maps they produce have been proven to be non-discriminative. Recently, the Layer-wise Relevance Propagation (LRP) approach was proposed to explain the classification decisions of rectifier neural networks. In this work, we evaluate the discriminativeness of the generated explanations and analyze the theoretical foundation of LRP, i.e., Deep Taylor Decomposition. The experiments and analysis conclude that the explanations generated by LRP are also not class-discriminative. Building on LRP, we propose Contrastive Layer-wise Relevance Propagation (CLRP), which is capable of producing instance-specific, class-discriminative, pixel-wise explanations. In the experiments, we use CLRP to explain individual classification decisions and to understand the differences between neurons in those decisions. We also evaluate the explanations quantitatively with a Pointing Game and an ablation study. Both qualitative and quantitative evaluations show that CLRP generates better explanations than LRP.

    12/05/2018 ∙ by Jindong Gu, et al.

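    The contrastive idea can be sketched in a few lines. The code below is a rough, runnable approximation, not the paper's implementation: it stands in gradient × input for proper LRP relevance propagation (the two coincide only for ReLU networks without biases), uses an untrained torchvision VGG-16 purely for shapes, and treats the summed evidence of all non-target classes as the "dual" class.

    ```python
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    model = models.vgg16(weights=None).eval()  # untrained weights, shapes only
    x = torch.randn(1, 3, 224, 224, requires_grad=True)

    logits = model(x)
    target = logits.argmax(dim=1).item()

    # Relevance of the target class (gradient * input as a stand-in for LRP).
    r_target = torch.autograd.grad(logits[0, target], x, retain_graph=True)[0] * x

    # Relevance of the "dual" class: summed evidence for all other classes.
    mask = torch.ones_like(logits)
    mask[0, target] = 0.0
    r_dual = torch.autograd.grad((logits * mask).sum(), x)[0] * x

    # Contrastive map: keep only relevance specific to the target class.
    clrp = F.relu(r_target - r_dual).sum(dim=1)
    print(clrp.shape)  # torch.Size([1, 224, 224])
    ```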

  • Saliency Methods for Explaining Adversarial Attacks

    In this work, we aim to explain the classifications of adversarial images using saliency methods. Saliency methods explain individual classification decisions of neural networks by creating saliency maps. All saliency methods were originally proposed for explaining correct predictions, and recent research shows that many of them fail to explain the predictions on adversarial images. Notably, Guided Backpropagation (GuidedBP) essentially performs (partial) image recovery. In our work, numerical analysis shows that the saliency maps created by GuidedBP do contain class-discriminative information. We propose a simple and efficient way to enhance the created saliency maps. The proposed enhanced GuidedBP achieves state-of-the-art performance among saliency methods in explaining adversarial classifications.

    08/22/2019 ∙ by Jindong Gu, et al.

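    For reference, the sketch below implements plain Guided Backpropagation, the baseline the work analyzes, not the proposed enhancement. It assumes a PyTorch/torchvision setup with an untrained VGG-16; the gradient-clamping hook is the standard GuidedBP rule.

    ```python
    import torch
    import torch.nn as nn
    import torchvision.models as models

    # Guided Backpropagation: a ReLU passes a gradient backward only where
    # both the forward activation and the incoming gradient are positive.
    def guided_relu_hook(module, grad_input, grad_output):
        return (torch.clamp(grad_input[0], min=0.0),)

    model = models.vgg16(weights=None).eval()  # untrained weights, shapes only
    for m in model.modules():
        if isinstance(m, nn.ReLU):
            m.inplace = False  # in-place ops interfere with backward hooks
            m.register_full_backward_hook(guided_relu_hook)

    x = torch.randn(1, 3, 224, 224, requires_grad=True)
    logits = model(x)
    logits[0, logits.argmax()].backward()

    saliency = x.grad.abs().max(dim=1).values  # one value per pixel
    print(saliency.shape)  # torch.Size([1, 224, 224])
    ```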