Hidetoshi Furukawa

  • Deep Learning for Target Classification from SAR Imagery: Data Augmentation and Translation Invariance

    This report deals with the translation invariance of convolutional neural networks (CNNs) for automatic target recognition (ATR) from synthetic aperture radar (SAR) imagery. In particular, the translation invariance of CNNs for SAR ATR represents robustness against misalignment of target chips extracted from SAR images. To understand this translation invariance, we trained CNNs to classify MSTAR target chips into ten classes, both with and without data augmentation, and then visualized the resulting translation invariance. According to our results, even with a deep residual network, the translation invariance of a CNN trained without data augmentation on aligned images such as the MSTAR target chips is limited; a more important factor for translation invariance is the use of augmented training data. Furthermore, our CNN using augmented training data achieved a state-of-the-art classification accuracy of 99.6%. These results show the importance of domain-specific data augmentation.

    08/26/2017 ∙ by Hidetoshi Furukawa, et al.

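    The augmentation studied here boils down to randomly translating the aligned chips before training. Below is a minimal NumPy sketch of such a shift augmentation on stand-in data; the shift range, function name, and implementation are assumptions for illustration, not taken from the paper.

    ```python
    import numpy as np

    def random_shift(chip, max_shift=8, rng=None):
        # Zero-pad the chip, then crop a window of the original size at a
        # random offset, simulating misalignment of the extracted chip.
        # (max_shift=8 is an assumed value, not the paper's setting.)
        rng = rng or np.random.default_rng()
        h, w = chip.shape
        padded = np.pad(chip, max_shift)  # constant (zero) padding
        dy, dx = rng.integers(0, 2 * max_shift + 1, size=2)
        return padded[dy:dy + h, dx:dx + w]

    # Stand-in for 128x128 MSTAR-like chips; real chips would come from the dataset.
    chips = np.random.rand(8, 128, 128).astype(np.float32)
    augmented = np.stack([random_shift(c) for c in chips])
    print(augmented.shape)  # (8, 128, 128)
    ```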

  • Deep Learning for End-to-End Automatic Target Recognition from Synthetic Aperture Radar Imagery

    The standard architecture of synthetic aperture radar (SAR) automatic target recognition (ATR) consists of three stages: detection, discrimination, and classification. In recent years, convolutional neural networks (CNNs) for SAR ATR have been proposed, but most of them classify target classes from a target chip extracted from SAR imagery, addressing only the third stage of SAR ATR. In this report, we propose a novel CNN for end-to-end ATR from SAR imagery. The CNN, named verification support network (VersNet), performs all three stages of SAR ATR end-to-end. VersNet takes as input a SAR image of arbitrary size containing multiple classes and multiple targets, and outputs a SAR ATR image representing the position, class, and pose of each detected target. This report describes the evaluation results of VersNet, which was trained to output scores for all 12 classes (10 target classes, a target front class, and a background class) for each pixel, using the moving and stationary target acquisition and recognition (MSTAR) public dataset.

    01/25/2018 ∙ by Hidetoshi Furukawa, et al.

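    Because VersNet is described as mapping a SAR image of arbitrary size to per-pixel class scores, a fully convolutional network is the natural fit for that input/output contract. Here is a minimal, hypothetical PyTorch sketch of the idea; the layer sizes and trunk are invented for illustration and are not the authors' architecture.

    ```python
    import torch
    import torch.nn as nn

    class TinyFCN(nn.Module):
        """Minimal fully convolutional sketch in the spirit of VersNet:
        arbitrary-size SAR image in, per-pixel class scores out."""
        def __init__(self, num_classes=12):  # 10 targets + target front + background
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            )
            self.classifier = nn.Conv2d(32, num_classes, 1)  # 1x1 conv head

        def forward(self, x):
            return self.classifier(self.features(x))

    # Any input size works because the network is fully convolutional.
    img = torch.randn(1, 1, 300, 400)  # single-channel SAR image, arbitrary size
    scores = TinyFCN()(img)            # (1, 12, 300, 400) per-pixel class scores
    pred = scores.argmax(dim=1)        # per-pixel class map
    print(pred.shape)                  # torch.Size([1, 300, 400])
    ```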

  • Bias Estimation for Decentralized Sensor Fusion -- Multi-Agent Based Bias Estimation Method

    In multi-sensor data fusion (sensor fusion), sensor biases (offsets) often degrade the accuracy of the correlation and integration results for tracked targets. Several methods have therefore been proposed to estimate and compensate for these biases. However, most of them perform bias estimation and sensor fusion simultaneously, applying a Kalman filter after collecting the plot data centrally. Hence, these methods cannot fuse track data produced by a tracking filter at each sensor node. This report proposes a new bias estimation method based on a multi-agent model, in order to estimate and compensate for the biases in decentralized sensor fusion.

    07/31/2017 ∙ by Hidetoshi Furukawa, et al.

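    The abstract does not spell out the estimation algorithm, so the following is only a toy illustration of the general idea: each node treats its bias as the systematic disagreement between its own bias-compensated track and the consensus of its peers' tracks, and iterates toward agreement. All names, numbers, and the update rule are invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup: three sensor nodes track the same target; each node's
    # local track is corrupted by a constant offset (bias) plus noise.
    true_track = np.cumsum(rng.normal(0, 1, size=(50, 2)), axis=0)
    biases = np.array([[3.0, -1.0], [-2.0, 0.5], [0.0, 2.0]])  # unknown to the nodes
    tracks = true_track[None] + biases[:, None, :] + rng.normal(0, 0.1, (3, 50, 2))

    # Multi-agent-style iteration: each node nudges its bias estimate toward
    # agreement with the bias-compensated tracks of its peers. This is a
    # stand-in for the report's method, which the abstract does not detail.
    est = np.zeros_like(biases)
    for _ in range(100):
        compensated = tracks - est[:, None, :]
        consensus = compensated.mean(axis=0)             # agreed target track
        est += 0.5 * (compensated - consensus).mean(axis=1)

    print(np.round(est, 2))  # recovers the biases up to a common offset
    ```

    Note that with relative measurements alone the biases are only identifiable up to a common translation, which is why the toy recovers each bias relative to the mean bias.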

  • SAVERS: SAR ATR with Verification Support Based on Convolutional Neural Network

    We propose a new convolutional neural network (CNN) that performs coarse and fine segmentation for an end-to-end synthetic aperture radar (SAR) automatic target recognition (ATR) system. In recent years, many CNNs for SAR ATR using deep learning have been proposed, but most of them classify target classes from fixed-size target chips extracted from SAR imagery. Previously, we proposed a CNN that outputs scores for multiple target classes and a background class for each pixel of SAR imagery of arbitrary size containing multiple targets, as fine segmentation. However, a human still had to judge the CNN's segmentation result. In this report, we propose a CNN called SAR ATR with verification support (SAVERS), which performs both region-wise (i.e., coarse) and pixel-wise (i.e., fine) segmentation. SAVERS discriminates between target and non-target, and classifies the multiple target classes and a non-target class via coarse segmentation. This report describes the evaluation results of SAVERS using the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset.

    05/14/2018 ∙ by Hidetoshi Furukawa, et al.

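    SAVERS couples a region-wise (coarse) output with a pixel-wise (fine) output. A hypothetical two-head PyTorch sketch of that coupling follows; the shared trunk, head designs, pooling factor, and class count are assumptions for illustration, not the authors' architecture.

    ```python
    import torch
    import torch.nn as nn

    class TwoHeadSegNet(nn.Module):
        """Illustrative two-head CNN in the spirit of SAVERS: a shared trunk
        feeding a pixel-wise (fine) head and a downsampled region-wise
        (coarse) head."""
        def __init__(self, num_classes=11):  # assumed: 10 target classes + non-target
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            )
            self.fine = nn.Conv2d(32, num_classes, 1)  # per-pixel scores
            self.coarse = nn.Sequential(               # per-region scores
                nn.AvgPool2d(16), nn.Conv2d(32, num_classes, 1)
            )

        def forward(self, x):
            h = self.trunk(x)
            return self.coarse(h), self.fine(h)

    img = torch.randn(1, 1, 256, 256)  # single-channel SAR image
    coarse, fine = TwoHeadSegNet()(img)
    print(coarse.shape, fine.shape)    # (1, 11, 16, 16) and (1, 11, 256, 256)
    ```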