Hardware Acceleration of Explainable Machine Learning using Tensor Processing Units

by Zhixin Pan, et al.

Machine learning (ML) has achieved human-level performance in various fields. However, due to its black-box nature, it lacks the ability to explain its outcomes. While existing explainable ML methods are promising, almost all of them formulate interpretability as an optimization problem. Such a formulation leads to numerous iterations of time-consuming, complex computations, which limits their applicability in real-time scenarios. In this paper, we propose a novel framework for accelerating explainable ML using Tensor Processing Units (TPUs). The proposed framework exploits the synergy between matrix convolution and the Fourier transform, taking full advantage of the TPU's natural ability to accelerate matrix computations. Specifically, this paper makes three important contributions. (1) To the best of our knowledge, our proposed work is the first attempt to enable hardware acceleration of explainable ML using TPUs. (2) Our approach is applicable across a wide variety of ML algorithms, and effective utilization of TPU-based acceleration can lead to real-time outcome interpretation. (3) Extensive experimental results demonstrate that our approach can provide an order-of-magnitude speedup in both classification time (25x on average) and interpretation time (13x on average) compared to state-of-the-art techniques.
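The "synergy between matrix convolution and Fourier transform" mentioned above refers to the convolution theorem: a convolution in the original domain becomes an element-wise product in the Fourier domain, which reduces to the dense matrix arithmetic that TPUs accelerate. A minimal NumPy sketch of that equivalence (purely illustrative; not the paper's implementation, and the function name `conv_via_fft` is our own):

```python
import numpy as np

def conv_via_fft(signal, kernel):
    """1-D linear convolution computed through the FFT (convolution theorem).

    Zero-pads both inputs to the full output length n, multiplies their
    spectra element-wise, and transforms back. For real inputs the result
    is real up to floating-point noise, so we discard the imaginary part.
    """
    n = len(signal) + len(kernel) - 1
    return np.real(np.fft.ifft(np.fft.fft(signal, n) * np.fft.fft(kernel, n)))

x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, 0.0, -1.0])

direct = np.convolve(x, k)      # direct (time-domain) convolution
via_fft = conv_via_fft(x, k)    # Fourier-domain equivalent
assert np.allclose(direct, via_fft)
```

On a TPU the same idea applies to batched 2-D convolutions: the FFT, element-wise product, and inverse FFT all map onto the matrix units, avoiding the iterative sliding-window computation.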



