Human Interpretable AI: Enhancing Tsetlin Machine Stochasticity with Drop Clause

by Jivitesh Sharma, et al.

In this article, we introduce a novel variant of the Tsetlin machine (TM) that randomly drops clauses, the key learning elements of a TM. In effect, a TM with drop clause ignores a random selection of the clauses in each epoch, selected according to a predefined probability. In this way, additional stochasticity is introduced into the learning phase of the TM. Along with producing more distinct and well-structured patterns that improve performance, we also show that dropping clauses increases learning robustness. To explore the effects clause dropping has on accuracy, training time, and interpretability, we conduct extensive experiments on various benchmark datasets in natural language processing (NLP) (IMDb and SST2) as well as computer vision (MNIST and CIFAR10). In brief, we observe from +2% up to +4% increase in accuracy and 2x to 4x faster learning. We further employ the Convolutional TM to document interpretable results on the CIFAR10 dataset. To the best of our knowledge, this is the first time an interpretable machine learning algorithm has been used to produce pixel-level human-interpretable results on CIFAR10. Also, unlike previous interpretable methods that focus on attention visualisation or gradient interpretability, we show that the TM is a more general interpretable method. That is, by producing rule-based propositional logic expressions that are human-interpretable, the TM can explain how it classifies a particular instance at the pixel level for computer vision and at the word level for NLP.
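The drop-clause mechanism described above can be sketched in a few lines: each epoch, a Bernoulli mask with a predefined drop probability selects which clauses participate, and the votes of dropped clauses are simply ignored. This is a minimal illustrative sketch, not the authors' implementation; the function names, the NumPy representation of clause outputs, and the polarity vector are all assumptions made here for clarity.

```python
import numpy as np

def sample_clause_mask(n_clauses, p_drop, rng):
    """Sample a per-epoch mask: True means the clause stays active.
    Each clause is dropped independently with probability p_drop."""
    return rng.random(n_clauses) >= p_drop

def masked_class_sum(clause_outputs, polarities, mask):
    """Class score as the signed sum of votes from active clauses only.
    clause_outputs: 0/1 clause evaluations; polarities: +1/-1 per clause;
    mask: the per-epoch drop-clause mask. Dropped clauses contribute 0."""
    return int(np.sum(clause_outputs * polarities * mask))

# Hypothetical usage: 8 clauses, 25% drop probability, alternating polarity.
rng = np.random.default_rng(0)
mask = sample_clause_mask(8, 0.25, rng)
score = masked_class_sum(np.ones(8, dtype=int),
                         np.array([1, -1] * 4), mask)
```

With `p_drop = 0` this reduces to the standard TM vote sum, so the mechanism can be switched off without changing the rest of the pipeline.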
