Background Knowledge Injection for Interpretable Sequence Classification

06/25/2020
by Severin Gsponer, et al.

Sequence classification is the supervised learning task of building models that predict class labels for unseen sequences of symbols. Although accuracy is paramount, in certain scenarios interpretability is a must. Unfortunately, such a trade-off is often hard to achieve, since we lack human-independent interpretability metrics. We introduce a novel sequence learning algorithm that combines (i) linear classifiers, which are known to strike a good balance between predictive power and interpretability, and (ii) background knowledge embeddings. We extend the classic subsequence feature space with groups of symbols generated from background knowledge injected via word or graph embeddings, and use this new feature space to learn a linear classifier. We also present a new measure to evaluate the interpretability of a set of symbolic features based on the symbol embeddings. Experiments on human activity recognition from wearables and on amino acid sequence classification show that our approach preserves predictive power while delivering more interpretable models.
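The abstract is terse about how the subsequence feature space is extended with symbol groups, so here is a minimal, self-contained Python sketch of one plausible reading, not the authors' implementation (the paper builds on an all-subsequence linear learner). The clustering step, the k-mer windows, the toy amino-acid data, and the names `group_of`, `expand`, and `cohesion` are all illustrative assumptions; real symbol embeddings would come from pretrained word or graph embeddings encoding background knowledge.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy symbol embeddings standing in for pretrained word/graph embeddings.
# In the paper these would carry background knowledge; here they are random.
rng = np.random.default_rng(0)
symbols = list("ACDEFGHIKLMNPQRSTVWY")  # e.g., the 20 amino acids
embeddings = {s: rng.normal(size=8) for s in symbols}

# Step 1 (assumed): form symbol groups by clustering symbols in embedding
# space; each cluster becomes a "group symbol" generalizing its members.
n_groups = 5
kmeans = KMeans(n_clusters=n_groups, n_init=10, random_state=0)
labels = kmeans.fit_predict(np.array([embeddings[s] for s in symbols]))
group_of = {s: f"G{labels[i]}" for i, s in enumerate(symbols)}

# Step 2 (assumed): expand each sequence into k-mer features over both the
# original symbols and their group symbols, approximating the extended
# feature space described in the abstract.
def expand(seq, k=3):
    feats = []
    for i in range(len(seq) - k + 1):
        window = seq[i:i + k]
        feats.append("_".join(window))                       # literal k-mer
        feats.append("_".join(group_of[c] for c in window))  # generalized k-mer
    return " ".join(feats)

# Step 3: fit a sparse linear classifier on the expanded feature space.
X_raw = ["ACDEFG", "ACDEFH", "WYWYWY", "YWYWYW"]
y = [0, 0, 1, 1]
model = make_pipeline(
    CountVectorizer(tokenizer=str.split),
    LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
)
model.fit([expand(s) for s in X_raw], y)
print(model.predict([expand("ACDEFG"), expand("WYWYWY")]))

# A hypothetical stand-in for the paper's interpretability measure: average
# pairwise cosine similarity of the embeddings of the symbols appearing in a
# feature set (cohesive symbol sets are assumed easier to interpret).
def cohesion(feature_symbols):
    vecs = np.array([embeddings[s] for s in feature_symbols])
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = vecs @ vecs.T
    return (sims.sum() - len(vecs)) / (len(vecs) * (len(vecs) - 1))

print(round(cohesion(list("ACD")), 3))
```

Emitting both the literal and the generalized k-mer lets the L1-regularized model select whichever level of abstraction predicts best, which is one way embedding-derived symbol groups could yield more interpretable features without sacrificing accuracy.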


