Leveraging Semantic Embeddings for Safety-Critical Applications

05/19/2019 ∙ by Thomas Brunner, et al. ∙ Technische Universität München

Semantic Embeddings are a popular way to represent knowledge in the field of zero-shot learning. We observe their interpretability and discuss their potential utility in a safety-critical context. Concretely, we propose to use them to add introspection and error detection capabilities to neural network classifiers. First, we show how to create embeddings from symbolic domain knowledge. We discuss how to use them for interpreting mispredictions and propose a simple error detection scheme. We then introduce the concept of semantic distance: a real-valued score that measures confidence in the semantic space. We evaluate this score on a traffic sign classifier and find that it achieves near state-of-the-art performance, while being significantly faster to compute than other confidence scores. Our approach requires no changes to the original network and is thus applicable to any task for which domain knowledge is available.


1 Introduction

Despite their remarkable performance, deep neural networks often produce errors (i.e. mispredictions) that seem illogical to a human observer. Why was a traffic sign misclassified? Why was a pedestrian not detected? What was the internal state of the network at the time, and what information did it contain?

Naturally, these questions are of great interest when developing safety-critical applications. Consider the field of automated driving: in an industry that depends not only on safety itself, but also on how customers perceive it, there is a need for systems that can explain their decisions, and do so in a way that appears rational to humans.

But this is currently not the case. In large neural networks, knowledge is typically so entangled that it cannot be easily interpreted [Bengio:2013:RLR:2498740.2498889]. Even worse, when a mistake is made, predictors often report high confidence scores (e.g. through softmax activations), when in reality they should report significant uncertainty [Gal:2016:DBA:3045390.3045502].

So what is the missing link? Humans are often equipped with additional domain knowledge that captures the semantics of the task at hand. This allows them to judge whether a result seems plausible and to discard it otherwise. It would be desirable for neural-network-based systems to do the same: capture the semantics of the current situation, use this knowledge to perform sanity checks, and finally report the confidence they have in their own decisions.

To achieve this goal, we draw inspiration from the field of zero-shot learning. There, a variety of methods exists for constructing so-called semantic embeddings [Palatucci2009], which encode semantic information into vector spaces and can easily be applied to neural network features.

In zero-shot learning, these projections are used to recognize images of previously unseen classes, based on their semantic attributes. Here, we propose to leverage the same representations in a safety context, and thereby gain interpretability and error detection capabilities.
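To make the idea concrete, the following is a minimal sketch (in Python/NumPy, not the authors' implementation): classifier features are projected into an attribute space derived from domain knowledge, and the distance between the projected feature and the attribute vector of the predicted class serves as a semantic confidence score. The attribute matrix, the least-squares projection, and the rejection threshold below are illustrative assumptions.

# Minimal sketch (not the authors' exact method): project classifier features
# into a hand-crafted semantic attribute space and use the distance to the
# attribute vector of the predicted class as a "semantic distance" score.
import numpy as np

# Hypothetical attribute matrix: one row per class, one column per binary
# attribute derived from domain knowledge (e.g. shape and color of a sign).
CLASS_ATTRIBUTES = np.array([
    [1, 0, 0, 1],   # class 0: round shape, red border
    [1, 0, 1, 0],   # class 1: round shape, blue background
    [0, 1, 0, 1],   # class 2: triangular shape, red border
], dtype=float)

def fit_projection(features, labels):
    """Least-squares linear map from feature space to attribute space.

    features: (n_samples, n_features) penultimate-layer activations
    labels:   (n_samples,) integer class ids
    """
    targets = CLASS_ATTRIBUTES[labels]               # (n_samples, n_attributes)
    W, *_ = np.linalg.lstsq(features, targets, rcond=None)
    return W                                         # (n_features, n_attributes)

def semantic_distance(feature, W, predicted_class):
    """Euclidean distance between the projected feature and the attribute
    vector of the predicted class; larger values suggest lower confidence."""
    projected = feature @ W
    return np.linalg.norm(projected - CLASS_ATTRIBUTES[predicted_class])

def flag_error(feature, W, predicted_class, threshold=1.0):
    """Simple error-detection rule: reject the prediction if its semantic
    distance exceeds a threshold chosen on a validation set."""
    return semantic_distance(feature, W, predicted_class) > threshold

Because the projection operates on the features of an already-trained classifier, the original network itself remains unchanged, in line with the approach described in the abstract.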

Figure 1: Semantic embedding for a traffic sign classifier. Features are projected to a representation which is directly derived from domain knowledge about the classification task.