Thought Flow Nets: From Single Predictions to Trains of Model Thought

07/26/2021
by Hendrik Schuff, et al.

When humans solve complex problems, they rarely come up with a decision right away. Instead, they start with an intuitive decision, reflect upon it, spot mistakes, resolve contradictions, and jump between different hypotheses. Thus, they create a sequence of ideas and follow a train of thought that ultimately reaches a conclusive decision. In contrast, today's neural classification models are mostly trained to map an input to a single, fixed output. In this paper, we investigate how we can give models the opportunity of a second, third, and k-th thought. We take inspiration from Hegel's dialectics and propose a method that turns an existing classifier's class prediction (such as the image class forest) into a sequence of predictions (such as forest → tree → mushroom). Concretely, we propose a correction module that is trained to estimate the model's correctness, as well as an iterative prediction update based on the prediction's gradient. Our approach results in a dynamic system over class probability distributions: the thought flow. We evaluate our method on diverse datasets and tasks from computer vision and natural language processing. We observe surprisingly complex but intuitive behavior and demonstrate that our method (i) can correct misclassifications, (ii) strengthens model performance, (iii) is robust to high levels of adversarial attacks, (iv) can increase accuracy by up to 4% in certain settings, and (v) provides a tool for model interpretability that uncovers model knowledge which otherwise remains invisible in a single distribution prediction.
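The core mechanism described above can be sketched in a few lines: a correctness head scores the current class distribution, and the logits are updated by gradient ascent on that score, producing a trajectory of predictions. This is a minimal NumPy sketch under stated assumptions: the correctness head here is a toy linear-sigmoid function with random weights (in the paper it is a trained module), and names such as `thought_flow_step` are illustrative, not the authors' API.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical correctness head: c(p) = sigmoid(w . p + b), estimating the
# probability that the prediction p is correct. In the paper this module is
# learned; here its weights are random for illustration only.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
b = 0.0

def correctness(p):
    return 1.0 / (1.0 + np.exp(-(w @ p + b)))

def thought_flow_step(z, lr=0.5):
    """One thought-flow update: ascend the correctness estimate w.r.t. logits z."""
    p = softmax(z)
    c = correctness(p)
    # Chain rule: dc/dp = c(1-c) * w, and the softmax Jacobian gives
    # dc/dz_j = p_j * (dc/dp_j - p . dc/dp).
    dc_dp = c * (1.0 - c) * w
    grad_z = p * (dc_dp - p @ dc_dp)
    return z + lr * grad_z

# Iterating the update yields a sequence of predictions (the "train of thought").
z0 = np.array([2.0, 1.0, 0.5, 0.0, -1.0])
c0 = correctness(softmax(z0))
z = z0
trajectory = [int(np.argmax(z))]
for _ in range(10):
    z = thought_flow_step(z)
    trajectory.append(int(np.argmax(z)))
```

If the gradient ascent moves probability mass across a decision boundary, `trajectory` records a change of predicted class, which is the sequence-of-predictions behavior (e.g. forest → tree → mushroom) the method is designed to expose.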

