diagNNose: A Library for Neural Activation Analysis

11/13/2020
by Jaap Jumelet, et al.

In this paper we introduce diagNNose, an open source library for analysing the activations of deep neural networks. diagNNose contains a wide array of interpretability techniques that provide fundamental insights into the inner workings of neural networks. We demonstrate the functionality of diagNNose with a case study on subject-verb agreement within language models. diagNNose is available at https://github.com/i-machine-think/diagnnose.
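The core technique behind activation analysis libraries like diagNNose is capturing the intermediate activations a network produces during a forward pass. The snippet below is a minimal, hypothetical sketch of that idea using standard PyTorch forward hooks on a toy LSTM; it is not diagNNose's actual API (see the repository for that), and the model and names are placeholders.

```python
import torch
import torch.nn as nn

# Toy recurrent model standing in for a language model (placeholder, not diagNNose's API).
model = nn.LSTM(input_size=8, hidden_size=16, num_layers=2, batch_first=True)

activations = {}

def save_activation(name):
    # Returns a forward hook that stores the module's output under `name`.
    def hook(module, inputs, output):
        # nn.LSTM returns (seq_output, (h_n, c_n)); keep the per-token sequence output.
        activations[name] = output[0].detach()
    return hook

handle = model.register_forward_hook(save_activation("lstm"))

inputs = torch.randn(1, 5, 8)  # (batch, seq_len, embedding_dim)
model(inputs)
handle.remove()

# One activation vector per token, ready for downstream analysis
# (e.g. diagnostic classifiers or agreement probes).
print(activations["lstm"].shape)  # torch.Size([1, 5, 16])
```

Extracted activations like these are what interpretability methods (diagnostic probes, contextual decomposition, and similar) operate on, for instance to test whether a model's hidden states encode the number of the subject in a subject-verb agreement task.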
