Letters of the Alphabet: Discovering Natural Feature Sets

02/18/2022
by   Ezana N. Beyenne, et al.

Deep learning networks discover intricate features in large datasets using the backpropagation algorithm, which repeatedly adjusts the weights of the network's connections. Examining the behavior of the "hidden" nodes between the input and output layers provides insight into how neural networks build feature representations. A series of experiments, each building on the last, shows that activity differences computed within a layer can guide learning. We use a simple neural network with a single hidden layer and an output layer, trained on a dataset of alphabet letters, where each letter is encoded as 81 input nodes taking values of 0 and 1. The first experiment explains how the hidden layer of this simple network represents the features of the input data. The second experiment attempts to reverse-engineer the network to discover the alphabet's natural feature sets. By understanding how the network interprets features, we can see how it derives the natural feature sets for a given dataset. This understanding is an essential step toward deep generative models such as Boltzmann machines, a class of unsupervised deep learning algorithms whose primary function is to find the natural feature sets of a given dataset.
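The setup described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' exact configuration: the letter patterns, hidden-layer size, learning rate, and loss are all assumptions; only the 81 binary inputs, single hidden layer, and output layer come from the abstract. The sketch trains by backpropagation and then inspects the hidden activations, whose differences between inputs reveal the learned feature representation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_letter(rows):
    """Flatten a 9x9 grid of '0'/'1' strings into an 81-element input vector."""
    return np.array([[int(c) for c in r] for r in rows], dtype=float).reshape(-1)

# Two toy "letters" (hypothetical patterns, not real glyphs):
L_vert = make_letter(["100000000"] * 9)                   # vertical stroke
L_horz = make_letter(["111111111"] + ["000000000"] * 8)   # horizontal stroke

X = np.stack([L_vert, L_horz])   # inputs, shape (2, 81)
Y = np.eye(2)                    # one-hot targets, shape (2, 2)

# One hidden layer of 4 nodes, sigmoid activations (an illustrative choice).
W1 = rng.normal(0, 0.5, (81, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 2));  b2 = np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Backpropagation: repeatedly adjust connection weights to reduce squared error.
lr = 0.5
for _ in range(5000):
    H = sigmoid(X @ W1 + b1)         # hidden-layer activations
    O = sigmoid(H @ W2 + b2)         # output-layer activations
    dO = (O - Y) * O * (1 - O)       # error signal at the output layer
    dH = (dO @ W2.T) * H * (1 - H)   # error propagated back to the hidden layer
    W2 -= lr * H.T @ dO; b2 -= lr * dO.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

# After training, the hidden activations differ between the two letters:
# that activity difference within the layer is the learned feature code.
H = sigmoid(X @ W1 + b1)
print(np.round(H, 2))                            # hidden representation per letter
print(np.argmax(sigmoid(H @ W2 + b2), axis=1))   # predicted classes
```

Reading `W1` column by column (reshaping each column back to 9x9) is one simple way to "reverse-engineer" what stroke pattern each hidden node has come to detect, in the spirit of the second experiment.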


