Learning the Enigma with Recurrent Neural Networks

08/24/2017
by Sam Greydanus, et al.

Recurrent neural networks (RNNs) represent the state of the art in translation, image captioning, and speech recognition. They are also capable of learning algorithmic tasks such as long addition, copying, and sorting from a set of training examples. We demonstrate that RNNs can learn decryption algorithms -- the mappings from ciphertext to plaintext -- for three polyalphabetic ciphers (Vigenère, Autokey, and Enigma). Most notably, we demonstrate that an RNN with a 3000-unit Long Short-Term Memory (LSTM) cell can learn the decryption function of the Enigma machine. We argue that our model learns efficient internal representations of these ciphers, and we support this claim 1) by examining the activations of individual memory neurons and 2) by comparing memory usage across the three ciphers. To be clear, our work is not aimed at 'cracking' the Enigma cipher. However, we do show that our model can perform elementary cryptanalysis by running known-plaintext attacks on the Vigenère and Autokey ciphers. Our results indicate that RNNs can learn algorithmic representations of black-box polyalphabetic ciphers and that these representations are useful for cryptanalysis.
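The abstract describes training an LSTM on example pairs so that it learns a cipher's decryption mapping from data alone. As a rough, hypothetical sketch of the data-generation step (not taken from the paper; the sequence length, key length, and function names below are illustrative assumptions), the following Python code implements the Vigenère and Autokey encryption rules and samples training pairs:

```python
import random
import string

ALPHABET = string.ascii_uppercase  # 26-letter alphabet used by all three ciphers

def vigenere_encrypt(plaintext, key):
    """Shift each plaintext letter by the corresponding letter of the repeating key."""
    return "".join(
        ALPHABET[(ALPHABET.index(p) + ALPHABET.index(key[i % len(key)])) % 26]
        for i, p in enumerate(plaintext)
    )

def autokey_encrypt(plaintext, key):
    """Like Vigenère, but the keystream is the key followed by the plaintext itself."""
    keystream = (key + plaintext)[: len(plaintext)]
    return "".join(
        ALPHABET[(ALPHABET.index(p) + ALPHABET.index(k)) % 26]
        for p, k in zip(plaintext, keystream)
    )

def make_training_pair(seq_len=20, key_len=6, cipher=vigenere_encrypt):
    """Sample a random plaintext and key, and return ((key, ciphertext), plaintext).

    A sequence model would see the key and ciphertext as input and be trained
    to emit the plaintext, i.e. it must learn the decryption mapping from
    examples alone, treating the cipher as a black box.
    """
    plaintext = "".join(random.choices(ALPHABET, k=seq_len))
    key = "".join(random.choices(ALPHABET, k=key_len))
    return (key, cipher(plaintext, key)), plaintext

if __name__ == "__main__":
    (key, ciphertext), plaintext = make_training_pair()
    print("key:", key, "ciphertext:", ciphertext, "plaintext:", plaintext)
```

In a setup like the one the abstract describes, the key and ciphertext would be presented to the RNN as the input sequence and the plaintext as the target sequence; the sketch above only shows how such (input, target) pairs could be sampled.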


