
Pseudo Random Number Generation through Reinforcement Learning and Recurrent Neural Networks
A Pseudo-Random Number Generator (PRNG) is any algorithm that generates a sequence of numbers approximating the properties of random numbers. Such numbers are widely employed in mid-level cryptography and in software applications. Test suites are used to evaluate the quality of a PRNG by checking statistical properties of the generated sequences, which are commonly represented bit by bit. This paper proposes a Reinforcement Learning (RL) approach to generating PRNGs from scratch by learning a policy for a partially observable Markov Decision Process (MDP), where the full state is the period of the generated sequence and the observation at each time step is the last sequence of bits appended to that state. We use a Long Short-Term Memory (LSTM) architecture to model the temporal relationship between observations at different time steps, tasking the LSTM memory with extracting significant features of the hidden portion of the MDP's states. We show that modeling a PRNG with a partially observable MDP and an LSTM architecture largely improves on the results of the fully observable feed-forward RL approach introduced in previous work.
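The POMDP framing described above can be illustrated with a minimal sketch. The environment below is a hypothetical simplification, not the authors' implementation: the agent emits one bit per step, only the last `obs_len` bits are observable (the rest of the sequence is the hidden part of the state), and the terminal reward is the p-value of the NIST-style monobit (frequency) test on the full sequence, standing in for the statistical test suites the paper mentions. The class name `PRNGEnv` and all parameters are assumptions for illustration.

```python
import math
import random


class PRNGEnv:
    """Toy partially observable environment for learning a PRNG policy.

    The full state is the entire bit sequence generated so far; the
    observation exposes only the last `obs_len` bits, mirroring the
    POMDP formulation in the abstract. The reward is a stand-in: the
    p-value of the monobit (frequency) test from NIST SP 800-22,
    granted once at the end of the episode.
    """

    def __init__(self, obs_len=8, horizon=128):
        self.obs_len = obs_len
        self.horizon = horizon
        self.reset()

    def reset(self):
        self.bits = []
        return self._observe()

    def _observe(self):
        # Only the most recent obs_len bits are visible to the agent;
        # pad with zeros at the start of an episode.
        window = self.bits[-self.obs_len:]
        return [0] * (self.obs_len - len(window)) + window

    def _monobit_pvalue(self):
        # Monobit test: map bits to +/-1, sum, and compute the p-value
        # via the complementary error function.
        n = len(self.bits)
        s = sum(2 * b - 1 for b in self.bits)
        return math.erfc(abs(s) / math.sqrt(2 * n))

    def step(self, bit):
        self.bits.append(bit)
        done = len(self.bits) >= self.horizon
        reward = self._monobit_pvalue() if done else 0.0
        return self._observe(), reward, done


# Usage: roll out a uniform-random baseline policy. An LSTM policy, as
# in the paper, would instead condition each bit on its memory of past
# observations.
env = PRNGEnv()
obs, done = env.reset(), False
while not done:
    obs, reward, done = env.step(random.getrandbits(1))
```

A recurrent policy is a natural fit here precisely because the observation window hides most of the state: the LSTM's memory must summarize the unseen prefix of the sequence, which is the role the abstract assigns to it.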