A Deep Reinforcement Learning Trader without Offline Training

03/01/2023
by Boian Lazov, et al.

In this paper we pursue the question of a fully online trading algorithm, i.e. one that does not need offline training on previously gathered data. For this task we use Double Deep Q-learning in the episodic setting, with Fast Learning Networks approximating the expected reward Q. Additionally, we define the possible terminal states of an episode so as to introduce a mechanism that conserves some of the money in the trading pool when market conditions are seen as unfavourable. Some of this money is taken as profit and some is reused at a later time according to certain criteria. After describing the algorithm, we test it on 1-minute-tick data for Cardano's price on Binance. The agent performs better than trading with randomly chosen actions at each timestep, both on the whole dataset and on different subsets capturing different market trends.
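The abstract's core update rule is the Double Q-learning target: the online network selects the greedy action at the next state, while a separate target network evaluates it, which reduces the overestimation bias of plain Q-learning. Below is a minimal sketch of that target computation. The function names and the plain-callable Q-networks are illustrative assumptions; the paper itself uses Fast Learning Networks as the approximators.

```python
import numpy as np

def double_q_target(reward, next_state, q_online, q_target, gamma=0.99, done=False):
    """Double DQN target: the online network picks the greedy action,
    the target network evaluates that action's value.
    (Sketch only; q_online/q_target stand in for the paper's
    Fast Learning Network approximators.)"""
    if done:
        # Terminal states (including the paper's money-conserving
        # terminations) contribute only the immediate reward.
        return reward
    best_action = int(np.argmax(q_online(next_state)))
    return reward + gamma * q_target(next_state)[best_action]

# Toy usage with hypothetical fixed Q-values for a single next state:
q_online = lambda s: np.array([1.0, 3.0])   # online net prefers action 1
q_target = lambda s: np.array([2.0, 0.5])   # target net evaluates action 1
y = double_q_target(1.0, None, q_online, q_target, gamma=0.99)
# y = 1.0 + 0.99 * 0.5 = 1.495
```

Note that the online network's preferred action (index 1) is evaluated by the target network, so its lower estimate (0.5) enters the target rather than the online network's own optimistic 3.0.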


Related research

- 06/28/2022, "Applications of Reinforcement Learning in Finance – Trading with a Double Deep Q-Network": This paper presents a Double Deep Q-Network algorithm for trading single...
- 07/08/2018, "Financial Trading as a Game: A Deep Reinforcement Learning Approach": An automatic program that generates constant profit from the financial m...
- 09/06/2023, "An Offline Learning Approach to Propagator Models": We consider an offline learning problem for an agent who first estimates...
- 04/01/2023, "Mastering Pair Trading with Risk-Aware Recurrent Reinforcement Learning": Although pair trading is the simplest hedging strategy for an investor t...
- 12/26/2020, "Deep reinforcement learning for portfolio management": The objective of this paper is to verify that current cutting-edge artif...
- 11/22/2019, "Deep Reinforcement Learning for Trading": We adopt Deep Reinforcement Learning algorithms to design trading strate...
- 10/05/2021, "A study of first-passage time minimization via Q-learning in heated gridworlds": Optimization of first-passage times is required in applications ranging ...
