Optimal stopping via deeply boosted backward regression

08/07/2018
by Denis Belomestny, et al.

In this note we propose a new approach to numerically solving optimal stopping problems with boosted regression-based Monte Carlo algorithms. The main idea of the method is to boost standard linear regression in each backward induction step by adding new basis functions built from previously estimated continuation values. The proposed methodology is illustrated by several numerical examples from finance.
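Since only the abstract is available here, the following is a minimal sketch of how the boosting idea can be read in a Longstaff-Schwartz-style backward induction: at each exercise date the regression basis is augmented with the continuation-value estimate fitted at the previous (later) date, so the effective basis grows by one data-driven function per step. The Bermudan put, the geometric Brownian motion dynamics, the parameter values, the polynomial basis, and the exact way the earlier estimate is re-used are illustrative assumptions, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative market and contract parameters (assumptions, not from the paper)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 100_000
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate geometric Brownian motion paths, shape (n_paths, n_steps + 1)
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
log_increments = (r - 0.5 * sigma ** 2) * dt + sigma * dW
S = S0 * np.exp(np.cumsum(log_increments, axis=1))
S = np.hstack([np.full((n_paths, 1), S0), S])

def payoff(s):
    """Bermudan put payoff."""
    return np.maximum(K - s, 0.0)

def poly_basis(s):
    """Plain polynomial basis in the (scaled) state."""
    x = s / K
    return np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])

value = payoff(S[:, -1])   # pathwise cashflow under the current stopping rule
prev_cont = None           # continuation-value estimate from the later step

for t in range(n_steps - 1, 0, -1):
    value *= disc                      # discount cashflows one step back
    s_t = S[:, t]

    def design(s, prev=prev_cont):
        cols = poly_basis(s)
        if prev is not None:
            # Boosting step: the previously estimated continuation value,
            # evaluated at the current states, enters as an extra basis function.
            cols = np.column_stack([cols, prev(s)])
        return cols

    A = design(s_t)
    itm = payoff(s_t) > 0              # regress on in-the-money paths only
    coeffs, *_ = np.linalg.lstsq(A[itm], value[itm], rcond=None)

    cont_fn = lambda s, d=design, c=coeffs: d(s) @ c
    continuation = cont_fn(s_t)
    exercise = payoff(s_t)

    stop = itm & (exercise > continuation)
    value[stop] = exercise[stop]       # exercise where immediate payoff dominates
    prev_cont = cont_fn                # hand this step's estimate to the next step

price = disc * value.mean()            # discount to time 0 (no exercise at t = 0)
print(f"Boosted backward regression price estimate: {price:.4f}")
```

Because each fitted continuation function keeps a reference to the one fitted after it, the regressors compound from step to step, which is one way to read the "deeply boosted" recursion suggested by the title; the paper itself should be consulted for the precise construction and its convergence analysis.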


Related research

Randomized optimal stopping algorithms and their convergence analysis (02/03/2020)
In this paper we study randomized optimal stopping problems and consider...

Sequential Design for Optimal Stopping Problems (09/16/2013)
We propose a new approach to solve optimal stopping problems via simulat...

Reinforced optimal control (11/24/2020)
Least squares Monte Carlo methods are a popular numerical approximation ...

mlOSP: Towards a Unified Implementation of Regression Monte Carlo Algorithms (12/01/2020)
We introduce mlOSP, a computational template for Machine Learning for Op...

Unbiased Optimal Stopping via the MUSE (06/04/2021)
We propose a new unbiased estimator for estimating the utility of the op...

Solving optimal stopping problems with Deep Q-Learning (01/24/2021)
We propose a reinforcement learning (RL) approach to model optimal exerc...

Optimal Stopping via Randomized Neural Networks (04/28/2021)
This paper presents new machine learning approaches to approximate the s...
