Dreaming machine learning: Lipschitz extensions for reinforcement learning on financial markets

by J. M. Calabuig et al.

We develop a new topological structure for the construction of a reinforcement learning model in the framework of financial markets. It is based on Lipschitz type extension of reward functions defined in metric spaces. Using some known states of a dynamical system that represents the evolution of a financial market, we use our technique to simulate new states, that we call "dreams". These new states are used to feed a learning algorithm designed to improve the investment strategy.



1. Introduction and basic definitions

The use of McShane-Whitney type extensions of Lipschitz functions on metric spaces is a theoretical tool that has often been considered since the beginning of the development of the so-called reinforcement learning methods in machine learning. Indeed, several theoretical aspects of the Lipschitz extension of maps that are found in the foundations of reinforcement learning techniques were published in some early papers many years ago. The reader can find information about this in [2] and [11] for the mathematical results on the so-called absolutely minimal extensions, and in [3, 9] and the references therein for the concrete application to machine learning. Often, the metric space structure underlying the Lipschitz extension of maps is the usual finite dimensional space ℝⁿ with the Euclidean norm, or some classical modifications of this metric considering non-canonical scalar products acting on ℝⁿ. Information about other related metric structures on which Lipschitz extensions of reward functions have been considered can be found, for example, in [8, 14], where metric graphs are studied.

Following the same general framework, the aim of this paper is to show a new theoretical environment for the development of mathematical tools for reinforcement learning. However, our ideas (which can be applied in much more general contexts) will focus on the rather specific issue of designing expert systems for the analysis of financial markets. In particular, we will model the set of strategies to be applied in a financial market (a dynamical system) as a metric space of finite sequences of items (states of the system), whose length is the number of times that a change of state (a purchase/sale event) could occur in the market. We will also consider a reward function, which is supposed to be known for a certain subset of strategies, the initial "training set". Using well-known theoretical techniques for the extension of Lipschitz functions on metric spaces, we will construct the tools needed to compute improved reward functions for bigger sets of strategies by searching for "similarities" among different pieces of these items. This will be used to feed the algorithm that creates new situations ("dreams"), which will increase the efficiency of the process by increasing the size of the training set. The final result will be the definition of a new reinforcement learning method.

Our arguments bring together ideas from abstract topology on quasi-pseudo-metric spaces and Lipschitz maps with practical computational tools for extending Lipschitz functions on metric vector spaces in which the distance is not given by a standard norm coming from an inner product. In fact, our metric is not one of the classical distances used in machine learning (see, for example, the comments in Section I and Section II of [6]). We use the McShane and the Whitney extensions of Lipschitz maps in a special way in order to extend some reward functions defined by a novel design. The process of introducing "dreams" to increase the size of the training set also needs some topological tools based on average values computed on equivalence classes constructed by a specific metric similarity method. Although our approach is new, the reader can find some related ideas in [3, 4].

Concerning related work on mathematical economics and models for financial markets, we develop our method in a rather classical framework. The definition of our reward function begins with a relationship of duality similar to the commodity-price duality that is at the core of market models based on functional analytic tools (see, for example, [1, Ch.8]). Although our method uses some probabilistic tools, we do not consider our learning method to be based on stochastic arguments. However, philosophically we may point to some links with stochastic market modeling, concretely to the so-called continuous-time market model (see, for example, [7, Ch.2]), since the decision on the following step is made exactly at the previous one, based in our case on a predictive reward function.

For clarity in the explanation of our technique, we will focus our presentation on a particular problem related to the dynamics of a financial market. In general terms, our technique is based on significantly extending the reward function by creating new simulated situations, providing an improved tool for decision making. As we said, this allows us to mix known original situations with newly created states (dreams) to design a typical reinforcement learning procedure. The calculations are simple, as the extension formulas are simple, so the technique could be applied when dealing with a large amount of data. The results will be presented in four sections. After this Introduction, we will explain the topological foundations of the metric representation spaces that will be used in the preliminary Section 2. In Section 3 we will describe the general facts for the definition of our procedure, mainly of mathematical nature, and the model will be presented in a very concrete way in Section 4. The paper ends with some conclusions in Section 5.

We should note that the objective of the present paper is theoretical in nature, although a very explicit example is given. We do not intend to give an efficient algorithm for computing the mathematical elements that appear in the model in order to provide a concrete and effective tool: instead we are interested in explaining the fundamentals of our method.

2. Preliminaries and topological tools

Let us present some relevant mathematical concepts. A quasi-pseudo-metric on a set X is a function d: X × X → ℝ⁺ (ℝ⁺ being the set of non-negative real numbers) such that

  1. d(x, x) = 0, and
  2. d(x, z) ≤ d(x, y) + d(y, z),

for x, y, z ∈ X. A topology is defined by such a function d: the open balls define the basis of neighborhoods. For ε > 0, we define the ball of radius ε and center x ∈ X as

B_ε(x) = { y ∈ X : d(x, y) < ε }.

(X, d) is called a quasi-pseudo-metric space. We will work in this paper mainly with pseudo-metrics, that is, functions d satisfying in addition d(x, y) = d(y, x) for all x, y ∈ X, or metrics, which moreover satisfy that d(x, y) = 0 if and only if x = y. In this case, the topology defined by d satisfies the Hausdorff separation axiom. However, we prefer to present some of our ideas in this more general context, since the basic elements of our technique can easily be extrapolated to the more general quasi-pseudo-metric case. This fact is relevant, since asymmetry in the definition of metric notions (the quasi-metric case) could be crucial for the modeling of dynamical processes, in which the dependence on the time variable changes the concepts related to distance. As usual, we will use both the words metric and distance as synonyms. We will also use classical notation for the distance from a point to a set: if d is a (pseudo-)metric on a set X, x ∈ X and A ⊆ X, we will write d(x, A) for the distance from x to A, that is,

d(x, A) = inf { d(x, a) : a ∈ A }.

Let us now recall some definitions regarding functions. Let (X, d) be a metric space. A function f: X → ℝ is a Lipschitz function if there is a positive constant K such that

|f(x) − f(y)| ≤ K d(x, y),   x, y ∈ X.

The infimum of such constants K is called the Lipschitz constant of f.

Regarding extensions of Lipschitz maps, recall that the classical McShane-Whitney theorem states that if S is a subspace of a metric space (X, d) and f: S → ℝ is a Lipschitz function with Lipschitz constant K, there always exists a Lipschitz function F: X → ℝ extending f and with the same Lipschitz constant. There are also known extensions of this result to the general setting of real-valued semi-Lipschitz functions acting on quasi-pseudo-metric spaces; see, for example, [2, 10, 12, 13, 15] and the references therein. The function

F^M(x) = inf_{s ∈ S} ( f(s) + K d(x, s) ),   x ∈ X,

provides such an extension; it is sometimes called the McShane extension. We will use it as a constructive tool for our approximation. The Whitney formula, given by

F^W(x) = sup_{s ∈ S} ( f(s) − K d(x, s) ),   x ∈ X,

also provides an extension. We will use the first one in this paper, although some results are also true when using the second, as will be explained.
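Both extension formulas are straightforward to implement over a finite sample. The following sketch (the function names are ours) computes the Lipschitz constant of a function known on a finite set and evaluates both extensions at a new point, for an arbitrary metric d:

```python
def lipschitz_constant(f_vals, points, d):
    """Smallest K with |f(x) - f(y)| <= K d(x, y) over the finite sample."""
    K = 0.0
    for i, x in enumerate(points):
        for j, y in enumerate(points):
            if i < j:
                dist = d(x, y)
                if dist > 0:
                    K = max(K, abs(f_vals[i] - f_vals[j]) / dist)
    return K

def mcshane(x, f_vals, points, d, K):
    """McShane extension: F(x) = inf_{s in S} ( f(s) + K d(x, s) )."""
    return min(f + K * d(x, s) for f, s in zip(f_vals, points))

def whitney(x, f_vals, points, d, K):
    """Whitney extension: F(x) = sup_{s in S} ( f(s) - K d(x, s) )."""
    return max(f - K * d(x, s) for f, s in zip(f_vals, points))
```

Any other Lipschitz extension with the same constant K lies pointwise between the Whitney value (the least such extension) and the McShane value (the greatest one); the two coincide wherever the extension is unique.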

Regarding references to previous work on reinforcement learning, the reader can find some recent information directly related to our ideas in [3, 9] and the references therein. Concretely, some applications of Lipschitz extensions of functions to machine learning can be found in [5, 8, 9]. General explanations about applications of mathematical analysis in machine learning can be found in [17]; in particular, basic definitions, examples and results on Lipschitz maps can be found in Section 5.10 of this book.

3. Metric spaces of states and Lipschitz maps: an algorithm for machine learning

3.1. Mathematical framework

Consider a subset D of vectors of the finite dimensional real linear space ℝⁿ not containing the origin 0, and write S = ℝⁿ \ {0}. We start by defining an adequate metric on S. As the reader will see, the difference between our technique and other reinforcement learning methods begins at this point. The main reason is that our choice does not allow the distance to be defined by means of a norm on ℝⁿ. We mix the angular pseudo-distance (a geodesic distance) and the Euclidean norm in this space. Thus, since the cosine of the angle between elements x and y of S is given by

cos θ(x, y) = ⟨x, y⟩ / (‖x‖ ‖y‖),

we define a distance by mixing this angle,

θ(x, y) = arccos( ⟨x, y⟩ / (‖x‖ ‖y‖) ),

and a Euclidean component, ‖x − y‖. This Euclidean term can be substituted by any other norm on ℝⁿ. For each α ≥ 0 we now define the function

d_α(x, y) = θ(x, y) + α ‖x − y‖,   x, y ∈ S,

that will become the general formula for the distance we want to use in our model. As usual, we use the same symbol when it is restricted to any subset of S.
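As a numerical illustration, such a mixed distance can be sketched as follows; the particular parameterization (the angle plus an α-weighted Euclidean term) is one natural choice and is an assumption on our part:

```python
import math

def angle(x, y):
    """Geodesic (angular) pseudo-distance between nonzero vectors x and y."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    # Clamp to [-1, 1] to guard against floating-point overshoot.
    c = max(-1.0, min(1.0, dot / (nx * ny)))
    return math.acos(c)

def d_alpha(x, y, alpha=1.0):
    """Mixed distance: angular component plus alpha times the Euclidean one.
    For alpha = 0 this is only a pseudo-metric: parallel vectors of
    different norms are at distance zero."""
    eucl = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return angle(x, y) + alpha * eucl
```

Note that two small vectors pointing in opposite directions are at distance at least π, which is exactly the trend-sensitivity discussed below.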

Lemma 3.1.
Let α ≥ 0. With the definitions given above, the following statements hold.

  • The function d_α is a pseudo-metric on S for every α ≥ 0. Moreover, it is a metric on S if and only if α > 0.

  • For every α > 0, the metric space (S, d_α) is (topologically) equivalent to (S, ‖·‖).

  • Let A be a set that includes an open segment containing 0. Then, for any extension of d_α to A, the metrics d_α and ‖·‖ are not equivalent on A.


(i) Note first that d_α is well-defined on S. The triangle inequality and the symmetry are satisfied by both the function θ and the Euclidean distance. Indeed, it is known that the geodesic distance is a metric on the Euclidean unit sphere, so the angular component inherits the triangle inequality from it.

Moreover, any linear combination with non-negative coefficients of θ and the Euclidean distance is a pseudo-metric. Also, if α > 0, then d_α(x, y) = 0 implies ‖x − y‖ = 0, and so x = y. The converse is obvious too.

(ii) Take an element and an open ball of radius for the metric . Take the elements in this set satisfying that and and note that all of them are in Then, since by the continuity of with respect to the Euclidean metric we can find a ball of radius such that

Thus, taking we get that The obvious inequality

gives the converse relation needed for the equivalence.

(iii) Consider without loss of generality suitable vectors on the segment. It is enough to notice that we can construct a sequence converging with respect to the Euclidean norm which does not converge with respect to the mixed metric. Indeed,


Thus, both metrics cannot be equivalent. ∎

Of course, Lemma 3.1 can be automatically restated if we change the Euclidean norm to any other norm on the space, since all norms are equivalent on finite dimensional spaces. The metric is defined to capture the Euclidean distance between states but also the trend that they represent: indeed, in terms of the financial model we are constructing, if two vectors have small size (in fact, as small as we want) but they represent opposite trends in the market, the distance between them is always greater than or equal to π. The relative weight of the angular and the Euclidean components in the definition of the metric is modulated by the parameter α. We will fix its value in the present paper, since we are mainly interested in analyzing the behavior of the market under small changes in the trends, trying to make the model sensitive to these trends.

We will define a reward function acting on the set of states. It will be given, as a primary formula, by a duality relation between the elements and partitions of unity acting on these elements, given by constant coefficients. We will call these elements actions, and they will be represented by vectors of the unit sphere of the dual space having all coordinates greater than or equal to 0.

We will define the reward function as a mean of actions like

where the mean is computed over a state-dependent set defined using a mix of some experience of the system and a random procedure. The resulting function will be the real function to be extended with the McShane formula in order to get the reward function acting on the whole space. In any case, as we will see in Section 4, for the elements of the seminal set it will always be possible to write the reward as the value of a given action of the selected set of actions, so that we can work with bets of the corresponding form.

It will not always be possible to obtain this representation formula for all the extended values. Let us show this fact with the following very simple example. Nevertheless, due to the particular formula that we have used for the definition of the metric, it is at least possible to obtain a meaningful bound.

Example 3.2.

Fix . Consider a market with two products and just two states (). Consider the set Both vectors represent increasing states of the market. Consider the reward function given for both states by the actions and That is,

Note that The Lipschitz constant is given by

Therefore, the McShane extension of is given by

for any possible state Take now and note that

Then we have

Take now Then

Since all the actions in belong to the ball of radius of , we cannot write

for any

To obtain the bound, it is necessary to prove that the model is consistent, in the sense that the size of the extension is coherent with the size of the original function and respects the proportionality that appears in the seminal set. We write ‖·‖₁ for the 1-norm, as usual.

We also define the “dual set” of with respect to as

Proposition 3.3.

Let be a compact subset of . Consider a function such that for each there is a functional such that

Then for each there is a functional such that


Fix an element first. Note that, since the extension is a Lipschitz function with the same Lipschitz constant as the original function, for each element of the seminal set we have

Fix now Then by hypothesis there is a functional such that


Since this happens for all the elements, the inequality holds for the infimum. Finally, note that the relevant set is compact. Indeed, by Lemma 3.1 the mixed metric and the Euclidean metric are equivalent on it, so the infimum is attained, and we get the result by taking the functional associated with the state that attains the minimum. ∎

Using this result, with some restrictions on the geometry of the set and its relation with the particular elements, we obtain useful bounds for the formulas that approximate the extension. We state one of them in the next corollary. Essentially, it reflects what happens with the extension of the reward function for a state that represents the same market trend as another state belonging to the seminal set, but with a different norm.

Corollary 3.4.

Let be a compact subset of . Consider a function satisfying the requirements in Proposition 3.3.

Suppose that an element belongs to Then there is a functional such that


By assumption, for a given and For such an we have that The rest of the right hand term in the inequality in Proposition 3.3 can be rewritten as

for This can be rewritten as

This gives the result. ∎

Depending on the geometry of the set and its relation with the chosen state, we can also obtain a lower bound for the approximation formula using actions.

Proposition 3.5.

Let be a compact subset of , and . Consider a function satisfying the requirements in Proposition 3.3. Let and such that

Then for and we have that


Take and as in the statement of the result. Then, using again compactness of we get an element such that We know that by hypothesis there is an element such that and so we have that

and the lower bound is proved. ∎

In particular cases, this bound can be used to obtain clear negative results on the possibility of approximating the extended reward function by means of actions. We show one of them in the following corollary, whose proof is obvious.

Corollary 3.6.

Let be a compact subset of , and . Consider a function satisfying the requirements in Proposition 3.3. Let and

  • If for all and then

  • If where is a closed convex cone (with vertex in ) that does not contain , then

Remark 3.7.

As we have demonstrated, the mathematical model imposes the restriction that valid states are always different from the origin. That is, there are no states representing that the system has not changed, or that there is no trend. Therefore, such states must be eliminated if they appear in the experience.

3.2. The procedure

We will work with the following metric space structure as a model for the dynamical system defined by a financial market with a given number of products. We will assume that there is a fixed number of times at which share purchase/sale events occur.

  • Take a subset D of vectors of ℝⁿ representing the states of the market. Each of the vectors in D describes a state of the market in the following way: each coordinate gives the value of the increment of the corresponding product at this moment. In fact, we will write at each coordinate the difference between the value at the present moment and the value at the previous one. This means, in particular, that the original values of the products are not relevant for defining the states, just the variations.

    We will fix the value of the weight parameter for the definition of the metric in the next sections.

  • We are interested in measuring the success of a concrete action in the market, that is, the success of a share purchase/sale event that a decision-maker has executed on the market. So we have to define what an action is in the model. Formally, we have already defined actions as elements of the dual space. As we said, at each step the state of the system is defined by a vector whose coordinates represent the increase/decrease of the value of each product with respect to the previous step. An action is a suitable share purchase/sale event that the decision maker could execute, represented as follows: it is supposed that he has 100 monetary units to invest at every step, so an action is a vector with one coordinate per product (plus one more if we want to consider leaving some of the money out of the buying process). In Section 4 we will call the actions "bets" to reinforce their meaning in the model. Mathematically, they are positive elements of the algebraic dual having 1-norm equal to 1. This defines the set of all the actions.

    The natural reward function to be defined in the model must be related to the evaluation of the success of an action when it is applied to a certain state of the system. Therefore, it must be defined as a functional acting on the actions once a given state of the system has been fixed, and so it is a function of two vector variables.

    However, the reward function must evaluate states of the market (elements of D), taking into account how the decision maker acts in it and the success of his actions. Therefore, we will finally consider a reward function acting on the states alone, but we will use all the information we have about the system to estimate it. That is, we will use the two-variable function to define the one-variable reward function. We will see that, finally, for each state there is an action whose reward gives the value of the function at that state, or a mean of such values.

  • After this, we are interested in extending the reward function to the whole linear space preserving the Lipschitz constant. In order to ensure that this constant is a (positive) real number, it is enough to take into account that D consists of a finite set of vectors. In the model, the reward function is supposed to measure "how successful" a given state is. We will use the McShane formula for the extension. The extension is supposed to extrapolate the same concept (success of a given state), preserving the metric relations of the known states. Since it appears explicitly in the formula, we have to compute the Lipschitz constant of the reward function in order to get the extension. The way we have defined the metric allows us to obtain a theoretical bound for this extension, as stated in Proposition 3.3. However, note that in general we cannot expect that the extension can be represented as an action belonging to the positive part of the dual space. This was shown in the previous section in Example 3.2; the general behavior of this kind of representation formula was also discussed there, as a consequence of Proposition 3.5 and its corollaries.

  • Finally, we will use the extended reward function to simulate the reward of new time sequences of states in order to perform our reinforcement learning algorithm. To do this, we randomly generate new states to increase the set D. We create in this way a new seminal set, bigger than D, in which we mix "known situations" with new ones that we call "dreams". The proportion of known cases and dreams is a parameter of the procedure.
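The augmentation step can be sketched as follows. This is a hypothetical illustration: the Gaussian perturbation scheme, the function names and the handling of the real-to-dream ratio are our assumptions, not the paper's exact procedure.

```python
import random

def dream_states(real_states, n_dreams, scale=1.0, rng=random):
    """Generate 'dreams': new random states near the known experience.
    Each dream perturbs a randomly chosen real state; states equal to
    the origin are excluded, since the model admits no zero state."""
    dreams = []
    while len(dreams) < n_dreams:
        base = rng.choice(real_states)
        cand = tuple(v + rng.gauss(0.0, scale) for v in base)
        if any(abs(v) > 1e-12 for v in cand):  # keep away from the origin
            dreams.append(cand)
    return dreams

def augmented_training_set(real_states, dream_ratio, rng=random):
    """Mix known states with dreams; dream_ratio is the fraction of
    dreams in the enlarged seminal set (the exact rate used in the
    paper is a modeling choice)."""
    n_dreams = int(len(real_states) * dream_ratio / (1.0 - dream_ratio))
    return list(real_states) + dream_states(real_states, n_dreams, rng=rng)
```

The extended reward would then be evaluated on the enlarged set via the McShane formula, exactly as for real states.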

4. Training and dreaming: a Lipschitz approximation to a real market reward function for reinforcement learning

Let us continue with the explanation of the procedure by further specifying the financial market example that we started in the previous section. Suppose that we are analyzing a market with four similar products. In fact, there is a clear correlation among their values, as the reader can see in the figures shown below. We have the complete behavior of the values of all of them at each minute of a given sequence of minutes. As we said in Section 3, and for the sake of simplicity, we assume that at the initial moment the values of all the products coincide.

The set of known states for which the reward function can be calculated is defined as the first portion of the states that have been registered in the experience of a day.

  • A state of the system is given by a four-coordinate vector: as we explained in Section 3, each minute the vector gives the cumulative increase or decrease of the values of each product. Since we want to define the reward function using the scalar product with a vector representing an action, we will need to enlarge the vector by adding a new coordinate with a fixed value. We preserve the same symbol for the extended vector.

    We consider series of "bets" applied at each minute. They correspond to series of what we called "actions" in Section 3, which in this particular case are described as the share of the money that the decision maker wants to apply to each market this minute (including not investing a certain part). As we said, it is supposed that the decision maker invests 100 monetary units at each step. A bet is then given by a vector of five non-negative coordinates summing to the total; recall that we have five coordinates because the decision maker could decide not to invest a part of the money.

  • Fix now a (five-coordinate) state of the system. The reward function is then defined as a function of two vector variables given by the scalar product of the state and the action,

    where the state is the five-coordinate extension of the original four-coordinate vector representing it.

    At this point we introduce our first arguments regarding reinforcement learning. The main idea is to use the information that is known for similar situations in order to compute a reward function depending only on the state. This is relevant, since we are going to evaluate the state of the system using this reward function. In order to define it, we use the following procedure. For a state of the system we define

    where the mean is computed over two sets, constructed as explained below, whose sizes are in a fixed proportion.

    a) The first set is defined by using actions/bets that have already been checked and have obtained good enough values of the reward function when acting on states that are similar to the given one. This is done by choosing the bets that give the highest values of the reward function when they act on these similar states. The similarity relation is given by proximity with respect to the distance, that is, by fixing a radius within which states are considered similar.

    b) The second one is randomly obtained.

    Note that this way of defining the reward function is not mathematically optimal since, given a state, the definition of the reward function allows one to compute the best bet for it using elementary calculus. However, this is an example for which the function can be directly computed (an explicit formula is available), which would not hold in the general case. Our method aims to introduce some "empirical information" from the system. Moreover, the given procedure allows some random elements to be introduced into the process, which is necessary to avoid, for example, overfitting.

    This method is used for computing the reward function for the elements of . For states which do not belong to we will use the McShane formula for obtaining the extended function as explained in Section 3.
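The averaging procedure above can be sketched in code. The helper names, the half-split between checked and random bets, and the similarity radius are our assumptions; the scalar-product reward follows the definition given earlier.

```python
import random

def reward(state, bet):
    """Reward of a bet on a state: the scalar product <state, bet>."""
    return sum(s * b for s, b in zip(state, bet))

def random_bet(n, rng=random):
    """A random non-negative allocation of 100 monetary units over n options."""
    w = [rng.random() for _ in range(n)]
    t = sum(w)
    return tuple(100.0 * v / t for v in w)

def estimated_reward(x, experience, d, eps, n_random, rng=random):
    """Mean of (a) rewards on x of the best bets recorded for states
    similar to x (within distance eps) and (b) rewards of random bets,
    mixing empirical information with exploration."""
    similar = [(s, b) for (s, b) in experience if d(s, x) < eps]
    best = sorted((reward(x, b) for _, b in similar), reverse=True)
    checked = best[: max(1, len(best) // 2)] if best else []
    rand = [reward(x, random_bet(len(x), rng)) for _ in range(n_random)]
    vals = checked + rand
    if not vals:  # no similar states and no random bets requested
        return 0.0
    return sum(vals) / len(vals)
```

Here `experience` is a list of (state, bet) pairs recorded during training, and `d` is the mixed metric restricted to the state space.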

  • We design in this way a procedure for obtaining a reward function on the whole set of possible states of the system. We use the first positions of the market (Figure 1) as the training set, and the reward function on it is defined by the procedure explained in Point 2 above. Note that, although the original value is considered to be the same for all the products, in Figure 1 we represent the cumulative value from the starting point, that is, the coordinatewise sum of the vectors (states) representing the consecutive steps. The same is done in Figure 2, in which the testing set of states is given.

    Figure 1. Real market experience: set of states for training the model. The cumulative value for all the products of the market is represented.

    Using it, and after computing the corresponding Lipschitz constant, we use the McShane formula to obtain the extension, that is,

    The additional data (Figure 2) are used to check the performance of the model and the quality of the results by comparison.

    Figure 2. Real market experience: set of states for testing the model (minutes from 400 to 800). Again, the accumulated value for all the products of the market is represented. In this case, the starting value is given by the last value of the previous states, given in Figure 1.
  • Taking into account the procedure for obtaining the reward function, given a state we can find an action/bet whose reward is as good (high) as possible. Of course, such a bet is not unique. However, a random function can be defined from states to bets in such a way that the assignment provides a successful bet for each state. One of these randomly defined functions is shown in Figure 3. Note that, certainly, all of them are successful actions/bets, since they provide the maximum possible value of the reward function for each element of the training set.

    A similar definition can be given for suitable states that do not belong to the set of known states. We call such states dreams. In this case, the reward function that should be considered is the McShane extension, since this function plays the role of the reward for states that have not been found in the experience of the market. However, note that we cannot guarantee that there is a positive functional (an action) in the unit ball of the dual attaining the extended value, as happens for the known states. This problem is solved just by taking a suitable normalized functional at which the relevant infimum is attained. We have already proved that, in general, the extended value cannot be attained as the value of an action. However, Proposition 3.3 gives precise bounds for this difference.

    The set of all (randomly chosen but optimal) bets represents how the decision maker should act when facing the problem of investing in the market. Figure 3 shows a representation of a suitable set of optimal bets for the states represented in Figure 1. Figure 4 represents a sequence of optimal actions in a mixed situation of real states and dreams. At each time, the values in the five graphics sum to the total budget of 100 monetary units.

    As we have shown, the main tool of our technique is the computation of the McShane extension of the reward function. In order to clarify this computation, based on the McShane-Whitney extension theorem, we provide a scheme of the algorithm (Algorithm 1).

    Algorithm 1: Computation of the McShane extension
    1:  Fix .
    2:  while  do
    3:     For to define
    6:     Define randomly a subset of elements
    8:  end while
    10:  if   then
    11:     return
    12:  else
    14:  end if
    Figure 3. Sequence of (randomly chosen) actions that optimize the bets when applied to the set of real states (first minutes). Note that for each fixed time, the five values sum to the total budget.
    Figure 4. Sequence of randomly chosen actions that optimize the bets when applied to a mixed set of real states and dreams (first minutes, randomly changed from the original experience by "dreams").
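Given the scalar-product form of the reward, the optimal bet for a fixed (extended) state has a closed form: concentrate the whole budget on the coordinate with the largest increment. The sketch below assumes that the appended coordinate representing uninvested money takes the value 0 (an assumption on our part, since the exact value is not shown here); the function name is ours.

```python
def optimal_bet(state_ext, budget=100.0):
    """Best bet for the reward <state, bet> over non-negative bets that
    sum to the budget: concentrate everything on the largest coordinate.
    The last coordinate of state_ext is assumed to be 0 and to represent
    not investing, so all-negative increments send the money there."""
    k = max(range(len(state_ext)), key=lambda i: state_ext[i])
    bet = [0.0] * len(state_ext)
    bet[k] = budget
    return bet
```

Ties between coordinates make the maximizer non-unique, which is why the text speaks of randomly chosen optimal bets.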
  • Finally, we check the results of the model. We assume that we start betting on the market at a given time with a certain number of monetary units, and we stop when we lose all of them. In order to check the success of the model, we produce one simulation in which the reward function is obtained purely from the information of the market (Figure 3), and another using a proportion of dreamed states (Figure 4).

    We use the second part of the experience, shown in Figure 2, to check our results. The system has been trained using all the information of the first minutes in the first case (Figure 5), and with just a fraction of these states plus dreams in the second (Figure 6). Thus, in Figures 5 and 6 we present the value of the sum of the four products of the market at each state, where the investment made in each of them results from applying the action/bet obtained in the previous steps. The measure of the success of the models is given by the survival time.

    For the first case (Figure 5) we have used the set of actions obtained for the training set, which was shown in Figure 3. The situations are supposed to be similar to those in the training part of the experience. However, when a state did not appear exactly among the market situations recorded in the first part of the experience, we approximated its value by distance similarity, applying the action associated with the nearest recorded state.

    The second figure (Figure 6) shows the same cumulative result: the total value obtained at each state by applying to the same sequence of states the optimal sequence of actions, obtained in this case with a proportion of dreams. As the reader can see, the evolution and the survival time are similar, and so the success of both models is comparable. That is, the same result can be obtained by using the McShane extension of a smaller set of known data instead of the full set of real data.

    Figure 5. Simulation with real data obtained from the experience.
    Figure 6. Simulation with a mix of real data and dreams.

    5. Conclusions

    We have shown a reinforcement learning method that provides an expert system for investing in a financial market. The first tool we introduce, which approximates the reward function by metric similarity with other known states of the system, is based on a classic machine learning scheme on metric spaces. Regarding this point, the main novelty is the non-standard metric used, which combines a geodesic distance (directly related to the cosine similarity of vectors, modeling the directions of the market trends) and the Euclidean distance; the resulting metric cannot be derived from a norm on the underlying finite-dimensional linear space.
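As a rough illustration of such a mixed metric, the following sketch combines the angle between two state vectors (the geodesic distance on the sphere, tied to their cosine similarity) with their Euclidean distance. The convex combination and the weight `alpha` are our assumptions, not the paper's exact construction:

```python
import math

def combined_distance(x, y, alpha=0.5):
    """Mix of angular (trend-direction) and Euclidean (magnitude) distance."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    # Angle between the two state vectors; clamp to guard rounding errors.
    cos = max(-1.0, min(1.0, dot / (nx * ny)))
    angular = math.acos(cos)
    euclid = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return alpha * angular + (1.0 - alpha) * euclid

# Orthogonal trends are maximally far apart in the angular component.
print(combined_distance((1.0, 0.0), (0.0, 1.0), alpha=1.0))  # -> pi/2
```

Note that because of the angular term this function is not induced by any norm, which is the point made above.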

    The second part of our technique consists in the development of a new reinforcement learning procedure that allows a smaller set of experiences on the financial market to yield a good investment tool. Basically, we combine the approximation of the reward function on neighboring known states with a Lipschitz-preserving extension of the reward function via the McShane formula. Thus, the main contribution of the present paper is to show that an expert system for investment in financial markets can be built by replacing a large set of experiences on the particular market with a reinforcement learning method based on the extension of Lipschitz maps. Since the results obtained are comparable, our technique opens up the possibility of building models of similar efficiency using much less data from experience.



  • [1] Aliprantis, C. D., and Burkinshaw, O. Locally solid Riesz spaces with applications to economics. Mathematical Surveys and Monographs No. 105. American Mathematical Soc., Providence, Rhode Island, 2003.
  • [2] Aronsson, G. Extension of functions satisfying Lipschitz conditions. Arkiv för Matematik 6.6 (1967): 551-561.
  • [3] Asadi, K., Misra, D., and Littman, M. L. Lipschitz continuity in model-based reinforcement learning. arXiv preprint arXiv:1804.07193 (2018).
  • [4] Driessens, K., Ramon, J., and Gärtner, T. Graph kernels and Gaussian processes for relational reinforcement learning. Machine Learning 64 (2006): 91-119.
  • [5] Gottlieb, L.-A., Kontorovich, A., and Krauthgamer, R. Efficient classification for metric data. IEEE Transactions on Information Theory 60.9 (2014): 5750-5759.
  • [6] Jia, H., Cheung, Y.-M., and Liu, J. A new distance metric for unsupervised learning of categorical data. IEEE Transactions on Neural Networks and Learning Systems 27.5 (2016): 1065-1079.
  • [7] Korn, R., and Korn E. Option pricing and portfolio optimization: modern methods of financial mathematics. Graduate Studies in Mathematics, Vol. 31. American Mathematical Soc., Providence, Rhode Island 2001.
  • [8] Kyng, R., Rao, A., Sachdeva, S., and Spielman, D. A. Algorithms for Lipschitz learning on graphs. Journal of Machine Learning Research: Workshop and Conference Proceedings 40 (2015): 1-34.
  • [9] von Luxburg, U. and Bousquet, O. Distance-based classification with Lipschitz functions. Journal of Machine Learning Research 5.Jun (2004): 669-695.
  • [10] McShane, E. J. Extension of range of functions, Bull. Amer. Math. Soc. 40 (1934): 837-842.
  • [11] Milman, V. A. Absolutely minimal extensions of functions on metric spaces. Sbornik: Mathematics 190.6 (1999): 859-885.
  • [12] Mustata, C. Extensions of semi-Lipschitz functions on quasi-metric spaces. Rev. Anal. Numer. Theor. Approx. 30.1 (2001): 61-67.
  • [13] Mustata, C. On the extremal semi-Lipschitz functions. Rev. Anal. Numer. Theor. Approx. 31.1 (2002): 103-108.
  • [14] Rao, A. Algorithms for Lipschitz Extensions on Graphs, Yale University, ProQuest Dissertations Publishing, New Haven, 2015.
  • [15] Romaguera, S. and Sanchis, M. Semi-Lipschitz Functions and Best Approximation in Quasi-Metric Spaces. Journal of Approximation Theory 103, (2000): 292-301.
  • [16] Shaw, B., Huang, B., and Jebara, T. Learning a distance metric from a network. Advances in Neural Information Processing Systems, (2011): 1899-1907.
  • [17] Simovici, D. Mathematical analysis for machine learning and data mining, World Scientific Pub, Singapore, 2018.