# The Multivariate Theory of Functional Connections: Theory, Proofs, and Application in Partial Differential Equations

This article presents a reformulation of the Theory of Functional Connections: a general methodology for functional interpolation that can embed a set of user-specified linear constraints. The reformulation presented in this paper exploits the underlying functional structure presented in the seminal paper on the Theory of Functional Connections to ease the derivation of these interpolating functionals, called constrained expressions, and provides rigorous terminology that lends itself to straightforward derivations of mathematical proofs regarding the properties of these constrained expressions. Furthermore, the extension of the technique, and of the proofs, to $n$ dimensions is immediate through a recursive application of the univariate formulation. In all, the results of this reformulation are compared to prior work to highlight the novelty and mathematical convenience of using this approach. Finally, the methodology presented in this paper is applied to two partial differential equations with different boundary conditions, and, when data is available, the results are compared to state-of-the-art methods.

11/12/2020


## 1 Introduction

The Theory of Functional Connections (TFC) is a mathematical framework used to construct functionals, i.e., functions of functions, that represent the family of all possible functions satisfying some user-defined constraints; these functionals are referred to as “constrained expressions” in the context of the TFC. In other words, the TFC is a framework for performing functional interpolation. The seminal paper on TFC Mortari (2017a) presented a univariate framework that could construct constrained expressions for constraints on values at points or on arbitrary-order derivatives at points. Furthermore, Reference Mortari (2017a) showed how to construct constrained expressions for constraints consisting of linear combinations of values and derivatives at points, called linear constraints; for example, $y(x_1) + \pi\,y_{xx}(x_2) = 2$ for some points $x_1$ and $x_2$, where $y_{xx}$ symbolizes the second-order derivative of $y$ with respect to $x$. In its current formulation, the univariate constrained expression has been used for a variety of applications, including solving linear and nonlinear differential equations Mortari (2017b); Mortari et al. (2019), hybrid systems Johnston and Mortari (2019), optimal control problems Furfaro and Mortari (2019); Johnston et al. (2020), quadratic and nonlinear programming Mai and Mortari, and other applications Johnston et al. (2019).

Recently, the TFC method has been extended to $n$ dimensions Mortari and Leake (2019). This multivariate framework can provide functionals representing all possible $n$-dimensional manifolds subject to constraints on the value and arbitrary-order derivatives of $(n-1)$-dimensional manifolds. However, Reference Mortari and Leake (2019) does not discuss how the multivariate framework can be used to construct constrained expressions for linear constraints. Regardless, these multivariate constrained expressions have been used to embed constraints into machine learning frameworks Leake et al. (2019); Leake and Mortari (2020); Schiassi et al. (2020) for use in solving partial differential equations (PDEs). Moreover, it was shown that this framework may be combined with orthogonal basis functions to solve PDEs Leake and Mortari (2019); this is essentially the $n$-dimensional equivalent of the ordinary differential equations (ODEs) solved using the univariate formulation Mortari (2017b); Mortari et al. (2019).

The contributions of this article are threefold. First, this article examines the underlying structure of univariate constrained expressions and provides an alternative method for deriving them. This structure is leveraged to derive mathematical proofs regarding the properties of univariate constrained expressions. Second, using the aforementioned structure, this article extends the multivariate formulation presented in Reference Mortari and Leake (2019) to include linear constraints by introducing the recursive application of univariate constrained expressions as a method for generating multivariate constrained expressions. Further, mathematical proofs are provided that show the resultant constrained expressions indeed represent all possible manifolds subject to the given constraints. Third, this article presents how the multivariate constrained expressions can be combined with a linear expansion of $n$-dimensional orthogonal basis functions to numerically estimate the solutions of PDEs. While Reference Leake and Mortari (2019) showed that solving PDEs with the multivariate TFC is possible, it merely gave a cursory overview, skipping some rather important details; this article fills in those gaps.

The remainder of this article is structured as follows. Section 2 introduces the univariate constrained expression, examines its underlying structure, and provides an alternative method to derive univariate constrained expressions. Then, in Section 3, this structure is leveraged to rigorously define the univariate TFC constrained expression and provide some related mathematical proofs. In Section 4, this new structure and the mathematical proofs are extended to $n$ dimensions, and a compact tensor form of the multivariate constrained expression is provided. Section 5 discusses how to combine the multivariate constrained expression with multivariate basis functions to estimate the solutions of PDEs. Then, in Section 6, this method is used to estimate the solution of two PDEs, and the results are compared with state-of-the-art methods when data is available. Finally, Section 7 summarizes the article and provides some potential future directions for follow-on research.

## 2 Univariate TFC

Extending the multivariate TFC to include linear constraints requires recursive applications of the univariate TFC. Hence, it is paramount that the reader understand the univariate TFC before moving to the multivariate case. First, the general form of the univariate constrained expression is presented, followed by a few examples. These examples serve to solidify the reader's understanding of the univariate constrained expression, as well as to highlight nuances of deriving constrained expressions. In addition, this section includes mathematical proofs that univariate TFC constrained expressions indeed represent the family of all possible functions that satisfy the constraints.

Given a set of constraints, the univariate constrained expression takes the following form,

$$y(x, g(x)) = g(x) + \sum_{j=1}^{k} s_j(x)\,\eta_j, \tag{1}$$

where $g(x)$ is a free function, $s_j(x)$ are $k$ mutually linearly independent functions called support functions, and $\eta_j$ are coefficient functionals that are solved for by imposing the constraints. The free function can be chosen to be any function, provided that it is defined at the constraints' locations.

The following examples start from Equation (1), the framework proposed in the seminal paper on TFC Mortari (2017a), and highlight a unified structure that underlies the univariate TFC constrained expressions. Following these examples is a section that rigorously defines this structure and provides important mathematical proofs.

### 2.1 Univariate Example # 1: Constraints at a Point

Constraints at a point consist of constraints on the value at a point and constraints on a derivative at a point. Take for example the following constraints,

$$y(0) = 1, \qquad y_x(1) = 2, \qquad y(2) = 3.$$

For this example, the support functions are chosen to be $s_1(x) = 1$, $s_2(x) = x^2$, and $s_3(x) = x^3$. Following Equation (1) and imposing the three constraints leads to the simultaneous set of equations

$$\begin{aligned}
y(0) &= 1 = g(0) + \eta_1 \\
y_x(1) &= 2 = g_x(1) + 2\eta_2 + 3\eta_3 \\
y(2) &= 3 = g(2) + \eta_1 + 4\eta_2 + 8\eta_3.
\end{aligned}$$

Solving this set of equations for the unknowns $\eta_j$ leads to the solution,

$$\eta_1 = 1 - g(0), \qquad \eta_2 = \frac{10 - 3g(0) + 3g(2) - 8g_x(1)}{4}, \qquad \eta_3 = \frac{-2 + g(0) - g(2) + 2g_x(1)}{2}.$$

Substituting the coefficient functionals back into Equation (1) and simplifying yields,

$$y(x, g(x)) = g(x) + \frac{-2x^3 + 3x^2 + 4}{4}\big(1 - g(0)\big) + \big(-x^3 + 2x^2\big)\big(2 - g_x(1)\big) + \frac{2x^3 - 3x^2}{4}\big(3 - g(2)\big). \tag{2}$$

It is simple to verify that regardless of how $g(x)$ is chosen, provided $g(x)$ exists at the constraint points, Equation (2) always satisfies the given constraints.
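As a quick numerical check, Equation (2) can be coded directly. The following Python sketch (the function names are ours, purely illustrative) verifies all three constraints for the arbitrary choice $g(x) = \sin(x)$:

```python
import math

# Constrained expression from Equation (2) for the constraints
# y(0) = 1, y_x(1) = 2, y(2) = 3, with an arbitrary free function g.
def y(x, g, gx):
    """Evaluate y(x, g(x)); gx is the analytic derivative of g."""
    phi1 = (-2*x**3 + 3*x**2 + 4) / 4
    phi2 = -x**3 + 2*x**2
    phi3 = (2*x**3 - 3*x**2) / 4
    return g(x) + phi1*(1 - g(0)) + phi2*(2 - gx(1)) + phi3*(3 - g(2))

def y_x(x, g, gx):
    """Analytic derivative of the constrained expression."""
    dphi1 = (-6*x**2 + 6*x) / 4
    dphi2 = -3*x**2 + 4*x
    dphi3 = (6*x**2 - 6*x) / 4
    return gx(x) + dphi1*(1 - g(0)) + dphi2*(2 - gx(1)) + dphi3*(3 - g(2))

# Any free function defined at the constraint points works; try g = sin.
g, gx = math.sin, math.cos
assert abs(y(0, g, gx) - 1) < 1e-12    # y(0)   = 1
assert abs(y_x(1, g, gx) - 2) < 1e-12  # y_x(1) = 2
assert abs(y(2, g, gx) - 3) < 1e-12    # y(2)   = 3
```

Swapping in any other free function defined at $x = 0, 1, 2$ leaves the assertions satisfied.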

The support functions in the previous example were selected as $1$, $x^2$, and $x^3$. However, these support functions could have been any mutually linearly independent set of functions that permits a solution for the coefficient functionals $\eta_j$: to clarify the latter of these requirements, consider the same constraints with support functions $1$, $x$, and $x^2$. Then, the set of equations with unknowns $\eta_j$ is,

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 2 \\ 1 & 2 & 4 \end{bmatrix} \begin{Bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \end{Bmatrix} = \begin{Bmatrix} 1 - g(0) \\ 2 - g_x(1) \\ 3 - g(2) \end{Bmatrix}.$$

Notice that when using these support functions the matrix that multiplies the coefficient functionals is singular. Thus, no solution exists, and therefore, the support functions $1$, $x$, and $x^2$ are an invalid set for these constraints.

Note that the matrix singularity does not depend on the free function. This means that the singularity arises when a linear combination of the selected support functions cannot be used to interpolate the constraints. Therefore, the singularity of the support-function matrix depends on both the support functions chosen and the specific constraints to be embedded. This raises another important restriction on the support functions: not only must they be linearly independent, but they must also constitute an interpolation model that is consistent with the specified constraints.
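This singularity can be confirmed numerically. The sketch below (assuming NumPy is available) compares the determinant of the support-function matrix for the invalid set $\{1, x, x^2\}$ with that of the valid set $\{1, x^2, x^3\}$:

```python
import numpy as np

# Support-function matrix for s = {1, x, x^2}: rows apply the constraints
# y(0), y_x(1), y(2) to each support function.
S_bad = np.array([[1.0, 0.0, 0.0],   # s(0)  = (1, 0, 0)
                  [0.0, 1.0, 2.0],   # s'(1) = (0, 1, 2)
                  [1.0, 2.0, 4.0]])  # s(2)  = (1, 2, 4)

# Same constraints with s = {1, x^2, x^3}.
S_good = np.array([[1.0, 0.0, 0.0],
                   [0.0, 2.0, 3.0],
                   [1.0, 4.0, 8.0]])

assert abs(np.linalg.det(S_bad)) < 1e-12          # singular: invalid set
assert abs(np.linalg.det(S_good) - 4.0) < 1e-12   # invertible: valid set
```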

Notice that each term in the constrained expression, except the term containing only the free function, is associated with a specific constraint and has a particular structure. To illustrate, examine the first constraint term from Equation (2),

 −2x3+3x2+44ϕ1(x)(1−g(0))ρ1(x,g(x)).

The first term in the product, $\phi_1(x)$, is called a switching function; Reference Mortari (2017a) introduced these switching functions as “coefficient” functions. A switching function is equal to $1$ when evaluated at the constraint it is referencing, and equal to $0$ when evaluated at all the other constraints. The second term of the product, $\rho_1(x, g(x))$, is called a projection functional, and is derived by setting the constraint function equal to zero and replacing $y(x)$ with $g(x)$. In the case of constraints at a point, this is simply the difference between the constraint value and the free function evaluated at that constraint point. It is called the projection functional because it projects the free function onto the set of functions that vanish at the constraint. When evaluating the switching function, $\phi_1(x)$, at the constraint it is referencing, it is equal to $1$ (i.e., $\phi_1(0) = 1$), and when it is evaluated at the other constraints it is equal to $0$ (i.e., $\frac{\partial \phi_1}{\partial x}(1) = 0$ and $\phi_1(2) = 0$). The projection functional, $\rho_1(x, g(x))$, is just the difference between the constraint and the free function evaluated at the constraint point, $1 - g(0)$. This structure is important, as it shows up in the other constraint types too. Property 2.1 follows from the definition of the projection functional.

The projection functionals for constraints at a point are always equal to zero if the free function, $g(x)$, is selected such that it satisfies the associated constraint.

For example, if $g(x)$ is selected such that $g(0) = 1$, then the first projection functional in this example becomes $\rho_1 = 1 - g(0) = 0$. Based on this structure, an alternative way to define the constrained expression, shown in Equation (3), can be derived,

$$y(x, g(x)) = g(x) + \sum_{j=1}^{k} \phi_j(x)\,\rho_j(x, g(x)). \tag{3}$$

The projection functionals are simple to derive, but the switching functions require some attention. From their definition, these functions must go to $1$ at their associated constraint and to $0$ at all other constraints. Based on this definition, the following algorithm for deriving the switching functions is proposed:

1. Choose $k$ support functions, $s_j(x)$.

2. Write each switching function as a linear combination of the support functions with unknown coefficients.

3. Based on the switching function definition, write a system of equations to solve for the unknown coefficients.

To validate that this algorithm works, we will use the same constraints and support functions and rederive the constrained expression shown in Equation (2). Hence, $\phi_1(x) = s_j(x)\,\alpha_{j1}$, $\phi_2(x) = s_j(x)\,\alpha_{j2}$, and $\phi_3(x) = s_j(x)\,\alpha_{j3}$, for some as-yet-unknown coefficients $\alpha_{ji}$. Note that in the previous mathematical expressions, and throughout the remainder of the paper, the Einstein summation convention is used to improve readability. Now, the definition of the switching function is used to come up with a set of equations. For example, the first switching function has the three equations,

$$\phi_1(0) = 1, \qquad \frac{\partial \phi_1}{\partial x}(1) = 0, \qquad \text{and} \qquad \phi_1(2) = 0.$$

These equations are expanded in terms of the support functions,

$$\begin{aligned}
\phi_1(0) &= (1)\,\alpha_{11} + (0)\,\alpha_{21} + (0)\,\alpha_{31} = 1 \\
\frac{\partial \phi_1}{\partial x}(1) &= (0)\,\alpha_{11} + (2)\,\alpha_{21} + (3)\,\alpha_{31} = 0 \\
\phi_1(2) &= (1)\,\alpha_{11} + (4)\,\alpha_{21} + (8)\,\alpha_{31} = 0,
\end{aligned}$$

which can be compactly written as,

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 3 \\ 1 & 4 & 8 \end{bmatrix} \begin{Bmatrix} \alpha_{11} \\ \alpha_{21} \\ \alpha_{31} \end{Bmatrix} = \begin{Bmatrix} 1 \\ 0 \\ 0 \end{Bmatrix}.$$

The same is done for the other two switching functions to produce a set of equations that can be solved by matrix inversion.

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 3 \\ 1 & 4 & 8 \end{bmatrix} \begin{bmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\quad\Longrightarrow\quad
\begin{bmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 3 \\ 1 & 4 & 8 \end{bmatrix}^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ \tfrac{3}{4} & 2 & -\tfrac{3}{4} \\ -\tfrac{1}{2} & -1 & \tfrac{1}{2} \end{bmatrix}.$$

Substituting the constants back into the switching functions and simplifying yields,

$$\phi_1(x) = \frac{-2x^3 + 3x^2 + 4}{4}, \qquad \phi_2(x) = -x^3 + 2x^2, \qquad \text{and} \qquad \phi_3(x) = \frac{2x^3 - 3x^2}{4}.$$

Substituting the projection functionals and switching functions back into the constrained expression yields,

$$y(x, g(x)) = g(x) + \frac{-2x^3 + 3x^2 + 4}{4}\big(1 - g(0)\big) + \big(-x^3 + 2x^2\big)\big(2 - g_x(1)\big) + \frac{2x^3 - 3x^2}{4}\big(3 - g(2)\big),$$

which is identical to Equation (2). This approach to derive constrained expressions using switching functions has, similar to the first approach, the risk of obtaining a singular matrix if the support functions selected are not able to interpolate the constraints. However, as will be demonstrated in the coming sections, this approach can be easily extended to multivariate domains via recursive applications of the univariate theory, and this approach lends itself nicely to mathematical proofs.
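The three-step algorithm above can be sketched in a few lines of Python (assuming NumPy; all variable names are illustrative), recovering the closed-form switching functions derived in this example:

```python
import numpy as np

# Step 1: support functions s = {1, x^2, x^3} and their derivatives.
s  = [lambda x: 1.0, lambda x: x**2, lambda x: x**3]
sx = [lambda x: 0.0, lambda x: 2*x,  lambda x: 3*x**2]

# Steps 2-3: each switching function is phi_i = alpha_ji * s_j; the
# coefficients solve S @ alpha = I, where row i of S applies constraint i
# to each support function.
S = np.array([[f(0) for f in s],    # value at x = 0
              [f(1) for f in sx],   # derivative at x = 1
              [f(2) for f in s]])   # value at x = 2
alpha = np.linalg.inv(S)

def phi(i, x):
    return sum(alpha[j, i] * s[j](x) for j in range(3))

# Check against the closed forms derived in the text.
for x in (0.0, 0.5, 1.7):
    assert abs(phi(0, x) - (-2*x**3 + 3*x**2 + 4)/4) < 1e-12
    assert abs(phi(1, x) - (-x**3 + 2*x**2)) < 1e-12
    assert abs(phi(2, x) - (2*x**3 - 3*x**2)/4) < 1e-12
```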

### 2.2 Univariate Example # 2: Linear Constraints

Linear constraints consist of linear combinations of the previous types of constraints. Note that by this definition, relative constraints such as $y(0) = y(1)$ are just a special case of linear constraints. Take for example the following two constraints,

$$y(0) = y(1), \qquad \text{and} \qquad 3 = 2y(2) + \pi\,y_{xx}(0).$$

To generate a constrained expression, the projection functionals and switching functions must be found. Similar to the constraints at a point, first the constraints are arranged such that one side of the constraint is equal to zero; for example,

$$y(0) - y(1) = 0 \qquad \text{and} \qquad 3 - 2y(2) - \pi\,y_{xx}(0) = 0.$$

Then, the projection functionals can be defined by replacing $y$ with $g$. Thus,

$$\rho_1(x, g(x)) = g(0) - g(1) \qquad \text{and} \qquad \rho_2(x, g(x)) = 3 - 2g(2) - \pi\,g_{xx}(0).$$

The switching functions are again defined such that they are equal to $1$ when evaluated with their associated constraint, and equal to $0$ when evaluated with all other constraints. The word “evaluation” in the previous sentence requires clarification. Substitution of the constrained expression back into the constraint should result in the constraint being identically satisfied. When doing so, the switching functions, $\phi_i(x)$, will be evaluated in the same way $y(x)$ is evaluated in the constraint. Thus, the constants within the constraint are not used in the evaluation. Moreover, because the projection functional is designed to exactly cancel the values of the free function in the constraint, the switching function equations should have the opposite sign. Hence, evaluation means to replace the function $y(x)$ with the switching function, remove any terms not multiplied by the switching function, and multiply the entire equation by $-1$. Any reader confused by this linguistic definition of switching function evaluation may refer to Property 3, which defines the switching function evaluation mathematically. For this example, this leads to,

$$\phi_1(1) - \phi_1(0) = 1, \qquad 2\phi_1(2) + \pi\,\frac{\partial^2 \phi_1}{\partial x^2}(0) = 0,$$

for the first switching function, and

$$\phi_2(1) - \phi_2(0) = 0, \qquad 2\phi_2(2) + \pi\,\frac{\partial^2 \phi_2}{\partial x^2}(0) = 1,$$

for the second switching function. Note that while this “evaluation” definition may seem convoluted at first, it is in fact exactly what was done for the constraints at a point case. However, in that case, due to the simple nature of the constraints and the way the projection functionals were defined, this was simply the switching function evaluated at the point.

Similar to the constraints-at-a-point case, the switching functions are defined as a linear combination of support functions with unknown coefficients. Again, this can be written compactly in matrix form. For this example, the support functions $s_1(x) = 1$ and $s_2(x) = x$ are chosen. Then,

$$\begin{bmatrix} 0 & 1 \\ 2 & 4 \end{bmatrix} \begin{bmatrix} \alpha_{11} & \alpha_{12} \\ \alpha_{21} & \alpha_{22} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\quad\Longrightarrow\quad
\begin{bmatrix} \alpha_{11} & \alpha_{12} \\ \alpha_{21} & \alpha_{22} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 2 & 4 \end{bmatrix}^{-1} = \begin{bmatrix} -2 & \tfrac{1}{2} \\ 1 & 0 \end{bmatrix},$$

which results in the switching functions,

$$\phi_1(x) = x - 2, \qquad \phi_2(x) = \frac{1}{2}.$$

Substituting the switching and projection functionals back into the constrained expression form given in Equation (3) yields,

$$y(x, g(x)) = g(x) + (x - 2)\big(g(0) - g(1)\big) + \frac{1}{2}\big(3 - 2g(2) - \pi\,g_{xx}(0)\big).$$

By substituting this expression for $y(x, g(x))$ back into the constraints, one can verify that indeed this constrained expression satisfies the constraints regardless of the choice of free function $g(x)$. Property 2.2 extends Property 2.1 to linear constraints.

The projection functionals for linear constraints are always equal to zero if the free function is selected such that it satisfies the associated constraint. For example, if $g(x)$ is selected such that $g(0) = g(1)$, then the first projection functional in this example becomes $\rho_1 = g(0) - g(1) = 0$.
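As before, the resulting expression can be checked numerically. The sketch below verifies both linear constraints for the arbitrary choice $g(x) = \sin(x)$; note that the terms added to $g(x)$ are at most linear in $x$, so $y_{xx} = g_{xx}$:

```python
import math

# Constrained expression for y(0) = y(1) and 3 = 2 y(2) + pi * y_xx(0),
# with an arbitrary free function g (here g = sin, so g_xx = -sin).
def y(x, g, gxx):
    return (g(x) + (x - 2) * (g(0) - g(1))
            + 0.5 * (3 - 2 * g(2) - math.pi * gxx(0)))

def y_xx(x, g, gxx):
    # The added terms are linear/constant in x, so y_xx = g_xx.
    return gxx(x)

g, gxx = math.sin, lambda x: -math.sin(x)
assert abs(y(0, g, gxx) - y(1, g, gxx)) < 1e-12              # y(0) = y(1)
assert abs(2*y(2, g, gxx) + math.pi*y_xx(0, g, gxx) - 3) < 1e-12
```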

## 3 General Formulation of the Univariate TFC

This section rigorously defines the TFC constrained expression and provides some relevant proofs. First, a functional is defined and its properties are investigated. A functional, e.g., $f(x, g(x))$, has independent variable(s) and function(s) as inputs, and produces a function as an output.

Note that a functional as defined here coincides with the computer science definition of a functional. One can think of a functional as a map for functions. That is, the functional takes a function, $g(x)$, as its input and produces a function, $f(x, g(x))$ for any specified $g(x)$, as its output. Since this article is focused on constraint embedding, or in other words functional interpolation, we will not concern ourselves with the domain/range of the input and output functions. Rather, we will discuss functionals only in the context of their potential input functions, hereon referred to as the domain of the functional, and potential output functions, hereon referred to as the codomain of the functional.

Next, the definitions of injective, surjective, and bijective are extended from functions to functionals. A functional, $f(x, g(x))$, is said to be injective if every function in its codomain is the image of at most one function in its domain. A functional, $f(x, g(x))$, is said to be surjective if for every function $y(x)$ in the codomain there exists at least one $g(x)$ such that $f(x, g(x)) = y(x)$. A functional is said to be bijective if it is both injective and surjective. To elaborate, Figure 1 gives a graphical representation of each of these functionals, and examples of each follow. Note that the phrase “smooth functions” is used here to denote continuous, infinitely differentiable, real-valued functions. Consider, for instance, the functional $f(x, g(x)) = e^{g(x)}$, whose domain is all smooth functions and whose codomain is all smooth functions. The functional is injective because for every $y(x)$ in the codomain there is at most one $g(x)$ that maps to it.

However, the functional is not surjective, because it does not span the space of the codomain. For example, consider the desired output function $y(x) = -1$: there is no smooth $g(x)$ that produces this output, since $e^{g(x)} > 0$. Next, consider the functional $f(x, g(x)) = g(x) - g(0)$, whose domain is all smooth functions and whose codomain is the set of all smooth functions such that $y(0) = 0$. This functional is surjective because it spans the space of all smooth functions that are $0$ when $x = 0$, but it is not injective. For example, the functions $g(x) = x$ and $g(x) = x + 1$ produce the same result, i.e., $f(x, g(x)) = x$. Finally, consider the functional $f(x, g(x)) = g(x) + x$, whose domain is all smooth functions and whose codomain is all smooth functions. This functional is bijective, because it is both injective and surjective.

In addition, the notion of projection is extended to functionals. Consider the typical definition of a projection matrix, $P^2 = P$, for some square matrix $P$. In other words, when $P$ operates on itself, it produces itself; a projection property for functionals can be defined similarly.

A functional is said to be a projection functional if it produces itself when operating on itself. For example, consider a functional operating on itself, $f(x, f(x, g(x)))$. If $f(x, f(x, g(x))) = f(x, g(x))$, then the functional is a projection functional. Note that proving this automatically extends to a functional operating on itself any number of times: for example, $f(x, f(x, f(x, g(x)))) = f(x, f(x, g(x))) = f(x, g(x))$, and so on.

Now that a functional and some properties of a functional have been investigated, the notation used in the prior section can be leveraged to rigorously define TFC related concepts. First, it is useful to define the constraint operator, denoted by the symbol $\mathcal{C}_i$. The constraint operator, $\mathcal{C}_i$, is a linear operator that, when operating on a function, returns the function evaluated at the $i$-th specified constraint.

As an example, consider the second linear constraint given in Section 2.2, $3 = 2y(2) + \pi\,y_{xx}(0)$. For this problem, it follows that,

$$\mathcal{C}_2[y(x)] = 2y(2) + \pi\,y_{xx}(0).$$

The constraint operator is a linear operator, as it satisfies the two properties of a linear operator: (1) $\mathcal{C}_i[f(x) + g(x)] = \mathcal{C}_i[f(x)] + \mathcal{C}_i[g(x)]$ and (2) $\mathcal{C}_i[a f(x)] = a\,\mathcal{C}_i[f(x)]$. For example, again consider the second linear constraint given in Section 2.2,

$$\begin{aligned}
\mathcal{C}_2[f(x) + g(x)] &= \mathcal{C}_2[f(x)] + \mathcal{C}_2[g(x)] = 2f(2) + \pi f_{xx}(0) + 2g(2) + \pi g_{xx}(0) \\
\mathcal{C}_2[a f(x)] &= a\,\mathcal{C}_2[f(x)] = a\big(2f(2) + \pi f_{xx}(0)\big).
\end{aligned}$$

Naturally, the constraint operator has specific properties when operating on the support functions, switching functions, and projection functionals.

The constraint operator acting on the support functions produces the matrix

$$S_{ij} = \mathcal{C}_i[s_j(x)].$$

Again, consider the example from Section 2.2, where the support functions were $1$ and $x$. By applying the constraint operator,

$$S = \begin{bmatrix} \mathcal{C}_1[1] & \mathcal{C}_1[x] \\ \mathcal{C}_2[1] & \mathcal{C}_2[x] \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 2 & 4 \end{bmatrix},$$

which is identical to the matrix derived in Section 2.2. In fact, $S_{ij}$ is simply the matrix multiplying the $\alpha_{ij}$ coefficient matrix in all the previous examples. Therefore, it follows that $S_{ik}\,\alpha_{kj} = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta, and the solution for the coefficients is precisely the inverse of the constraint operator operating on the support functions.

The constraint operator acting on the switching functions produces the Kronecker delta.

$$\mathcal{C}_i[\phi_j(x)] = \delta_{ij}.$$

This property is just a mathematical restatement of the linguistic definition of the switching function given earlier. One can intuit this property from the switching function definition, since they evaluate to $1$ at their specified constraint condition (i.e., $i = j$) and to $0$ at all other constraint conditions (i.e., $i \neq j$).
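These operator properties can be checked numerically for the Section 2.2 example. In the sketch below, `C1` and `C2` are our own encodings of the two constraint operators (each takes a function and its second derivative, since the second constraint involves $y_{xx}$; `C1` carries the sign convention described in the text so that $\mathcal{C}_i[\phi_j] = \delta_{ij}$):

```python
import math

# Constraint operators for the Section 2.2 example.
def C1(f, fxx):
    return f(1) - f(0)

def C2(f, fxx):
    return 2 * f(2) + math.pi * fxx(0)

phi1, phi1xx = lambda x: x - 2.0, lambda x: 0.0
phi2, phi2xx = lambda x: 0.5,     lambda x: 0.0

# C_i[phi_j] = delta_ij.
assert abs(C1(phi1, phi1xx) - 1) < 1e-12 and abs(C1(phi2, phi2xx)) < 1e-12
assert abs(C2(phi1, phi1xx)) < 1e-12 and abs(C2(phi2, phi2xx) - 1) < 1e-12

# Projection functional rho_2 = kappa_2 - C_2[g], for g = sin.
rho2 = 3 - C2(math.sin, lambda x: -math.sin(x))
assert abs(rho2 - (3 - 2*math.sin(2) + math.pi*math.sin(0))) < 1e-12
```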

Using this definition of the constraint operator, one can define the projection functional in a compact and precise manner. Let $g(x)$ be the free function, where $g(x)$ is defined at the constraints, and let $\kappa_i$ be the numerical portion of the $i$-th constraint. Then,

$$\rho_i(x, g(x)) = \kappa_i - \mathcal{C}_i[g(x)].$$

Following the example from Section 2.2, the projection functional for the second constraint is,

$$\rho_2(x, g(x)) = \kappa_2 - \mathcal{C}_2[g(x)] = 3 - 2g(2) - \pi\,g_{xx}(0).$$

Note that in the univariate case $\kappa_i$ is a scalar value, but in the multivariate case $\kappa_i$ is a function.

For any function, $f(x)$, satisfying the constraints, there exists at least one free function, $g(x)$, such that the TFC constrained expression $y(x, g(x)) = f(x)$.

###### Proof.

As highlighted in Properties 2.1 and 2.2, the projection functionals are equal to zero whenever $g(x)$ satisfies the constraints. Thus, if $g(x)$ is a function that satisfies the constraints, then the constrained expression becomes $y(x, g(x)) = g(x)$. Hence, by choosing $g(x) = f(x)$, the constrained expression becomes $y(x, f(x)) = f(x)$. Therefore, for any function satisfying the constraints, $f(x)$, there exists at least one free function, $g(x) = f(x)$, such that the constrained expression is equal to the function satisfying the constraints, i.e., $y(x, g(x)) = f(x)$. ∎

The TFC univariate constrained expression is a projection functional.

###### Proof.

To prove this theorem, one must show that $y(x, y(x, g(x))) = y(x, g(x))$. By definition, the constrained expression returns a function that satisfies the constraints. In other words, for any $g(x)$, $y(x, g(x))$ is a function that satisfies the constraints. From the previous theorem, if the free function used in the constrained expression satisfies the constraints, then the constrained expression returns that free function exactly. Hence, if the constrained expression functional is given itself as the free function, it will simply return itself. ∎

For a given function, $f(x)$, satisfying the constraints, the free function, $g(x)$, in the TFC constrained expression $y(x, g(x)) = f(x)$ is not unique. In other words, the TFC constrained expression is a surjective functional.

###### Proof.

Consider the free function choice $g(x) = f(x) + \beta_j s_j(x)$, where $\beta_j$ are scalar values on $\mathbb{R}$ and $s_j(x)$ are the support functions used to construct the switching functions $\phi_i(x)$. The constrained expression is,

$$y(x) = g(x) + \phi_i(x)\,\rho_i(x, g(x)).$$

Substituting the chosen $g(x)$ yields,

$$y(x) = f(x) + \beta_j s_j(x) + \phi_i(x)\,\rho_i\big(x, f(x) + \beta_j s_j(x)\big).$$

Now, according to Definition 3 of the projection functional,

$$y(x) = f(x) + \beta_j s_j(x) + \phi_i(x)\big(\kappa_i - \mathcal{C}_i[f(x) + \beta_j s_j(x)]\big).$$

Since the constraint operator is a linear operator,

$$y(x) = f(x) + \beta_j s_j(x) + \phi_i(x)\big(\kappa_i - \mathcal{C}_i[f(x)] - \mathcal{C}_i[s_j(x)]\,\beta_j\big).$$

Since $f(x)$ is defined as a function satisfying the constraints, then $\kappa_i - \mathcal{C}_i[f(x)] = 0$, and,

$$y(x) = f(x) + \beta_j s_j(x) - \phi_i(x)\,\mathcal{C}_i[s_j(x)]\,\beta_j.$$

Now, according to Property 3 of the constraint operator, $\mathcal{C}_i[s_j(x)] = S_{ij}$, and by decomposing the switching functions as $\phi_i(x) = \alpha_{ki}\,s_k(x)$,

$$y(x) = f(x) + \beta_j s_j(x) - \alpha_{ki}\,s_k(x)\,S_{ij}\,\beta_j.$$

Collecting terms results in,

$$y(x) = f(x) + \beta_j\big(\delta_{jk} - \alpha_{ki} S_{ij}\big)\,s_k(x).$$

However, $\alpha_{ki}$ is the inverse of $S_{ik}$. Therefore, by the definition of the inverse, $\alpha_{ki} S_{ij} = \delta_{kj}$, and thus,

$$y(x) = f(x) + \beta_j\big(\delta_{jk} - \delta_{jk}\big)\,s_k(x).$$

Simplifying yields the result,

$$y(x) = f(x),$$

which is independent of the $\beta_j$ terms in the free function. Therefore, the free function is not unique. ∎

Notice that the non-uniqueness of $g(x)$ depends on the support functions used in the constrained expression, which has an immediate consequence when using constrained expressions in optimization. If any terms in $g(x)$ are linearly dependent on the support functions used to construct the constrained expression, their contribution is negated and is thus arbitrary. For some optimization techniques it is critical that the linearly dependent terms that do not contribute to the final solution be removed; otherwise, the optimization technique becomes impaired. For example, prior research focused on using this method to solve ODEs Mortari (2017b); Mortari et al. (2019) through a basis expansion of $g(x)$ and least-squares, and the basis terms linearly dependent on the support functions had to be omitted from $g(x)$ in order to maintain full-rank matrices in the least-squares.
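The proof above can be demonstrated numerically with the constrained expression from Example 1: perturbing the free function by any combination $\beta_j s_j(x)$ of the support functions leaves the output unchanged. A minimal sketch (the $\beta_j$ values are arbitrary):

```python
import math

# Example 1 constrained expression; the support functions were {1, x^2, x^3}.
def y(x, g, gx):
    phi1 = (-2*x**3 + 3*x**2 + 4)/4
    phi2 = -x**3 + 2*x**2
    phi3 = (2*x**3 - 3*x**2)/4
    return g(x) + phi1*(1 - g(0)) + phi2*(2 - gx(1)) + phi3*(3 - g(2))

beta = (0.7, -1.3, 2.1)
g1, g1x = math.sin, math.cos
# Perturb g by a linear combination of the support functions.
g2  = lambda x: math.sin(x) + beta[0] + beta[1]*x**2 + beta[2]*x**3
g2x = lambda x: math.cos(x) + 2*beta[1]*x + 3*beta[2]*x**2

# The perturbation is annihilated: both free functions give the same y.
for x in (0.0, 0.3, 1.1, 1.9):
    assert abs(y(x, g1, g1x) - y(x, g2, g2x)) < 1e-10
```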

The previous proofs, coupled with the functional and functional-property definitions given earlier, provide a more rigorous definition for the TFC constrained expression: the TFC constrained expression is a surjective, projection functional whose domain is the space of all real-valued functions that are defined at the constraints and whose codomain is the space of all real-valued functions that satisfy the constraints. It is surjective because it spans the space of all functions that satisfy the constraints, its codomain, but it is not injective, because the non-uniqueness of the free function shows that functions in the codomain are the image of more than one function in the domain; the functional is thus not bijective either, because it is not injective. Moreover, the TFC constrained expression is a projection functional, as shown above.
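The projection property can likewise be checked numerically by feeding the Example 1 constrained expression back into itself as the free function (the helper `ce` below is our own construction, not notation from the paper):

```python
import math

# Example 1 switching functions and their derivatives.
def phis(x):
    return ((-2*x**3 + 3*x**2 + 4)/4, -x**3 + 2*x**2, (2*x**3 - 3*x**2)/4)

def dphis(x):
    return ((-6*x**2 + 6*x)/4, -3*x**2 + 4*x, (6*x**2 - 6*x)/4)

def ce(g, gx):
    """Return the constrained expression (and its derivative) as functions."""
    rho = (1 - g(0), 2 - gx(1), 3 - g(2))
    y  = lambda x: g(x)  + sum(p*r for p, r in zip(phis(x),  rho))
    yx = lambda x: gx(x) + sum(p*r for p, r in zip(dphis(x), rho))
    return y, yx

y1, y1x = ce(math.sin, math.cos)   # y(x, g)
y2, _   = ce(y1, y1x)              # y(x, y(x, g))

# Projection property: applying the functional twice changes nothing.
for x in (0.0, 0.4, 1.2, 2.0):
    assert abs(y2(x) - y1(x)) < 1e-12
```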

## 4 Multivariate TFC

Consider the general multivariate function $F(x_1, \ldots, x_n)$, where $F: \mathbb{R}^n \to \mathbb{R}^m$. In this definition, $F$ is composed of the real-valued functions $f_k$ such that $F = \{f_1, \ldots, f_m\}^{\mathsf{T}}$, where $x_1, \ldots, x_n$ are the independent variables. In terms of the TFC, the functions $f_k$ can be expressed as individual constrained expressions, and therefore the extension to multidimensional functions only involves extending the original method developed in Section 2 from $f_k(x)$ to $f_k(x_1, \ldots, x_n)$. Once completed, the extension to the original definition of $F$ is immediate: simply write a multivariate constrained expression for every $f_k$ in $F$.

In the following section, the multivariate TFC is developed using a recursive application of the univariate TFC. In this manner, it can be shown that this approach is a generalization of the original theory, and that all mathematical proofs for the univariate constrained expressions can easily be extended to the multivariate constrained expressions. Then, a tensor form of the multivariate constrained expression is introduced by simplifying the recursive method. The tensor formulation provides a succinct way to write multivariate constrained expressions.

### 4.1 Recursive Application of Univariate TFC

As discussed above, our extension to the multivariate case is concerned with deriving the constrained expression for a function of the form $f(x_1, \ldots, x_n)$. For a set of constraints in the multivariate case, one can first create the constrained expression for all constraints on $x_1$ using the univariate TFC formulation. The resulting univariate constrained expression, which we denote by $^{1}u$, is then used as the free function in a constrained expression that includes all the constraints on $x_2$ to produce the expression $^{2}u$. This method carries on until the final independent variable, $x_n$, is reached, and the resulting expression $^{n}u$ is the multivariate constrained expression.

This concept is best shown through some simple examples. These examples have two spatial dimensions and only one dependent variable (i.e., $n = 2$, $m = 1$), and we adopt the following notation:

$$F = f_1 := u, \qquad (x_1, x_2) := (x, y).$$

#### 4.1.1 Multivariate Example # 1: Value and Derivative Constraints

Take for example the following constraints in two dimensions,

$$u(0, y) = \sin(2\pi y), \qquad u_x(0, y) = 0, \qquad u(x, 0) = x^2, \qquad \text{and} \qquad u(x, 1) = \cos(x) - 1.$$

First, the constrained expression is built for the constraints involving $x$. Using the univariate TFC, this can be written as,

$${}^{1}u(x, y, g(x, y)) = g(x, y) + \sin(2\pi y) - g(0, y) - x\,g_x(0, y).$$

Then, $^{1}u(x, y, g(x, y))$ is used as the free function in the constrained expression involving the constraints on $y$. Since this problem is two-dimensional, the resultant expression is the multivariate TFC constrained expression,

$$u\big(x, y, {}^{1}u(x, y)\big) = {}^{1}u(x, y) + (1 - y)\big(x^2 - {}^{1}u(x, 0)\big) + y\big(\cos(x) - 1 - {}^{1}u(x, 1)\big). \tag{4}$$

Substituting $^{1}u$ into Equation (4) and simplifying yields,

$$u(x, y, g(x, y)) = g(x, y) + \sin(2\pi y) - g(0, y) - x\,g_x(0, y) + (1 - y)\big(x^2 - g(x, 0) + g(0, 0) + x\,g_x(0, 0)\big) + y\big(\cos(x) - 1 - g(x, 1) + g(0, 1) + x\,g_x(0, 1)\big). \tag{5}$$

Alternatively, one could first write the expression for the constraints on $y$,

$${}^{2}u(x, y, g(x, y)) = g(x, y) + (1 - y)\big(x^2 - g(x, 0)\big) + y\big(\cos(x) - 1 - g(x, 1)\big),$$

and use $^{2}u$ as the free function in a constrained expression for the constraints on $x$,

$$u\big(x, y, {}^{2}u(x, y)\big) = {}^{2}u(x, y) + \sin(2\pi y) - {}^{2}u(0, y) - x\,\frac{\partial\big({}^{2}u\big)}{\partial x}(0, y). \tag{6}$$

Substituting $^{2}u$ into Equation (6) and simplifying yields,

$$u(x, y, g(x, y)) = g(x, y) + \sin(2\pi y) - g(0, y) - x\,g_x(0, y) + (1 - y)\big(x^2 - g(x, 0) + g(0, 0) + x\,g_x(0, 0)\big) + y\big(\cos(x) - 1 - g(x, 1) + g(0, 1) + x\,g_x(0, 1)\big),$$

the exact same result as in Equation (5). Therefore, it does not matter in what order recursive univariate TFC is applied to produce multivariate constrained expressions, as the final result will be the same.

Figure 2 shows the constrained expression evaluated with a particular choice of the free function $g(x, y)$. The constraints that can be visualized easily are shown as black lines. As expected, the TFC constrained expression satisfies these constraints exactly.
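A numerical check of Equation (5) under an arbitrary free function, here $g(x, y) = \sin(x)\cos(y)$, confirms that all four boundary conditions hold (the analytic partial $u_x$ is written out by hand):

```python
import math

# Free function with analytic partial g_x (the choice is arbitrary).
g  = lambda x, y: math.sin(x)*math.cos(y)
gx = lambda x, y: math.cos(x)*math.cos(y)

def u(x, y):
    # Constrained expression from Equation (5).
    return (g(x, y) + math.sin(2*math.pi*y) - g(0, y) - x*gx(0, y)
            + (1 - y)*(x**2 - g(x, 0) + g(0, 0) + x*gx(0, 0))
            + y*(math.cos(x) - 1 - g(x, 1) + g(0, 1) + x*gx(0, 1)))

def u_x(x, y):
    # Analytic partial derivative of Equation (5) with respect to x.
    return (gx(x, y) - gx(0, y)
            + (1 - y)*(2*x - gx(x, 0) + gx(0, 0))
            + y*(-math.sin(x) - gx(x, 1) + gx(0, 1)))

for t in (0.0, 0.25, 0.8):
    assert abs(u(0, t) - math.sin(2*math.pi*t)) < 1e-12  # u(0, y)   = sin(2*pi*y)
    assert abs(u_x(0, t)) < 1e-12                        # u_x(0, y) = 0
    assert abs(u(t, 0) - t**2) < 1e-12                   # u(x, 0)   = x^2
    assert abs(u(t, 1) - (math.cos(t) - 1)) < 1e-12      # u(x, 1)   = cos(x) - 1
```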

#### 4.1.2 Multivariate Example # 2: Linear Constraints

Take for example the following constraints in two dimensions,

 u(0,y) = y^2 sin(πy),  u(1,y) + u(2,y) = y sin(πy),  u_y(x,0) = 0,  and  u(x,0) = u(x,1).

As in the first example, the univariate constrained expression is built for the constraints in x,

 ^(1)u(x,y,g(x,y)) = g(x,y) + ((3−2x)/3)(y^2 sin(πy) − g(0,y)) + (x/3)(y sin(πy) − g(1,y) − g(2,y)).

Then, ^(1)u(x,y) is used as the free function in the constrained expression for the constraints in y,

 u(x,y,^(1)u(x,y)) = ^(1)u(x,y) − (y − y^2) ^(1)u_y(x,0) − y^2(^(1)u(x,1) − ^(1)u(x,0)).

Substituting in ^(1)u(x,y) and simplifying yields,

 u(x,y,g(x,y)) = g(x,y) + (y − y^2)(((3−2x)/3) g_y(0,0) + (x/3)(g_y(1,0) + g_y(2,0)) − g_y(x,0)) − y^2(((3−2x)/3) g(0,0) − ((3−2x)/3) g(0,1) + (x/3)(g(1,0) + g(2,0)) − (x/3)(g(1,1) + g(2,1)) − g(x,0) + g(x,1)) + ((3−2x)/3)(y^2 sin(πy) − g(0,y)) + (x/3)(y sin(πy) − g(1,y) − g(2,y)). (7)

Just as in the previous example, one could first write the constrained expression for the constraints in y, call it ^(2)u(x,y), and then use ^(2)u(x,y) as the free function in the constrained expression for the constraints in x; the result, after simplifying, is identical to Equation (7). Figure 3 shows the constrained expression for a specific free function, where the blue line signifies the constraint u(0,y) = y^2 sin(πy), the black lines signify the derivative constraint u_y(x,0) = 0, and the magenta lines signify the relative constraint u(x,0) = u(x,1). The linear constraint u(1,y) + u(2,y) = y sin(πy) is not easily visualized, but is nonetheless satisfied by the constrained expression.
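As with the first example, the constraints embedded in Equation (7) can be spot-checked symbolically. A minimal SymPy sketch, writing each univariate step as an operator and using an arbitrary concrete free function (the choice cos(x)e^y is ours, purely for illustration):

```python
import sympy as sp

x, y = sp.symbols('x y')

def cx(e):
    # x-constraints: u(0,y) = y^2*sin(pi*y), u(1,y) + u(2,y) = y*sin(pi*y)
    return (e + (3 - 2*x)/3*(y**2*sp.sin(sp.pi*y) - e.subs(x, 0))
              + x/3*(y*sp.sin(sp.pi*y) - e.subs(x, 1) - e.subs(x, 2)))

def cy(e):
    # y-constraints: u_y(x,0) = 0 and u(x,0) = u(x,1)
    return (e - (y - y**2)*e.diff(y).subs(y, 0)
              - y**2*(e.subs(y, 1) - e.subs(y, 0)))

g = sp.cos(x)*sp.exp(y)   # arbitrary concrete free function
u = cy(cx(g))             # Eq. (7) evaluated for this g

assert sp.simplify(u.subs(x, 0) - y**2*sp.sin(sp.pi*y)) == 0          # value
assert sp.simplify(u.subs(x, 1) + u.subs(x, 2) - y*sp.sin(sp.pi*y)) == 0  # linear
assert sp.simplify(u.diff(y).subs(y, 0)) == 0                         # derivative
assert sp.simplify(u.subs(y, 0) - u.subs(y, 1)) == 0                  # relative
```

Note that the linear constraint, though hard to visualize, is as easy to verify symbolically as the others.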

#### 4.1.3 Multivariate Constrained Expression Theorems

For any function, f(x), satisfying the constraints, there exists at least one free function, g(x), such that the multivariate TFC constrained expression satisfies u(x, g(x)) = f(x).

###### Proof.

Based on Theorem 3, the univariate constrained expression returns the free function if the free function satisfies the constraints. Let ^(1)u(x, g(x)) represent the univariate constrained expression for the independent variable x_1 that uses the free function g(x), let ^(2)u(x, ^(1)u) represent the univariate constrained expression for the independent variable x_2 that uses the free function ^(1)u, and so on up to ^(n)u, which is simply the total constrained expression u(x, g(x)). If we choose g(x) = f(x), then based on Theorem 3, ^(1)u(x, f(x)) = f(x). Applying Theorem 3 recursively leads to ^(2)u(x, f(x)) = f(x), and so on until ^(n)u(x, f(x)) = f(x). Hence, for any function, f(x), satisfying the constraints, there exists a free function, g(x) = f(x), such that the multivariate constrained expression is equal to the function satisfying the constraints, i.e., u(x, f(x)) = f(x). ∎

The TFC multivariate constrained expression is a projection functional.

###### Proof.

To prove this theorem, one must show that u(x, u(x, g(x))) = u(x, g(x)). By definition, the constrained expression returns a function that satisfies the constraints; in other words, for any g(x), u(x, g(x)) is a function that satisfies the constraints. From the previous theorem, if the free function used in the constrained expression satisfies the constraints, then the constrained expression returns that free function exactly. Hence, if the constrained expression is given itself as the free function, it simply returns itself. ∎
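This idempotency can be checked directly for Example 1. A short SymPy sketch, where the operator `con` below is our encoding of that example's constrained expression (x-step followed by y-step):

```python
import sympy as sp

x, y = sp.symbols('x y')

def con(e):
    # Example 1 constrained expression: x-step, then y-step
    e1 = e + sp.sin(2*sp.pi*y) - e.subs(x, 0) - x*e.diff(x).subs(x, 0)
    return e1 + (1 - y)*(x**2 - e1.subs(y, 0)) + y*(sp.cos(x) - 1 - e1.subs(y, 1))

g = sp.sin(x*y) + x*y**3   # arbitrary free function
u1 = con(g)                # satisfies the constraints by construction
u2 = con(u1)               # feed the output back in as the free function

assert sp.simplify(u2 - u1) == 0   # projection: applying twice changes nothing
```

Since u1 already satisfies the constraints, every correction term in the second application vanishes, which is exactly the projection property.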

For a given function, f(x), satisfying the constraints, the free function, g(x), in the TFC constrained expression u(x, g(x)) is not unique. In other words, the multivariate TFC constrained expression is a surjective functional.

###### Proof.

Since each expression used in deriving the multivariate constrained expression is derived through the univariate formulation, the results of the proof of Theorem 3 apply to each ^(k)u, and therefore the free function is not unique. ∎

Just like in the univariate case, this proof has immediate implications when using the constrained expression for optimization. Through the recursive application of the univariate TFC approach, any terms in g(x) that are linearly dependent on the support functions of any of the independent variables will not contribute to the solution. In the multivariate case, this also includes products of support functions that include one and only one support function from each independent variable, e.g., a product s_i(x) s_j(y) of one support function in x and one in y.
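For instance, assuming the support functions {1, x} in x and {1, y} in y used in Example 1, any term of the form a + b·x + c·y + d·x·y in the free function vanishes from the constrained expression. A sketch of this check (the operator `con` below is our encoding of Example 1's constrained expression):

```python
import sympy as sp

x, y, a, b, c, d = sp.symbols('x y a b c d')

def con(e):
    # Example 1 constrained expression: x-step, then y-step
    e1 = e + sp.sin(2*sp.pi*y) - e.subs(x, 0) - x*e.diff(x).subs(x, 0)
    return e1 + (1 - y)*(x**2 - e1.subs(y, 0)) + y*(sp.cos(x) - 1 - e1.subs(y, 1))

g = sp.exp(x*y)   # arbitrary free function

# Terms spanned by the support functions {1, x} and {1, y}, including the
# cross product x*y, contribute nothing to the constrained expression.
assert sp.simplify(con(g + a + b*x + c*y + d*x*y) - con(g)) == 0
```

In an optimization setting this means such terms in the free function are wasted parameters and can be removed from the basis a priori.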

In addition, just as in the univariate case, the three theorems above allow for a more rigorous definition of the multivariate TFC constrained expression. The multivariate TFC constrained expression is a surjective, projection functional whose domain is the space of all real-valued functions that are defined at the constraints and whose codomain is the space of all real-valued functions that satisfy the constraints.

### 4.2 Tensor Form

Recursive applications of the univariate TFC lead to expressions that lend themselves nicely to mathematical proofs, such as those in the previous section. However, for applications, it is typically more convenient to express the constrained expression in a more compact form. Conveniently, the multivariate constrained expressions that are formed from recursive applications of the univariate TFC can be succinctly expressed in the following tensor form,

 u(x, g(x)) = g(x) + M_{i_1 i_2 … i_n}(ρ(x, g(x))) Φ_{i_1}(x_1) Φ_{i_2}(x_2) ⋯ Φ_{i_n}(x_n), (8)

where i_1, i_2, …, i_n are indices associated with the n dimensions that have constraints and are summed over repeated appearances, M is an n-dimensional tensor whose elements are based on the projection functionals, ρ, and the Φ_{i_k} are vectors whose elements are based on the switching functions for the associated dimension.
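As an illustration, the tensor form can be checked against the recursive construction for Example 1. In the sketch below, M and the Φ vectors are our own worked instance, assembled by hand from that example's projection functionals and switching functions, with an arbitrary concrete free function:

```python
import sympy as sp

x, y = sp.symbols('x y')
g = sp.exp(x)*sp.cos(y)   # arbitrary concrete free function
gx = g.diff(x)

# Projection functionals of Example 1 (constraint value minus the constraint
# operator applied to the free function)
rx1 = sp.sin(2*sp.pi*y) - g.subs(x, 0)   # u(0,y) constraint
rx2 = -gx.subs(x, 0)                     # u_x(0,y) constraint
ry1 = x**2 - g.subs(y, 0)                # u(x,0) constraint
ry2 = sp.cos(x) - 1 - g.subs(y, 1)       # u(x,1) constraint

# First-order sub-tensors hold {0, rho_1, rho_2}; the remaining elements are
# the signed geometric intersections of the projection functionals.
M = sp.Matrix([
    [0,   ry1,                      ry2                     ],
    [rx1, g.subs([(x, 0), (y, 0)]),  g.subs([(x, 0), (y, 1)])],
    [rx2, gx.subs([(x, 0), (y, 0)]), gx.subs([(x, 0), (y, 1)])]])

Phix = sp.Matrix([1, 1, x])       # switching-function vector in x
Phiy = sp.Matrix([1, 1 - y, y])   # switching-function vector in y

u_tensor = g + (Phix.T * M * Phiy)[0, 0]   # contraction of Eq. (8)

# Recursive (univariate-by-univariate) construction for comparison
e1 = g + sp.sin(2*sp.pi*y) - g.subs(x, 0) - x*gx.subs(x, 0)
u_rec = e1 + (1 - y)*(x**2 - e1.subs(y, 0)) + y*(sp.cos(x) - 1 - e1.subs(y, 1))

assert sp.simplify(u_tensor - u_rec) == 0
```

The contraction reproduces Equation (5) exactly, which is the point of the tensor form: the bookkeeping of the recursion is absorbed into M.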

The M tensor can be constructed using a simple two-step process. Note that the arguments of the functionals are dropped in this explanation for clarity.

1. The elements of the first-order sub-tensors of M acquired by setting all but one index equal to one are a zero followed by the projection functionals for the dimension associated with that index. Mathematically,

 M_{1…i_k…1} = {0, ^(k)ρ_1, ⋯, ^(k)ρ_{ℓ_k}},

where ^(k)ρ_i indicates the i-th projection functional of the k-th independent variable and ℓ_k is the number of constraints associated with the k-th independent variable.

2. The remaining elements of the tensor, those that have more than one index not equal to one, are the geometric intersection of the associated projection functionals multiplied by a sign (+1 or −1). Mathematically, this can be written as,

 M_{i_1 i_2 … i_n} = ^(j)C_{i_j−1}[ ^(k)C_{i_k−1}[ ⋯ [ ^(h)ρ_{i_h−1} ] ⋯ ] ] (−1)^{m+1}, (9)

where j, k, …, h are the indices of M that are not equal to one, and m is the number of non-one indices. If the constraint functions and free function are differentiable up to the order of derivatives required to compute Equation (9), then by multiple applications of Clairaut's theorem the constraint operators can be freely permuted Mortari and Leake (2019). For example, Equation (9) could be re-written as,

 M_{i_1 i_2 … i_n} = ^(h)C_{i_h−1}[ ⋯ [ ^(j)ρ_{i_j−1} ] ⋯ ] (−1)^{m+1}.