## 1. Introduction.

The linear regression problem in the two-dimensional case (i.e. on a plane) typically arises when approximating experimental data with a linear function (see [1]). Its solution using the least squares method was first published by Legendre in 1805 (see [2]). In unpublished form the least squares method is attributed to Carl Friedrich Gauss, who used it as early as 1795. His work was published only in 1809 (see [3]).

There are various fitting problems in three-dimensional Euclidean space (see the plane, circle, and ellipse fitting problems in [4] and [5], and the ellipsoid fitting problems in [6] and [7]). The linear regression problem in the three-dimensional case is the problem of best fitting a straight line to a group of points in three-dimensional Euclidean space. A solution of this problem is given by Jean Jacquelin in [8]. His method is substantially based on direct calculations in coordinates. Our goal in the present paper is to give a coordinate-free solution to the problem.

## 2. Parametric and non-parametric vectorial equations of a straight line.

Let's consider the straight line in Fig. 2.1. The point $A$ is a fixed point of this line, its radius-vector is $\mathbf r_0$. The point $X$ is a variable point, its radius-vector is $\mathbf r$. These two radius-vectors are related to each other by means of the equation

$$\mathbf r=\mathbf r_0+\mathbf a\,t,\tag{2.1}$$

where $\mathbf a$ is some non-zero vector directed along the line and $t$ is a scalar parameter. The equality (2.1) is called the vectorial parametric equation of the line in the space (see [9]).

The choice of the point $A$ on the line is not unique. Therefore the equation (2.1) has some extent of ambiguity. In order to avoid this ambiguity non-parametric equations are used. Let's multiply both sides of the equality (2.1) by the vector $\mathbf a$ using the vector product operation (it is also called the cross product, i.e. $[\mathbf a,\mathbf r]=\mathbf a\times\mathbf r$). Since $[\mathbf a,\mathbf a]=0$, the term with $t$ vanishes and we get

$$[\mathbf a,\mathbf r]=[\mathbf a,\mathbf r_0].\tag{2.2}$$

The product of the two constant vectors in the right hand side of (2.2) is a constant vector. If we denote it through $\mathbf b$, we get the equality

$$[\mathbf a,\mathbf r]=\mathbf b.\tag{2.3}$$

The equality (2.3) is known as the non-parametric vectorial equation of the line in the space (see [9]). Note that the vector $\mathbf b$ has no ambiguity arising from the uncertainty in choosing the initial point on the line. Indeed, it is easy to see that $\mathbf b=[\mathbf a,\mathbf r_0]$ is invariant with respect to the transformation $\mathbf r_0\to\mathbf r_0+\mathbf a\,s$.

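To make the invariance claim concrete, here is a minimal numerical check (not part of the original paper) in Python with numpy: every point of the line (2.1) yields the same vector $\mathbf b$ from (2.3), and shifting the initial point along the line does not change $\mathbf b$.

```python
import numpy as np

# Direction vector a and an (arbitrary) initial point r0 of the line.
a = np.array([1.0, 2.0, -1.0])
r0 = np.array([0.5, -1.0, 3.0])

# The constant vector b = [a, r0] from equation (2.3).
b = np.cross(a, r0)

# Every point r = r0 + a*t of the line (2.1) gives the same cross product.
for t in (-2.0, 0.0, 1.5):
    r = r0 + a * t
    assert np.allclose(np.cross(a, r), b)

# Shifting the initial point along the line leaves b invariant,
# because [a, a] = 0.
r0_shifted = r0 + a * 7.0
assert np.allclose(np.cross(a, r0_shifted), b)
```

The script runs silently because all assertions hold: the cross product with $\mathbf a$ annihilates the component along $\mathbf a$.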
## 3. The statement of the problem.

Let $X_1,\ldots,X_n$ be a group of points in the space given by their radius-vectors $\mathbf r_1,\ldots,\mathbf r_n$. The linear regression problem consists in finding a line given by the equation (2.3) such that the root mean square of the distances $d_i$ from the points to this line takes its minimal value, i.e. such that

$$D=\frac{1}{n}\sum_{i=1}^n d_i^{\,2}\longrightarrow\min.\tag{3.1}$$
## 4. The solution of the problem.

The distance $d_i$ from the point $X_i$ to the line (2.1) is given by the formula

$$d_i=\frac{|[\mathbf a,\mathbf r_i-\mathbf r_0]|}{|\mathbf a|}.\tag{4.1}$$
Without loss of generality we can assume that

$$|\mathbf a|=1.\tag{4.2}$$
Then, taking into account the identity $|[\mathbf a,\mathbf x]|^2=|\mathbf a|^2\,|\mathbf x|^2-(\mathbf a,\mathbf x)^2$ and (4.2), from (4.1) we derive

$$d_i^{\,2}=|\mathbf r_i-\mathbf r_0|^2-(\mathbf a,\mathbf r_i-\mathbf r_0)^2.\tag{4.3}$$

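As a sanity check (not part of the original paper), one can verify numerically that (4.3) agrees with the cross-product distance formula (4.1) whenever $|\mathbf a|=1$:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=3)
a /= np.linalg.norm(a)        # assumption (4.2): |a| = 1
r0 = rng.normal(size=3)

for _ in range(5):
    ri = rng.normal(size=3)
    d_cross = np.linalg.norm(np.cross(a, ri - r0))             # formula (4.1)
    d_sq = np.dot(ri - r0, ri - r0) - np.dot(a, ri - r0) ** 2  # formula (4.3)
    assert np.isclose(d_cross ** 2, d_sq)
```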
Now we substitute (4.3) into (3.1). As a result we obtain

$$D=\frac{1}{n}\sum_{i=1}^n\bigl(|\mathbf r_i-\mathbf r_0|^2-(\mathbf a,\mathbf r_i-\mathbf r_0)^2\bigr).\tag{4.4}$$
The formula (4.4) is an analog of the formula 2.3 in [4]. The round brackets in (4.4) denote the scalar product operation (it is also called the dot product, i.e. $(\mathbf a,\mathbf x)=\mathbf a\cdot\mathbf x$).

###### Definition 4.1

A line given by the equation (2.3) with $|\mathbf a|=1$ is called an optimal root mean square line if the quantity (4.4) takes its minimal value.

The right hand side of (4.4) is a quadratic polynomial with respect to the components of the vector $\mathbf r_0$. It takes its minimal value if $\mathbf r_0$ is given by the formula

$$\mathbf r_0=\frac{1}{n}\sum_{i=1}^n\mathbf r_i.\tag{4.5}$$
Substituting (4.5) back into (4.4) and taking into account (4.2), we derive

$$D=\frac{1}{n}\sum_{i=1}^n\bigl(|\mathbf r_i-\mathbf r_0|^2\,(\mathbf a,\mathbf a)-(\mathbf a,\mathbf r_i-\mathbf r_0)^2\bigr).\tag{4.6}$$
The formula (4.6) is an analog of the formula 2.5 in [4]. Its right hand side is a quadratic form with respect to the vector $\mathbf a$. We denote it through $\varphi(\mathbf a)$:

$$\varphi(\mathbf a)=\frac{1}{n}\sum_{i=1}^n\bigl(|\mathbf r_i-\mathbf r_0|^2\,(\mathbf a,\mathbf a)-(\mathbf a,\mathbf r_i-\mathbf r_0)^2\bigr)\tag{4.7}$$

and call $\varphi(\mathbf a)$ the non-linearity form for a group of points in three-dimensional Euclidean space. Like the non-flatness form 2.14 in [4], the non-linearity form (4.7) is positive in the sense of the following inequality:

$$\varphi(\mathbf a)\geqslant 0.$$

Indeed, each summand in (4.7) is non-negative due to the Cauchy–Schwarz inequality $(\mathbf a,\mathbf x)^2\leqslant|\mathbf a|^2\,|\mathbf x|^2$.

Like in [4], one can draw some analogy to mechanics using the inertia tensor. However, we shall not do it now. We just note that, like any quadratic form, $\varphi(\mathbf a)$ diagonalizes in some orthonormal basis associated with its primary axes.

Let's introduce the following notation analogous to 2.6 in [4]:

$$\boldsymbol\rho=\frac{1}{n}\sum_{i=1}^n\mathbf r_i.\tag{4.8}$$

The vector $\boldsymbol\rho$ in (4.8) is the radius-vector of the center of mass of the group of points if we assume that unit masses are placed at each of these points. In terms of (4.8) the formula (4.5) is written as

$$\mathbf r_0=\boldsymbol\rho.\tag{4.9}$$
Comparing (4.9) with (2.1), we conclude that the optimal line should pass through the center of mass of the group of points. Its direction is determined by the non-linearity form according to the following theorem.

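This conclusion can be checked numerically. The following sketch (not from the paper) verifies that, for a fixed unit vector $\mathbf a$, the quantity (4.4) is minimized by placing $\mathbf r_0$ at the center of mass $\boldsymbol\rho$:

```python
import numpy as np

rng = np.random.default_rng(1)
points = rng.normal(size=(20, 3))     # a random group of 20 points
a = np.array([0.0, 0.6, 0.8])         # a fixed unit vector, |a| = 1

def mean_square_distance(r0):
    """The quantity D from formula (4.4) for the given r0."""
    d = points - r0
    return np.mean(np.sum(d * d, axis=1) - (d @ a) ** 2)

rho = points.mean(axis=0)             # center of mass, formula (4.8)
D_min = mean_square_distance(rho)

# No other choice of r0 gives a smaller value of D.
for _ in range(100):
    assert mean_square_distance(rng.normal(size=3)) >= D_min - 1e-12
```

Shifting $\mathbf r_0$ along $\mathbf a$ leaves $D$ unchanged, which is the expected ambiguity of the initial point noted in Section 2.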
###### Theorem 4.1

A line is an optimal root mean square line for a group of points if and only if it passes through the center of mass of these points and its direction vector $\mathbf a$ is directed along the primary axis of the non-linearity form $\varphi$ of these points corresponding to its minimal eigenvalue.

## 5. Conclusion.

Theorem 4.1 solves the linear regression problem formulated in Section 3. Its proof is obvious from the considerations preceding it. Practically this theorem means that in order to find a line best fitting a group of points in three-dimensional Euclidean space one should find their center of mass and diagonalize the symmetric matrix associated with their non-linearity form (4.7). In some cases this matrix can have two coinciding minimal eigenvalues $\lambda_1=\lambda_2<\lambda_3$. In these cases the shape of the group of points resembles a disc and hence there is no preferable direction for the optimal line within the plane of this disc.

If $\lambda_1=\lambda_2=\lambda_3$, the shape of the group of points resembles a ball. In this case we have no preferable direction for the optimal line at all.

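The whole procedure of Theorem 4.1 is short enough to sketch in code. The following Python function (an illustration under the stated assumptions, not the author's implementation) builds the symmetric matrix of the non-linearity form (4.7) and takes the eigenvector of its minimal eigenvalue as the direction of the optimal line:

```python
import numpy as np

def fit_line(points):
    """Fit an optimal root mean square line to an (n, 3) array of points.

    Returns (rho, a): the center of mass and a unit direction vector.
    """
    points = np.asarray(points, dtype=float)
    rho = points.mean(axis=0)                 # center of mass, formula (4.8)
    d = points - rho
    n = len(points)
    # Matrix S of the quadratic form (4.7): phi(a) = a^T S a, where
    # S = (1/n) * sum_i (|d_i|^2 * I - d_i d_i^T).
    S = (np.sum(d * d) * np.eye(3) - d.T @ d) / n
    eigvals, eigvecs = np.linalg.eigh(S)      # eigenvalues in ascending order
    return rho, eigvecs[:, 0]                 # eigenvector of the minimal one

# Usage: points lying exactly on a line are recovered exactly.
t = np.linspace(-1.0, 1.0, 7)
line_points = np.array([1.0, 0.0, 2.0]) + np.outer(t, [2.0, 1.0, 2.0])
rho, a = fit_line(line_points)
assert np.allclose(rho, [1.0, 0.0, 2.0])
assert np.allclose(np.cross(a, [2.0, 1.0, 2.0]), 0.0, atol=1e-8)
```

For a degenerate (disc- or ball-shaped) group of points `np.linalg.eigh` still returns an orthonormal basis, but, as noted above, the choice among the eigenvectors of the repeated minimal eigenvalue is then arbitrary.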
## References

- 1. Linear least squares, Wikipedia, Wikimedia Foundation Inc.
- 2. Legendre A.-M., Nouvelles méthodes pour la détermination des orbites des comètes, F. Didot, 1805.
- 3. Gauss C. F., Theoria motus corporum coelestium in sectionibus conicis solem ambientium, Perthes & Besser, 1809.
- 4. Algorithms for laying points optimally on a plane and a circle, e-print arXiv:0705.0350.
- 5. BIT Numerical Mathematics 34 (1994), no. 4, 558–578.
- 6. Least square ellipsoid fitting using iterative orthogonal transformations, e-print arXiv:1704.04877.
- 7. Fast ellipsoidal fitting of discrete multidimensional data, e-print arXiv:1901.05511.
- 8. Jacquelin J., Régressions et trajectoires en 3D, online resource doc/31477970 at scribd.com, 2002, 2011.
- 9. Course of analytical geometry, Bashkir State University, 2010; see also arXiv:1111.6521.
