In physical problems such as the reflection of light in a three-dimensional environment, aircraft simulations, or image recognition, we search for methods to numerically solve linear systems of equations of the form . Recently, optical analog computing has been introduced as an alternative paradigm to classical computational linear algebra. In , the authors design a two-dimensional structure with a physical property which allows for solving a predetermined linear system of equations. In general, structures with such favourable physical properties are called meta-structures or meta-surfaces and are under very active research [3, 7, 8]. To be precise, in their set-up, sending specific waves across the meta-structure modifies those waves such that they represent the solution to the problem.
One issue with this method is how to quickly find the correct structure for a given matrix . In , the authors use physics software to gradually form such a structure. Here we demonstrate another method, which relies on asymptotic expansions of solutions to partial differential equations. Such expansions have already been studied in various papers [2, 1, 4, 5]. With that tool we can position and scale the objects on which the waves scatter, such that the resulting structure satisfies the desired requirements.
In the process of developing these asymptotic formulas, we realized that we can compute the scattered wave using a method which is similar to the explicit Euler scheme. In that scheme one numerically computes the solution to an ordinary differential equation by successively progressing in time with small time-steps until the desired time is reached. With our method we solve the partial differential equation around an obstacle by progressing the object radius in small increments until we reach its full extent. We present the numerical application of that method on a circular domain.
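The analogy can be illustrated with a minimal explicit Euler scheme. This is a generic sketch of the time-stepping idea referred to above, not the radius-stepping scheme of Section 4 itself; the model problem and all names are illustrative.

```python
# Explicit Euler: advance the state in small steps until the target is reached.
# Here the step variable is time t; in our method the role of t is played by
# the obstacle radius, which is increased in small increments.

def explicit_euler(f, y0, t0, t_end, n_steps):
    """Approximate y(t_end) for y'(t) = f(t, y), y(t0) = y0."""
    h = (t_end - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)  # one small forward step
        t = t + h
    return y

# Model problem y' = -y, y(0) = 1, with exact solution y(t) = exp(-t).
approx = explicit_euler(lambda t, y: -y, 1.0, 0.0, 1.0, 10_000)
```

The global error of the scheme is of first order in the step size, which is why many small steps are needed for an accurate result.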
This paper is organized as follows. In Section 2 we model the mathematical foundation for the underlying physical problem and define the fundamental partial differential equations for the asymptotic expansions. This leads us to the definition of the Neumann function on the outside. We then explain the connection between the Neumann function and the linear system of equations. In Section 3 we prove the asymptotic formulas concerning the Neumann function. There we discover special singular behaviours, which are essential to prove the asymptotic expansions. In Section 4, we first present the method of solving for the wave by increasing the radius in small steps and discuss the numerical error. Afterwards, we explain how we numerically build the meta-structure to solve the linear system of equations and discuss how well it operates. In Section 5, we conclude the paper with final considerations, open questions and possible future research directions. In the appendix we provide an interesting proof of a technical result and a modification of the trapezoidal rule for integrands with a logarithmic singularity.
Let and let be a finite union of disjoint, non-touching, simply connected, bounded and open -domains in . Let be source points on the horizontal axis . We have functions , for , which solve the following partial differential equation:
where denotes the limit from the outside of to , and denotes the outside normal derivative on . denotes the Dirac delta function at the point , and denotes the intensity at the source . The first condition is the Helmholtz equation, which represents the time-independent wave equation and arises from applying the Fourier transform in time to the wave equation. The second condition is known as the Neumann condition and models a material with a high electrical impedance. The third and fourth conditions model an absorbing layer on and a Neumann condition on . The fifth condition is known as the Sommerfeld radiation condition and originates from a physical constraint on the behaviour of an outgoing wave.
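For completeness, the standard reduction of the wave equation to the Helmholtz equation via the Fourier transform in time reads (with wave speed $c$ and wavenumber $k = \omega/c$; the notation $U$, $\hat U$ is only used in this side computation):

```latex
\partial_t^2 U(x,t) = c^2\,\Delta U(x,t)
\quad\xrightarrow{\;\hat U(x,\omega) = \int_{\mathbb{R}} U(x,t)\,e^{i\omega t}\,\mathrm{d}t\;}\quad
-\omega^2\,\hat U = c^2\,\Delta\hat U
\quad\Longleftrightarrow\quad
\Delta\hat U + k^2\,\hat U = 0,
\qquad k = \frac{\omega}{c}.
```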
We define to be the fundamental solution to the Helmholtz equation, that is, solves PDE (2.1) without the Neumann boundary condition and with a source at the origin. Furthermore, we define for , . Then we define the Neumann function to be the solution to
In contrast with , is not only defined on the upper half. We recall that , which we can readily see using a Green’s identity. We can express as a sum of and a smooth remainder, which satisfies PDE (2.2) with a vanishing right-hand side in the first equation. The same holds true for .
Using Green’s identity on the convolution of with , we can infer for that
Using the trapezoidal rule we can approximate the integral in the last equation up to an error in . Here we note that and have a logarithmic singularity at ; hence is not well defined. Thus we use a slight modification of the trapezoidal rule, which is elaborated in Appendix A. After this modification, we define the complex column vector, the complex column vector and the complex matrix . Then we have that
where denotes the identity matrix.
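As a generic illustration of how a logarithmic singularity can be handled within a trapezoidal rule, the following sketch uses the standard singularity-subtraction idea on a model integral; the exact modification we use is the one elaborated in Appendix A, and the integrand here is purely illustrative.

```python
# Singularity subtraction for I = ∫₀¹ f(x) log|x - s| dx with 0 < s < 1:
#   I = ∫ (f(x) - f(s)) log|x - s| dx  +  f(s) ∫ log|x - s| dx.
# The second integral is known in closed form, and the first integrand
# extends continuously by 0 at x = s, so the plain trapezoidal rule applies.
import math

def log_moment(s):
    """Closed form of ∫₀¹ log|x - s| dx for 0 < s < 1."""
    return (1 - s) * math.log(1 - s) + s * math.log(s) - 1

def singular_trapezoid(f, s, n):
    h = 1.0 / n
    def g(x):
        # Regularized integrand; its limit at x = s is 0.
        return 0.0 if x == s else (f(x) - f(s)) * math.log(abs(x - s))
    total = 0.5 * (g(0.0) + g(1.0)) + sum(g(i * h) for i in range(1, n))
    return h * total + f(s) * log_moment(s)

# Example: f(x) = x, s = 1/2; the exact value is -(log 2 + 1)/2 by a direct
# computation (the odd part integrates to zero on the symmetric interval).
approx = singular_trapezoid(lambda x: x, 0.5, 4000)
```

Without the subtraction, the node at (or near) the singularity would dominate the quadrature error; with it, the trapezoidal rule converges at essentially its usual rate.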
Our objective is to solve a linear system of equations using a physical procedure in which an electrical signal is applied at , for every , and then measured again at those points, for and , where and are given. The scattered wave originating at is, in its Fourier space, the function . Thus we are especially looking for a domain which yields
and while doing so we keep track of the vector , with the intention of rapidly determining the intensities such that . In this paper we primarily consider rapidly finding the domain such that Equation (2.3) holds.
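Determining the intensities from a known transfer matrix amounts to solving a small dense linear system. A minimal pure-Python sketch follows; the matrix `A`, right-hand side `c` and solution `b` are hypothetical placeholders, not data from our set-up.

```python
# Gaussian elimination with partial pivoting: solve A b = c for the
# intensity vector b, given a transfer matrix A and a target vector c.
def solve(A, c):
    n = len(A)
    # Work on an augmented copy so the inputs are not modified.
    M = [row[:] + [c[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    b = [0.0] * n
    for k in range(n - 1, -1, -1):  # back substitution
        b[k] = (M[k][n] - sum(M[k][j] * b[j] for j in range(k + 1, n))) / M[k][k]
    return b

# Hypothetical 2x2 example.
b = solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
```

The same routine works unchanged for complex entries, as they arise for the matrices of Section 2.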
3 Asymptotic Formula for the Perturbation of the Neumann Function
Let be a finite union of disjoint, non-touching, simply connected, bounded and open -domains in . Let , then for , where is the topological closure of the open set , we define the outside Neumann function , for , as the solution to the partial differential equation (2.2). Let be a ball centred at with radius . We define as the union of some set as defined above and of the ball , where does not intersect . Let and be the outside Neumann function to and , respectively.
For small enough and for all , , we have that
where denotes the gradient with respect to the second argument of , ‘·’ denotes the dot product and denotes the limiting behaviour for . Additionally, for , we have that
where denotes the Hessian matrix, denotes the Euler–Mascheroni constant, and has a removable singularity at .
For small enough and for all , , we have that
Using Green’s identity and the PDE (2.2) we readily see that
where the normal vector still points outwards. Using an analogous argument, integrating with itself, we see that . We let tend to , apply the gradient on both sides and then take the normal derivative at . We obtain the equation
where as well as are elements of . We remark here that we cannot pull the normal derivative inside the integral. Let us next consider the decomposition , where is the fundamental solution to the Helmholtz equation, that is, , and is the remaining part of the PDE (2.2). can be expressed through , where is the Hankel function of the first kind of order zero, and is smooth. We use the decomposition in Equation (3.5) to arrive at
by using the fact that the integral over decays linearly for . Rewriting the normal derivative in the integral in polar coordinates, using that we integrate over the boundary of a circle, we can infer that
where , and where and .
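For reference, the small-argument behaviour of the Hankel functions $H_0^{(1)}$ and $H_1^{(1)}$ used below is the standard one (with $\gamma$ the Euler–Mascheroni constant):

```latex
H_0^{(1)}(z) = 1 + \frac{2i}{\pi}\Bigl(\log\frac{z}{2} + \gamma\Bigr) + O\bigl(z^2\log z\bigr),
\qquad
H_1^{(1)}(z) = -\frac{2i}{\pi z} + O\bigl(z\log z\bigr),
\qquad z \to 0.
```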
Using the Taylor series of the Hankel functions and for small enough, considering the asymptotic behaviour of the higher-order terms, and applying some trigonometric identities, we readily see that
Using integration by parts, where we consider that is a periodic function, we obtain that
Before we can proceed, we have to study a linear operator we call , which takes a periodic function and maps it to
We can readily show that
where ‘p.v.’ stands for the principal value. This equation follows by integrating by parts on both sides and using the integrability of the logarithm. Now, is an invertible operator up to a constraint, according to [10, § 28], and the solution to is given through , where the constraint is that . Then we can infer that
where we used that . Thus it follows that
for a constant function in . Next, we approximate the known function through
Using that , we see that
Analogously to Equation (3), we can formulate the statement that
where and , and where we use that the Dirac measure located at , which lies on the boundary of the integration domain, yields only half of the evaluation of the integrand at . We then apply Equation (3.6) to the last equation and see that . Applying it again for the second-order term, while using Taylor expansions and comparing coefficients of the same order in , we readily obtain the second equation in Theorem 3. For the first equation in Theorem 3, we apply the formula for and the Taylor expansion up to second order for to Equation (3) to obtain
where denotes the Hessian matrix which emerges from the Taylor expansion. We evaluate the two integrals explicitly, use that and obtain Equation (3). For Equation (3.3) we use Green’s identity and obtain
Solving for and substituting we obtain Equation (3.3).
Let and let be defined in the way introduced above, that is, is a ball of radius at adjoined to the domain ; hence . Then for any , we define to be the projection of along the normal vector to . Thus we have that , for .
Let . For all , we have that
where have removable singularities at , when .
Equations (3.8) and (3.9) follow readily by using Green’s identity on the convolution of with , together with PDE (2.2), where we account for the fact that an integral whose boundary of integration passes through the singularity of the Dirac measure yields half of the evaluation of the integrand.
Splitting into its singular part and its smooth remainder, subsequently extracting the singularity in , and doing the same for , where we use Equation (3.9), we obtain that
We decompose , for , into its singular part and a smooth enough part, that is,
and furthermore we express through a Fourier series as
For small enough and for all , , we have that
where the term is a function whose norm in the variable is in . Moreover,
Furthermore, we have
The idea of the proof of this theorem is to extract, in the integral expression for , the singularities derived in Lemma 3. Any explicitly appearing integrals are then evaluated using Fourier series, in a similar way as described in Appendix A.
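Numerically, the Fourier coefficients of a smooth $2\pi$-periodic function are obtained to high accuracy by the plain equispaced trapezoidal rule, which is spectrally accurate for periodic integrands. The following is a generic sketch (function, node count and coefficients are illustrative, not tied to the quantities of the proof):

```python
import cmath
import math

def fourier_coefficient(f, n, m=256):
    """Approximate c_n = (1/2π) ∫₀^{2π} f(θ) e^{-inθ} dθ with m equispaced nodes.

    For a 2π-periodic f the endpoint values coincide, so the composite
    trapezoidal rule reduces to a plain average over the nodes.
    """
    return sum(
        f(2 * math.pi * j / m) * cmath.exp(-1j * n * 2 * math.pi * j / m)
        for j in range(m)
    ) / m

# Example: f(θ) = cos(2θ) has c_2 = c_{-2} = 1/2 and all other c_n = 0.
c2 = fourier_coefficient(lambda t: math.cos(2 * t), 2)
c3 = fourier_coefficient(lambda t: math.cos(2 * t), 3)
```

For band-limited integrands such as this one, the rule is exact up to rounding once the number of nodes exceeds the bandwidth.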
Assuming , we can use Taylor’s theorem to obtain that