# Optimal design of optical analog solvers of linear systems

In this paper, given a linear system of equations $Ax = b$, we find locations in the plane at which to place scattering objects such that sending waves from the source points and gathering them at the receiving points solves that linear system of equations. The ultimate goal is a fast physical method for solving linear systems. The issue addressed in this paper is to apply a fast and accurate algorithm to find the optimal locations of the scattering objects. We tackle this issue by using asymptotic expansions for the solution of the underlying partial differential equation. This also yields a potentially faster algorithm than the classical BEM for finding solutions to the Helmholtz equation.


## 1 Introduction

In physical problems such as the reflection of light in a three-dimensional environment, aircraft simulations, or image recognition, we search for methods to numerically solve linear systems of equations of the form $Ax = b$. Recently, optical analog computing has been introduced as an alternative paradigm to classical computational linear algebra. In [9], the authors design a two-dimensional structure with a physical property which allows for solving a predetermined linear system of equations. In general, structures with such favourable physical properties are called meta-structures or meta-surfaces and are under very active research [3, 7, 8]. In their set-up, sending specific waves across the meta-structure modifies those waves such that they represent the solution to the problem.

One issue with this method is to quickly find the accurate structure for a given matrix $A$. In [9] the authors use physics software to gradually form such a structure. Here we demonstrate another method, which relies on asymptotic expansions of solutions to partial differential equations. Such expansions have already been studied in a number of papers [1, 2, 4, 5]. With that tool we can position and scale the objects on which the waves scatter, such that the resulting structure satisfies the desired requirements.

In the process of developing these asymptotic formulas, we realized that we can compute the scattered wave using a method similar to the explicit Euler scheme. In that scheme, one numerically computes the solution to an ordinary differential equation by successively progressing in time with small time-steps until the desired time is reached. With our method we solve the partial differential equation around an obstacle by progressing the object-radius with small radius increments until we reach its full extent. We present the numerical application of that method on a circular domain.
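For readers less familiar with the time-stepping side of this analogy, the following is a minimal sketch of the explicit Euler scheme (the function names here are illustrative, not taken from the paper); our radius-stepping method replaces the time variable by the obstacle radius.

```python
import math

def explicit_euler(f, y0, t0, t1, n_steps):
    """Advance y' = f(t, y) from t0 to t1 in n_steps small steps."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)   # one small step forward
        t = t + h
    return y

# Example: y' = y with y(0) = 1, so y(1) = e up to an O(h) error.
approx = explicit_euler(lambda t, y: y, 1.0, 0.0, 1.0, 10_000)
print(abs(approx - math.e))
```

In the analogy of Section 4, the role of `h` is played by a small radius increment, and each step updates the scattered wave on the slightly enlarged obstacle.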

This paper is organized as follows. In Section 2 we model the mathematical foundation for the underlying physical problem and define the fundamental partial differential equations for the asymptotic expansions. This leads us to the definition of the Neumann function on the exterior domain. We then explain the connection between the Neumann function and the linear system of equations. In Section 3 we prove the asymptotic formulas concerning the Neumann function. There we uncover special singular behaviours, which are essential to prove the asymptotic expansions. In Section 4, we first present the method to solve for the wave by increasing the radius in small steps and discuss the numerical error. Afterwards, we explain how we numerically build the meta-structure to solve the linear system of equations and discuss how well it operates. In Section 5, we conclude the paper with final considerations, open questions and possible future research directions. In the appendix we provide the proof of a technical result and a modification of the trapezoidal rule for integrands with a logarithmic singularity.

## 2 Preliminaries

Let $k > 0$ and let $\Omega$ be a finite union of disjoint, non-touching, simply connected, bounded and open smooth domains in the upper half-plane $\mathbb{R}^2_+$. Let $z_1, \dots, z_N$ be source points on the horizontal axis $\Lambda$. We have functions $u_j$, for $j \in \{1, \dots, N\}$, which solve the following partial differential equation:

 (2.1)

where $\cdot\,|_+$ denotes the limit from the outside of $\Omega$ to $\partial\Omega$, and $\partial_\nu$ denotes the outside normal derivative on $\partial\Omega$. $\delta_{z_j}$ denotes the Dirac delta function at the point $z_j$ and $I_j$ denotes the intensity at the source $z_j$. The first condition is the Helmholtz equation, which represents the time-independent wave equation and arises from the wave equation by applying the Fourier transform in time. The second condition is known as the Neumann condition and models a material with a high electrical impedance. The third and fourth conditions model an absorbing layer and a Neumann condition on parts of the boundary of the upper half-plane. The fifth condition is known as the Sommerfeld radiation condition, and originates from a physical constraint on the behaviour of an outgoing wave.

We define $\Gamma^k$ to be the fundamental solution to the Helmholtz equation, that is, $\Gamma^k$ solves PDE (2.1) without the Neumann boundary condition and with the source at the origin. Furthermore, we write $\Gamma^k(x, y) := \Gamma^k(x - y)$, for $x \neq y$. Then we define the Neumann function $N^k_\Omega$ to be the solution to

$$\begin{cases}(\Delta + k^2)\,N^k_\Omega(z,x) = \delta_z(x) & \text{in } \mathbb{R}^2\setminus\overline{\Omega},\\[2pt] \partial_{\nu_x} N^k_\Omega(z,x)\big|_+ = 0 & \text{on } \partial\Omega,\\[2pt] \left(\dfrac{\partial}{\partial|x|} - ik\right) N^k_\Omega(z,x) \to 0 & \text{for } |x|\to\infty.\end{cases} \tag{2.2}$$

In contrast with $u_j$, $N^k_\Omega$ is not only defined on the upper half-plane. We recall the symmetry $N^k_\Omega(z,x) = N^k_\Omega(x,z)$, which we can readily see using a Green's identity. We can express $N^k_\Omega$ as the sum of $\Gamma^k$ and a smooth remainder, which satisfies PDE (2.2) with a vanishing right-hand side in the first equation. The same holds true for the other Neumann functions considered in this paper.

Using Green's identity on the convolution of $u_j$ with $N^k_\Omega$, we can infer for $z_i \in \Lambda$ that

$$\frac{1}{2}u_j(z_i) = \int_{\mathbb{R}^2_+\setminus\overline{\Omega}} u_j(x)\,(\Delta + k^2)\,N^k_\Omega(z_i, x)\,dx = I_j\,N^k_\Omega(z_j, z_i) - \int_{\Lambda} u_j(x)\,\partial_{x_2} N^k_\Omega(z_i, x)\,d\sigma_x.$$

Using the trapezoidal rule we can approximate the integral in the last equation up to an error in $O(N^{-1})$. Here we note that $N^k_\Omega(z_i, x)$ and $\partial_{x_2} N^k_\Omega(z_i, x)$ have a logarithmic singularity at $x = z_i$, hence the integrand is not well defined there. Thus we use a slight modification of the trapezoidal rule, which is elaborated in Appendix A. After such modification, we define the complex column vector $u_j := (u_j(z_1), \dots, u_j(z_N))^{\mathrm T}$, the complex column vector $N_j := (N^k_\Omega(z_j, z_1), \dots, N^k_\Omega(z_j, z_N))^{\mathrm T}$ and the complex matrix $S$, which discretizes the boundary integral over $\Lambda$. Then we have that

$$\left(\frac{1}{2}I_N + \frac{2}{N+1}S\right)u_j = I_j\,N_j + O(N^{-1}),$$

where $I_N$ denotes the identity matrix.
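The necessity of the modified quadrature can be illustrated on a model integral with a logarithmic singularity. The toy example below is ours, not the rule of Appendix A: a midpoint-shifted composite rule whose nodes never hit the singular points recovers the known value $\int_0^{2\pi}\log(1-\cos t)\,dt = -2\pi\log 2$.

```python
import math

def midpoint_rule(f, a, b, n):
    """Composite midpoint rule; the nodes avoid the interval endpoints,
    so an endpoint log-singularity is never evaluated."""
    h = (b - a) / n
    return h * sum(f(a + (j + 0.5) * h) for j in range(n))

# The integrand log(1 - cos t) is singular at t = 0 and t = 2*pi.
exact = -2.0 * math.pi * math.log(2.0)
approx = midpoint_rule(lambda t: math.log(1.0 - math.cos(t)),
                       0.0, 2.0 * math.pi, 4000)
print(approx, exact)
```

Because the singularity is integrable, shifting the nodes away from it suffices for a consistent approximation, although the convergence rate degrades near the singular points compared with the smooth case.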

Our objective is to solve a linear system of equations $Ax = b$ using a physical procedure, in which an electrical signal is applied at $z_j$, for every $j \in \{1, \dots, N\}$, and is then measured again at those points, for a given matrix $A$ and right-hand side $b$. The scattered wave originating at $z_j$ is, in its Fourier space, the function $u_j$. Thus we are especially looking for a domain $\Omega$ which yields

$$S = \frac{N+1}{2}\left(A - \frac{1}{2}I_N\right), \tag{2.3}$$

and in searching for it we keep track of the vector $N_j$, with the intention of rapidly determining the intensities $I_j$ afterwards. In this paper we primarily consider rapidly finding the domain $\Omega$ such that Equation (2.3) holds.
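To see why Equation (2.3) is the right design target, substitute it into the discretized system: $\frac12 I_N + \frac{2}{N+1}\cdot\frac{N+1}{2}\left(A - \frac12 I_N\right) = A$, so the discretized measurement operator reduces exactly to $A$. A quick numerical sanity check of this algebraic identity (the matrix here is random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

# Design rule (2.3): choose S so the discretized operator equals A.
S = (N + 1) / 2 * (A - 0.5 * np.eye(N))

# Discretized measurement operator from the trapezoidal discretization.
M = 0.5 * np.eye(N) + 2 / (N + 1) * S
assert np.allclose(M, A)
```

The identity holds for any $N$ and any complex matrix $A$; the physical difficulty lies entirely in realizing a domain $\Omega$ whose discretized kernel produces this $S$.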

## 3 Asymptotic Formula for the Perturbation of the Neumann Function NkΩ

Let $\Omega$ be a finite union of disjoint, non-touching, simply connected, bounded and open smooth domains in $\mathbb{R}^2$. Let $k > 0$; then for $z, x \notin \overline{\Omega}$, where $\overline{\Omega}$ is the topological closure of the open set $\Omega$, we define the outside Neumann function $N^k_\Omega(z, x)$ as the solution to the partial differential equation (2.2). Let $B_r(\zeta)$ be a ball centred at $\zeta$ with radius $r$. We define $\Omega_r$ as the union of some set $\Omega$ as defined above and of the ball $B_r(\zeta)$, where $B_r(\zeta)$ does not intersect $\overline{\Omega}$. Let $N^k_\Omega$ and $N^k_{\Omega_r}$ be the outside Neumann functions to $\Omega$ and $\Omega_r$, respectively.

###### Theorem

For $r$ small enough and for all $z, x \in \mathbb{R}^2\setminus\overline{\Omega_r}$, $z \neq x$, we have that

$$N^k_{\Omega_r}(z,x) = N^k_\Omega(z,x) + \pi r^2\left(k^2\,N^k_\Omega(z,\zeta)\,N^k_\Omega(x,\zeta) - 2\,\nabla N^k_\Omega(z,\zeta)\cdot\nabla N^k_\Omega(x,\zeta)\right) + O(r^3\log(r)), \tag{3.1}$$

where $\nabla$ denotes the gradient with respect to the second input of $N^k_\Omega$, '$\cdot$' denotes the dot-product and $O(r^3\log(r))$ denotes the limiting behaviour for $r \to 0$. Additionally, for $y \in \partial B_r(\zeta)$, we have that

$$N^k_{\Omega_r}(z,y) = -r^2 k^2\,\frac{1}{2}\,N^k_\Omega(z,\zeta)\left(\frac{i\pi}{2} - \gamma - 0.5 - \log\!\left(\frac{kr}{2}\right) - 2\pi R^k_\Omega(\zeta,\zeta)\right) - r^2\,2\pi\,\nabla_w R^k_\Omega(\zeta,w)\big|_{w=\zeta}\cdot\nabla N^k_\Omega(z,\zeta) + O(r^3\log(r)), \tag{3.2}$$

where $\nabla\nabla^{\mathrm T}$ denotes the Hessian matrix, $\gamma$ denotes the Euler–Mascheroni constant, and $R^k_\Omega$ has a removable singularity at $\zeta$.
For $r$ small enough and for all $y, w \in \partial B_r(\zeta)$, $y \neq w$, we have that

$$N^k_{\Omega_r}(y,w) = \frac{\log(1-\cos(\theta_y-\theta_w))}{2\pi} + \frac{2\log(kr) + 2\gamma - i\pi}{4\pi} + R^k_\Omega(\zeta,\zeta) + O(r\log(r)). \tag{3.3}$$

###### Proof

Using Green’s identity and the PDE (2.2) we readily see that

$$N^k_{\Omega_r}(z,x) = \int_{\mathbb{R}^2\setminus\Omega_r} (\Delta + k^2)\,N^k_\Omega(x,y)\,N^k_{\Omega_r}(z,y)\,dy = N^k_\Omega(x,z) - \int_{\partial B_r(\zeta)} \partial_{\nu_y} N^k_\Omega(x,y)\,N^k_{\Omega_r}(z,y)\,d\sigma_y, \tag{3.4}$$

where the normal vector still points outwards. Using an analogous argument, integrating $N^k_{\Omega_r}$ with itself, we obtain a corresponding identity for $N^k_{\Omega_r}$. We let $x$ go to $\partial B_r(\zeta)$, apply the gradient on both sides and apply the normal at $x$. Then we obtain the equation

$$-\partial_{\nu_y} N^k_\Omega(z,y) = -\partial_{\nu_y}\int_{\partial B_r(\zeta)} \partial_{\nu_w} N^k_\Omega(y,w)\,N^k_{\Omega_r}(z,w)\,d\sigma_w, \tag{3.5}$$

where $y$ as well as $w$ are elements of $\partial B_r(\zeta)$. We remark here that we cannot pull the normal derivative inside the integral. Let us next consider the decomposition $N^k_\Omega = \Gamma^k + R^k_\Omega$, where $\Gamma^k$ is the fundamental solution to the Helmholtz equation, that is $(\Delta + k^2)\Gamma^k = \delta_0$, and $R^k_\Omega$ is the remaining part of the PDE (2.2). $\Gamma^k$ can be expressed through $H^{(1)}_0$, where $H^{(1)}_0$ is the Hankel function of the first kind of order zero, and $R^k_\Omega$ is smooth [6]. We apply the decomposition in Equation (3.5) to arrive at

$$\partial_{\nu_y} N^k_\Omega(z,y) = \partial_{\nu_y}\int_{\partial B_r(\zeta)} \partial_{\nu_w}\Gamma^k(y,w)\,N^k_{\Omega_r}(z,w)\,d\sigma_w + O(r),$$

by using the fact that the integral over $R^k_\Omega$ decays linearly for $r \to 0$. Transforming the normal derivative in the integral using polar coordinates, where we use that we have an integral over the boundary of a circle, we can infer that

$$\partial_{\nu_y} N^k_\Omega(z, y(\tau)) = \lim_{h\searrow 0}\int_0^{2\pi}\Big[\cdots + \left(-2r(h+r) + (h^2 + 2hr + 2r^2)\cos(\Delta)\right)\frac{H^{(1)}_1(k|\cdot|)}{|\cdot|^3}\Big]\,N^k_{\Omega_r}(z, w(t))\,dt + O(r),$$

where $y(\tau)$ parametrizes $\partial B_{r+h}(\zeta)$, $w(t)$ parametrizes $\partial B_r(\zeta)$, and $\Delta := t - \tau$.

With the Taylor series for the Hankel functions $H^{(1)}_0$ and $H^{(1)}_1$, for $kr$ small enough, and considering the asymptotic behaviour of the forthcoming terms and applying some trigonometric identities, we readily see that

$$\partial_{\nu_y} N^k_\Omega(z,y(\tau)) = \lim_{h\searrow 0}\frac{r}{2\pi}\int_0^{2\pi} \frac{h^2 - 2\sin\!\left(\frac{\Delta}{2}\right)^2(h^2 + 2hr + 2r^2)}{\left(h^2 + 2\sin\!\left(\frac{\Delta}{2}\right)^2(2hr + 2r^2)\right)^2}\,N^k_{\Omega_r}(z,w(t))\,dt + O(r\log(r)).$$

Using integration by parts, where we consider that $N^k_{\Omega_r}(z, y(\cdot))$ is a periodic function, we obtain that

$$\partial_{\nu_y} N^k_\Omega(z,y(\tau)) = \lim_{h\searrow 0}\frac{-1}{4\pi r}\int_0^{2\pi} \frac{\sin(t-\tau)\,\partial_t N^k_{\Omega_r}(z,y(t))}{2\sin\!\left(\frac{t-\tau}{2}\right)^2 + h}\,dt + O(r\log(r)).$$

Before we can proceed, we have to study a linear operator we call $\mathring{H}$, which takes a periodic function $\varphi$ and maps it to

$$\mathring{H}[\varphi](\tau) := \frac{1}{2\pi}\lim_{h\searrow 0}\int_0^{2\pi} \frac{\sin(t-\tau)}{2\sin\!\left(\frac{t-\tau}{2}\right)^2 + h}\,\varphi(t)\,dt.$$

This operator can equivalently be written as

$$\mathring{H}[\varphi](\tau) = \frac{1}{2\pi}\,\mathrm{p.v.}\int_0^{2\pi} \varphi(t)\cot\!\left(\frac{t-\tau}{2}\right)dt,$$

where 'p.v.' stands for 'principal value'. This equation follows by integration by parts on both sides, and by using the integrability of the logarithm function. Now, we can state that $\mathring{H}$ is an invertible operator up to a constraint, according to [10, § 28], and that the solution to $\mathring{H}[\varphi] = f$ is given through $-\mathring{H}[f]$, up to an additive constant and under a mean-value constraint. Then we can infer that

$$\mathring{H}\!\left[\partial_{\nu_y} N^k_\Omega(z,y(\cdot))\right](t) = \frac{1}{2r}\,\partial_t N^k_{\Omega_r}(z,y(t)) + O(r\log(r)),$$

where we used the inversion property of $\mathring{H}$. Thus it follows that

$$N^k_{\Omega_r}(z,y(t)) = C + 2r\int \mathring{H}\!\left[\partial_{\nu_y} N^k_\Omega(z,y(\cdot))\right] + O(r^2\log(r)),$$

for a function $C$ constant in $t$. Next, we approximate the known function $\partial_{\nu_y} N^k_\Omega(z, y(\cdot))$ through

Using that approximation, we see that

 (3.6)
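The inversion property of $\mathring{H}$ used above can be made concrete on the Fourier basis: with the principal-value kernel $\cot\!\left(\frac{t-\tau}{2}\right)$, one computes $\mathring{H}[\cos(n\cdot)](\tau) = -\sin(n\tau)$ and $\mathring{H}[\sin(n\cdot)](\tau) = \cos(n\tau)$, so $\mathring{H}\circ\mathring{H} = -\mathrm{Id}$ on mean-zero functions. The following numerical check uses our own discretization, with midpoint nodes placed symmetrically around the singularity so the principal value is resolved by cancellation:

```python
import math

def H_ring(f, tau, n_nodes=4096):
    """p.v. (1/2pi) * integral over one period of f(t) cot((t - tau)/2) dt,
    with midpoint nodes symmetric about the singularity at t = tau."""
    h = 2.0 * math.pi / n_nodes
    total = 0.0
    for j in range(n_nodes):
        s = (j + 0.5) * h            # offset from tau, never 0 mod 2*pi
        total += f(tau + s) * (1.0 / math.tan(s / 2.0)) * h
    return total / (2.0 * math.pi)

tau = 0.7
val = H_ring(lambda t: math.cos(3 * t), tau)
print(val, -math.sin(3 * tau))  # the two values agree closely
```

Pairing the node at offset $s$ with the one at $2\pi - s$ cancels the singular parts of the cotangent, which is why this simple rule converges despite the principal value.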

Analogously to Equation (3.4), we can formulate the statement that

$$N^k_{\Omega_r}(z,y) = 2N^k_\Omega(z,y) - 2\int_{\partial B_r(\zeta)} \partial_{\nu_w} N^k_\Omega(y,w)\,N^k_{\Omega_r}(z,w)\,d\sigma_w, \tag{3.7}$$

where $y \in \partial B_r(\zeta)$ and $z \notin \overline{\Omega_r}$, and where we use that the Dirac measure located at $y$, which lies on the boundary of the integration domain, yields only half of the evaluation of the integrand at $y$. We then apply Equation (3.6) to the last equation and solve for $N^k_{\Omega_r}(z,y)$. Applying it again for the second order term, while using Taylor expansions and comparing coefficients of the same order in $r$, we readily obtain the second equation in the theorem. For the first equation in the theorem, we apply the formula for $N^k_{\Omega_r}(z,y)$, and the Taylor expansion up to second order for $N^k_\Omega$, to Equation (3.4) to obtain

$$N^k_{\Omega_r}(z,x) = N^k_\Omega(z,x) - 2r\int_0^{2\pi} r\,\partial_\nu N^k_\Omega(x,\zeta)\begin{pmatrix}\cos(t)\\ \sin(t)\end{pmatrix}\cdot\nabla N^k_\Omega(z,\zeta)\,dt + O(r^3\log(r))$$
$$\quad - N^k_\Omega(z,\zeta)\int_0^{2\pi} r\left(\begin{pmatrix}\cos(t)\\ \sin(t)\end{pmatrix}\cdot\nabla N^k_\Omega(x,\zeta) + r\begin{pmatrix}\cos(t)\\ \sin(t)\end{pmatrix}^{\mathrm T}(\nabla\nabla^{\mathrm T})N^k_\Omega(x,\zeta)\begin{pmatrix}\cos(t)\\ \sin(t)\end{pmatrix}\right)dt,$$

where $\nabla\nabla^{\mathrm T}$ denotes the Hessian matrix, which emerges from the Taylor expansion. We evaluate the two integrals explicitly, use that $\int_0^{2\pi}\cos(t)\,dt = \int_0^{2\pi}\sin(t)\,dt = 0$, and obtain Equation (3.1). For Equation (3.3) we use Green's identity and obtain

$$N^k_{\Omega_r}(y,w) = 2N^k_\Omega(y,w) - 2\int_{\partial\Omega_r} \partial_{\nu_u} N^k_\Omega(w,u)\,N^k_{\Omega_r}(y,u)\,d\sigma_u = \frac{1}{2\pi}\log(1-\cos(\theta_y-\theta_w)) + \frac{1}{\pi}\log\!\left(\frac{kr}{\sqrt2}\right) + \frac{2\gamma - i\pi}{2\pi} + 2R^k_\Omega(\zeta,\zeta) - 2r\int_{-\pi}^{\pi} \frac{1}{2\pi\,2r}\,N^k_{\Omega_r}(y,u(t))\,dt + O\!\left(r\,N^k_{\Omega_r}(y,u)\right).$$

Solving for $N^k_{\Omega_r}(y,w)$ and substituting, we obtain Equation (3.3).

Let $R > r > 0$ and let $\Omega_r$ be defined in the way that was introduced, that is, $\Omega_r$ is a ball of radius $r$ at $\zeta$ adjoined to the domain $\Omega$, hence $\Omega_r = \Omega \cup B_r(\zeta)$. Then for any $x_r \in \partial B_r(\zeta)$, we define $x_R$ to be the projection of $x_r$ along the normal vector onto $\partial B_R(\zeta)$. Thus we have that $x_R(t) = \zeta + \frac{R}{r}\,(x_r(t) - \zeta)$, for $t \in [-\pi, \pi)$.

###### Lemma

Let $0 < r < R$; for all $t_z, t_x \in [-\pi, \pi)$ with $t_z \neq t_x$, we have that

$$N^k_{\Omega_r}(z_r(t_z), x_r(t_x)) = \frac{1}{2\pi}\log(1-\cos(t_z-t_x)) + Q_1(t_z,t_x), \tag{3.8}$$
$$N^k_{\Omega_r}(z_R(t_z), x_r(t_x)) = \frac{1}{2\pi}\log(1-\cos(t_z-t_x)) + Q_1(t_z,t_x), \tag{3.9}$$
$$N^k_{\Omega_r}(z_R(t_z), x_R(t_x)) = \frac{1}{4\pi}\log(1-\cos(t_z-t_x)) + \frac{1}{4\pi}\log\!\left(R^4 + r^4 - 2R^2r^2\cos(t_z-t_x)\right) + Q_2(t_z,t_x), \tag{3.10}$$
$$\partial_{\nu_{x_R}} N^k_{\Omega_r}(z_R(t_z), x_R(t_x)) = -\frac{1}{4\pi R} + \frac{1}{2\pi R}\,\frac{r^2\left(R^2\cos(t_z-t_x) - r^2\right)}{R^4 + r^4 - 2R^2r^2\cos(t_z-t_x)} + Q_3(t_z,t_x), \tag{3.11}$$

where $Q_1$, $Q_2$ and $Q_3$ have removable singularities at $t_z = t_x$, when $r < R$.

###### Proof

Equations (3.8) and (3.9) follow readily by using Green's identity on the convolution of $N^k_{\Omega_r}$ with $N^k_\Omega$, and PDE (2.2), where we have to consider that an integral whose integration boundary passes through the singularity of the Dirac measure yields half of the evaluation of the integrand.

The proof of Equation (3.10) is a simplification of the derivation of Equation (3.11). For Equation (3.11) we have, with Green's identity, that

$$\partial_{\nu_{x_R}} N^k_{\Omega_r}(z_R, x_R) = \partial_{\nu_{x_R}} N^k_\Omega(z_R, x_R) - \int_{\partial B_r(\zeta)} \partial_{\nu_{x_R}}\partial_{\nu_{y_r}} N^k_\Omega(x_R, y_r)\,N^k_{\Omega_r}(z_R, y_r)\,d\sigma_{y_r}.$$

Splitting $N^k_\Omega$ into its singular part and its smooth remainder and subsequently extracting the singularity in $\partial_{\nu_{x_R}}\partial_{\nu_{y_r}} N^k_\Omega(x_R, y_r)$, and doing so for $N^k_{\Omega_r}(z_R, y_r)$ as well, where we use Equation (3.9), we obtain that

$$\partial_{\nu_{x_R}} N^k_{\Omega_r}(z_R, x_R) = \frac{1}{2\pi R}\,\frac{r\left(R\cos(t_z-t_x) - r\right)}{R^2 + r^2 - 2Rr\cos(t_z-t_x)} + Q_3(t_z,t_x).$$

Using the technical derivation shown in Appendix A we prove Equation (3.11).

We decompose $N^k_{\Omega_r}(z_r(t_z), x_r(t_x))$, for $t_z, t_x \in [-\pi, \pi)$, into its singular part and a smooth enough part, that is,

$$N^k_{\Omega_r}(z_r(t_z), x_r(t_x)) = \frac{1}{2\pi}\log(1-\cos(t_z-t_x)) + \tilde N^k_{\Omega_r}(z_r(t_z), x_r(t_x)),$$

and furthermore we express $\tilde N^k_{\Omega_r}$ through a Fourier series as

$$\tilde N^k_{\Omega_r}(z_r(t_z), x_r(t_x)) = \sum_{n=0}^{\infty} p^{(n)}_{z_r}\cos(n t_x) + q^{(n)}_{z_r}\sin(n t_x). \tag{3.12}$$
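In a numerical realization, Fourier coefficients of this form are conveniently computed from equispaced samples with the FFT. The sketch below is generic: the sampled function is a smooth stand-in for $\tilde N^k_{\Omega_r}$, and the helper name is ours.

```python
import numpy as np

def real_fourier_coeffs(samples):
    """Given samples of a real periodic function on [0, 2*pi),
    return (p, q) with f(t) ~ p[0] + sum_n p[n] cos(n t) + q[n] sin(n t)."""
    n = len(samples)
    c = np.fft.rfft(samples) / n
    p = 2.0 * c.real
    p[0] /= 2.0          # the mean enters without the factor 2
    q = -2.0 * c.imag
    return p, q

t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
f = 0.3 + 1.5 * np.cos(2 * t) - 0.7 * np.sin(5 * t)  # stand-in smooth function
p, q = real_fourier_coeffs(f)
assert abs(p[0] - 0.3) < 1e-10
assert abs(p[2] - 1.5) < 1e-10
assert abs(q[5] + 0.7) < 1e-10
```

For a smooth periodic function the coefficients decay rapidly, so truncating the series in (3.12) after a moderate number of modes is accurate.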
###### Theorem

For $R - r$ small enough and for all $t_z, t_x \in [-\pi, \pi)$, we have that

$$N^k_{\Omega_r}(z_R, x_r) = N^k_{\Omega_r}(z_r, x_r) + \frac{1}{2\pi}\log\!\left(\frac{\frac{R^2+r^2}{2Rr} - \cos(t_z-t_x)}{1 - \cos(t_z-t_x)}\right) + O_{L^2}\!\left(\frac{(R-r)^2}{r^2}\right), \tag{3.13}$$

where the $O_{L^2}$ term denotes a function whose $L^2$ norm in the $t_x$ variable is in $O\!\left(\frac{(R-r)^2}{r^2}\right)$. Moreover,

$$\begin{aligned}\partial_{\nu_{x_R}} N^k_{\Omega_r}(z_R, x_R) ={}& \partial_{\nu_{x_R}} N^k_\Omega(z_R, x_R) + \frac{r^2}{2\pi R}\,\frac{R^2\cos(t_z-t_x) - r^2}{R^4 + r^4 - 2R^2r^2\cos(t_z-t_x)}\\ &- r\int_{-\pi}^{\pi} \partial_{\nu_{x_R}}\partial_{\nu_{y_r}(t)} \tilde N^k_\Omega(x_R, y_r(t)) \left(\frac{1}{2\pi}\log\!\left(\frac{R^2+r^2}{2Rr} - \cos(t_z-t)\right) + \tilde N^k_{\Omega_r}(z_r, y_r(t))\right) dt\\ &- \frac{1}{2R}\sum_{n=1}^{\infty} n\left(\frac{r}{R}\right)^{n}\left(p^{(n)}_{z_r}\cos(n t_x) + q^{(n)}_{z_r}\sin(n t_x)\right) + O\!\left(\frac{(R-r)^2}{r}\right), \end{aligned} \tag{3.14}$$

where

$$\partial_{\nu_{x_R}}\partial_{\nu_{y_r}} \tilde N^k_\Omega(x_R, y_r) := \partial_{\nu_{x_R}}\partial_{\nu_{y_r}} N^k_\Omega(x_R, y_r) - \left(-\frac{1}{2\pi}\,\frac{2Rr - (R^2+r^2)\cos(t_x-t_y)}{\left(R^2 + r^2 - 2Rr\cos(t_x-t_y)\right)^2}\right).$$

Furthermore, we have

$$\begin{aligned} N^k_{\Omega_R}(z_R, x_R) ={}& N^k_{\Omega_r}(z_r, x_r) + \left(N^k_\Omega(z_R, x_R) - N^k_\Omega(z_r, x_r)\right)\\ &- r(R-r)\int_{-\pi}^{\pi} \partial_{\nu_{x_r}}\partial_{\nu_{y_r}(t)} \tilde N^k_\Omega(x_r, y_r(t))\,N^k_{\Omega_r}(z_r, y_r(t))\,dt\\ &- r\int_{-\pi}^{\pi} \partial_{\nu_{y_r}(t)} \tilde N^k_\Omega(x_r, y_r(t))\,\frac{1}{2\pi}\log\!\left(\frac{\frac{R^2+r^2}{2Rr} - \cos(t_z-t)}{1 - \cos(t_z-t)}\right) dt\\ &- \frac{1}{2}\sum_{n=1}^{\infty}\left(\left(\frac{r}{R}\right)^{n} - \left(\frac{r}{R}\right)^{2n}\right)\left(p^{(n)}_{z_r}\cos(n t_x) + q^{(n)}_{z_r}\sin(n t_x)\right)\\ &- R\int_{-\pi}^{\pi} \widetilde{\partial_{\nu_{y_R}(t)}} N^k_{\Omega_r}(x_R, y_R(t))\,N^k_{\Omega_r}(z_r, y_r(t))\,dt + O\!\left(\frac{(R-r)^2}{r}\right), \end{aligned} \tag{3.15}$$

where

$$\partial_{\nu_{y_r}} \tilde N^k_\Omega(x_R, y_r) := \partial_{\nu_{y_r}} N^k_\Omega(x_R, y_r) - \frac{1}{2\pi}\,\frac{r - R\cos(t_x-t_y)}{R^2 + r^2 - 2Rr\cos(t_x-t_y)},$$
$$\widetilde{\partial_{\nu_{x_R}}} N^k_{\Omega_r}(z_R, x_R) := \partial_{\nu_{x_R}} N^k_{\Omega_r}(z_R, x_R) - \frac{1}{4\pi R} - \frac{r^2}{2\pi R}\,\frac{R^2\cos(t_z-t_x) - r^2}{R^4 + r^4 - 2R^2r^2\cos(t_z-t_x)}.$$

The idea of the proof of this theorem is to extract the singularities derived in the Lemma from the integral expression for $N^k_{\Omega_r}$. Any explicitly appearing integrals are then solved in a similar way as described in Appendix A, by using Fourier series.

###### Proof

Assuming $R - r$ is small enough, we can use Taylor's theorem to obtain that

$$N^k_{\Omega_r}(z_R, x_r) = N^k_{\Omega_r}(z_r, x_r) + (R-r)\,\partial_{\nu_{z_r}} N^k_{\Omega_r}(z_r, x_r)$$