
Reconstructing group wavelet transform from feature maps with a reproducing kernel iteration

In this paper we consider the problem of reconstructing an image that is downsampled in the space of its SE(2) wavelet transform, motivated by classical models of simple cell receptive fields and feature preference maps in the primary visual cortex. We prove that, whenever the problem is solvable, the reconstruction can be obtained by an elementary project and replace iterative scheme based on the reproducing kernel arising from the group structure, and we show numerical results on real images.





1 Introduction

This paper introduces an iterative scheme for solving a problem of image reconstruction, motivated by the classical behavior of primary visual cortex (V1), in the setting of group wavelet analysis. The mathematical formulation of the problem is that of the reconstruction of a function on the plane which, once represented as a function on the group of rotations and translations of the Euclidean plane via the group wavelet transform, is known only on a certain two dimensional subset of this three dimensional group. This problem is equivalent to that of filling in missing information related to a large subset of the group, and ultimately inquires about the completeness of the image representation provided by feature maps observed in V1.

One of the main motivations for the present study comes indeed from neuroscience, and the modeling of classical receptive fields of simple cells in terms of group actions, restricted to feature maps such as the orientation preference maps. The attempts to model mathematically the measured behavior of the brain's primary visual cortex (V1) have led to the introduction of the linear-nonlinear-Poisson (LNP) model [Carandini05], which defines what is sometimes called the classical behavior. It describes the activity of a neuron in response to a visual stimulus as a Poisson spiking process driven by a linear operation on the visual stimulus, modeled by the receptive field of the neuron, passed through a sigmoidal nonlinearity. A series of thorough studies of single cell behavior was able to find a rather accurate description of the receptive fields of a large class of V1 cells, called simple cells, within the LNP model, in terms of integrals against Gabor functions located in a given position of the visual field [Marcelja80, Ringach02]. This description formally reduces the variability in the classical behavior of these cells to a few parameters, regulating the position on the visual field, the size, the shape and the local orientation of a two dimensional modulated Gaussian. A slightly simplified description of receptive field activity in response to a visual stimulus defined by a real function on the plane, representing light intensity, is the following: denoting by ,


where the parameters define the local scale and spatial frequency, the angle defines the local direction and define the local position of the receptive field in the visual field, while the complex formulation can be formally justified by the prevalence of so-called even and odd cells [Ringach02]. We will focus only on the parameters . This can be considered restrictive [HubelWiesel62, SCP08, Barbieri15], but it is nevertheless interesting, since angles represent a relevant local feature detected by V1 [HubelWiesel62] whose identification has given rise to effective geometrical models of perception [PT99, CS06, CS15]. In this case one can model the linear part of the classical behavior of simple cells in terms of a well-known object in harmonic analysis: by rephrasing (1) as

for fixed values of we obtain a group wavelet transform of (see §2). On the other hand, classical experiments with optical imaging have shown that not all parameters are available for these cortical operations [BonhofferGrinvald91, Welicky96, BoskingFitzpatrick97, White07]: in many mammals, a pinwheel-shaped function determines the available angles. This feature map can be introduced in the model (1) by saying that the receptive fields in V1 record the data


Within this setting, a natural question is thus whether these data actually contain sufficient information to reconstruct the original image , and how we can recover from these data. This is the main problem we aim to address.

Before proceeding, we recall that a severe limitation of the purely spatial model (1) is that it disregards temporal behaviors [DeAngelis95, Cocci12]. Moreover, the classical behavior described by the LNP model is well-known to be effective only to a limited extent: several other mechanisms are present that describe a substantial amount of the neural activity in V1, such as Carandini-Heeger normalization [CarandiniHeeger12], the so-called non-classical behaviors [Fitzpatrick00, Carandini05], as well as the neural connectivity [Angelucci02]. However, spatial receptive fields and the LNP model provide relevant insights on the functioning of V1 [Ringach02]. Moreover, the ideas behind the LNP model have been a main source of inspiration in other disciplines, notably for the design of relevant mathematical and computational theories, such as wavelets and convolutional neural networks [Marr, LeCun10]. We also point out that the use of groups and invariances to describe the variability of the neural activity has proved to be a solid idea on which to build effective models [PetitotBook, CS06, PA14], whose influence on the design of artificial learning architectures is still very strong [AERP19, MBCSx, Anselmi20, Duits21].

Another motivation for studying the completeness of the data collected with (2) comes from the relationship of this problem with that of sampling in reproducing kernel Hilbert spaces (RKHS). The RKHS structure in this case is that of the range of the group wavelet transform, and will be discussed in detail, together with the basics of the wavelet transform, in §2. Reducing the number of parameters in a wavelet transform is a common operation, performed e.g. for discretization purposes [Daubechies], or because it is useful to achieve square-integrability [AAG91]. The main issue is that these operations are typically constrained nontrivially, in order to retain sufficient information to distinguish all interesting signals. For example, when discretizing the short-time Fourier transform to obtain a discrete Gabor transform, the well-known density theorem [Heil07] imposes a minimum density of points in phase space where the values of the continuous transform must be known in order to retain injectivity for signals of finite energy. Much is known on this class of problems in the context of sampling, from the classical Shannon theorem and uncertainty-related signal recovery results [DonohoStark89], to general sampling results in RKHS [FGHKR17, GRS18]. However, the kind of restriction implemented by feature maps does not seem to fit into these settings, even if some similarities may be found between the pinwheel-shaped orientation preference maps [BonhofferGrinvald91] and Meyer's quasicrystals, which have recently been used to extend compressed sensing results [CandesTao06, MateiMeyer2010, AACM19].

The plan of the paper is the following. In §2, after a brief overview of the transform and its RKHS structure, we will formalize in a precise way the problem of functional reconstruction after the restriction (2) has been performed. In §3 we will introduce a technique to tackle this problem, given by an iterative kernel method based on projecting the restricted wavelet transform onto the RKHS defined by the group representation. Moreover, we will consider a discretization of the problem and, in the setting of finite dimensional vector spaces, we will prove that the proposed iteration converges to the desired solution. Finally, in §4 we will show and discuss numerical results on real images.

2 Overview of the transform

The purpose of this section is to review the fundamental notions of harmonic analysis needed to provide a formal statement of the problem. We will focus on the group wavelet transform defined by the action on of the group of rotations and translations of the Euclidean plane, expressed as a convolutional integral transform.

We will denote the Fourier transform of by

and, as customary, we will use the same notation for its extension by density to the whole . We will also denote by the convolution on :

Let be the abelian group of angles of the unit circle, which is isomorphic either to the one dimensional torus or to the group of rotations of the plane . The group is the semidirect product group with elements and composition law

Its Haar measure, that is, the (Radon) measure on the group that is invariant under group translations, is the Lebesgue measure on .

A standard way to perform a wavelet analysis with respect to the group on two dimensional signals is given by the operator defined as follows.

Definition 1.

Let us denote by the unitary action by rotations of on :

Let , and denote by . The wavelet transform on with mother wavelet is


In terms of this definition, we can then see that if we choose , and let be


we can write (1) as .

Moreover, by making use of the quasiregular representation of the group


and denoting by , we can rewrite (3) as follows:

which is a standard form to write the so-called group wavelet transform (see e.g. [Fuhr, DeitmarEchterhoff]). Note that, in the interesting case (4), we have .

The transform (3), together with the notion of group wavelet transform, has been studied in multiple contexts (see e.g. [Weiss01, Antoine2D, Fuhr, Duits07, DeitmarEchterhoff, DDGL] and references therein), and several of its properties are well-known. In particular, if is a bounded operator from to , which happens e.g. for by Young’s convolution inequality and the compactness of , it is easy to see that its adjoint reads


It is also well-known [Weiss01] that can not be injective on the whole , that is, we can not retrieve an arbitrary element of by knowing its transform. However, as shown in [Duits04, Duits07], and applied successfully in a large subsequent series of works (e.g. [Duits10, Duits16, Duits21]), it is possible to obtain a bounded invertible transform by extending the notion of transform to mother wavelets that do not belong to , or by simplifying the problem and reducing the wavelet analysis to the space of bandlimited functions, that is, those functions whose Fourier transform is supported on a bounded set, whenever a Calderón admissibility condition holds. Since our main point in this paper is not the reconstruction of the whole , we will consider the transform with this second, simplified, approach. In this case, the image of the transform is a reproducing kernel Hilbert subspace of whose kernel will play an important role in the next section. For convenience, we formalize these statements in the next two theorems, and provide a sketch of the proofs, even if they can be considered standard material.

Theorem 2.

For , let and let


The wavelet transform (3) for a mother wavelet is a bounded injective operator from to if and only if there exist two constants such that


holds for almost every .

Sketch of the proof.

By Fourier convolution theorem and the definition of , we have

Thus, by Plancherel’s theorem, (8) is equivalent to the so-called frame condition


for all , which is equivalent (see e.g. [AAG93]) to being bounded and invertible on . ∎

Before stating the next result we repeat an observation of [Weiss01] and show, using this theorem, that the transform can not be a bounded injective operator on the whole . Indeed, using that

we can see that the Calderón function in condition (8) is actually a radial function


which, by Plancherel's theorem, satisfies . Hence, the lower bound in condition (8) can not be satisfied on the whole by any .

On the other hand, for the mother wavelet (4), since , we have

From here we can see that for all , so, even if as , we have that the lower bound in (8) is larger than zero for any finite .

The next theorem shows how to construct the inverse of the wavelet transform on bandlimited functions, and describes the structure of the closed subspace defined by its image.

Theorem 3.

Let and be such that (8) holds. Let also be defined by


where , and let be as in (7). Then

  • For all it holds .

  • The space is a reproducing kernel Hilbert subspace of continuous functions of , and the orthogonal projection of onto is

Sketch of the proof.

Observe first that . Thus, recalling (6) and using the Fourier convolution theorem, we can compute

which proves (i). To prove (ii), one needs to show that the elements of are continuous functions, that is self-adjoint and idempotent, and that (12) holds. The continuity can be obtained as a consequence of the continuity of the unitary representation (5) and, by the same arguments used to prove (i), we can easily see that . On the other hand, (i) implies that for all , hence for all . Equation (12) can be obtained directly from (6) and the definition of . ∎

Since is a Hilbert space of continuous functions, it makes sense to consider its values on the zero measure set provided by the graph of a function , as in (2). We can then provide a formal statement of the problem discussed in §1:


In order for this problem to be solvable, the graph must be a set of uniqueness for . That is, the following condition must hold:


Indeed, if this were not the case, then for any nonzero that vanishes on , the function would be different from but would coincide with on . That is, we could not solve problem (13).

Condition (14) is in general hard to check, and the formal characterization of the interplay between and that makes it hold true is beyond the scope of this paper. However, in the next section we will provide a technique for addressing (13) in a discrete setting, which will allow us to explore in §4 the behavior of this problem for various functions inspired by the feature maps measured in V1.

3 A reconstruction algorithm

In this section we describe the discretization of the problem (13) which is used in the next section. Then, we introduce a kernel-based iterative algorithm, and prove its convergence to the solution whenever the solvability condition (14) holds.

3.1 Discretization of the problem

The setting described in §2 can be discretized in a standard fashion by replacing with , endowed with the usual Euclidean scalar product, circular convolution and discrete Fourier transform (FFT), which amounts to replacing with , the integers modulo . Explicitly, given , , we have

With a uniform discretization of angles, by replacing with we obtain the following discretization of the transform with the mother wavelet (4):

for and , . Thus, in particular, . Note that here, for simplicity, we have removed the normalization used in (4).
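As an illustration, the discrete transform above can be sketched in a few lines of numerical code. This is a hedged sketch, not the paper's implementation: the Gabor-type mother wavelet and its parameters `sigma` and `xi` are illustrative assumptions, and the convention (plain circular convolution with each rotated wavelet, computed via FFT) is one common choice.

```python
import numpy as np

def gabor(N, theta, sigma=2.0, xi=1.0):
    """Rotated Gabor-type wavelet on an N x N periodic grid.
    sigma (envelope width) and xi (spatial frequency) are illustrative
    values, not the parameters used in the paper."""
    x = np.fft.fftfreq(N) * N                    # signed coordinates around 0
    X, Y = np.meshgrid(x, x, indexing="ij")
    Xr = np.cos(theta) * X + np.sin(theta) * Y   # rotated coordinates
    Yr = -np.sin(theta) * X + np.cos(theta) * Y
    env = np.exp(-(Xr ** 2 + Yr ** 2) / (2 * sigma ** 2))
    return env * np.exp(1j * xi * Xr)            # modulated Gaussian

def se2_transform(f, M):
    """Discrete SE(2) wavelet transform: circular convolution of f with
    each of the M rotated wavelets, computed with the FFT (the Fourier
    convolution theorem holds exactly on the discrete torus)."""
    N = f.shape[0]
    F = np.fft.fft2(f)
    thetas = 2 * np.pi * np.arange(M) / M
    psis_hat = np.array([np.fft.fft2(gabor(N, t)) for t in thetas])
    return np.fft.ifft2(F[None] * psis_hat, axes=(-2, -1)), psis_hat
```

Calling `se2_transform(f, 12)` on an `N x N` image returns an array of shape `(12, N, N)`: the same spatial grid with one extra axis of 12 angles, matching the discretization used in §4.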

This allows us to process digital images while retaining the results of Theorems 2 and 3 as statements on finite frames (see e.g. [CasazzaKutyniok]), since Plancherel's theorem and the Fourier convolution theorem still hold. In particular, when computing numerically the inverse of using (i), Theorem 3, one has to choose an so that Calderón's condition (8) for


holds with some for all . This is the injectivity condition on and, due to the finiteness of the space, it can now be achieved for all , i.e. without bandlimiting. However, since this is equivalent to the frame inequalities (9), the quantity actually defines the condition number of . Thus, in order to keep the inversion stable, the ratio can not be arbitrarily large (see e.g. [CasazzaKutyniok, Duits07]). Once the parameters are chosen in such a way that this ratio provides an acceptable numerical accuracy, one can then compute the projection given by (ii), Theorem 3, on , by making use of the Fourier convolution theorem:


We note at this point that this standard discretization, in general (for different from or ), retains all of the approach of §2 except for the overall semidirect product structure of .
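To make the projection (16) concrete, the following sketch computes it frequency by frequency with the dual wavelet: divide by Calderón's function where it is safely positive and discard the remaining frequencies, mimicking the bandlimiting discussed above. The wavelet family is assumed given through its Fourier transforms `psis_hat`; the cutoff `eps` is an illustrative stability threshold, not a value from the paper.

```python
import numpy as np

def rkhs_project(F, psis_hat, eps=1e-6):
    """Orthogonal projection onto the range of the discrete wavelet
    transform: apply the (bandlimited) inverse built from the dual
    wavelet psis_hat / C, then transform the result again."""
    C = np.sum(np.abs(psis_hat) ** 2, axis=0)    # Calderón's function
    mask = C > eps * C.max()                     # bandlimiting cutoff
    C_inv = np.where(mask, 1.0 / np.where(mask, C, 1.0), 0.0)
    F_hat = np.fft.fft2(F, axes=(-2, -1))
    # candidate preimage: sum over angles of F(theta) convolved with the
    # dual wavelet, computed in the Fourier domain
    g_hat = np.sum(F_hat * np.conj(psis_hat), axis=0) * C_inv
    # transform the preimage again: the result lies in the RKHS
    return np.fft.ifft2(g_hat[None] * psis_hat, axes=(-2, -1))
```

At each retained frequency this is the rank-one orthogonal projection onto the vector of wavelet multipliers, so it is self-adjoint and applying it twice gives the same result as applying it once, which is an easy numerical sanity check.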

Let us now consider the discretization of the problem (13) and denote the graph of by . If we denote by the selection operator that sets to zero all the components of an that do not belong to , that is

we can see that this is now an orthogonal projection of . Hence, problem (13) can be rewritten in the present discrete setting as follows: given , find that solves the linear problem


The meaning of (17) is indeed to recover knowing only the values .

We propose to look for such a solution using the following iteration in : for , compute


The idea behind this iteration is elementary: we start with the values of selected by , we project them on the RKHS defined by the image of , and we replace the values on of the result with the known values of . The convergence of this iteration is discussed in the next section. Before that, we observe that (18) can be seen as a linearized version of the Wilson, Cowan and Ermentrout equation [WC72, EC80] since, denoting by , it is a discretization of

Apart from the absence of a sigmoid, this is indeed a classical model of population dynamics. Here, the “kernel” is not just the reproducing kernel , but it also contains the projection on a feature map . Returning to the model of V1, here the forcing term is the data collected by simple cells, while the stationary solution , if it exists, is the full group representation of the visual stimulus, defined as the solution to the Volterra-type equation .
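The project and replace iteration (18) itself is only a few lines. In the sketch below, `data` holds the known values on the graph of the feature map (zero elsewhere), `sel` is the boolean mask of that graph, and `project` is any implementation of the RKHS projection; the names are ours, not the paper's.

```python
import numpy as np

def project_and_replace(data, sel, project, n_iter=500):
    """Iteration (18): start from the selected values, project onto the
    RKHS, then replace the values on the graph with the known data."""
    a = data.copy()
    b = project(a)
    for _ in range(n_iter):
        a = np.where(sel, data, b)   # replace: keep known values on the graph
        b = project(a)               # project onto the range of the transform
    return b                         # converges to the full transform when (20) holds
```

As a toy check, one can take the projection onto fields that are constant in the angle variable and a feature map keeping one random angle per pixel; the iteration then recovers the constant field exactly.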

3.2 The project and replace iteration

We show here that, whenever the problem (17) is solvable, the iteration (18) converges to its solution. Since the argument is general, we will consider in this subsection the setting of an arbitrary finite dimensional vector space endowed with a scalar product and the induced norm, and two arbitrary orthogonal projections . For an orthogonal projection we will denote by the complementary orthogonal projection.

We start with a basic observation, which is just a restatement of the solvability condition (14) as that of a linear system defined by an orthogonal projection, in this case , on a subspace, in this case characterized as . The simple proof is included for convenience.

Lemma 4.

Let be orthogonal projections of a finite dimensional vector space . The system


has a unique solution in for any if and only if


The system (19) is of course solved by , so we only need to prove that, for all , this solution is unique if and only if (20) holds. In order to see this, let be a solution to (19), and denote by . Then . Hence, (20) holds if and only if , that is, the only solution to (19) is . ∎

The problem posed by the system (19) is a problem of linear algebra: if we know that a vector belongs to a given subspace , and we know the projection of on a different subspace , can we recover ? The next theorem shows that, if the system (19) has a unique solution, such a solution can be obtained by the project and replace iteration (18).

Theorem 5.

Let be a finite dimensional vector space, and let be orthogonal projections of . Given , set , and let be the sequences defined by the project and replace iteration


If condition (20) holds, then

and the errors , decay exponentially with the number of iterations .


Denoting by , we can rewrite the iteration (21) as two separate iterations, generating the two sequences as follows:


If (20) holds then and , because . Hence, the existence of the limits and is due to the convergence of the Neumann series (see e.g. [RieszNagy]). Thus, getting rid of the notations and , we have that the limits and are the solutions to the equations

We can now see that , because this equation reads

and, using that , we can rewrite it as

which is true by definition of . Moreover, by taking the limit in the first equation of (21), and using again that , we obtain

Finally, the convergence of (18) is exponential because, using the series expression (22) for ,

and a similar argument applies to . ∎
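A small numerical illustration of Theorem 5, with two random orthogonal projections in place of the wavelet-specific ones (dimensions and seed are arbitrary): in finite dimension, condition (20) is equivalent to the contraction factor ||(I - P2)P1|| being strictly smaller than 1, and the iteration converges at that geometric rate.

```python
import numpy as np

rng = np.random.default_rng(7)
n, r, s = 12, 4, 9                    # ambient dim, rank of P1, rank of P2

# P1: orthogonal projection onto a random r-dimensional subspace
Q, _ = np.linalg.qr(rng.standard_normal((n, r)))
P1 = Q @ Q.T
# P2: coordinate projection keeping s of the n coordinates
idx = rng.choice(n, size=s, replace=False)
P2 = np.zeros((n, n))
P2[idx, idx] = 1.0

# Solvability (20): ran P1 ∩ ker P2 = {0}, i.e. ||(I - P2) P1|| < 1
contraction = np.linalg.norm((np.eye(n) - P2) @ P1, 2)
assert contraction < 1.0

x = P1 @ rng.standard_normal(n)       # ground truth in ran P1
data = P2 @ x                         # observed samples

# enough iterations for roughly 1e-10 accuracy at this contraction rate
n_iter = max(1, int(np.ceil(-23.0 / np.log(contraction))))
a = data.copy()
for _ in range(n_iter):               # project and replace
    a = data + (np.eye(n) - P2) @ (P1 @ a)
x_rec = P1 @ a                        # agrees with x up to the iteration tolerance
```

The error after k steps is bounded by `contraction**k` times the initial error, which is the exponential decay stated in the theorem.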

4 Numerical results

We present here the reconstruction results of the project and replace iteration on the restriction to feature maps of the transform of real images. We have chosen eight 8-bit grayscale digital images of pixels, which are shown in Figure 1 together with their Fourier spectra. Note that, for processing, they have been bandlimited in order to formally maintain the structure described in §2. However, this bandlimiting has minimal effects, not visible to the eye, since the spectra decay strongly: for this reason, the bandlimited images are not shown.

For the discrete transform we have chosen a discretization of with 12 angles, so that, with respect to the notation of the previous section, we have and . We have also chosen the parameter values . The mother wavelet and dual wavelet produced by these parameters are shown in Figure 2, top and center, on a crop of the full space for better visualization. We stress that the stability of the transform and the numerical accuracy of the projection (16) depend only on the behavior of the Calderón function, while the accuracy of the image representation under bandlimiting with radius depends on the decay of the spectra of the images. The Calderón function , computed as in (15), is shown in Figure 2, bottom left: the chosen parameters define a ratio , corresponding to a condition number for of less than . In Figure 2, bottom right, we have shown in logarithmic scale the multiplier that defines the dual wavelet as in (11); in particular, one can see the bandlimiting radius, to be compared with the spectra of Figure 1. For visualization purposes, in Figure 3 we have shown in spatial coordinates the integral kernel defining the projection (16), which is the reproducing kernel for the discrete transform. Its real and imaginary parts are shown on the same crop used to display the wavelet of Figure 2.

We have implemented the iteration (18) for the restriction of this discrete transform to different types of feature maps , shown in Figure 4. We will comment on each of these cases below. We have chosen to illustrate the effect of the iteration as follows. We have computed the sequence as in (21), for a number of iterations, and applied the inverse transform to each . This allows us to obtain real images that are directly comparable with the original ones. We have then shown , representing the first image that can be directly retrieved from the feature parameters, and , that is, the image obtained when we stopped the iteration. Moreover, as a measure of reconstruction error, we have considered the following rescaling of the Euclidean norm, at each step :


This dimensionless quantity measures a % error, obtained as the average square difference per pixel between an image in the dataset and its -steps reconstruction , divided by the size of the admissible pixel range for 8-bit images, which is .
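One possible reading of the error measure (23), sketched in code (the normalization by the 8-bit range 255 follows the description above; the paper's exact formula may differ in inessential constants):

```python
import numpy as np

def pct_error(f_true, f_rec):
    """Rescaled Euclidean norm as in (23): root mean square difference
    per pixel, expressed as a percentage of the 8-bit range 255."""
    rms = np.sqrt(np.mean(np.abs(f_true - f_rec) ** 2))
    return 100.0 * rms / 255.0
```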

First feature map: purely random selection of angles.

The first map that we have considered is a that, for each , simply chooses one value in as a uniformly distributed random variable. This map is shown in the first line of Figure 4, left, while in the first line of Figure 4, right, we have shown an enlargement at the same crop at which the wavelets in Figure 2 and the reproducing kernel in Figure 3 are shown. In Figure 5 we have shown the images resulting from and , and the evolution of the error (23) in logarithmic scale, respectively in the left, center and right columns, for the first four images of Figure 1. In this case we can see that the error drops below %, indicated by on the axis, in just about 500 iterations. As a remark, this kind of feature map is commonly encountered in rodents (see e.g. [HoAngelucci21] and references therein).

Pinwheel-shaped feature maps.

Next, we present the results for three selection maps that are pinwheel-shaped, as is commonly observed for orientation and direction preference maps of V1 in primates and other mammals. These maps can be constructed as follows [PetitotBook]: for , let be given by

where is a purely random process with values in . The maps that we have considered are obtained by


where angle is the phase of a complex number , and is the integer part of a real number . The resulting maps are quasiperiodic, with a characteristic correlation length which corresponds to the fact that the spectrum of is concentrated on a ring of radius . The main feature of those maps is that they possess points, called pinwheel points, around which all angles are mapped, and these points are spaced, on average, by a distance of (see e.g. [PetitotBook] and references therein).
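A common numerical recipe for such pinwheel-shaped maps, given here as our reading of the construction above: superpose `J` plane waves of fixed wavenumber `k` with random directions and phases, then quantize the phase of the resulting complex field into `M` angle labels (`J` and the seed are illustrative choices).

```python
import numpy as np

def pinwheel_map(N, M, k, J=64, seed=0):
    """Synthetic pinwheel-like feature map on an N x N grid: quantize the
    phase of a superposition of J random plane waves of wavenumber k into
    M angle labels.  The spectrum of the field concentrates on a ring of
    radius k, giving a quasiperiodic map with pinwheel points."""
    rng = np.random.default_rng(seed)
    x = np.arange(N)
    X, Y = np.meshgrid(x, x, indexing="ij")
    z = np.zeros((N, N), dtype=complex)
    for _ in range(J):
        phi = rng.uniform(0.0, 2.0 * np.pi)   # wave direction
        ph = rng.uniform(0.0, 2.0 * np.pi)    # random phase
        z += np.exp(1j * (k * (np.cos(phi) * X + np.sin(phi) * Y) + ph))
    theta = np.mod(np.angle(z), 2.0 * np.pi)  # phase in [0, 2*pi)
    return np.floor(M * theta / (2.0 * np.pi)).astype(int) % M
```

For instance, `pinwheel_map(128, 12, 2 * np.pi / 16)` produces a 12-angle map with a correlation length of the order of 16 pixels; larger wavenumbers approach the purely random selection, smaller ones the single-orientation limit discussed below.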

In the second, third and fourth lines of Figure 4, left, we have shown the resulting maps with respectively given by , and . In the right column we have shown an enlargement at the same crop used before.

The results of the iteration are presented, as described above, in Figures 6, 7 and 8, whose structure is the same as that of Figure 5, with the exception of the number of iterations, which is now larger.

In order to discuss these results, we first recall that the correlation length of orientation preference maps has often been related to the size of receptive fields, as a “coverage” constraint to obtain a faithful representation of the visual stimulus (see [Bosking02, Keil11, BCS14] and references therein). However, no mathematical proof of this principle in terms of image reconstruction has been given so far, nor has the word size received a more quantitative meaning within models of type (1), and we have not tried to give it any more specific meaning either. However, for the three cases that we present, by comparing the crops of the maps in Figure 4 with the wavelet in Figure 2, we can see that for the correlation length of the map is approximately similar to what we could call the effective support of the receptive field, while for we have that the area of influence of the receptive field does not include two different pinwheel points, and for the two scales are very different. Heuristically, one could then be led to think that the reconstruction properties in the three cases may present qualitative differences: for example, that condition (14), or its discrete counterpart (20), may hold in the first case and not hold in the third case.

What we can see from the numerical results of the proposed algorithm is that there is indeed a difference in the behavior of the decay. For larger values of the parameter , when the map is more similar to the purely random selection described above, the decay of the error is faster, while for smaller values of the decay is slower; nevertheless, the error appears to be monotonically decreasing. In the presented cases, for , we can see in the right column of Figure 6 that in about 2000 iterations the error decay appears to enter an exponential regime, which is rectilinear in logarithmic scale. We see in Figure 7 that it takes roughly twice as many iterations for to enter the same regime. On the other hand, for we can see in Figure 8 that after a relatively small number of iterations the decay becomes very small, and does not seem to become exponential even after 10000 iterations. However, visual inspection of the results (which “measures” the error in a different way than the Euclidean norm) in this last case shows that the starting image appears to be qualitatively highly corrupted, while the image obtained after the iteration was stopped is remarkably true to the original one and does not display evident artifacts away from the boundaries.

Selection of a single orientation: deconvolution.

The surprisingly good performance in the reconstruction problem for the last map has motivated a performance test for an additional feature selection map, given by a function that is independent of , hence selecting just one angle in the transform. In the same fashion as seeing the purely random distribution as a limiting case for large of the pinwheel-shaped maps, this can be considered as a limiting case for small . However, keeping only the values of for a single angle concretely corresponds to performing a convolution with one function , and aiming to reconstruct is actually a deconvolution problem. In this case, the frequency behavior of the convolution filter is that of a Gaussian centered away from the origin, and its shape can be observed in the Calderón function of Figure 2, which clearly shows the sum of 12 such Gaussians. We have shown the reconstruction results for this problem, using now 15000 iterations, in Figure 9. The error decay seems to share, qualitatively, many of the same properties of the case .

5 Conclusions

In this paper we have proposed an elementary iterative technique to address a problem inspired by visual perception and related to the reconstruction of images from a fixed, reduced set of values of their group wavelet transform.

A possible interpretation of this iteration, with a kernel defined by the group, as a neural computation in V1 comes from the modeling of the neural connectivity as a kernel operation [WC72, EC80, CS15, MCSx], especially if considered in the framework of a neural system that aims to learn group invariant representations of visual stimuli [PA14, Anselmi20]. A direct comparison of the proposed technique with kernel techniques recently introduced with radically different purposes in [MBCSx, MCSx] shows, however, two main differences at the level of the kernel that is used: here we need the dual wavelet to build the projection kernel, and the iteration kernel effectively contains the feature maps. On the other hand, a possible application is the inclusion of a solvability condition such as (14) as an iterative step within learning frameworks such as those of [AERP19, Anselmi20].

We would also like to observe that, since the proof of convergence of this technique is general, it could be applied to other problems with a similar structure. The computational cost essentially relies on the availability of efficient methods to implement the two projections that define the problem in the discrete setting, as happens to be the case for the setting studied in this paper. In particular, similar arguments could be applied to other wavelet transforms based on semidirect product groups , with a subgroup of that defines what in this paper are sometimes called local features, and to sampling projections obtained, for example but not only, from other types of feature maps. From the dimensional point of view, in the discrete setting the selection of local features with a feature map can be seen as a downsampling that allows one to maintain in the transformed space the same dimension as the input vector. This is often a desirable property, and it is already commonly realized e.g. by the MRA decomposition algorithm of classical wavelets or by the pooling operation in neural networks. Moreover, the apparent stability of convergence seems to suggest that this operation can be performed a priori, without the need for a previous study of the solvability of the problem.

Several questions remain open after this study. Probably the most fundamental one is the characterization of those maps that, for a given mother wavelet , satisfy the solvability condition (14). Moreover, in terms of the study of the convergence of the project and replace iteration, it is plausible that one could obtain convergence under weaker conditions than (20), even if possibly to a different solution, as appears to happen in some of the numerical simulations presented.


Figure 1: Original test images (top) and their Fourier spectra in logarithmic scale (bottom).
Figure 2: Top: mother wavelet . Center: dual wavelet . Bottom, left: Calderón function. Bottom, right: inverse of the Calderón function in logarithmic scale, bandlimited with .
Figure 3: Reproducing kernel for the wavelet of Figure 2. Top: real part for the 12 angles. Bottom: imaginary part for the 12 angles.
Figure 4: Left column: maps for the simulations in Figures 5, 6, 7, 8. Right column: enlargements of the same maps. First line: purely random . Second, third and fourth line: maps generated according to (24) with respectively given by , and .
Figure 5: Iteration on the images 1 to 4 of Figure 1 for the purely random shown in the first line of Figure 4. Left: first step of the iteration. Center: after 1000 iterations. Right: , for the error (23).
Figure 6: Iteration on the images 5 to 8 of Figure 1 for shown in the second line of Figure 4. Left: first step of the iteration. Center: after 5000 iterations. Right: , for the error (23).
Figure 7: Iteration on the images 1 to 4 of Figure 1 for shown in the third line of Figure 4. Left: first step of the iteration. Center: after 10000 iterations. Right: , for the error (23).
Figure 8: Iteration on the images 5 to 8 of Figure 1 for , shown in the fourth line of Figure 4. Left: first step of the iteration. Center: after 10000 iterations. Right: , for the error (23).
Figure 9: Iteration for the deconvolution by on images of Figure 1. Left: first step of the iteration. Center: after 15000 iterations. Right: , for the error (23).



This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 777822, and from Grant PID2019-105599GB-I00, Ministerio de Ciencia, Innovación y Universidades, Spain.


The author would like to thank G. Citti, A. Sarti, E. Hernández, J. Antezana and F. Anselmi for inspiring discussions on topics related to the present work.


Davide Barbieri, Universidad Autónoma de Madrid