Signal Recovery from Pooling Representations

11/16/2013 ∙ Joan Bruna, et al.

In this work we compute lower Lipschitz bounds of ℓ_p pooling operators for p=1, 2, ∞ as well as ℓ_p pooling operators preceded by half-rectification layers. These give sufficient conditions for the design of invertible neural network layers. Numerical experiments on MNIST and image patches confirm that pooling layers can be inverted with phase recovery algorithms. Moreover, the regularity of the inverse pooling, controlled by the lower Lipschitz constant, is empirically verified with a nearest neighbor regression.


1 Introduction

A standard architecture for deep feedforward networks consists of a number of stacked modules, each composed of a linear mapping, followed by an elementwise nonlinearity, followed by a pooling operation. Critical to the success of this architecture in recognition problems is its capacity for preserving discriminative signal information while being invariant to nuisance deformations. The recent works (Mallat, 2012; Bruna and Mallat, 2012) study the role of the pooling operator in building invariance. In this work, we study a network’s capacity for preserving information. Specifically, we study the invertibility of modules consisting of a linear mapping, the half-rectification nonlinearity, and ℓ_p pooling, for p = 1, 2, ∞. We discuss recent work in the case p = 2, and connections with the phase recovery problem of (Candes et al., 2013; Gerchberg and Saxton, 1972; Waldspurger et al., 2012).

1.1 ℓ_p pooling

The purpose of the pooling layer in each module is to give invariance to the system, perhaps at the expense of resolution. This is done via a summary statistic over the outputs of groups of nodes. In the trained system, the columns of the weight matrix corresponding to nodes grouped together often exhibit similar characteristics, and code for perturbations of a template (Kavukcuoglu et al., 2009; Hyvärinen and Hoyer, 2001).

The summary statistic in ℓ_p pooling is the ℓ_p norm of the inputs into the pool. That is, if nodes x_1, …, x_M are in a pool, the output of the pool is

( |x_1|^p + … + |x_M|^p )^{1/p},

where, as usual, if p = ∞, this is

max_j |x_j|.

If the outputs of the nonlinearity are nonnegative (as for the half-rectification function), then p = 1 corresponds to average pooling (up to normalization), and the case p = ∞ is max pooling.
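As a concrete illustration (a minimal NumPy sketch with made-up values, not taken from the paper), the three pooling statistics on one pool of nonnegative nonlinearity outputs are:

```python
import numpy as np

# One pool of nonnegative nonlinearity outputs (e.g. after half-rectification).
pool = np.array([0.3, 0.0, 1.2, 0.5])

l1   = np.sum(np.abs(pool))        # p = 1: the sum, i.e. average pooling up to normalization
l2   = np.sqrt(np.sum(pool ** 2))  # p = 2: the Euclidean norm of the pool
linf = np.max(np.abs(pool))        # p = inf: max pooling

print(l1, l2, linf)
```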

1.2 Phase reconstruction

Given a signal x ∈ ℝ^N, a classical problem in signal processing is to recover x from the absolute values of its (1- or 2-dimensional) Fourier coefficients, perhaps subject to some additional constraints on x; this problem arises in speech generation and X-ray imaging (Ohlsson, 2013). Unfortunately, the problem is not well posed: the absolute values of the Fourier coefficients do not come close to specifying x. For example, the absolute value of the Fourier transform is translation invariant. It can be shown (and we discuss this below) that the absolute values of the inner products between x and any basis of ℝ^N are not enough to uniquely specify an arbitrary x; the situation is worse in the complex case. On the other hand, recent works have shown that by taking a redundant enough dictionary, the situation is different, and x can be recovered from the modulus of its inner products with the dictionary (Balan et al., 2006; Candes et al., 2013; Waldspurger et al., 2012).

Suppose for a moment that there is no elementwise nonlinearity in our feedforward module, only a linear mapping followed by an ℓ_p pooling. Then, with a slightly generalized notion of phase, where the modulus is the ℓ_p norm of the pool and the phase is the unit vector specifying the “direction” of the inner products in the pool, the phase recovery problem above asks whether the module loses any information. The p = 2 case has recently been studied in (Cahill et al., 2013).

1.3 What vs. Where

If the columns of the weight matrix in a pool correspond to related features, it can be reasonable to see the entire pool as a “what”. That is, the modulus of the pool indicates the relative presence of a grouping of (sub)features into a template, and the phase of the pool describes the relative arrangement of the subfeatures, describing “where” the template is, or more generally, describing the “pose” of the template.

From this viewpoint, phase reconstruction results make rigorous the notion that given enough redundant versions of “what” and throwing away the “where”, we can still recover the “where”.

1.4 Contributions of this work

In this work we give conditions so that a module consisting of a linear mapping, perhaps followed by a half-rectification, followed by an ℓ_p pooling preserves the information in its input. We extend the results of (Cahill et al., 2013; Balan and Wang, 2013) in several ways: we consider values of p other than 2, take into account the half-rectification nonlinearity, and make the results quantitative in the sense that we give bounds on the lower Lipschitz constants of the modules. This gives a measure of the stability of the inversion, which is especially important in a multi-layer system. Using our bounds, we prove that redundant enough random modules with ℓ_2 or ℓ_∞ pooling are invertible.

We also show the results of numerical experiments designed to explore the gaps in our results and in the results in the literature. We note that the alternating minimization method of (Gerchberg and Saxton, 1972) can be used essentially unchanged for each p, with or without rectification, and show experiments giving evidence that recovery is roughly equally possible for p = 1, 2, and ∞ using this algorithm, and that half-rectification before pooling can make recovery easier. Furthermore, we show that with a trained initialization, the alternating method compares favorably with the state of the art recovery methods (for p = 2 with no rectification) in (Waldspurger et al., 2012; Candes et al., 2013), which suggests that the above observations are not an artifact of the alternating method.

2 Injectivity and Lipschitz stability of Pooling Operators

This section studies necessary and sufficient conditions guaranteeing that pooling representations are invertible. It also computes upper and lower Lipschitz bounds, which are tight under certain configurations.

Let us first introduce the notation used throughout the paper. Let F = {f_k}_{k ≤ m} be a real frame of ℝ^N, with m ≥ N. The frame is organized into disjoint blocks (pools) F_1, …, F_K, such that F = ∪_{i ≤ K} F_i and F_i ∩ F_j = ∅ for i ≠ j. For simplicity, we shall assume that all the pools have equal size M, so that m = K M.

The ℓ_p pooling operator is defined as the mapping

Φ_p(x) = { ‖ {⟨x, f_k⟩}_{f_k ∈ F_i} ‖_p }_{i ≤ K}.   (1)

A related representation, which has gained popularity in recent deep learning architectures, introduces a point-wise thresholding before computing the ℓ_p norm. If b ∈ ℝ^m is a fixed threshold vector, then the rectified ℓ_p pooling operator is defined as

Φ̃_p(x) = { ‖ { max(0, ⟨x, f_k⟩ − b_k) }_{f_k ∈ F_i} ‖_p }_{i ≤ K},   (2)

where {b_k}_{f_k ∈ F_i} contains the coordinates of b belonging to pool i.
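A minimal NumPy sketch of the operators (1) and (2) follows, assuming the frame is stored as the columns of a matrix and the pools are consecutive blocks of columns; the function and variable names are ours, chosen for illustration, not taken from the paper.

```python
import numpy as np

def pool(F, x, pool_size, p, b=None):
    """Sketch of the pooling operator (1) and its rectified variant (2).

    F         : (N, m) matrix whose columns are the frame vectors f_k
    x         : input signal in R^N
    pool_size : M, number of frame vectors per pool (consecutive columns)
    p         : 1, 2, or np.inf
    b         : optional threshold vector in R^m; if given, apply max(0, <x, f_k> - b_k)
    """
    z = F.T @ x                                   # inner products <x, f_k>
    if b is not None:
        z = np.maximum(0.0, z - b)                # rectified variant (2)
    pools = z.reshape(-1, pool_size)              # one row per pool
    if np.isinf(p):
        return np.abs(pools).max(axis=1)
    return (np.abs(pools) ** p).sum(axis=1) ** (1.0 / p)

rng = np.random.default_rng(0)
N, K, M = 8, 12, 2                                # signal dim, number of pools, pool size
F = rng.standard_normal((N, K * M))
x = rng.standard_normal(N)
print(pool(F, x, M, p=2))                         # Phi_2(x)
print(pool(F, x, M, p=2, b=np.zeros(K * M)))      # rectified Phi_2(x)
```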

We shall measure the stability of the inverse pooling with the Euclidean distance in the representation space. Given a distance d(·,·) in the input space, the Lipschitz bounds of a given operator Φ are defined as the constants 0 ≤ C_1 ≤ C_2 such that

C_1 d(x, x′) ≤ ‖Φ(x) − Φ(x′)‖ ≤ C_2 d(x, x′)   for all x, x′.

In the remainder of the paper, given a frame F, we denote respectively by A_F and B_F its lower and upper frame bounds. If F has m vectors and Ω ⊆ {1, …, m}, we denote by F_Ω the frame obtained by keeping the vectors indexed in Ω. Finally, we denote by Ω^c the complement of Ω.

2.1 Absolute value and Thresholding nonlinearities

In order to study the injectivity of pooling representations, we first focus on the properties of the operators defined by the point-wise nonlinearities.

The properties of the phaseless mapping

x ↦ |F^T x| = { |⟨x, f_k⟩| }_{k ≤ m}   (3)

have been extensively studied in the literature (Balan et al., 2006; Balan and Wang, 2013), in part motivated by applications to speech processing (Achan et al., 2003) or X-ray crystallography (Ohlsson, 2013). It is shown in (Balan et al., 2006) that if m ≥ 2N − 1 and the frame is generic, then it is possible to recover x from |F^T x|, up to a global sign change. In particular, (Balan and Wang, 2013) recently characterized the stability of the phaseless operator, which is summarized in the following proposition:

Proposition 2.1 ((Balan and Wang, 2013), Theorem 4.3)

Let F = {f_k}_{k ≤ m} be a frame of ℝ^N with m ≥ N. The phaseless mapping x ↦ |F^T x| satisfies

(4)

where

(5)
(6)

In particular, x ↦ |F^T x| is injective (up to a global sign change) if and only if for any subset Ω ⊆ {1, …, m}, either F_Ω or F_{Ω^c} is an invertible frame.

A frame satisfying the previous condition is said to be phase retrievable.

We now turn our attention to the half-rectification operator, defined as

R(x) = { max(0, ⟨x, f_k⟩) }_{k ≤ m}.   (7)

For that purpose, let us introduce some extra notation. Given a frame F, a subset Ω ⊆ {1, …, m} is admissible if

there exists x ∈ ℝ^N such that ⟨x, f_k⟩ > 0 for k ∈ Ω and ⟨x, f_k⟩ ≤ 0 for k ∉ Ω.   (8)

We denote by 𝒜 the collection of all admissible sets, and by V_Ω the vector space generated by F_Ω. The following proposition, proved in Section B, gives a necessary and sufficient condition for the injectivity of the half-rectification.

Proposition 2.2

Let F be a frame of ℝ^N. Then the half-rectification operator R is injective if and only if V_Ω = ℝ^N for every admissible set Ω ∈ 𝒜. Moreover, it satisfies

(9)

with .

The half-rectification has the ability to recover the input signal, without the global sign ambiguity. The ability to reconstruct x from R(x) is thus controlled by the rank of any matrix whose columns are taken from a subset belonging to 𝒜. In particular, if Ω ∈ 𝒜, since rank(F_Ω) ≤ |Ω|, it results that |Ω| ≥ N is necessary in order to have V_Ω = ℝ^N.

The rectified linear operator creates a partition of the input space into polytopes, defined by (8), and computes a linear operator on each of these regions. A given input x activates a set Ω ∈ 𝒜, encoded by the signs of the linear measurements ⟨x, f_k⟩.

As opposed to the absolute value operator, the inverse of R, whenever it exists, can be computed directly by locally inverting a linear operator. Indeed, the coordinates of R(x) that are strictly positive identify a set Ω ∈ 𝒜, which defines a linear model x ↦ F_Ω^T x that is invertible whenever F_Ω spans ℝ^N.
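The local inversion can be illustrated with a short NumPy sketch, assuming no thresholds and a frame stored as the columns of a matrix; the names are ours and this is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def invert_half_rectification(F, y, tol=1e-10):
    """Recover x from y = max(0, F.T @ x) by inverting the active linear model.

    F : (N, m) matrix whose columns are the frame vectors
    y : half-rectified measurements, length m
    """
    active = y > tol                       # coordinates with <x, f_k> > 0
    F_act = F[:, active]                   # sub-frame indexed by the active set
    if np.linalg.matrix_rank(F_act) < F.shape[0]:
        raise ValueError("active sub-frame does not span R^N; inversion is not unique")
    # least-squares solve of F_act.T @ x = y[active]
    x, *_ = np.linalg.lstsq(F_act.T, y[active], rcond=None)
    return x

rng = np.random.default_rng(1)
N, m = 6, 24                               # redundant random frame
F = rng.standard_normal((N, m))
x_true = rng.standard_normal(N)
y = np.maximum(0.0, F.T @ x_true)
x_rec = invert_half_rectification(F, y)
print(np.allclose(x_rec, x_true))          # True with high probability
```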

In order to compare the stability of the half-rectification versus the full rectification, one can modify R so that it maps x and −x to the same point. If one considers

then the modified operator satisfies the following:

Corollary 2.3
(10)

with

(11)
(12)

and , so and . In particular, if is invertible, so is .

It results that the bi-Lipschitz bounds of the half-rectification operator, when considered on the quotient space defined by the equivalence x ∼ −x, are controlled by the bounds of the absolute value operator, up to a constant factor. However, the lower Lipschitz bound (11) is a minimum taken over a much smaller family of elements than in (5).

2.2 ℓ_p Pooling

We give bi-Lipschitz constants of the ℓ_p pooling and rectified ℓ_p pooling operators for p = 1, 2, ∞.

From their definitions, it follows that the pooling operators Φ_p and Φ̃_p can be expressed respectively as functions of the phaseless and half-rectified operators, which implies that for the pooling to be invertible, it is necessary that the absolute value and rectified operators be invertible too. Naturally, the converse is not true.

2.2.1 ℓ_2 pooling

The invertibility conditions of the ℓ_2 pooling representation have been recently studied in (Cahill et al., 2013), where the authors obtain necessary and sufficient conditions on the frame F. We shall now generalize those results, and derive bi-Lipschitz bounds.

Let us define

(13)

thus this family contains all the orthogonal bases of each subspace V_i = span(F_i).

The following proposition, proved in section B, computes upper and lower bounds of the Lipschitz constants of .

Proposition 2.4

The ℓ_2 pooling operator Φ_2 satisfies

(14)

where

(15)

This proposition thus generalizes the results from (Cahill et al., 2013), since it shows that the constant in (15) not only controls when Φ_2 is invertible, but also controls the stability of the inverse mapping.

We also consider the rectified pooling case. For simplicity, we shall concentrate on the case where the pools have dimension M = 2. For that purpose, we consider a modification of the families defined in (13), obtained by replacing each sub-frame by its rectified counterpart.

Corollary 2.5

Let , and set . Then the rectified pooling operator satisfies

(16)

where

Proposition 2.4 and Corollary 2.5 give lower Lipschitz bounds which provide sufficient guarantees for the inversion of ℓ_2 pooling representations. Corollary 2.5 indicates that, in the case considered, the lower Lipschitz bounds are sharper than in the non-rectified case, consistent with the results of Section 2.1. The general case remains an open issue.

2.2.2 ℓ_∞ Pooling

We give in this section sufficient and necessary conditions such that the max-pooling operator is injective, and we compute a lower bound of its lower Lipschitz constant.

Given x ∈ ℝ^N, we define the switches of x as the vector of coordinates in each pool where the maximum is attained; that is, for each pool i ≤ K:

s_i(x) = argmax_{f_k ∈ F_i} |⟨x, f_k⟩|,

and we denote by 𝒮 the set of all attained switches: 𝒮 = { s(x) : x ∈ ℝ^N }. This is a discrete subset of {1, …, M}^K. Given s ∈ 𝒮, the set of input signals having switches s defines a linear cone C_s:

C_s = { x ∈ ℝ^N : s(x) = s },

and as a result the input space is divided into a collection of Voronoi cells defined from linear equations. Restricted to each cone C_s, the max-pooling operator computes the phaseless mapping from equation (3) corresponding to the sub-frame selected by s.
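As a small illustration (with our own variable names, following the same column-matrix convention as before), the switches and the restriction of max pooling to a cone can be computed as follows; within a fixed cone, ℓ_∞ pooling is exactly the modulus of one selected frame vector per pool.

```python
import numpy as np

def switches(F, x, pool_size):
    """Per-pool index where |<x, f_k>| is maximal (the 'switch' of x)."""
    z = np.abs(F.T @ x).reshape(-1, pool_size)
    return z.argmax(axis=1)                      # one index per pool

def linf_pool_via_switches(F, x, pool_size):
    """ℓ_∞ pooling written as the phaseless map of the switched frame vectors."""
    z = (F.T @ x).reshape(-1, pool_size)
    s = switches(F, x, pool_size)
    return np.abs(z[np.arange(z.shape[0]), s])   # |<x, f_{s_i}>| for each pool i

rng = np.random.default_rng(2)
F = rng.standard_normal((5, 12))
x = rng.standard_normal(5)
print(switches(F, x, pool_size=3))
print(linf_pool_via_switches(F, x, pool_size=3))
```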

Given vectors u and v, set, as usual, the angle ∠(u, v) = arccos( |⟨u, v⟩| / (‖u‖ ‖v‖) ). For each pair of switches s ≠ s′ and for each pool, we define

This is a modified first principal angle between the corresponding subspaces, where the infimum is taken only over the directions included in the respective cones. Set

Given s, s′, we also define . Recall M is the size of each pool. Set

The following proposition, proved in section B, gives a lower Lipschitz bound of the max-pooling operator.

Proposition 2.6

For all x and x′, the max-pooling operator satisfies

(17)

where .

Proposition 2.6 shows that the lower Lipschitz bound is controlled by two different phenomena. The first one depends upon how the cones corresponding to disjoint switches are aligned, whereas the second one depends on the internal incoherence of each pool. One may ask how these constants evolve in different asymptotic regimes; for example, if one lets the number of pools K be fixed but increases the size M of each pool. In that case, the set of possible switches increases, and each cone gets smaller. The bound corresponding to shared switches decreases, since the infimum is taken over a larger family. However, as the cones become smaller, the likelihood that any pair of inputs share the same switches decreases, thus giving more importance to the case of distinct switches. Although the ratio decreases, the lower frame bounds will in general increase linearly with M. The lower bound will thus mainly be driven by the principal angles. Although the minimum in (28) is taken over a larger family, each angle is computed over a smaller region of the space, suggesting that one can indeed increase the size of each pool without compromising the injectivity of the max-pooling.

Another asymptotic regime considers pools of fixed size M and increases the number of pools K. In that case, the bound increases as long as the principal angles remain bounded below.

We also consider the stability of max-pooling with a half-rectification. By redefining the switches accordingly:

s_i(x) = argmax_{f_k ∈ F_i} max(0, ⟨x, f_k⟩),   (18)

the following corollary, proved in Section B, computes a lower bound of the Lipschitz constant of the rectified max-pooling operator.

Corollary 2.7

The rectified max-pooling operator satisfies

(19)

with

defined using the cones obtained from (18).

2.2.3 ℓ_1 Pooling and Max-Out

Proposition 2.6 can be used to obtain a bound on the lower Lipschitz constant of the ℓ_1 pooling operator, as well as of the Maxout operator (Goodfellow et al., 2013); see Section B.4.2 in the supplementary material.

2.3 Random Pooling Operators

What is the minimum amount of redundancy needed to invert a pooling operator? As in previous works on compressed sensing (Candes and Tao, 2004) and phase recovery (Balan et al., 2006), one may address this question by studying random pooling operators. In this case, the lower Lipschitz bounds derived in the previous sections can be shown to be positive with probability 1, given appropriate parameters m and N.

The following proposition, proved in Appendix B, analyzes the invertibility of a generic pooling operator constructed from random measurements. We consider a frame F whose columns are i.i.d. Gaussian vectors of ℝ^N.

Proposition 2.8

Let F be a random frame of ℝ^N, organized into disjoint pools of dimension M. With probability 1, the pooling operator Φ_p is injective (modulo a global sign change) provided the redundancy m/N is large enough, both for p = 2 and for p = ∞.

The size M of the pools does not influence the injectivity of random pooling, but it affects the stability of the inverse, as shown in Proposition 2.6. The half-rectified case requires extra care, since the set of admissible switches might contain, with non-zero probability, sub-frames that do not span ℝ^N; it is not considered in the present work.

3 Numerical Experiments

Our main goal in this section is to experimentally compare the invertibility of ℓ_p pooling for p = 1, 2, ∞, with and without rectification. Unlike in the previous sections, we will not consider the Lipschitz bounds, as we do not know a good way to measure these experimentally. Our experiments suggest that recovery is roughly the same difficulty for each p, and that rectification makes recovery easier.

In the case without rectification, and with p = 2, a growing body of works (Candes et al., 2013; Waldspurger et al., 2012) describe how to invert the pooling operator. This is often called phase recovery. A problem for us is the lack of a standard algorithm when p ≠ 2 or with rectification. We will see that the simple alternating minimization algorithm of (Gerchberg and Saxton, 1972) can be adapted to these situations. However, alternating minimization with random initialization is known to be an inferior recovery algorithm for p = 2, and so any conclusions we draw about ease of recovery would be tainted, as we would be testing whether the algorithm is equally bad in the various situations, rather than whether the problems are equally hard. We will show that in certain cases, a training set allows us to find a good initialization for the alternating minimization, leading to excellent recovery performance, and that in this setting, or the random setting, recovery via alternating minimization is roughly as successful for each p, suggesting invertibility is equally hard for each p. In the same way, we will see evidence that half-rectification before pooling makes recovery easier.

3.1 Recovery Algorithms

3.1.1 Alternating minimization

A greedy method for recovering the phase from the modulus of complex measurements is given in (Gerchberg and Saxton, 1972); this method naturally extends to the case at hand. As above, denote the frame by F, let F_i be the frame vectors in the i-th block, and let Ω_i be the indices of the i-th block. Let F^† be the pseudoinverse of F^T; let y denote the observed pooled measurements. Starting with an initial signal x^(0), update

  1. z ← F^T x^(t); for each pool i, rescale z_{Ω_i} ← y_i · z_{Ω_i} / ‖z_{Ω_i}‖_p (keep the direction, impose the observed modulus),

  2. x^(t+1) ← F^† z.

This approach is not, as far as we know, guaranteed to converge to the correct solution, even when the pooling operator is invertible. However, in practice, if the inversion is easy enough, or if x^(0) is close to the true solution, the method can work well. Moreover, this algorithm can be run essentially unchanged for each p; for half-rectification, we only use the nonnegative entries in z for reconstruction.
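The NumPy sketch below implements the alternating update just described, with our own naming conventions and a plain random initialization; it is a sketch of the adapted Gerchberg-Saxton iteration under these assumptions, not the authors' exact code.

```python
import numpy as np

def alt_min_pool_recovery(F, y, pool_size, p, n_iter=500, x0=None, rectified=False, seed=0):
    """Gerchberg-Saxton-style alternating minimization for inverting pooling (sketch).

    F         : (N, m) frame matrix whose columns are the frame vectors
    y         : observed pooled values, one per pool
    pool_size : number of frame vectors per pool (consecutive columns)
    p         : 1, 2, or np.inf
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(F.shape[0]) if x0 is None else x0.copy()
    F_pinv = np.linalg.pinv(F.T)                        # pseudoinverse of the analysis operator
    for _ in range(n_iter):
        z = (F.T @ x).reshape(-1, pool_size)            # current inner products, per pool
        if rectified:
            z = np.maximum(0.0, z)                       # keep only the nonnegative entries
        if np.isinf(p):
            norms = np.abs(z).max(axis=1)
        else:
            norms = (np.abs(z) ** p).sum(axis=1) ** (1.0 / p)
        z *= (y / np.maximum(norms, 1e-12))[:, None]     # impose observed modulus, keep direction
        zr = z.ravel()
        if rectified:
            active = zr > 0                              # reconstruct from the active entries only
            x, *_ = np.linalg.lstsq(F[:, active].T, zr[active], rcond=None)
        else:
            x = F_pinv @ zr                              # least-squares signal update
    return x
```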

In the experiments below, we will use random, Gaussian i.i.d. F, but also the outputs of dictionary learning with block sparsity. The F generated this way is not really a frame, as the condition number of a trained dictionary on real data is often very high. In this case, we will renormalize each data point to have unit norm, and modify the update so that the reconstructed signal is renormalized to unit norm after step 2.

In practice, this modification might not always be possible, since the norm of the signal is not explicitly available from the measurements. However, in the classical setting of Fourier measurements and positive x, this information is available. Moreover, our empirical experience has been that using this regularization on well-conditioned analysis dictionaries offers no benefit; in particular, it gives no benefit with random analysis matrices.

3.1.2 Phaselift and Phasecut

Two recent algorithms, (Candes et al., 2013) and (Waldspurger et al., 2012), are guaranteed with high probability to solve the (classical) problem of recovering the phase of a complex signal from its modulus, given enough random measurements. In practice both perform better than the greedy alternating minimization. However, it is not obvious to us how to adapt these algorithms to the general ℓ_p setting.

3.1.3 Nearest neighbors regression

We would like to use the same basic algorithm for all settings to get an idea of the relative difficulty of the recovery problem for different , but also would like an algorithm with good recovery performance. If our algorithm simply returns poor results in each case, differences between the cases might be masked.

The alternating minimization can be very effective when well initialized. When given a training set of the data to recover, we use a simple regression to find the initialization x^(0). Fix a number of neighbors k, and suppose {x_j} is the training set. Pool each training point, and for a new point to recover from its pooled measurements y, find the k nearest neighbors of y among the pooled training points, and take the principal component of the corresponding training signals to serve as x^(0) in the alternating minimization algorithm (see the sketch below). We use the fast neighbor searcher from (Vedaldi and Fulkerson, 2008) to accelerate the search.
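A sketch of this initialization in NumPy follows, using exhaustive search instead of the VLFeat searcher; the function and parameter names are ours, and the sign/scale of the returned vector is left to the subsequent alternating minimization to fix.

```python
import numpy as np

def nn_init(F, y, X_train, pool_size, p, k=10):
    """Nearest-neighbor regression initialization for alternating minimization (sketch).

    F        : (N, m) frame matrix
    y        : pooled measurements of the signal to recover (length = number of pools)
    X_train  : (n_train, N) training signals drawn from the same prior
    k        : number of neighbors (exhaustive search here; the paper uses a fast searcher)
    """
    # Pool every training signal, then search for neighbors in measurement space.
    Z = np.abs(X_train @ F).reshape(X_train.shape[0], -1, pool_size)
    if np.isinf(p):
        Y_train = Z.max(axis=2)
    else:
        Y_train = (Z ** p).sum(axis=2) ** (1.0 / p)
    d = np.linalg.norm(Y_train - y[None, :], axis=1)
    idx = np.argsort(d)[:k]                     # k nearest neighbors in measurement space
    # Top right singular vector of the neighbor signals serves as the initialization x0.
    _, _, Vt = np.linalg.svd(X_train[idx], full_matrices=False)
    return Vt[0]
```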

3.2 Experiments

We discuss results on the MNIST dataset, available at http://yann.lecun.com/exdb/mnist/, and on patches drawn from the VOC dataset, available at http://pascallin.ecs.soton.ac.uk/challenges/VOC/voc2012/. For each of these data sets, we run experiments with random dictionaries and adapted dictionaries. We also run experiments where the data and the dictionary are both Gaussian i.i.d.; in this case, we do not use adapted dictionaries.

The basic setup of the experiments in each case is the same: we vary the number of measurements (that is, the number of pools) over some range, and attempt to recover the original signal from the pooled measurements, using various methods. We record the average angle between the recovered signal x̂ and the original x; that is, we use the angle ∠(x̂, x) = arccos( |⟨x̂, x⟩| / (‖x̂‖ ‖x‖) ), which is invariant to sign and scale, as the measure of success in recovery.
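In code (a trivial helper with our own names):

```python
import numpy as np

def recovery_angle(x_rec, x_true):
    """Angle between recovered and true signals, invariant to sign and scale."""
    c = abs(x_rec @ x_true) / (np.linalg.norm(x_rec) * np.linalg.norm(x_true))
    return np.arccos(np.clip(c, 0.0, 1.0))
```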

In each case the random analysis dictionary is built by fixing a size parameter m, and generating a Gaussian i.i.d. matrix of size m × N, where N is the input dimension of the data set. Each pair of rows of this matrix is then orthogonalized to obtain F; that is, we use groups of size 2, where the pair of elements in each group are orthogonal. This allows us to use standard phase recovery software in the p = 2 case to get a baseline. We used the ADMM version of phaselift from (Ohlsson et al., 2012) and the phasecut algorithm of (Waldspurger et al., 2012). For all of our data sets, the latter gave better results (note that phasecut can explicitly use the fact that the solution to the problem is real, whereas that version of phaselift cannot), so we report only the phasecut results.

In the experiments with adapted dictionaries, the dictionary is built using block OMP and batch updates with a K-SVD type update (Aharon et al., 2006); in this case, F is the transpose of the learned dictionary. We again use groups of size 2 in the adapted dictionary experiments.

We run two sets of experiments with Gaussian i.i.d. data and dictionaries, for two input dimensions, varying the number of measurements over a range. On this data set, phaselift outperforms alternating minimization; see the supplementary material.

For MNIST, we use the standard training set projected to a lower dimension via PCA, and we let the number of dictionary elements range from 60 to 600 (that is, 30 to 300 measurements). On this data set, alternating minimization with nearest neighbor initialization gives exact reconstruction once the number of measurements is large enough; for comparison, Phaselift at the same number of measurements still has nonzero mean squared angle; see the supplementary material.

We draw approximately 5 million grayscale image patches from the PASCAL VOC data set; these are sorted by variance, and the 1 million with largest variance are kept. The mean is removed from each patch. These are split into a training set of 900,000 patches and a test set of 100,000 patches. In this experiment, we let the number of measurements range from 30 to 830. On this data set, with enough measurements, alternating minimization with nearest neighbor initialization recovers a smaller mean angle than Phaselift at the same number of measurements; see the supplementary material.

Figure 1: Average recovery angle using alternating projections on random data; each data point is Gaussian i.i.d. The vertical axis measures the average recovery angle between the recovered vector and the true one, over 50 random test points. The horizontal axis is the number of measurements (the size of the analysis dictionary is twice the axis value in this experiment). The leftmost figure is ℓ_1 pooling, the middle ℓ_2, and the right max pooling. The dark blue curve is alternating minimization, and the green curve is alternating minimization with half-rectification; both with random initialization.
(a) MNIST, random filters
(b) MNIST, adapted filters
(c) Image patches, random filters
(d) Image patches, adapted filters
Figure 2: Average recovery angle using alternating projections on MNIST and image patch data. The vertical axis measures the average recovery angle between the recovered vector and the true one, over 50 random test points. The horizontal axis is the number of measurements (the size of the analysis dictionary is twice the axis value in this experiment). The leftmost figure is ℓ_1 pooling, the middle ℓ_2, and the right max pooling. In the top row of each pair of rows the analysis dictionary is Gaussian i.i.d.; in the bottom row of each pair of rows, it is generated by block OMP/K-SVD with nonzero blocks of size 2. The dark blue curve is alternating minimization, and the green curve is alternating minimization with half-rectification; both with random initialization. The magenta and yellow curves are the nearest neighbor regressor described in Section 3.1.3 without and with rectification; and the red and aqua curves are alternating minimization initialized via neighbor regression, without and with rectification. See Section 3.3 for a discussion of the figures.

3.3 Analysis

The experiments show (see figures 1 and 2) that:

  • For every data set, with random initializations and dictionaries, recovery is easier with half rectification before pooling than without (green vs dark blue in figures).

  • ℓ_1, ℓ_2, and ℓ_∞ pooling appear roughly the same difficulty to invert, regardless of algorithm (each column of figures, corresponding to one value of p, is essentially the same).

  • Good initialization improves performance; indeed, alternating minimization with nearest neighbor regression outperforms phaselift and phasecut (which of course do not have the luxury of samples from the prior, as the regressor does). We believe this is of independent interest.

  • Adapted analysis “frames” (with regularization) are easier to invert than random analysis frames, with or without regularization (the bottom row of each pair of graphs vs the top row of each pair in Figure 2).

Each of these conclusions is unfortunately only true up to the optimization method: it may be that a different optimizer would lead to different results. With learned initializations and alternating minimization, recovery results can be better without half-rectification. Note this is only up until the point where the alternating minimization gets better than the learned initialization without any refinement, and is especially true for random dictionaries. The simple interpretation is that the reconstruction step 2 of the alternating minimization just does not have a large enough span with roughly half the entries removed; that is, this is an effect of the optimization, not of a difference between the difficulty of the problems.

4 Conclusion

We have studied conditions under which neural network layers of the form (1) and (2) preserve signal information. As one could expect, recovery from pooling measurements is only guaranteed under large enough redundancy and suitable configurations of the subspaces, which depend upon which p is considered. We have proved conditions which bound the lower Lipschitz constants for these layers, giving quantitative descriptions of how much information they preserve. Furthermore, we have given conditions under which modules with random filters are invertible. We have also given experimental evidence that for both random and adapted modules, it is roughly as easy to invert pooling with p = 1, 2, and ∞; and shown that when given training data, alternating minimization gives state of the art phase recovery with a regressed initialization.

However, we are not anywhere near where we would like to be in understanding these systems, or even the invertibility of the layers of these systems. This work gives little direct help to a practitioner asking the question “how should I design my network?”. In particular, our results just barely touch on the distribution of the data; but the experiments make it clear (see also (Ohlsson et al., 2012)) that knowing more about the data changes the invertibility of the mappings. Moreover, preservation of information needs to be balanced against invariance, and the tension between these is not discussed in this work. Even in the setting of this work, without consideration of the data distribution or the tension with invariance, Proposition 2.4, although tight, is not easy to use, and even though we are able to use Proposition 2.6 to get an invertibility result, it is probably not tight.

This work also shows there is much research to do in the field of algorithmic phase recovery. What are the correct algorithms for inversion, perhaps with half-rectification? How can we best use knowledge of the data distribution for phase recovery, even in the well-studied p = 2 case? Is it possible to guarantee that a well-initialized alternating minimization converges to the correct solution?

References

  • Achan et al. [2003] Kannan Achan, Sam T. Roweis, and Brendan J. Frey. Probabilistic inference of speech signals from phaseless spectrograms. In In Neural Information Processing Systems 16, pages 1393–1400. MIT Press, 2003.
  • Aharon et al. [2006] M. Aharon, M. Elad, and A. Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. Trans. Sig. Proc., 54(11):4311–4322, November 2006. ISSN 1053-587X.
  • Balan and Wang [2013] Radu Balan and Yang Wang. Invertibility and robustness of phaseless reconstruction, 2013.
  • Balan et al. [2006] Radu Balan, Pete Casazza, and Dan Edidin. On signal reconstruction without phase. Applied and Computational Harmonic Analysis, 20(3):345–356, May 2006.
  • Bruna and Mallat [2012] J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE Transactions on PAMI, 2012.
  • Cahill et al. [2013] Jameson Cahill, Peter G. Casazza, Jesse Peterson, and Lindsey Woodland. Phase retrieval by projections, 2013.
  • Candes and Tao [2004] E. Candes and T. Tao. Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies? ArXiv Mathematics e-prints, October 2004.
  • Candes et al. [2013] Emmanuel J. Candes, Thomas Strohmer, and Vladislav Voroninski. Phaselift: Exact and stable signal recovery from magnitude measurements via convex programming. Communications on Pure and Applied Mathematics, 66(8):1241–1274, 2013.
  • Gerchberg and Saxton [1972] R. W. Gerchberg and W. Owen Saxton. A practical algorithm for the determination of the phase from image and diffraction plane pictures. Optik, 35:237–246, 1972.
  • Goodfellow et al. [2013] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout Networks. ArXiv e-prints, February 2013.
  • Hyvärinen and Hoyer [2001] A. Hyvärinen and P. Hoyer. A two-layer sparse coding model learns simple and complex cell receptive fields and topography from natural images. Vision Research, 41(18):2413–2423, August 2001. ISSN 00426989. doi: 10.1016/s0042-6989(01)00114-6.
  • Kavukcuoglu et al. [2009] Koray Kavukcuoglu, Marc’Aurelio Ranzato, Rob Fergus, and Yann LeCun. Learning invariant features through topographic filter maps. In Proc. International Conference on Computer Vision and Pattern Recognition (CVPR’09). IEEE, 2009.
  • Mallat [2012] S. Mallat. Group Invariant Scattering. Communications on Pure and Applied Mathematics, 2012.
  • Ohlsson et al. [2012] Henrik Ohlsson, Allen Y. Yang, Roy Dong, and S. Shankar Sastry. Cprl – an extension of compressive sensing to the phase retrieval problem. In Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Léon Bottou, and Kilian Q. Weinberger, editors, NIPS, pages 1376–1384, 2012.
  • Ohlsson [2013] H. Ohlsson and Y. C. Eldar. On conditions for uniqueness in sparse phase retrieval, 2013.
  • Vedaldi and Fulkerson [2008] A. Vedaldi and B. Fulkerson. VLFeat: An open and portable library of computer vision algorithms. http://www.vlfeat.org/, 2008.
  • Waldspurger et al. [2012] Irène Waldspurger, Alexandre d’Aspremont, and Stéphane Mallat. Phase recovery, maxcut and complex semidefinite programming, 2012.

Appendix A Comparison between phaselift and the various alternating minimization algorithms

Here we give a brief comparison between the phaselift algorithm and the algorithms we use in the main text. Our main goal is to show that the similarities between the p = 1, 2, and ∞ recovery results are not just due to the alternating minimization algorithm performing poorly on all three tasks; however we feel that the quality of the recovery with a regressed initialization is interesting in itself, especially considering that it is much faster than either phaselift or phasecut.

In Figures 3 and 4 we compare phaselift against alternating minimization with a random initialization and alternating minimization with a nearest neighbor/locally linear regressed initialization. Because we are comparing against phasecut, here we only show inversion of ℓ_2 pooling.

In Figure 3, we use random data and a random dictionary. As the data has no structure, we only compare against random initialization, with and without half-rectification. We can see from Figure 3 that in this case, where we do not know a good way to initialize the alternating minimization, alternating minimization is significantly worse than phasecut. On the other hand, recovery after rectified pooling with alternating minimization does almost as well as phasecut.

In the examples where we have training data, shown in figure 4, alternating minimization with the nearest neighbor regressor (red curve) performs significantly better than phasecut (green and blue curves). Of course phasecut does not get the knowledge of the data distribution used to generate the regressor.

Figure 3: Average recovery angle using phaselift and alternating minimization on random data, Gaussian i.i.d. points. The blue curve is phaselift followed by alternating minimization; the green curve is alternating minimization, and the red is alternating minimization on ℓ_2 pooling following half-rectification.
Figure 4: Average recovery angle using phaselift and alternating minimization on the MNIST and patches data sets. Top: MNIST digits, projected to a lower dimension via PCA. Bottom: 16x16 image patches with mean removed. The red curve is alternating minimization with nearest neighbor initialization, the green is alternating minimization initialized by phasecut (this is the recommended usage of phasecut), the blue is phasecut with no alternating minimization, and the aqua is alternating minimization with a random initialization.

Appendix B Proofs of results in Section 2

B.1 Proof of Proposition 2.2

Let us first show that is sufficient to construct an inverse of . Let . By definition, the coordinates of correspond to

which in particular implies that is known to lie in , the subspace generated by . But the restriction is a linear operator, which can be inverted in as long as .

Let us now show that is also necessary. Let us suppose that for some , is such that . It results that there exists such that but . Since is a cone, we can find and small enough such that . It results that which implies that cannot be injective.

Finally, let us prove (9). If are such that , then

If , we have that if and if , . It results that

.

B.2 Proof of Proposition 2.4

The upper Lipschitz bound is obtained by observing that, in dimension ,

It results that

Let us now concentrate on the lower Lipschitz bound. Given , we first consider a rotation on each subspace such that for , which always exists. If now we modify by applying a rotation of the remaining two-dimensional subspace such that and are bisected, one can verify that

which implies, by denoting , that . Since , it results from Proposition 2.1 that

(21)

B.3 Proof of Corollary 2.5

Given , let denote the groups , such that . It results that

On the groups in this set we can apply the same arguments as in Proposition 2.4, and hence find a frame from the corresponding family such that

with and . Then, by following the same arguments used previously, it results from the definition of that

Finally, the upper Lipschitz bound is obtained by noting that

and using the same argument as in Section B.2.

B.4 Proof of Proposition 2.6

Let , and let . Suppose first that . Since , it results that

(22)

by Proposition 2.1 and by definition.

Let us now suppose , and let . It results that , and hence we can split the coordinates into , such that

We shall concentrate in each restriction independently. Since , it results that

(23)

Since by definition