Active learning of deep surrogates for PDEs: Application to metasurface design

08/24/2020 · Raphaël Pestourie et al. · MIT, IBM

Surrogate models for partial differential equations are widely used in the design of metamaterials to rapidly evaluate the behavior of composable components. However, the training cost of accurate surrogates by machine learning can rapidly increase with the number of variables. For photonic-device models, we find that this training becomes especially challenging as design regions grow larger than the optical wavelength. We present an active-learning algorithm that reduces the number of training points by more than an order of magnitude for a neural-network surrogate model of optical-surface components compared to random samples. Results show that the surrogate evaluation is over two orders of magnitude faster than a direct solve, and we demonstrate how this can be exploited to accelerate large-scale engineering optimization.


1 Introduction

Figure 1: (Left) Examples of three-dimensional and two-dimensional unit cells. 3D: a fin unit cell with two parameters and an H-shape unit cell with four parameters. 2D: a multi-layer unit cell with holes, with ten parameters. The unit-cell parameters are illustrated by red arrows. The transmitted field of the unit cell is computed with periodic boundary conditions. When the period is subwavelength, the transmitted field can be summarized by a single complex number, the complex transmission. (Right) Unit cells (with independent sets of parameters) are juxtaposed to form a metasurface, which is optimized to scatter light in a prescribed way. Using the locally periodic approximation and the unit-cell simulations, we can efficiently compute the approximate source equivalent to the metasurface and generate the field anywhere in the far field.

Designing metamaterials or composite materials, in which computational tools select composable components to recreate desired properties that are not present in the constituent materials, is a crucial task for a variety of areas of engineering (acoustics, mechanics, thermal/electronic transport, electromagnetism, and optics) [20]. For example, in metalenses the components are subwavelength scatterers on a surface, but the device diameter is often thousands of wavelengths [21]. Applications of such optical structures include ultra-compact sensors, imaging, and spectroscopy devices used in cell-phone cameras and in medical applications [21]. As metamaterials become larger in scale and as manufacturing capabilities improve, there is a pressing need for scalable computational design tools.

Surrogate models have been used to rapidly evaluate the effect of each metamaterial component during device design [38], and machine learning is an attractive technique for building such models [2, 18, 3, 19]. However, in order to exploit improvements in nano-manufacturing capabilities, components have an increasing number of design parameters, and training the surrogate models (using brute-force numerical simulations) becomes increasingly expensive. The question then becomes: how can we obtain an accurate model from minimal training data? We present a new active-learning (AL) approach, in which training points are selected based on an error measure (Fig. 3), that can reduce the number of training points by more than an order of magnitude for a neural-network (NN) surrogate model of partial differential equations (PDEs). Further, we show how such a surrogate can be exploited to speed up large-scale engineering optimization. In particular, we apply our approach to the design of optical metasurfaces: large, aperiodic, nanopatterned structures that perform functions such as compact lensing [51].

Metasurface design can be performed by breaking the surface into unit cells with a few parameters each (Fig. 1) via domain-decomposition approximations [38, 25], learning a "surrogate" model that predicts the transmitted optical field through each unit as a function of an individual cell's parameters, and optimizing the total field (e.g., the focal-point intensity) as a function of the parameters of every unit cell [38] (Sec. 2). This makes metasurfaces an attractive application for machine learning (Sec. 4) because the surrogate unit-cell model is re-used millions of times during the design process, amortizing the cost of training the model based on expensive "exact" Maxwell solves sampling many unit-cell parameters. For modeling the effect of a few unit-cell parameters, Chebyshev polynomial interpolation can be very effective [38], but it encounters an exponential "curse of dimensionality" with more parameters [7, 48]. In this paper, we find that a NN can be trained with orders of magnitude fewer Maxwell solves for the same accuracy with ten parameters, even for the most challenging case of multi-layer unit cells many wavelengths thick (Sec. 5). Conversely, we show that subwavelength-diameter design regions (considered by several other authors [2, 17, 18, 32, 3, 19]) require orders of magnitude fewer training points for the same number of parameters (Sec. 3), corresponding to the physical intuition that wave propagation through subwavelength regions is effectively determined by a few "homogenized" parameters [15], making these problems effectively low-dimensional. In contrast to typical machine-learning applications, constructing surrogate models for physical models such as Maxwell's equations corresponds to interpolating smooth functions with no noise, and this requires new approaches to training and active learning, as described in Sec. 4. We believe that these methods greatly extend the reach of surrogate models for metamaterial optimization and other applications requiring moderate-accuracy high-dimensional smooth interpolation.

Recent work has demonstrated a wide variety of optical-metasurface design problems and algorithms. Different applications [33], such as holograms [45] and polarization- [4, 36], wavelength- [49], depth-of-field- [6], or incident-angle-dependent [29] functionality, are useful for imaging or spectroscopy [1, 52]. Ref. Pestourie et al. [38] introduced an optimization approach to metasurface design using a Chebyshev-polynomial surrogate model, which was subsequently extended to topology optimization (with many more parameters per cell) using "online" Maxwell solvers [26]. Metasurface modeling can also be composed with signal/image-processing stages for optimized "end-to-end design" [44, 27]. Previous work demonstrated NN surrogate models in optics for a few parameters [28, 34, 41], or with more parameters in deeply subwavelength design regions [2, 17]. As we will show in Sec. 3, deeply subwavelength regions pose a vastly easier problem for NN training than the same number of parameters spread over larger diameters. Another approach involves generative design, again typically for subwavelength [3, 19] or wavelength-scale unit cells [30], in some cases in conjunction with larger-scale models [18, 32, 17]. A generative model is essentially the inverse of a surrogate function: instead of going from geometric parameters to performance, it takes the desired performance as an input and produces the geometric structure; but the mathematical challenge appears to be closely related to that of surrogates.

Active learning (AL) is connected with the field of uncertainty quantification (UQ), because AL consists of iteratively adding the "most uncertain" points to the training set (Sec. 4) and hence requires a measure of uncertainty. Our approach to UQ (Sec. 4) is based on the NN-ensemble idea of Ref. Lakshminarayanan et al. [24], due to its scalability and reliability. There are many other approaches to UQ [16, 12, 43, 47, 10], but Ref. Lakshminarayanan et al. [24] demonstrated performance and scalability advantages of the NN-ensemble approach. In contrast, Bayesian optimization relies on Gaussian processes, which scale poorly (O(N^3), where N is the number of training samples) [31, 5]. To our knowledge, the work presented here is the first to achieve training-time efficiency (we show an order-of-magnitude reduction in sample complexity), design-time efficiency (the actively learned surrogate model is at least two orders of magnitude faster than solving Maxwell's equations), and realistic large-scale designs (due to our optimization framework [38]), all in one package.

2 Metasurfaces and surrogate models

In this section, we present the neural-network surrogate model used in this paper, for which we adopt the metasurface design formulation of Ref. Pestourie et al. [38]. The first step of this approach is to divide the metasurface into unit cells with a few geometric parameters each. For example, Fig. 1(left) shows several possible unit cells: (a) a rectangular pillar ("fin") etched into a 3d dielectric slab [22] (two parameters); (b) an H-shaped hole (four parameters) in a dielectric slab [2]; or (c) the multi-layered 2d unit cell with ten holes of varying widths considered in this paper. As depicted in Fig. 1(right), a metasurface consists of an array of these unit cells. The second step is to solve for the transmitted field (from an incident planewave) independently for each unit cell using approximate boundary conditions [38, 26, 22, 50], in our case a locally periodic approximation (LPA) based on the observation that optimal structures often have parameters that mostly vary slowly from one unit cell to the next [38]. (Other approximate boundary conditions are also possible [25].) For a subwavelength period, the LPA transmitted far field is entirely described by a single number: the complex transmission coefficient t. One can then compute the field anywhere above the metasurface by convolving these approximate transmitted fields with a known Green's function, a near-to-farfield transformation [14]. Finally, any desired function of the transmitted field, such as the focal-point intensity, can be optimized as a function of the geometric parameters of each unit cell [38].
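As an illustration of this near-to-farfield step, the sketch below superposes the unit-cell transmissions as equivalent line sources weighted by the 2d free-space Green's function (i/4)H0^(1)(kr); the function name and the inputs (cell positions xc and complex transmissions t) are ours, for illustration only, not the paper's code.

```python
import numpy as np
from scipy.special import hankel1

def far_field(x, z, xc, t, wavelength):
    """Field at (x, z) from equivalent sources of amplitude t at (xc, 0),
    propagated with the 2d Helmholtz Green's function (i/4) H0^(1)(k r)."""
    k = 2 * np.pi / wavelength
    r = np.hypot(x - xc, z)  # distance from each unit cell to (x, z)
    return np.sum(0.25j * hankel1(0, k * r) * t)  # superposition of sources

# e.g., a focal-spot intensity: I = abs(far_field(0.0, 10e-6, xc, t, 500e-9))**2
```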

In this way, optimizing an optical metasurface is built on top of evaluating the function t(p) (the transmission through a single unit cell as a function of its geometric parameters p) thousands or even millions of times: once for every unit cell, for every step of the optimization process. Although it is possible to solve Maxwell's equations "online" during the optimization process, allowing one to use thousands of parameters per unit cell, doing so requires substantial parallel computing clusters [26]. Alternatively, one can solve Maxwell's equations "offline" (before metasurface optimization) in order to fit a surrogate model

(1)   $\hat{t}(p) \approx t(p),$

which can subsequently be evaluated rapidly during metasurface optimization (perhaps for many different devices). For similar reasons, surrogate (or "reduced-order") models are attractive for any design problem involving a composite of many components that can be modeled separately [3, 19, 35]. The key challenge of the surrogate approach is to increase the number of design parameters, especially in non-subwavelength regions, as discussed in Sec. 3.

In this paper, the surrogate model for each of the real and imaginary parts of the complex transmission t is an ensemble of J independently trained neural networks (NNs) with the same training data but different random "batches" [13] on each training step. Each NN j is trained to output both a prediction μ_j(p) and an error estimate σ_j(p) for every set of parameters p. To obtain these μ_j and σ_j from training data (from brute-force "offline" Maxwell solves), we minimize the Gaussian negative log-likelihood [24]

(2)   $-\log p_{\theta_j}(t \mid p) = \frac{\log \sigma_j^2(p)}{2} + \frac{\left(t(p) - \mu_j(p)\right)^2}{2\sigma_j^2(p)} + \text{const},$

summed over the training points, over the parameters θ_j of NN j. Equation (2) is motivated by problems in which t was sampled from a Gaussian distribution for each p, in which case μ_j and σ_j² could be interpreted as mean and heteroskedastic variance, respectively [24]. Although our exact function is smooth and noise-free, we find that Eq. (2) still works well to estimate the fitting error, as demonstrated in Sec. 4. Each NN is composed of an input layer with 13 nodes (10 nodes for the geometry parameterization and 3 nodes for the one-hot encoding [13] of the three frequencies of interest), three fully connected hidden layers with 256 rectified linear units (ReLU [13]), and one last layer containing one unit with a scaled hyperbolic-tangent activation function [13] (for μ_j) and one unit with a softplus activation function [13] (for σ_j). Given this ensemble of J NNs, the final prediction μ (for the real or imaginary part of t) and its associated error estimate σ² are amalgamated as [24]:

(3)   $\mu(p) = \frac{1}{J} \sum_{j=1}^{J} \mu_j(p)$
(4)   $\sigma^2(p) = \frac{1}{J} \sum_{j=1}^{J} \left[\sigma_j^2(p) + \mu_j^2(p)\right] - \mu^2(p)$
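As a concrete reading of this architecture and loss, here is a minimal PyTorch sketch (not the authors' released code; the class and function names, an ensemble size of 5, a unit tanh scale, and a small variance floor are our assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurrogateNet(nn.Module):
    """One ensemble member: predicts mu (one transmission component) and
    an error estimate sigma^2, as described in Sec. 2."""
    def __init__(self, n_in=13, n_hidden=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
        )
        self.mu_head = nn.Linear(n_hidden, 1)
        self.sigma_head = nn.Linear(n_hidden, 1)

    def forward(self, p):
        h = self.body(p)
        mu = torch.tanh(self.mu_head(h))                 # scaled tanh for mu
        sigma2 = F.softplus(self.sigma_head(h)) + 1e-6   # softplus keeps sigma^2 > 0
        return mu, sigma2

def nll_loss(mu, sigma2, t):
    """Gaussian negative log-likelihood of Eq. (2), up to an additive constant."""
    return (0.5 * torch.log(sigma2) + (t - mu) ** 2 / (2 * sigma2)).mean()

def ensemble_predict(ensemble, p):
    """Pooled prediction and error estimate, Eqs. (3)-(4)."""
    mus, sig2s = zip(*(net(p) for net in ensemble))
    mus, sig2s = torch.stack(mus), torch.stack(sig2s)
    mu = mus.mean(dim=0)
    sigma2 = (sig2s + mus ** 2).mean(dim=0) - mu ** 2
    return mu, sigma2

ensemble = [SurrogateNet() for _ in range(5)]  # ensemble size is illustrative
```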

3 Subwavelength is easier: Effect of diameter

Figure 2: Comparison of baseline training as we shrink the unit cell. Left: for the same number of training points, the fractional error (defined in Methods) on the test set of the small unit cell and the smallest unit cell is, respectively, one and two orders of magnitude better than the error of the main unit cell, which indicates that the parameters are more independent when the design-region diameter is large compared to the wavelength, and training the surrogate model becomes harder. Right: pictures of the unit cells to scale. Each color corresponds to the line color in the plot. For clarity, an inset shows the smallest unit cell enlarged 10 times.

Before performing active learning, we first identify the regime where active learning can be most useful: unit-cell design volumes that are not small compared to the wavelength. Previous work on surrogate models [2, 17, 18, 32, 3, 19] demonstrated NN surrogates (trained with random samples) for unit cells with several parameters. However, these NN models were limited to a regime where the unit-cell degrees of freedom lay within a subwavelength-diameter volume of the unit cell. To illustrate the effect of shrinking the design volume on NN training, we trained our surrogate model for three unit cells (Fig. 2(right)): the main unit cell of this study, which is many wavelengths deep; the small unit cell, a vertically scaled-down version of the main unit cell; and the smallest unit cell, a version of the small unit cell further scaled down (both vertically and horizontally) by a factor of ten. Fig. 2(left) shows that, for the same number of training points, the fractional error (defined in Methods) on the test set of the small unit cell and the smallest unit cell is, respectively, one and two orders of magnitude better than the error of the main unit cell. (The surrogate output is the complex transmission from Sec. 2.) That is, Fig. 2(left) shows that in the subwavelength-design regime, training the surrogate model is far easier than for larger design regions.

Physically, for extremely subwavelength volumes the waves only "see" an averaged effective medium [15], so there are effectively only a few independent design parameters regardless of the number of geometric degrees of freedom. Quantitatively, we find that the Hessian (second-derivative matrix) of the trained surrogate model in the smallest unit-cell case is dominated by only two singular values, consistent with a function that effectively has only two free parameters, with the remaining singular values far smaller in magnitude; for the other two cases, many more training points would be required to accurately resolve the smallest Hessian singular values. A unit cell whose design-volume diameter is large compared to the wavelength is much harder to train, because the dimensionality of the design parameters is effectively much larger.
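As a concrete (hypothetical) version of this diagnostic, the sketch below estimates the Hessian of any scalar model output f at a point p0 by central finite differences and returns its singular values. Finite differences are our choice here because the exact second derivatives of a ReLU network vanish almost everywhere, so curvature is better probed at a finite scale eps; the function name and step size are ours, not the paper's.

```python
import numpy as np

def hessian_svdvals(f, p0, eps=1e-3):
    """Central-difference Hessian of a scalar function f at p0, and its
    singular values; a few dominant values indicate that f effectively
    depends on only a few directions in parameter space (Sec. 3)."""
    n = len(p0)
    H = np.zeros((n, n))
    I = np.eye(n)
    for i in range(n):
        for j in range(n):
            ei, ej = eps * I[i], eps * I[j]
            H[i, j] = (f(p0 + ei + ej) - f(p0 + ei - ej)
                       - f(p0 - ei + ej) + f(p0 - ei - ej)) / (4 * eps**2)
    return np.linalg.svd(H, compute_uv=False)

# Toy check: a function of 10 variables that depends on only 2 directions
f = lambda p: np.sin(p[0] + p[1]) * np.cos(p[0] - p[1])
print(hessian_svdvals(f, np.random.rand(10)))  # 2 dominant values, rest ~0
```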

4 Active-learning algorithm

Figure 3: Diagram of the surrogate model (blue background) and the active-learning algorithm (orange background); the circular arrow signifies that the algorithm iterates T times. The fast evaluation of the surrogate is used both to create the predictions of the surrogate model and to compute the error measure that selects the points to add to the training set.

Here, we present an algorithm for choosing training points that is significantly better at reducing the error than choosing points at random. As described below, we select the training points where the estimated model error σ from Sec. 2 is largest.

The algorithm used to train each of the real and imaginary parts is outlined in Fig. 3 and Algorithm 1. Initially, we choose K uniformly distributed random points to train a first iteration of the model over 50 epochs [13]. Then, given the model at iteration i, we evaluate the error measure σ (which is orders of magnitude faster than the Maxwell solver) at MK points sampled uniformly at random and choose the K points that correspond to the largest σ. We perform the expensive Maxwell solves only for these K points, and add the newly labeled data to the training set. We then train the next iteration of the model with the newly augmented training set. We repeat this process T times.

Essentially, the method works because the error estimate is updated every time the model is retrained with an augmented dataset. In this way, the model tells us where it does poorly: to minimize Eq. (2), it must output a large σ(p) for parameters p where its prediction error is large.

Result: the surrogate model (μ and σ)
P_0 = K points chosen at random;
Solve expensive PDE for each point in P_0;
Create the first iteration of the labeled training set D_0;
Train the ensemble on D_0;
for i = 1:T do
       P~ = MK points chosen at random;
       Compute (cheaply) the error measure σ(p) for each p in P~;
       P_i = the K points in P~ with the highest error measures;
       Solve expensive PDE for each point in P_i to obtain the labels;
       Augment the labeled training set with the new labeled data to form D_i;
       Train the ensemble on D_i;
end for
Algorithm 1 Active learning of the surrogate model
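A compact Python sketch of Algorithm 1 follows. Here maxwell_solve (the expensive PDE labeling), train_ensemble (the ensemble fit of Sec. 2), and ensemble_predict (the pooling of Eqs. (3)-(4)) are placeholders supplied by the caller, and the default K, M, and T are illustrative rather than the values used in the paper.

```python
import torch

def active_learn(maxwell_solve, train_ensemble, ensemble_predict,
                 K=1000, M=4, T=5, dim=10):
    """Algorithm 1 sketch: label K random points, then T times label only
    the K most 'uncertain' of MK cheap candidate points and retrain."""
    P = torch.rand(K, dim)                     # initial uniform random samples
    X, y = P, maxwell_solve(P)                 # expensive "offline" labels
    ensemble = train_ensemble(X, y)            # minimize Eq. (2) for each NN
    for _ in range(T):
        cand = torch.rand(M * K, dim)          # MK cheap candidate points
        _, sigma2 = ensemble_predict(ensemble, cand)     # error measure, Eq. (4)
        idx = torch.topk(sigma2.squeeze(-1), K).indices  # K largest errors
        X = torch.cat([X, cand[idx]])
        y = torch.cat([y, maxwell_solve(cand[idx])])     # label only those K
        ensemble = train_ensemble(X, y)        # retrain on the augmented set
    return ensemble
```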

5 Active-learning results

5.1 Order-of-magnitude reduction in training data

Figure 4: (Left) The lower the desired fractional error, the greater the reduction in training cost compared to the baseline algorithm; the slope of the active-learning fractional error is about 30% steeper than that of the baseline. The active-learning algorithm achieves a reasonable fractional error with twelve times fewer points than the baseline, which corresponds to more than an order of magnitude saving in training data. Chebyshev interpolation (a surrogate for the blue frequency only) does not compete well at these numbers of training points. (Right) The unit cell corresponding to the surrogate model.

We compared the fractional errors of a NN surrogate model trained using uniform random samples with an identical NN trained using an active-learning approach, in both cases modeling the complex transmission of a multi-layer unit cell with ten independent parameters (Fig. 4(right)). With the notation of Sec. 4, the baseline corresponds to T = 0, with K equal to the total number of training points; this corresponds to no active learning at all, because all points are chosen at random. In the case of active learning, we fix K and M and run increasing numbers of iterations T. Although three orders of magnitude on the log-log plot is too small a range to determine whether the apparent linearity indicates a power law, Fig. 4(left) shows that the lower the desired fractional error, the greater the reduction in training cost compared to the baseline algorithm; the slope of the active-learning fractional error is about 30% steeper than that of the baseline. The active-learning algorithm achieves a reasonable fractional error with twelve times fewer points than the baseline, which corresponds to more than an order of magnitude saving in training data (many fewer expensive Maxwell solves). This advantage would presumably increase for a lower error tolerance, though computational costs prohibited us from collecting orders of magnitude more training data to explore this in detail. For comparison and completeness, Fig. 4(left) shows fractional errors using Chebyshev interpolation (for the blue frequency only). Chebyshev interpolation has a much worse fractional error for a similar number of training points, because it suffers from the "curse of dimensionality": the number of training points grows exponentially with the number of variables. The two fractional errors shown are for three and four interpolation points in each of the ten dimensions, i.e., 3^10 = 59,049 and 4^10 = 1,048,576 training points, respectively. In contrast, NNs are known to mitigate the "curse of dimensionality" [11].

Figure 5: (Top) We used the active-learning and the baseline surrogate models to design a multiplexer: an optical device that focuses different wavelengths at different points in space. The actively learned surrogate model results in a design that much more closely matches a numerical validation than the baseline surrogate. This shows that the active-learning surrogate is better at driving the optimization away from regions of inaccuracy. (Bottom) The resulting metastructure for the active-learning surrogate, with 100 unit cells of 10 independent parameters each (one parameter per layer).

5.2 Application to metalens design

We used both surrogate models to design a multiplexer: an optical device that focuses different wavelengths at different points in space. The actively learned surrogate model results in a design that much more closely matches a numerical validation than the baseline surrogate (Fig. 5). As explained in Sec. 2, we replace a Maxwell's-equations solver with a surrogate model to rapidly compute the optical transmission through each unit cell; a similar surrogate approach could be used for optimizing many other complex physical systems. In the case of our two-dimensional unit cell, the surrogate model is two orders of magnitude faster than solving Maxwell's equations with a finite-difference frequency-domain (FDFD) solver [8]. The speed advantage of a surrogate model becomes drastically greater in three dimensions, where PDE solvers are much more costly while the cost of a surrogate model remains the same.

The surrogate model is evaluated millions of times during a metastructure optimization. We used the actively learned surrogate model and the baseline surrogate model (random training samples), in both cases with the same number of training points, and we optimized a ten-layer metastructure with 100 unit cells of period 400 nm (an overall diameter of 40 μm) for a multiplexer application, where three wavelengths (blue, green, and red) are focused on three different focal spots. Our optimization scheme tends to yield results robust to manufacturing errors [38] for two reasons: first, we optimize for the worst case of the three focal-spot intensities, using an epigraph formulation [38]; second, we compute the average intensity from the ensemble of surrogate models, which can be thought of as a Gaussian distribution N(μ, σ²), where μ and σ² are defined in Eq. (3) and Eq. (4), respectively,

(5)   $\mathbb{E}[I(x)] = \mathbb{E}\Bigl[\bigl|\textstyle\sum_c G_c(x)\, t_c\bigr|^2\Bigr], \qquad t_c \sim \mathcal{N}(\mu_c, \sigma_c^2),$

where G_c is a Green's function that generates the far field from the equivalent sources of the metastructure [38] (see Methods for the expansion of this expectation). The resulting optimized structure for the active-learning surrogate is shown in Fig. 5(bottom).

In order to compare the surrogate models, we validate the designs by computing the optimal unit-cell fields directly using a Maxwell solver instead of the surrogate model. This is computationally easy because it only needs to be done once for each of the 100 unit cells, instead of millions of times during the optimization. The focal lines (the field intensity along a line parallel to the two-dimensional metastructure and passing through the focal spots) resulting from the validation are exact solutions to Maxwell's equations under the locally periodic approximation (Sec. 2). Fig. 5(top) shows the resulting focal lines for the active-learning and baseline surrogate models. A multiplexer application requires similar peak intensity for each of the focal spots, which is achieved using worst-case optimization [38]. Fig. 5(top) shows that the actively learned surrogate has smaller error in the focal intensity than the baseline surrogate model. This result shows not only that the active-learning surrogate is more accurate than the baseline surrogate for the same number of training points, but also that the results are more robust using the active-learning surrogate: the optimization does not drive the parameters towards regions of high inaccuracy of the surrogate model. Note that we limited the design to a small overall diameter (100 unit cells) mainly to ease visualization (Fig. 5(bottom)), and we find that this design can already yield good focusing performance despite the small diameter. In earlier work, we have demonstrated that our optimization framework is scalable to designs that are orders of magnitude larger [39].

Previous work, such as Ref. Chen and Gu [9] (a different approach to active learning that does not quantify uncertainty), suggested iteratively adding the optimum design points to the training set (re-optimizing before each new set of training points is added). However, we did not find this approach to be beneficial in our case. In particular, we tried adding the data generated from LPA validations of the optimal design parameters, in addition to the points selected by our active-learning algorithm, at each training iteration, but we found that this actually destabilized the learning and resulted in designs qualitatively worse than the baseline. By exploiting validation points, the active learning of the surrogate seems to explore less of the landscape of the complex-transmission function, and hence leads to poorer designs. Such exploration-exploitation trade-offs are well known in the active-learning literature [16].

6 Concluding remarks

In this paper, we present an active-learning algorithm for composite materials that reduces the number of training points needed for a surrogate model of a physical response by at least one order of magnitude. Simulation time is reduced by at least two orders of magnitude when using the surrogate model instead of solving the partial differential equations numerically. While the domain-decomposition method used here is the locally periodic approximation and the partial differential equations are Maxwell's equations, the proposed approach is directly applicable to other domain-decomposition methods (e.g., the overlapping-domain approximation [25]) and to other partial differential equations or ordinary differential equations [42].

We used an ensemble of NNs for interpolation in a regime that is seldom considered in the machine-learning literature: when the data are obtained from a smooth function rather than from noisy measurements. In this regime, it would be instructive to have a deeper understanding of the relationship between NNs and traditional approximation theory (e.g., with polynomials and rational functions [7, 48]). For example, the likelihood maximization of our method forces σ to go to zero as the prediction μ approaches the training value t (minimizing Eq. (2) over σ at a training point gives σ² = (t − μ)²). Although this allows us to simultaneously obtain a prediction μ and an error estimate σ, there is a drawback: in the interpolation regime (when the surrogate fits the training data exactly), σ would become identically zero on the training points even if the surrogate does not match the exact model away from the training points. In contrast, interpolation methods such as Chebyshev polynomials yield a meaningful measure of the interpolation error even for exact interpolation of the training data [7, 48]. In the future, we plan to separate the prediction model from the model for the error measure using a meta-learner architecture [10], with the expectation that the meta-learner will produce a more accurate error measure and further improve training time. We believe that the method presented in this paper will greatly extend the reach of surrogate-model-based optimization of composite materials and other applications requiring moderate-accuracy high-dimensional interpolation.

Methods

6.1 Training-data computation

The complex transmission coefficients were computed in parallel using an open-source finite-difference frequency-domain solver for the Helmholtz equation [40] on a multi-core Intel Xeon processor. The material properties of the multi-layered unit cells are silica (refractive index 1.45) in the substrate and air (refractive index 1) in the holes and in the background. In the main unit cell, the period of the cell is 400 nm, the height of the ten holes is fixed to 304 nm, their widths vary between 60 nm and 340 nm, and each hole is separated from the next by 140 nm of substrate. In the small unit cell, the period of the cell is 400 nm, the height of the ten holes is 61 nm, their widths vary between 60 nm and 340 nm, and there is no separation between the holes. The smallest unit cell is the small unit cell shrunk ten times (period of 40 nm, ten holes of height 6.1 nm and widths varying between 6 nm and 34 nm).

6.2 Metalens design problem

The complex transmission data are used to compute the field scattered off a multi-layered metastructure with 100 unit cells, as in Ref. Pestourie et al. [38]. The metastructure was designed to focus three wavelengths (blue, green, and red) on three different focal spots. The epigraph formulation of the worst-case optimization and the derivation of the adjoint method used to obtain the gradient are detailed in Ref. Pestourie et al. [38]. Any gradient-based optimization algorithm would work, but we used an algorithm based on conservative convex separable approximations [46]. The average intensity is derived from the distribution of the surrogate model, with t_c ~ N(μ_c, σ_c²), and the computation of the intensity based on the local field as in Ref. Pestourie et al. [38],

where an overbar denotes the complex conjugate, the means and variances of the real and imaginary parts are combined into μ_c = μ_Re,c + iμ_Im,c and σ_c² = σ_Re,c² + σ_Im,c², and the spatial argument x is dropped for clarity. From the linearity of expectation,

(6)   $\mathbb{E}\Bigl[\bigl|\textstyle\sum_c G_c t_c\bigr|^2\Bigr] = \textstyle\sum_{c,c'} G_c \bar{G}_{c'}\, \mathbb{E}[t_c \bar{t}_{c'}]$
(7)   $\phantom{\mathbb{E}[\cdot]} = \bigl|\textstyle\sum_c G_c \mu_c\bigr|^2 + \textstyle\sum_c |G_c|^2 \sigma_c^2,$

where we used that $\mathbb{E}[t_c] = \mu_c$ and $\mathbb{E}[t_c \bar{t}_c] = |\mu_c|^2 + \sigma_c^2$, together with the independence of distinct unit cells.
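For reference, Eqs. (6)-(7) reduce to a few lines of numpy; in this sketch the arrays G (per-cell Green's-function weights at one evaluation point), mu (complex transmission means), and var (per-cell total variances σ_c²) are assumed given, and unit cells are treated as independent:

```python
import numpy as np

def expected_intensity(G, mu, var):
    """E[|sum_c G_c t_c|^2] for independent t_c ~ N(mu_c, var_c), Eqs. (6)-(7)."""
    mean_field = np.sum(G * mu)  # E[field] = sum_c G_c mu_c
    return np.abs(mean_field)**2 + np.sum(np.abs(G)**2 * var)
```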

6.3 Active-learning architecture and training

The ensemble of NNs was implemented using PyTorch [37] on a multi-core Intel Xeon processor. We trained an ensemble of NNs for each of the surrogate models. Each NN is composed of an input layer with 13 nodes (10 nodes for the geometry parameterization and 3 nodes for the one-hot encoding [13] of the three frequencies of interest), three fully connected hidden layers with 256 rectified linear units (ReLU [13]), and one last layer containing one unit with a scaled hyperbolic-tangent activation function [13] (for μ) and one unit with a softplus activation function [13] (for σ). The cost function is the Gaussian log-likelihood of Eq. (2). The mean and the variance of the ensemble are the pooled mean and variance of Eq. (3) and Eq. (4). The optimizer is Adam [23]; after the tenth epoch, the learning rate is decayed by a constant factor. Each iteration of the active-learning algorithm, as well as the baseline, was trained for 50 epochs. The choice of training points is detailed in Sec. 4. The quantitative evaluations were computed using the fractional error on a test set of points chosen uniformly at random. The fractional error between two vectors of complex values u and v is

(8)   $\mathrm{FE}(u, v) = \frac{\|u - v\|_2}{\|v\|_2},$

where $\|\cdot\|_2$ is the L2-norm for complex vectors.
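A direct transcription of Eq. (8), where u is the vector of surrogate predictions and v the vector of exact values on the test set:

```python
import numpy as np

def fractional_error(u, v):
    """Relative L2 error between complex vectors, Eq. (8)."""
    return np.linalg.norm(u - v) / np.linalg.norm(v)
```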

Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

References

  • [1] F. Aieta, M. A. Kats, P. Genevet, and F. Capasso (2015) Multiwavelength achromatic metasurfaces by dispersive phase compensation. Science 347 (6228), pp. 1342–1345.
  • [2] S. An, C. Fowler, B. Zheng, M. Y. Shalaginov, H. Tang, H. Li, L. Zhou, J. Ding, A. M. Agarwal, C. Rivero-Baleine, et al. (2019) A deep learning approach for objective-driven all-dielectric metasurface design. ACS Photonics 6 (12), pp. 3196–3207.
  • [3] S. An, B. Zheng, H. Tang, M. Y. Shalaginov, L. Zhou, H. Li, T. Gu, J. Hu, C. Fowler, and H. Zhang (2019) Generative multi-functional meta-atom and metasurface design networks. arXiv preprint arXiv:1908.04851.
  • [4] A. Arbabi, Y. Horie, M. Bagheri, and A. Faraon (2015) Dielectric metasurfaces for complete control of phase and polarization with subwavelength spatial resolution and high transmission. Nature Nanotechnology 10 (11), pp. 937.
  • [5] L. Bassman, P. Rajak, R. K. Kalia, A. Nakano, F. Sha, J. Sun, D. J. Singh, M. Aykol, P. Huck, K. Persson, et al. (2018) Active learning for accelerated design of layered materials. npj Computational Materials 4 (1), pp. 1–9.
  • [6] E. Bayati, R. Pestourie, S. Colburn, Z. Lin, S. G. Johnson, and A. Majumdar (2020) Inverse designed metalenses with extended depth of focus. ACS Photonics 7 (4), pp. 873–878.
  • [7] J. P. Boyd (2001) Chebyshev and Fourier Spectral Methods. Dover Publications.
  • [8] N. J. Champagne II, J. G. Berryman, and H. M. Buettner (2001) FDFD: a 3D finite-difference frequency-domain code for electromagnetic induction tomography. Journal of Computational Physics 170 (2), pp. 830–848.
  • [9] C. Chen and G. X. Gu (2020) Generative deep neural networks for inverse materials design using backpropagation and active learning. Advanced Science 7 (5), pp. 1902607.
  • [10] T. Chen, J. Navrátil, V. Iyengar, and K. Shanmugam (2019) Confidence scoring using whitebox meta-models with linear classifier probes. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1467–1475.
  • [11] P. Cheridito, A. Jentzen, and F. Rossmannek (2019) Efficient approximation of high-dimensional functions with deep neural networks. arXiv preprint arXiv:1912.04310.
  • [12] Y. Gal and Z. Ghahramani (2016) Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In International Conference on Machine Learning, pp. 1050–1059.
  • [13] I. Goodfellow, Y. Bengio, and A. Courville (2016) Deep Learning. MIT Press.
  • [14] R. F. Harrington (2001) Time-Harmonic Electromagnetic Fields. 2nd edition, Wiley-IEEE. ISBN 047120806X.
  • [15] C. L. Holloway, E. F. Kuester, and A. Dienstfrey (2011) Characterizing metasurfaces/metafilms: the connection between surface susceptibilities and effective material properties. IEEE Antennas and Wireless Propagation Letters 10, pp. 1507–1511.
  • [16] E. Hüllermeier and W. Waegeman (2019) Aleatoric and epistemic uncertainty in machine learning: a tutorial introduction. arXiv preprint arXiv:1910.09457.
  • [17] J. Jiang, M. Chen, and J. A. Fan (2020) Deep neural networks for the evaluation and design of photonic devices. arXiv preprint arXiv:2007.00084.
  • [18] J. Jiang and J. A. Fan (2019) Simulator-based training of generative neural networks for the inverse design of metasurfaces. Nanophotonics 1.
  • [19] J. Jiang, D. Sell, S. Hoyer, J. Hickey, J. Yang, and J. A. Fan (2019) Free-form diffractive metagrating design based on generative adversarial networks. ACS Nano 13 (8), pp. 8872–8878.
  • [20] M. Kadic, G. W. Milton, M. van Hecke, and M. Wegener (2019) 3D metamaterials. Nature Reviews Physics 1 (3), pp. 198–210.
  • [21] M. Khorasaninejad and F. Capasso (2017) Metalenses: versatile multifunctional photonic components. Science 358 (6367).
  • [22] M. Khorasaninejad, W. T. Chen, R. C. Devlin, J. Oh, A. Y. Zhu, and F. Capasso (2016) Metalenses at visible wavelengths: diffraction-limited focusing and subwavelength resolution imaging. Science 352 (6290), pp. 1190–1194.
  • [23] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • [24] B. Lakshminarayanan, A. Pritzel, and C. Blundell (2017) Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, pp. 6402–6413.
  • [25] Z. Lin and S. G. Johnson (2019) Overlapping domains for topology optimization of large-area metasurfaces. Optics Express 27 (22), pp. 32445–32453.
  • [26] Z. Lin, V. Liu, R. Pestourie, and S. G. Johnson (2019) Topology optimization of freeform large-area metasurfaces. Optics Express 27 (11), pp. 15765–15775.
  • [27] Z. Lin, C. Roques-Carmes, R. Pestourie, M. Soljačić, A. Majumdar, and S. G. Johnson (2020) End-to-end inverse design for inverse scattering via freeform metastructures. arXiv preprint arXiv:2006.09145.
  • [28] D. Liu, Y. Tan, E. Khoram, and Z. Yu (2018) Training deep neural networks for the inverse design of nanophotonic structures. ACS Photonics 5 (4), pp. 1365–1369.
  • [29] W. Liu, Z. Li, H. Cheng, C. Tang, J. Li, S. Zhang, S. Chen, and J. Tian (2018) Metasurface enabled wide-angle Fourier lens. Advanced Materials 30 (23), pp. 1706368.
  • [30] Z. Liu, Z. Zhu, and W. Cai (2020) Topological encoding method for data-driven photonics inverse design. Optics Express 28 (4), pp. 4825–4835.
  • [31] T. Lookman, P. V. Balachandran, D. Xue, and R. Yuan (2019) Active learning in materials science with emphasis on adaptive sampling using uncertainties for targeted design. npj Computational Materials 5 (1), pp. 1–17.
  • [32] W. Ma, F. Cheng, and Y. Liu (2018) Deep-learning-enabled on-demand design of chiral metamaterials. ACS Nano 12 (6), pp. 6326–6334.
  • [33] E. Maguid, I. Yulevich, D. Veksler, V. Kleiner, M. L. Brongersma, and E. Hasman (2016) Photonic spin-controlled multifunctional shared-aperture antenna array. Science 352 (6290), pp. 1202–1206.
  • [34] I. Malkiel, M. Mrejen, A. Nagler, U. Arieli, L. Wolf, and H. Suchowski (2018) Plasmonic nanostructure design and characterization via deep learning. Light: Science & Applications 7 (1), pp. 1–8.
  • [35] M. P. Mignolet, A. Przekop, S. A. Rizzi, and S. M. Spottswood (2013) A review of indirect/non-intrusive reduced order modeling of nonlinear geometric structures. Journal of Sound and Vibration 332 (10), pp. 2437–2460.
  • [36] J. B. Mueller, N. A. Rubin, R. C. Devlin, B. Groever, and F. Capasso (2017) Metasurface polarization optics: independent phase control of arbitrary orthogonal states of polarization. Physical Review Letters 118 (11), pp. 113901.
  • [37] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in PyTorch. In NIPS-W.
  • [38] R. Pestourie, C. Pérez-Arancibia, Z. Lin, W. Shin, F. Capasso, and S. G. Johnson (2018) Inverse design of large-area metasurfaces. Optics Express 26 (26), pp. 33732–33747.
  • [39] R. Pestourie (2020) Assume Your Neighbor is Your Equal: Inverse Design in Nanophotonics. Harvard University.
  • [40] R. Pestourie (2020) FDFD Local Field. GitHub. https://github.com/rpestourie/fdfd_local_field
  • [41] J. Peurifoy, Y. Shen, L. Jing, Y. Yang, F. Cano-Renteria, B. G. DeLacy, J. D. Joannopoulos, M. Tegmark, and M. Soljačić (2018) Nanophotonic particle simulation and inverse design using artificial neural networks. Science Advances 4 (6), pp. eaar4206.
  • [42] C. Rackauckas, Y. Ma, J. Martensen, C. Warner, K. Zubov, R. Supekar, D. Skinner, and A. Ramadhan (2020) Universal differential equations for scientific machine learning. arXiv preprint arXiv:2001.04385.
  • [43] B. Settles (2009) Active learning literature survey. Technical report, University of Wisconsin–Madison Department of Computer Sciences.
  • [44] V. Sitzmann, S. Diamond, Y. Peng, X. Dun, S. Boyd, W. Heidrich, F. Heide, and G. Wetzstein (2018) End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging. ACM Transactions on Graphics (TOG) 37 (4), pp. 1–13.
  • [45] J. Sung, G. Lee, C. Choi, J. Hong, and B. Lee (2019) Single-layer bifacial metasurface: full-space visible light control. Advanced Optical Materials 7 (8), pp. 1801748.
  • [46] K. Svanberg (2002) A class of globally convergent optimization methods based on conservative convex separable approximations. SIAM Journal on Optimization 12 (2), pp. 555–573.
  • [47] J. J. Thiagarajan, P. Sattigeri, D. Rajan, and B. Venkatesh (2020) Calibrating healthcare AI: towards reliable and interpretable deep predictive models. arXiv preprint arXiv:2004.14480.
  • [48] L. N. Trefethen (2019) Approximation Theory and Approximation Practice. Vol. 164, SIAM.
  • [49] W. Ye, F. Zeuner, X. Li, B. Reineke, S. He, C. Qiu, J. Liu, Y. Wang, S. Zhang, and T. Zentgraf (2016) Spin and wavelength multiplexed nonlinear metasurface holography. Nature Communications 7, pp. 11930.
  • [50] N. Yu and F. Capasso (2014) Flat optics with designer metasurfaces. Nature Materials 13 (2), pp. 139–150.
  • [51] N. Yu, P. Genevet, M. A. Kats, F. Aieta, J. Tetienne, F. Capasso, and Z. Gaburro (2011) Light propagation with phase discontinuities: generalized laws of reflection and refraction. Science 334 (6054), pp. 333–337.
  • [52] Y. Zhou, I. I. Kravchenko, H. Wang, J. R. Nolen, G. Gu, and J. Valentine (2018) Multilayer noninteracting dielectric metasurfaces for multiwavelength metaoptics. Nano Letters 18 (12), pp. 7529–7537.

Acknowledgements

This work was supported in part by IBM Research, the MIT-IBM Watson AI Laboratory, the U.S. Army Research Office through the Institute for Soldier Nanotechnologies (under award W911NF-13-D-0001), and by the PAPPA program of DARPA MTO (under award HR0011-20-90016).

Author contributions

RP, YM, PD, and SGJ designed the study, contributed to the machine-learning approach, and analyzed results; RP led the code development, software implementation, and numerical experiments; RP and SGJ were responsible for the physical ideas and interpretation; TVN assisted in designing and implementing the training. All authors contributed to the algorithmic ideas and writing.

Competing interests

The authors declare no competing financial or non-financial interests.