Designing metamaterials or composite materials, in which computational tools select composable components to recreate desired properties not present in the constituent materials, is a crucial task in many areas of engineering (acoustics, mechanics, thermal/electronic transport, electromagnetism, and optics). For example, in metalenses the components are subwavelength scatterers on a surface, but the device diameter spans many wavelengths. Applications of such optical structures include ultra-compact sensors and the imaging and spectroscopy devices used in cell-phone cameras and in medical applications. As metamaterials become larger in scale and as manufacturing capabilities improve, there is a pressing need for scalable computational design tools.
In this work, we use surrogate models to rapidly evaluate the effect of each metamaterial component during device design, and machine learning is an attractive technique for building such models [2, 18, 3, 19]. However, in order to exploit improvements in nano-manufacturing capabilities, components have an increasing number of design parameters, and training the surrogate models (using brute-force numerical simulations) becomes increasingly expensive. The question then becomes: how can we obtain an accurate model from minimal training data? We present a new active-learning (AL) approach—in which training points are selected based on an error measure (Fig. 3)—that can reduce the number of training points by more than an order of magnitude for a neural-network (NN) surrogate model of partial differential equations (PDEs). Further, we show how such a surrogate can be exploited to speed up large-scale engineering optimization. In particular, we apply our approach to the design of optical metasurfaces: large, aperiodic, nanopatterned structures, many wavelengths in diameter, that perform functions such as compact lensing.
Metasurface design can be performed by breaking the surface into unit cells with a few parameters each (Fig. 1) via domain-decomposition approximations [38, 25], learning a “surrogate” model that predicts the transmitted optical field through each unit cell as a function of that cell’s parameters, and optimizing the total field (e.g. the focal intensity) as a function of the parameters of every unit cell (Sec. 2). This makes metasurfaces an attractive application for machine learning (Sec. 4) because the surrogate unit-cell model is re-used millions of times during the design process, amortizing the cost of training the model on expensive “exact” Maxwell solves that sample many unit-cell parameters. For modeling the effect of a few unit-cell parameters, Chebyshev polynomial interpolation can be very effective, but it encounters an exponential “curse of dimensionality” as the number of parameters grows [7, 48]. In this paper, we find that a NN can be trained with orders of magnitude fewer Maxwell solves for the same accuracy with 10 parameters, even for the most challenging case of multi-layer unit cells many wavelengths thick (Sec. 5). In contrast, we show that subwavelength-diameter design regions (considered by several other authors [2, 17, 18, 32, 3, 19]) require orders of magnitude fewer training points for the same number of parameters (Sec. 3), corresponding to the physical intuition that wave propagation through subwavelength regions is effectively determined by a few “homogenized” parameters, making the problems effectively low-dimensional. In contrast to typical machine-learning applications, constructing surrogate models for physical models such as Maxwell’s equations corresponds to interpolating smooth functions with no noise, and this requires new approaches to training and active learning, as described in Sec. 4. We believe that these methods greatly extend the reach of surrogate models for metamaterial optimization and other applications requiring moderate-accuracy high-dimensional smooth interpolation.
Recent work has demonstrated a wide variety of optical-metasurface design problems and algorithms. Different functionalities, such as holograms and polarization- [4, 36], wavelength-, depth-of-field-, or incident-angle-dependent behavior, are useful for imaging or spectroscopy [1, 52]. Ref. Pestourie et al. introduced an optimization approach to metasurface design using a Chebyshev-polynomial surrogate model, which was subsequently extended to topology optimization (with many more parameters per cell) using “online” Maxwell solvers. Metasurface modeling can also be composed with signal/image-processing stages for optimized “end-to-end design” [44, 27]. Previous work demonstrated NN surrogate models in optics for a few parameters [28, 34, 41], or with more parameters in deeply subwavelength design regions [2, 17]. As we will show in Sec. 3, deeply subwavelength regions pose a vastly easier problem for NN training than parameters spread over larger diameters. Another approach involves generative design, again typically for subwavelength [3, 19] or wavelength-scale unit cells, in some cases in conjunction with larger-scale models [18, 32, 17]. A generative model is essentially the inverse of a surrogate function: instead of going from geometric parameters to performance, it takes the desired performance as an input and produces the geometric structure; the mathematical challenge appears to be closely related to that of surrogates.
Active learning (AL) is connected with the field of uncertainty quantification (UQ), because AL consists of iteratively adding the “most uncertain” points to the training set (Sec. 4) and hence requires a measure of uncertainty. Our approach to UQ (Sec. 4) is based on the NN-ensemble idea of Ref. Lakshminarayanan et al. due to its scalability and reliability. There are many other approaches to UQ [16, 12, 43, 47, 10], but Ref. Lakshminarayanan et al. demonstrated performance and scalability advantages of the NN-ensemble approach. In contrast, Bayesian optimization relies on Gaussian processes that scale poorly ($O(n^3)$, where $n$ is the number of training samples) [31, 5]. To our knowledge, the work presented here is the first to achieve training-time efficiency (we show an order-of-magnitude reduction in sample complexity), design-time efficiency (the actively learned surrogate model is at least two orders of magnitude faster than solving Maxwell’s equations), and realistic large-scale designs (due to our optimization framework), all in one package.
2 Metasurfaces and surrogate models
In this section, we present the neural-network surrogate model used in this paper, for which we adopt the metasurface design formulation from Ref. Pestourie et al. The first step of this approach is to divide the metasurface into unit cells with a few geometric parameters each. For example, Fig. 1(left) shows several possible unit cells: (a) a rectangular pillar (“fin”) etched into a 3d dielectric slab (two parameters); (b) an H-shaped hole in a dielectric slab (four parameters); or (c) the multi-layered 2d unit cell with ten holes of varying widths considered in this paper. As depicted in Fig. 1(right), a metasurface consists of an array of these unit cells. The second step is to solve for the transmitted field (from an incident planewave) independently for each unit cell using approximate boundary conditions [38, 26, 22, 50], in our case a locally periodic approximation (LPA) based on the observation that optimal structures often have parameters that vary slowly from one unit cell to the next. (Other approximate boundary conditions are also possible.) For a subwavelength period, the LPA transmitted far field is entirely described by a single number: the complex transmission coefficient. One can then compute the field anywhere above the metasurface by convolving these approximate transmitted fields with a known Green’s function, a near-to-farfield transformation. Finally, any desired function of the transmitted field, such as the focal-point intensity, can be optimized as a function of the geometric parameters of each unit cell.
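As a rough illustration of the last step, the following sketch sums per-unit-cell transmitted fields into a focal-point intensity. The function name, toy geometry, and the scalar far-field Green's-function approximation are all assumptions for illustration; this is not the paper's actual near-to-farfield transformation.

```python
import numpy as np

def focal_intensity(t, xs, focal_x, focal_y, wavelength):
    """Toy sketch: intensity at a focal point from per-unit-cell complex
    transmission coefficients t[n] located at positions xs[n], using a
    scalar far-field Green's-function approximation ~ exp(ikr)/sqrt(r)."""
    k = 2 * np.pi / wavelength
    # distance from each unit cell to the focal point
    r = np.sqrt((xs - focal_x) ** 2 + focal_y ** 2)
    # superpose the approximate transmitted fields at the focus
    field = np.sum(t * np.exp(1j * k * r) / np.sqrt(r))
    return np.abs(field) ** 2

# toy usage: 11 unit cells with uniform transmission, focus on axis
xs = np.linspace(-2.0, 2.0, 11)
I = focal_intensity(np.ones(11, dtype=complex), xs, 0.0, 10.0, 0.5)
```

The key point is that the expensive physics is confined to computing `t` per unit cell; the superposition itself is cheap and differentiable, which is what makes large-scale optimization over all unit-cell parameters tractable.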
In this way, optimizing an optical metasurface is built on top of evaluating the function mapping a single unit cell’s geometric parameters to its transmission—thousands or even millions of times: once for every unit cell, for every step of the optimization process. Although it is possible to solve Maxwell’s equations “online” during the optimization process, allowing one to use thousands of parameters per unit cell, this requires substantial parallel computing clusters. Alternatively, one can solve Maxwell’s equations “offline” (before metasurface optimization) in order to fit a surrogate model, which can subsequently be evaluated rapidly during metasurface optimization (perhaps for many different devices). For similar reasons, surrogate (or “reduced-order”) models are attractive for any design problem involving a composite of many components that can be modeled separately [3, 19, 35]. The key challenge of the surrogate approach is to increase the number of design parameters, especially in non-subwavelength regions, as discussed in Sec. 3.
In this paper, the surrogate model for each of the real and imaginary parts of the complex transmission $t$ is an ensemble of $J$ independent neural networks (NNs) with the same training data but different random “batches” on each training step. Each NN $j$ is trained to output a prediction $\mu_j(p)$ and an error estimate $\sigma_j(p)$ for every set of parameters $p$. To obtain these $\mu_j$ and $\sigma_j$ from training data $(p_i, t_i)$ (from brute-force “offline” Maxwell solves), we minimize the Gaussian negative log-likelihood

$$-\log\ell_j \;=\; \sum_i \left[ \frac{\log \sigma_j^2(p_i)}{2} + \frac{\bigl(t_i-\mu_j(p_i)\bigr)^2}{2\,\sigma_j^2(p_i)} \right] \qquad (2)$$

over the parameters of NN $j$. Equation (2) is motivated by problems in which $t$ was sampled from a Gaussian distribution for each $p$, in which case $\mu_j$ and $\sigma_j^2$ could be interpreted as mean and heteroskedastic variance, respectively. Although our exact function is smooth and noise-free, we find that Eq. (2) still works well to estimate the fitting error, as demonstrated in Sec. 4.
Each NN is composed of an input layer with 13 nodes (10 nodes for the geometry parameterization and 3 nodes for the one-hot encoding [13] of the three frequencies of interest), three fully-connected hidden layers of 256 rectified linear units (ReLU), and a last layer containing one unit with a scaled hyperbolic-tangent activation function (for $\mu_j$) and one unit with a softplus activation function (for $\sigma_j$). Given this ensemble of $J$ NNs, the final prediction (for the real or imaginary part of $t$) and its associated error estimate are amalgamated as

$$\bar\mu(p) = \frac{1}{J}\sum_{j=1}^{J}\mu_j(p), \qquad (3)$$

$$\bar\sigma^2(p) = \frac{1}{J}\sum_{j=1}^{J}\left[\sigma_j^2(p)+\mu_j^2(p)\right]-\bar\mu^2(p). \qquad (4)$$
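As a concrete sketch of the per-point training loss and the ensemble pooling, in NumPy rather than the paper's PyTorch setup and with hypothetical function names:

```python
import numpy as np

def gaussian_nll(mu, log_sigma2, t):
    """Per-point Gaussian negative log-likelihood (up to an additive
    constant). Predicting log(sigma^2) keeps the variance positive."""
    sigma2 = np.exp(log_sigma2)
    return 0.5 * (log_sigma2 + (t - mu) ** 2 / sigma2)

def ensemble_pool(mus, sigma2s):
    """Pool per-network predictions (arrays whose leading axis indexes
    the J ensemble members) into a single mean and variance, treating
    the ensemble as a uniform mixture of Gaussians."""
    mu_bar = np.mean(mus, axis=0)
    sigma2_bar = np.mean(sigma2s + mus ** 2, axis=0) - mu_bar ** 2
    return mu_bar, sigma2_bar

# a perfect prediction (mu == t) leaves only the log-variance term,
# so training pushes sigma toward zero wherever the fit is exact
loss = gaussian_nll(mu=1.0, log_sigma2=0.0, t=1.0)  # 0.0 here
```

The pooled variance combines each member's own error estimate with the disagreement between members, so the ensemble reports high uncertainty wherever either source is large.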
3 Subwavelength is easier: Effect of diameter
Before performing active learning, we first identify the regime where active learning can be most useful: unit-cell design volumes that are not small compared to the wavelength. Previous work on surrogate models [2, 17, 18, 32, 3, 19] demonstrated NN surrogates (trained with random samples) for unit cells with several parameters. However, these NN models were limited to a regime where the unit-cell degrees of freedom lay within a subwavelength-diameter volume of the unit cell. To illustrate the effect of shrinking design volume on NN training, we trained our surrogate model for three unit cells (Fig. 2(right)): the main unit cell of this study is several microns deep, the small unit cell is a vertically scaled-down version of the normal unit cell only 610 nm deep, and the smallest unit cell is a version of the small unit cell further scaled down (both vertically and horizontally) by a factor of ten. Fig. 2(left) shows that, for the same number of training points, the fractional errors (defined in Methods) on the test set of the small unit cell and the smallest unit cell are, respectively, one and two orders of magnitude better than the error of the main unit cell once enough training points are used. (The surrogate output is the complex transmission from Sec. 2.) That is, Fig. 2(left) shows that in the subwavelength-design regime, training the surrogate model is far easier than for larger design regions.
Physically, for extremely sub-wavelength volumes the waves only “see” an averaged effective medium, so there are effectively only a few independent design parameters regardless of the number of geometric degrees of freedom. Quantitatively, we find that the Hessian (second-derivative matrix) of the trained surrogate model in the smallest unit-cell case is dominated by only two singular values—consistent with a function that effectively has only two free parameters—with the other singular values being far smaller in magnitude; for the other two cases, many more training points would be required to accurately resolve the smallest Hessian singular values. A unit cell with a large design-volume diameter is much harder to train, because the dimensionality of the design parameters is effectively much larger.
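This effective-dimensionality diagnostic can be sketched numerically: build a finite-difference Hessian of a scalar function and count its dominant singular values. The toy function below (parameters entering only through two averages, mimicking an effective medium) is an illustrative assumption, not the paper's surrogate.

```python
import numpy as np

def hessian_fd(f, p, h=1e-4):
    """Central finite-difference Hessian of a scalar function f at p."""
    d = len(p)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            pp = p.copy(); pp[i] += h; pp[j] += h
            pm = p.copy(); pm[i] += h; pm[j] -= h
            mp = p.copy(); mp[i] -= h; mp[j] += h
            mm = p.copy(); mm[i] -= h; mm[j] -= h
            H[i, j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4 * h * h)
    return H

# toy "effective medium": 10 parameters enter only through two averages
f = lambda p: np.mean(p) ** 2 + np.mean(p[:5]) ** 2
H = hessian_fd(f, np.zeros(10))
s = np.linalg.svd(H, compute_uv=False)
# count singular values that are not negligible vs. the largest one
n_eff = int(np.sum(s > 1e-3 * s[0]))
```

Here `n_eff` recovers 2, matching the two free combinations the function actually depends on; a surrogate whose Hessian spectrum decays this sharply is effectively low-dimensional no matter how many nominal parameters it has.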
4 Active-learning algorithm
Here, we present an algorithm for choosing training points that is significantly better at reducing the error than choosing points at random. As described below, we select the training points where the estimated model error is largest, using the error estimate from Sec. 2.
We first train the ensemble on an initial set of randomly chosen points over 50 epochs. Then, given the model at iteration $i$, we evaluate the error estimate $\bar\sigma$ (which is orders of magnitude faster than the Maxwell solver) at $M$ points sampled uniformly at random and choose the $K$ points with the largest $\bar\sigma$. We perform the expensive Maxwell solves only for these $K$ points, add the newly labeled data to the training set, and retrain with the augmented training set. We repeat this process $T$ times.
Essentially, the method works because the error estimate is updated every time the model is retrained with an augmented dataset. In this way, the model tells us where it does poorly: to minimize Eq. (2), it must output a large $\bar\sigma$ for parameters where its prediction would otherwise be bad.
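The loop above can be sketched as follows. The helper names (`maxwell_solve`, `train`, `predict_sigma`, `sample_params`) and the default values of K, M, and T are assumptions for illustration, standing in for the paper's solver, ensemble training, and error estimate.

```python
import numpy as np

def active_learn(maxwell_solve, train, predict_sigma, sample_params,
                 n_init=100, K=50, M=2000, T=5):
    """Error-driven active learning: start from random points, then
    repeatedly label the K candidates (out of M random ones) with the
    largest predicted error estimate sigma-bar, and retrain."""
    P = sample_params(n_init)                    # initial random parameters
    t = np.array([maxwell_solve(p) for p in P])  # expensive labels
    model = train(P, t)
    for _ in range(T):
        cand = sample_params(M)                  # cheap candidate pool
        sigma = predict_sigma(model, cand)       # fast surrogate error estimate
        worst = np.argsort(sigma)[-K:]           # most uncertain candidates
        P_new = cand[worst]
        t_new = np.array([maxwell_solve(p) for p in P_new])
        P = np.concatenate([P, P_new])
        t = np.concatenate([t, t_new])
        model = train(P, t)                      # retrain on augmented set
    return model, P, t
```

Only the `K * T` selected points incur a Maxwell solve; the `M * T` candidate evaluations use the surrogate's own cheap error estimate, which is what makes the selection step essentially free.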
5 Active-learning results
5.1 Order-of-magnitude reduction in training data
We compared the fractional errors of a NN surrogate model trained using uniform random samples with an identical NN trained using an active-learning approach, in both cases modeling the complex transmission of a multi-layer unit cell with ten independent parameters (Fig. 4(right)). With the notation of Sec. 4, the baseline corresponds to a single round of training in which the initial random sample contains the total number of training points—no active learning at all, because all points are chosen at random. In the case of active learning, batches of the most uncertain points were selected from a much larger pool of random candidates at each iteration. Although three orders of magnitude on the log-log plot is too small to determine whether the apparent linearity indicates a power law, Fig. 4(left) shows that the lower the desired fractional error, the greater the reduction in training cost compared to the baseline algorithm; the slope of the active-learning fractional error is about 30% steeper than that of the baseline. The active-learning algorithm achieves a reasonable fractional error with twelve times fewer points than the baseline, which corresponds to more than an order of magnitude of savings in training data (far fewer expensive Maxwell solves). This advantage would presumably increase for a lower error tolerance, though computational costs prohibited us from collecting orders of magnitude more training data to explore this in detail. For comparison and completeness, Fig. 4(left) shows fractional errors using Chebyshev interpolation (for the blue frequency only). Chebyshev interpolation has a much worse fractional error for a similar number of training points, because it suffers from the “curse of dimensionality”: the number of training points grows exponentially with the number of variables. The two fractional errors shown are for three and four interpolation points in each of the ten dimensions, respectively. In contrast, NNs are known to mitigate the “curse of dimensionality”.
5.2 Application to metalens design
We used both surrogate models to design a multiplexer—an optical device that focuses different wavelengths at different points in space. The actively learned surrogate model results in a design that much more closely matches a numerical validation than the baseline surrogate (Fig. 5). As explained in Sec. 2, we replace a Maxwell’s-equations solver with a surrogate model to rapidly compute the optical transmission through each unit cell; a similar surrogate approach could be used for optimizing many other complex physical systems. In the case of our two-dimensional unit cell, the surrogate model is two orders of magnitude faster than solving Maxwell’s equations with a finite-difference frequency-domain (FDFD) solver. The speed advantage of a surrogate model becomes drastically greater in three dimensions, where PDE solvers are much more costly while the cost of a surrogate model remains the same.
The surrogate model is evaluated millions of times during a metastructure optimization. We used the actively learned surrogate model and the baseline surrogate model (random training samples), in both cases with the same number of training points, and we optimized a ten-layer metastructure with unit cells of period 400 nm for a multiplexer application—where three wavelengths (blue, green, and red) are focused on three different focal spots. Our optimization scheme tends to yield results robust to manufacturing errors for two reasons: first, we optimize for the worst case of the three focal-spot intensities, using an epigraph formulation; second, we compute the average intensity from an ensemble of surrogate models that can be thought of as a Gaussian distribution with mean $\bar\mu$ and variance $\bar\sigma^2$, defined in Eq. (3) and Eq. (4), respectively.
In order to compare the surrogate models, we validate the designs by computing the optimal unit-cell fields directly using a Maxwell solver instead of the surrogate model. This is computationally easy because it only needs to be done once for each unit cell, instead of millions of times during the optimization. The focal lines—the field intensity along a line parallel to the two-dimensional metastructure and passing through the focal spots—resulting from this validation are exact solutions to Maxwell’s equations under the locally periodic approximation (Sec. 2). Fig. 5(top) shows the resulting focal lines for the active-learning and baseline surrogate models. A multiplexer application requires similar peak intensity for each of the focal spots, which is achieved using worst-case optimization. Fig. 5(top) shows that the actively learned surrogate has smaller error in the focal intensity than the baseline surrogate model. This shows not only that the active-learning surrogate is more accurate than the baseline surrogate for the same number of training points, but also that its results are more robust—the optimization does not drive the parameters towards regions where the surrogate model is highly inaccurate. Note that we limited the design to a small overall diameter mainly to ease visualization (Fig. 5(bottom)), and we find that this design already yields good focusing performance despite the small diameter. In earlier work, we have demonstrated that our optimization framework is scalable to designs that are orders of magnitude larger.
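The worst-case optimization mentioned above can be written in standard epigraph form. This is a sketch using the hypothetical notation $I_k(p)$ for the focal intensity of wavelength $k$ given all unit-cell parameters $p$:

```latex
\max_{p,\,s} \; s
\quad \text{subject to} \quad
I_k(p) \ge s, \qquad k \in \{\text{blue},\ \text{green},\ \text{red}\}.
```

The auxiliary variable $s$ turns the non-differentiable objective $\min_k I_k(p)$ into smooth constraints, so standard gradient-based (adjoint) optimization applies, and at the optimum the three focal intensities are pushed toward a common value.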
Previous work, such as Ref. Chen and Gu—in a different approach to active learning that does not quantify uncertainty—suggested iteratively adding the optimal design points to the training set (re-optimizing before each new set of training points is added). However, we did not find this approach to be beneficial in our case. In particular, we tried adding the data generated from LPA validations of the optimal design parameters, in addition to the points selected by our active-learning algorithm, at each training iteration, but we found that this actually destabilized the learning and resulted in designs qualitatively worse than the baseline. By exploiting validation points, the active learning of the surrogate seems to explore less of the landscape of the complex transmission function, and hence leads to poorer designs. Such exploration–exploitation trade-offs are well known in the active-learning literature.
6 Concluding remarks
In this paper, we present an active-learning algorithm for composite materials that reduces the training time of a surrogate model for a physical response by at least one order of magnitude. Simulation time is reduced by at least two orders of magnitude using the surrogate model compared to solving the partial differential equations numerically. While the domain-decomposition method used here is the locally periodic approximation and the partial differential equations are the Maxwell equations, the proposed approach is directly applicable to other domain-decomposition methods (e.g. an overlapping-domain approximation) and to other partial differential equations or ordinary differential equations.
We used an ensemble of NNs for interpolation in a regime that is seldom considered in the machine-learning literature: when the data are obtained from a smooth function rather than from noisy measurements. In this regime, it would be instructive to have a deeper understanding of the relationship between NNs and traditional approximation theory (e.g. with polynomials and rational functions [7, 48]). For example, the likelihood maximization of our method forces the error estimate $\sigma$ to go to zero wherever the prediction matches the training data exactly. Although this allows us to simultaneously obtain a prediction $\mu$ and an error estimate $\sigma$, there is a drawback: in the interpolation regime (when the surrogate is fully determined), $\sigma$ would become identically zero even if the surrogate does not match the exact model away from the training points. In contrast, interpolation methods such as Chebyshev polynomials yield a meaningful measure of the interpolation error even for exact interpolation of the training data [7, 48]. In the future, we plan to separate the prediction model from the error-measure model using a meta-learner architecture, with the expectation that the meta-learner will produce a more accurate error measure and further improve training time. We believe that the method presented in this paper will greatly extend the reach of surrogate-model-based optimization of composite materials and other applications requiring moderate-accuracy high-dimensional interpolation.
6.1 Training-data computation
The complex transmission coefficients were computed in parallel using an open-source finite-difference frequency-domain solver for the Helmholtz equation on an Intel Xeon processor. The material properties of the multi-layered unit cells are silica (refractive index 1.45) in the substrate and air (refractive index 1) in the holes and in the background. In the normal unit cell, the period is 400 nm, the height of each of the ten holes is fixed at 304 nm and their widths vary between 60 nm and 340 nm, and each hole is separated by 140 nm of substrate. In the small unit cell, the period is 400 nm, the height of each of the ten holes is 61 nm, their widths vary between 60 nm and 340 nm, and there is no separation between the holes. The smallest unit cell is the small unit cell shrunk ten times (period of 40 nm, ten holes of height 6.1 nm and widths varying between 6 nm and 34 nm).
6.2 Metalens design problem
The complex transmission data are used to compute the scattered field off a multi-layered metastructure as in Ref. Pestourie et al. The metastructure was designed to focus three wavelengths (blue, green, and red) on three different focal spots. The epigraph formulation of the worst-case optimization and the derivation of the adjoint method to obtain the gradient are detailed in Ref. Pestourie et al. Any gradient-based optimization algorithm would work, but we used an algorithm based on conservative convex separable approximations. The average intensity is derived from the distribution of the surrogate model, with mean $\bar\mu$ and variance $\bar\sigma^2$, and the computation of the intensity based on the local field as in Ref. Pestourie et al.,

$$\mathbb{E}\left[|t|^2\right] \;=\; \mathbb{E}\left[t\,\bar t\right] \;=\; \mathbb{E}\left[t_{\mathrm{re}}^2\right]+\mathbb{E}\left[t_{\mathrm{im}}^2\right],$$

where $\bar t$ denotes the complex conjugate, the notations $t_{\mathrm{re}}(p)$ and $t_{\mathrm{im}}(p)$ are simplified to $t_{\mathrm{re}}$ and $t_{\mathrm{im}}$, and the dependence on the parameters $p$ is dropped for clarity. From the linearity of expectation,

$$\mathbb{E}\left[|t|^2\right] \;=\; \bar\mu_{\mathrm{re}}^2+\bar\sigma_{\mathrm{re}}^2+\bar\mu_{\mathrm{im}}^2+\bar\sigma_{\mathrm{im}}^2,$$

where we used that $\mathbb{E}\left[t_{\mathrm{re}}^2\right]=\bar\mu_{\mathrm{re}}^2+\bar\sigma_{\mathrm{re}}^2$ and $\mathbb{E}\left[t_{\mathrm{im}}^2\right]=\bar\mu_{\mathrm{im}}^2+\bar\sigma_{\mathrm{im}}^2$.
6.3 Active-learning architecture and training
The ensemble of NNs was implemented using PyTorch on an Intel Xeon processor. We trained an ensemble of NNs for each surrogate model. Each NN is composed of an input layer with 13 nodes (10 nodes for the geometry parameterization and 3 nodes for the one-hot encoding of the three frequencies of interest), three fully-connected hidden layers of 256 rectified linear units (ReLU), and a last layer containing one unit with a scaled hyperbolic-tangent activation function (for $\mu_j$) and one unit with a softplus activation function (for $\sigma_j$). The cost function is the Gaussian log-likelihood of Eq. (2). The mean and the variance of the ensemble are the pooled mean and variance of Eq. (3) and Eq. (4). The optimizer is Adam. After the tenth epoch, the learning rate is decayed by a constant factor. Each iteration of the active-learning algorithm, as well as the baseline, was trained for 50 epochs. The choice of training points is detailed in Sec. 4. The quantitative evaluations were computed using the fractional error on a test set of points chosen at random. The fractional error
between two vectors of complex values $x$ and $y$ is

$$\frac{\lVert x-y\rVert_2}{\lVert y\rVert_2},$$

where $\lVert\cdot\rVert_2$ is the $L^2$-norm for complex vectors.
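The fractional-error metric can be sketched as follows. Normalizing by the reference vector's norm is an assumption here, since the extracted text does not specify which vector appears in the denominator.

```python
import numpy as np

def fractional_error(x, y):
    """Fractional error ||x - y||_2 / ||y||_2 between complex vectors,
    with y taken as the reference (e.g. the exact Maxwell solution)."""
    x = np.asarray(x)
    y = np.asarray(y)
    return np.linalg.norm(x - y) / np.linalg.norm(y)
```

For complex inputs, `np.linalg.norm` computes the usual Euclidean norm of the moduli, so the metric is a single dimensionless number comparable across unit cells and frequencies.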
The data that support the findings of this study are available from the corresponding author upon reasonable request.
- (2015) Multiwavelength achromatic metasurfaces by dispersive phase compensation. Science 347 (6228), pp. 1342–1345.
- A deep learning approach for objective-driven all-dielectric metasurface design. ACS Photonics 6 (12), pp. 3196–3207.
- (2019) Generative multi-functional meta-atom and metasurface design networks. arXiv preprint arXiv:1908.04851.
- (2015) Dielectric metasurfaces for complete control of phase and polarization with subwavelength spatial resolution and high transmission. Nature Nanotechnology 10 (11), pp. 937.
- (2018) Active learning for accelerated design of layered materials. npj Computational Materials 4 (1), pp. 1–9.
- (2020) Inverse designed metalenses with extended depth of focus. ACS Photonics 7 (4), pp. 873–878.
- (2001) Chebyshev and Fourier spectral methods. Dover Publications.
- (2001) FDFD: a 3D finite-difference frequency-domain code for electromagnetic induction tomography. Journal of Computational Physics 170 (2), pp. 830–848.
- Generative deep neural networks for inverse materials design using backpropagation and active learning. Advanced Science 7 (5), pp. 1902607.
- Confidence scoring using whitebox meta-models with linear classifier probes. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1467–1475.
- (2019) Efficient approximation of high-dimensional functions with deep neural networks. arXiv preprint arXiv:1912.04310.
- (2016) Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In International Conference on Machine Learning, pp. 1050–1059.
- (2016) Deep learning. MIT Press.
- (2001) Time-Harmonic Electromagnetic Fields. 2nd edition, Wiley-IEEE.
- (2011) Characterizing metasurfaces/metafilms: the connection between surface susceptibilities and effective material properties. IEEE Antennas and Wireless Propagation Letters 10, pp. 1507–1511.
- (2019) Aleatoric and epistemic uncertainty in machine learning: a tutorial introduction. arXiv preprint arXiv:1910.09457.
- (2020) Deep neural networks for the evaluation and design of photonic devices. arXiv preprint arXiv:2007.00084.
- (2019) Simulator-based training of generative neural networks for the inverse design of metasurfaces. Nanophotonics 1.
- Free-form diffractive metagrating design based on generative adversarial networks. ACS Nano 13 (8), pp. 8872–8878.
- (2019) 3D metamaterials. Nature Reviews Physics 1 (3), pp. 198–210.
- (2017) Metalenses: versatile multifunctional photonic components. Science 358 (6367).
- (2016) Metalenses at visible wavelengths: diffraction-limited focusing and subwavelength resolution imaging. Science 352 (6290), pp. 1190–1194.
- (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- (2017) Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, pp. 6402–6413.
- (2019) Overlapping domains for topology optimization of large-area metasurfaces. Optics Express 27 (22), pp. 32445–32453.
- (2019) Topology optimization of freeform large-area metasurfaces. Optics Express 27 (11), pp. 15765–15775.
- (2020) End-to-end inverse design for inverse scattering via freeform metastructures. arXiv preprint arXiv:2006.09145.
- (2018) Training deep neural networks for the inverse design of nanophotonic structures. ACS Photonics 5 (4), pp. 1365–1369.
- (2018) Metasurface enabled wide-angle Fourier lens. Advanced Materials 30 (23), pp. 1706368.
- (2020) Topological encoding method for data-driven photonics inverse design. Optics Express 28 (4), pp. 4825–4835.
- (2019) Active learning in materials science with emphasis on adaptive sampling using uncertainties for targeted design. npj Computational Materials 5 (1), pp. 1–17.
- (2018) Deep-learning-enabled on-demand design of chiral metamaterials. ACS Nano 12 (6), pp. 6326–6334.
- (2016) Photonic spin-controlled multifunctional shared-aperture antenna array. Science 352 (6290), pp. 1202–1206.
- (2018) Plasmonic nanostructure design and characterization via deep learning. Light: Science & Applications 7 (1), pp. 1–8.
- (2013) A review of indirect/non-intrusive reduced order modeling of nonlinear geometric structures. Journal of Sound and Vibration 332 (10), pp. 2437–2460.
- (2017) Metasurface polarization optics: independent phase control of arbitrary orthogonal states of polarization. Physical Review Letters 118 (11), pp. 113901.
- (2017) Automatic differentiation in PyTorch. In NIPS-W.
- (2018) Inverse design of large-area metasurfaces. Optics Express 26 (26), pp. 33732–33747.
- (2020) Assume your neighbor is your equal: inverse design in nanophotonics. Harvard University.
- (2020) FDFD Local Field. GitHub. https://github.com/rpestourie/fdfd_local_field
- (2018) Nanophotonic particle simulation and inverse design using artificial neural networks. Science Advances 4 (6), pp. eaar4206.
- (2020) Universal differential equations for scientific machine learning. arXiv preprint arXiv:2001.04385.
- (2009) Active learning literature survey. Technical report, University of Wisconsin–Madison Department of Computer Sciences.
- End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging. ACM Transactions on Graphics (TOG) 37 (4), pp. 1–13.
- (2019) Single-layer bifacial metasurface: full-space visible light control. Advanced Optical Materials 7 (8), pp. 1801748.
- (2002) A class of globally convergent optimization methods based on conservative convex separable approximations. SIAM Journal on Optimization 12 (2), pp. 555–573.
- (2020) Calibrating healthcare AI: towards reliable and interpretable deep predictive models. arXiv preprint arXiv:2004.14480.
- (2019) Approximation theory and approximation practice. Vol. 164, SIAM.
- (2016) Spin and wavelength multiplexed nonlinear metasurface holography. Nature Communications 7, pp. 11930.
- (2014) Flat optics with designer metasurfaces. Nature Materials 13 (2), pp. 139–150.
- (2011) Light propagation with phase discontinuities: generalized laws of reflection and refraction. Science 334 (6054), pp. 333–337.
- (2018) Multilayer noninteracting dielectric metasurfaces for multiwavelength metaoptics. Nano Letters 18 (12), pp. 7529–7537.
This work was supported in part by IBM Research, the MIT-IBM Watson AI Laboratory, the U.S. Army Research Office through the Institute for Soldier Nanotechnologies (under award W911NF-13-D-0001), and by the PAPPA program of DARPA MTO (under award HR0011-20-90016).
RP, YM, PD, and SGJ designed the study, contributed to the machine-learning approach, and analyzed results; RP led the code development, software implementation, and numerical experiments; RP and SGJ were responsible for the physical ideas and interpretation; TVN assisted in designing and implementing the training. All authors contributed to the algorithmic ideas and writing.
The authors declare no competing financial or non-financial interests.