Noise-Resilient Designs for Optical Neural Networks

08/11/2023
by Gianluca Kosmella, et al.

All analog signal processing is fundamentally subject to noise, and this is also the case in modern implementations of Optical Neural Networks (ONNs). To mitigate noise in ONNs, we therefore propose two designs that are constructed from a given, possibly trained, Neural Network (NN) that one wishes to implement. Both designs ensure that the resulting ONN produces outputs close to those of the desired NN. To establish the latter, we analyze the designs mathematically. Specifically, for the first design we investigate a probabilistic framework that establishes correctness, i.e., for any feed-forward NN with Lipschitz continuous activation functions, an ONN can be constructed that produces output arbitrarily close to the original. ONNs constructed with the first design thus also inherit the universal approximation property of NNs. For the second design, we restrict the analysis to NNs with linear activation functions and characterize the ONN's output distribution using exact formulas. Finally, we report on numerical experiments with LeNet ONNs that give insight into the number of components required in these designs for certain accuracy gains. We specifically study the effect of noise as a function of the depth of an ONN. The results indicate that, in practice, adding just a few components in the manner of either design can already be expected to increase the accuracy of ONNs considerably.
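The abstract does not spell out the two constructions, but the statistical mechanism behind redundancy-based noise mitigation of this kind is easy to illustrate: averaging K independent noisy realizations of an analog matrix-vector product shrinks the noise standard deviation by a factor of sqrt(K). The minimal NumPy sketch below demonstrates only that general principle; the additive Gaussian read-out noise model, the function names, and the numbers are assumptions for illustration and are not the paper's actual designs.

```python
# Illustrative sketch only: shows how averaging K redundant noisy copies of an
# analog (optical) matrix-vector product reduces the output error. This is the
# kind of components-vs-accuracy trade-off the abstract refers to, not the
# paper's exact constructions.
import numpy as np

rng = np.random.default_rng(0)

def noisy_linear(W, x, sigma):
    """One analog matrix-vector product with additive Gaussian read-out noise (assumed model)."""
    return W @ x + rng.normal(0.0, sigma, size=W.shape[0])

def averaged_noisy_linear(W, x, sigma, K):
    """Average K redundant noisy copies; the noise std shrinks by a factor of 1/sqrt(K)."""
    return np.mean([noisy_linear(W, x, sigma) for _ in range(K)], axis=0)

# Hypothetical sizes and noise level, chosen only for the demonstration.
W = rng.standard_normal((16, 8))
x = rng.standard_normal(8)
sigma = 0.1

exact = W @ x
err_single = np.linalg.norm(noisy_linear(W, x, sigma) - exact)
err_avg = np.linalg.norm(averaged_noisy_linear(W, x, sigma, K=10) - exact)
print(f"error, single noisy pass:   {err_single:.4f}")
print(f"error, 10-fold averaging:   {err_avg:.4f}")
```

In a multi-layer (deeper) ONN the per-layer errors accumulate, which is consistent with the abstract's observation that the effect of noise grows with depth and that even a modest amount of added redundancy can yield a considerable accuracy gain.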
