An Upper Limit of Decaying Rate with Respect to Frequency in Deep Neural Network
Deep neural networks (DNNs) usually learn the target function from low to high frequency, a phenomenon known as the frequency principle or spectral bias. The frequency principle sheds light on a high-frequency curse of DNNs: high-frequency information is difficult for them to learn. Inspired by the frequency principle, a series of works has been devoted to developing algorithms that overcome the high-frequency curse. A natural question arises: what is the upper limit of the decaying rate w.r.t. frequency when one trains a DNN? In this work, our theory, confirmed by numerical experiments, suggests that there is a critical decaying rate w.r.t. frequency in DNN training. Below this upper limit, the DNN interpolates the training data by a function with a certain regularity. Above it, however, the DNN interpolates the training data by a trivial function, i.e., a function that is nonzero only at the training data points. Our results indicate that a better way to overcome the high-frequency curse is to design a proper preconditioning approach that shifts high-frequency information to low frequency, which coincides with several previously developed algorithms for fast learning of high-frequency information. More importantly, this work rigorously proves that the high-frequency curse is an intrinsic difficulty of DNNs.