What is Jump Discontinuity?
How does Jump Discontinuity work?
Imagine a parabolic (u-shaped) curve on a graph that has been split down the middle and separated at a point along the domain (x-axis). The function is discontinuous because its value jumps from one point to another at the point of discontinuity. To better understand what is happening, we can split the function into two sections, each approaching the jump from one direction. As each side of the function approaches the discontinuity, it approaches a limit. The value approached from one side is known as a one-sided limit; our function has two one-sided limits, and they have different values. In short, the function approaches different values depending on the direction from which x approaches the point.
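The idea above can be sketched numerically. This is a hypothetical example (the piecewise function and the jump location x = 1 are chosen for illustration, not taken from the text): a parabola shifted upward on one side of a point, so the two one-sided limits disagree.

```python
# Hypothetical piecewise function with a jump at x = 1:
#   f(x) = x**2        for x < 1
#   f(x) = x**2 + 2    for x >= 1
def f(x):
    return x**2 if x < 1 else x**2 + 2

# Approach the discontinuity from each side with a small step.
eps = 1e-6
left_limit = f(1 - eps)    # one-sided limit from the left, near 1
right_limit = f(1 + eps)   # one-sided limit from the right, near 3

print(left_limit, right_limit)
```

The two one-sided limits differ by 2, which is exactly the height of the jump.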
Jump Discontinuity and Machine Learning
Jump discontinuity can pose a tricky challenge for neural networks, as the mismatch between the two one-sided limits is difficult to approximate with smooth functions. However, a feed-forward neural network can roughly approximate a jump discontinuity, as suggested by the universal approximation theorem. Simply put, the theorem states that a neural network with suitable parameters can represent a wide variety of functions; it does not, however, account for the learnability of those parameters.