What is Mathematical Optimization?
Mathematical optimization is the process of maximizing or minimizing an objective function by selecting the best available values from a set of inputs. Some form of optimization is required for every deep learning model to function, whether it uses supervised or unsupervised learning. There are many specific techniques to choose from, but all optimization algorithms require a minimum of:
An objective function f(x), to define the output that’s being maximized or minimized. This function can be deterministic (the same inputs always produce the same output) or stochastic (the output involves randomness, so it is typically evaluated in expectation or by sampling).
Inputs that are controllable, in the form of variables such as x1, x2, and so on. Each variable can be either discrete or continuous.
Constraints to place limits on how large or small variables can be. Equality constraints are usually denoted hn(x) and inequality constraints gn(x). Problems without constraints are known as “unconstrained” optimization problems.
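The three ingredients above can be made concrete with a small sketch. The objective, variable, constraint, and step size below are all illustrative choices, not part of any particular library: we minimize f(x) = (x − 3)² subject to the inequality constraint g(x) = x − 2 ≤ 0, using projected gradient descent to keep the variable feasible.

```python
def f(x):
    """Objective function: minimized (unconstrained) at x = 3."""
    return (x - 3.0) ** 2

def grad_f(x):
    """Gradient of the objective."""
    return 2.0 * (x - 3.0)

def project(x):
    """Enforce the inequality constraint g(x) = x - 2 <= 0."""
    return min(x, 2.0)

x = 0.0        # controllable input variable, starting guess
lr = 0.1       # step size (illustrative choice)
for _ in range(200):
    x = project(x - lr * grad_f(x))  # gradient step, then clip to the feasible region

print(round(x, 4))  # converges to the constrained optimum x = 2.0
```

Because the constraint is active at the solution, the result is x = 2 rather than the unconstrained minimizer x = 3: the constraint, not the objective alone, determines the answer.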
What’s the Difference Between Global and Local Optimization?
Global optimization locates the maximum or minimum over all available input values, whereas local optimization determines a minimum or maximum within a neighboring region of the search space. For convex problems, any local optimum is also the global optimum, but the non-convex loss surfaces of most deep learning models contain many local optima, so training typically relies on local optimization, often from multiple starting points.
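The distinction can be seen on a toy non-convex function (an illustrative example, not tied to any particular model): f(x) = (x² − 1)² + 0.2x has two local minima, near x = +1 and x = −1, and only the latter is global. Gradient descent finds whichever local minimum its starting point leads to; restarting from several points and keeping the best result is one simple way to approximate the global optimum.

```python
def f(x):
    """Non-convex objective with local minima near x = +1 and x = -1."""
    return (x**2 - 1.0) ** 2 + 0.2 * x

def grad_f(x):
    return 4.0 * x * (x**2 - 1.0) + 0.2

def descend(x, lr=0.01, steps=2000):
    """Plain gradient descent: converges to a *local* minimum."""
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

# A single run from x = 0.8 lands in the basin of the worse local minimum.
local = descend(0.8)

# Multiple restarts, keeping the lowest objective value, approximate the global minimum.
best = min((descend(x0) for x0 in [-1.5, -0.5, 0.5, 1.5]), key=f)

print(round(local, 2), round(best, 2))
```

The single run settles near x ≈ +1 while the multi-start search finds the deeper minimum near x ≈ −1, illustrating why one local optimization is generally not enough on a non-convex surface.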