Non-probabilistic Supervised Learning for Non-linear Convex Variational Problems
In this article we propose a new general mathematical framework, based on a non-probabilistic supervised learning approach, for solving non-linear convex variational problems on reflexive Banach spaces. The variational problems covered by this methodology include the standard ones. The training sets considered in this work, called radial dictionaries, include, among others, tensors in Tucker format with bounded rank and neural networks with fixed architecture and bounded parameters. The training set is used to construct, via an iterative algorithm defined by a multivalued map, a sequence called progressive learning by dictionary optimization. We prove the convergence of this sequence to the solution of the variational problem. Furthermore, we obtain the same rate of convergence as the Method of Steepest Descent implemented in a reflexive Banach space, namely O(m^-1).
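To make the flavor of such an iterative dictionary-based scheme concrete, the following is a minimal, hypothetical sketch (not the paper's exact algorithm): it greedily minimizes a smooth convex functional J(u) = 0.5||Au - b||^2 over a finite stand-in for a radial dictionary, at each step moving along the dictionary direction best aligned with the negative gradient, with an exact line search. Greedy schemes of this type are known to achieve O(1/m) rates in favorable settings. The matrix A, vector b, and dictionary D here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 200
A = rng.standard_normal((n, n))   # illustrative linear operator
b = rng.standard_normal(n)        # illustrative data

# Stand-in for a radial dictionary: a finite set of unit-norm directions.
D = rng.standard_normal((k, n))
D /= np.linalg.norm(D, axis=1, keepdims=True)

def J(u):
    """Smooth convex functional to be minimized."""
    r = A @ u - b
    return 0.5 * r @ r

u = np.zeros(n)
for m in range(500):
    grad = A.T @ (A @ u - b)            # gradient of J at u
    scores = D @ grad                   # alignment of each atom with grad
    i = np.argmax(np.abs(scores))
    d = -np.sign(scores[i]) * D[i]      # descent direction from dictionary
    # Exact line search for the quadratic functional.
    t = -(grad @ d) / (np.linalg.norm(A @ d) ** 2 + 1e-12)
    u = u + t * d

print(J(np.zeros(n)), J(u))  # the objective decreases over the iterations
```

The dictionary here plays the role of the training set: each update is restricted to directions available in the dictionary, which is what distinguishes this scheme from an unconstrained steepest-descent step.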