Learning Theory for Estimation of Animal Motion Submanifolds

03/30/2020
by Nathan Powell, et al.

This paper describes the formulation and experimental testing of a novel method for the estimation and approximation of submanifold models of animal motion. It is assumed that the animal motion is supported on a configuration manifold Q that is a smooth, connected, regularly embedded Riemannian submanifold of Euclidean space X ≈ R^d for some d > 0, and that the manifold Q is homeomorphic to a known smooth Riemannian manifold S. Estimation of the manifold is achieved by finding an unknown mapping γ: S → Q ⊂ X that maps the manifold S into Q. The overall problem is cast as a distribution-free learning problem over the manifold of measurements Z = S × X. That is, it is assumed that experiments generate a finite set {(s_i, x_i)}_{i=1}^m ⊂ Z^m of samples drawn according to an unknown probability density μ on Z. This paper derives approximations γ_{n,m} of γ that are based on the m samples and are contained in an N(n)-dimensional space of approximants. The paper defines sufficient conditions that show that the rates of convergence in L^2_μ(S) correspond to those known for classical distribution-free learning theory over Euclidean space. Specifically, the paper derives sufficient conditions that guarantee rates of convergence of the form E(‖γ_μ^j − γ_{n,m}^j‖_{L^2_μ(S)}^2) ≤ C_1 N(n)^{−r} + C_2 N(n) log(N(n))/m for constants C_1, C_2, where γ_μ := (γ_μ^1, ..., γ_μ^d) is the regressor function γ_μ: S → Q ⊂ X and γ_{n,m} := (γ_{n,m}^1, ..., γ_{n,m}^d).
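The regression setup described in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's method or experimental data: it assumes S = S^1 (the unit circle), a synthetic embedding γ: S^1 → R^2 chosen for the demo, a truncated Fourier basis as the N(n) = 2n + 1 dimensional space of approximants, and an ordinary least-squares fit of γ_{n,m} from m noisy samples (s_i, x_i).

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_true(s):
    # Assumed smooth embedding of S^1 into X = R^2 (an ellipse),
    # standing in for the unknown mapping gamma: S -> Q.
    return np.stack([2.0 * np.cos(s), np.sin(s)], axis=-1)

# m samples (s_i, x_i) on Z = S x X, drawn from an unknown density mu
# (here: uniform angles plus Gaussian measurement noise).
m = 500
s = rng.uniform(0.0, 2.0 * np.pi, size=m)
x = gamma_true(s) + 0.05 * rng.standard_normal((m, 2))

def fourier_features(s, n):
    # Truncated Fourier basis on S^1: N(n) = 2n + 1 approximants.
    cols = [np.ones_like(s)]
    for k in range(1, n + 1):
        cols.append(np.cos(k * s))
        cols.append(np.sin(k * s))
    return np.stack(cols, axis=-1)  # shape (m, 2n + 1)

n = 3
Phi = fourier_features(s, n)

# Least-squares estimate gamma_{n,m}: one coordinate regressor per
# dimension of X = R^2, solved jointly over the sample set.
coeffs, *_ = np.linalg.lstsq(Phi, x, rcond=None)

# Empirical L^2 error of the estimate over the samples.
residual = Phi @ coeffs - x
rmse = np.sqrt(np.mean(np.sum(residual**2, axis=1)))
print(rmse)
```

As the abstract's rate bound suggests, the error here splits into an approximation part, controlled by the basis dimension N(n), and a sampling part that shrinks roughly like N(n) log(N(n))/m as more samples are collected.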
