Statistical learning on measures: an application to persistence diagrams
We consider a binary supervised classification problem in which, instead of observing data in a finite-dimensional Euclidean space, we observe measures on a compact space 𝒳. Formally, we observe data D_N = (μ_1, Y_1), …, (μ_N, Y_N), where μ_i is a measure on 𝒳 and Y_i is a label in {0, 1}. Given a set ℱ of base classifiers on 𝒳, we build corresponding classifiers on the space of measures. We provide upper and lower bounds on the Rademacher complexity of this new class of classifiers that can be expressed simply in terms of the corresponding quantities for the class ℱ. If the measures μ_i are uniform over a finite set, this classification task boils down to a multi-instance learning problem; however, our approach allows for greater flexibility and diversity in the input data. While this framework has many possible applications, this work places strong emphasis on classifying data via topological descriptors called persistence diagrams. These objects are discrete measures on ℝ^2, where the coordinates of each point correspond to the range of scales at which a topological feature exists. We present several classifiers on measures and show, both heuristically and theoretically, how they can achieve good classification performance in various settings in the case of persistence diagrams.
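To make the setting concrete, the sketch below shows one natural way to lift a base classifier on points of ℝ^2 to a classifier on discrete measures such as persistence diagrams: score a measure by averaging the base classifier over its support and threshold the result. This averaging rule, and the helper names used here, are illustrative assumptions on our part, not necessarily the construction studied in the paper.

```python
import numpy as np

def lift_to_measures(base_clf, threshold=0.5):
    """Lift a base classifier on points of R^2 to a classifier on
    discrete measures (e.g. persistence diagrams).

    `base_clf` maps an (n, 2) array of points to per-point scores in
    [0, 1]; the measure is classified by thresholding the weighted
    mean score over its support. This is an illustrative assumption,
    not necessarily the paper's construction.
    """
    def classify(diagram, weights=None):
        points = np.asarray(diagram, dtype=float)        # (n, 2) birth/death pairs
        scores = base_clf(points)                        # per-point scores in [0, 1]
        mean_score = np.average(scores, weights=weights) # integrate against the measure
        return int(mean_score >= threshold)
    return classify

# Hypothetical base classifier: flag points whose persistence
# (death - birth) exceeds a fixed scale.
base = lambda pts: (pts[:, 1] - pts[:, 0] > 0.2).astype(float)
clf = lift_to_measures(base)

diagram = np.array([[0.1, 0.9], [0.3, 0.35], [0.0, 0.05]])  # toy persistence diagram
print(clf(diagram))  # prints 0 or 1
```

When the measure is uniform over a finite point set, this lifted classifier reduces to the usual averaging rule of multi-instance learning, which matches the reduction mentioned in the abstract.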