On the Convergence of Optimizing Persistent-Homology-Based Losses

06/06/2022
by Yikai Zhang, et al.

Topological losses based on persistent homology have shown promise in various applications. A topological loss enforces the model to achieve certain desired topological properties. Despite its empirical success, less is known about the optimization behavior of such a loss. In fact, a topological loss involves combinatorial configurations that may oscillate during optimization. In this paper, we introduce a general-purpose regularized topology-aware loss. We propose a novel regularization term and also modify the existing topological loss. Together, these contributions yield a new loss function that not only enforces the model to have the desired topological behavior, but also enjoys satisfactory convergence behavior. Our main theoretical result guarantees that, under mild assumptions, the loss can be optimized efficiently.
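The abstract does not give implementation details, but the general pattern of persistent-homology-based losses can be illustrated concretely. Below is a minimal sketch, not the authors' method: it computes the 0-dimensional persistence of a point cloud via a minimum spanning tree (whose edge lengths are the H0 death times under a Vietoris-Rips filtration), penalizes all but the `k - 1` longest bars so that `k` clusters survive, and adds a simple quadratic regularizer as a stand-in for the paper's novel regularization term. The names `h0_topological_loss`, `k`, and `lam` are hypothetical. Because the persistence pairing is combinatorial and recomputed at every step, the set of penalized edges can change between iterations; this is exactly the oscillation the paper's regularization is designed to control.

```python
# A minimal sketch of a persistent-homology-based loss with a regularizer.
# Assumption: H0 death times of a Vietoris-Rips filtration on a point cloud
# equal the edge lengths of a Euclidean minimum spanning tree (births are 0).
import torch
from scipy.sparse.csgraph import minimum_spanning_tree

def h0_topological_loss(points: torch.Tensor, k: int = 2,
                        lam: float = 1e-2) -> torch.Tensor:
    # Pairwise distances, kept in torch so the loss stays differentiable.
    diff = points.unsqueeze(0) - points.unsqueeze(1)
    dists = torch.sqrt((diff ** 2).sum(-1) + 1e-12)

    # The persistence pairing is combinatorial: compute the MST on detached
    # distances, then read off which edges were selected.
    mst = minimum_spanning_tree(dists.detach().numpy()).tocoo()
    rows = torch.as_tensor(mst.row, dtype=torch.long)
    cols = torch.as_tensor(mst.col, dtype=torch.long)

    # Gather the corresponding differentiable edge lengths (H0 death times)
    # and sort the bars from longest to shortest.
    deaths, _ = torch.sort(dists[rows, cols], descending=True)

    # Keep the k - 1 longest bars (k clusters survive); shrink the rest to 0.
    topo = (deaths[k - 1:] ** 2).sum()

    # A simple quadratic regularizer, standing in for the paper's novel term.
    reg = lam * (points ** 2).sum()
    return topo + reg

pts = torch.randn(50, 2, requires_grad=True)
loss = h0_topological_loss(pts, k=2)
loss.backward()  # gradients flow through the currently selected pairing
```

Note that the pairing (which MST edges count as "noise") is held fixed within each gradient step and recomputed at the next one; convergence guarantees of the kind the paper proves must account for these discrete changes.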


