Border Basis Computation with Gradient-Weighted Norm

01/02/2021 ∙ by Hiroshi Kera, et al.

Normalization of polynomials plays an essential role in the approximate basis computation of vanishing ideals. In computer algebra, coefficient normalization, which normalizes a polynomial by its coefficient norm, is the most common choice. In this study, we propose gradient-weighted normalization for the approximate border basis computation of vanishing ideals, inspired by recent results in machine learning. The data-dependent nature of gradient-weighted normalization leads to powerful properties, such as better stability against perturbation and consistency under the scaling of input points, that conventional coefficient normalization cannot attain. With a slight modification, the analysis of algorithms based on coefficient normalization carries over to gradient-weighted normalization, and the time complexity does not change. We also provide an upper bound on the coefficient norm in terms of the gradient-weighted norm, which allows approximate border bases with gradient-weighted normalization to be discussed from the perspective of the coefficient norm.
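The abstract does not spell out the definition, but a gradient-weighted norm of this kind is typically the root-sum-of-squares of the polynomial's gradient evaluated at the input points, which is what makes it data-dependent. Below is a minimal Python sketch of normalizing a polynomial by such a norm; the function name, the use of sympy, and the exact scaling convention (e.g., whether the sum is averaged over the number of points) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
import sympy as sp

def gradient_weighted_norm(poly, variables, points):
    """Gradient-weighted (semi)norm of `poly` over the data points.

    Assumed convention: ||g||_{grad,X} = sqrt(sum_{x in X} ||grad g(x)||^2).
    The paper's exact scaling may differ (this is an illustrative sketch).
    """
    # Differentiate symbolically, then compile each partial derivative.
    grad_fns = [sp.lambdify(variables, sp.diff(poly, v), "numpy")
                for v in variables]
    total = 0.0
    for x in points:
        gvals = np.array([f(*x) for f in grad_fns], dtype=float)
        total += float(gvals @ gvals)  # squared gradient norm at x
    return float(np.sqrt(total))

# Toy example: g(x, y) = x^2 + y^2 - 1 approximately vanishes on points
# sampled near the unit circle.
x, y = sp.symbols("x y")
g = x**2 + y**2 - 1
X = [(1.0, 0.0), (0.0, 1.0), (0.71, 0.70)]

n = gradient_weighted_norm(g, (x, y), X)
print(n)      # data-dependent norm: it changes if X is rescaled
print(g / n)  # g normalized to unit gradient-weighted norm
```

Note that, unlike the coefficient norm, this quantity changes when the input points change; this data dependence is exactly what the abstract credits for the improved stability and scaling consistency.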
