Group K-Means

01/05/2015
by Jianfeng Wang, et al.

We study how to learn multiple dictionaries from a dataset and approximate each data point by a sum of codewords, one chosen from each dictionary. Although the global solution can in theory achieve a low approximation error, an effective practical algorithm has not been well studied. To address this, we propose a simple yet effective algorithm, Group K-Means. Specifically, we treat each dictionary, or any two selected dictionaries, as a group of K-means cluster centers, and then minimize the approximation error with respect to that group while the remaining dictionaries are held fixed. In addition, we propose a hierarchical initialization for this non-convex problem. Experimental results validate the effectiveness of the approach.
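The alternating scheme described above can be illustrated with a short sketch. The Python code below is a minimal, hypothetical illustration, not the authors' implementation: it assumes the data is a NumPy array `X`, uses random data points for initialization rather than the paper's hierarchical initialization, and updates one dictionary at a time as a set of K-means centers fit to the residual left by the other dictionaries.

```python
import numpy as np

def group_kmeans(X, num_dicts=4, num_words=256, iters=20, seed=0):
    """Hypothetical sketch of the alternating scheme: approximate each point
    as a sum of codewords, one per dictionary, by coordinate-wise updates."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Initialize each dictionary with randomly chosen data points
    # (the paper's hierarchical initialization is not reproduced here).
    C = [X[rng.choice(n, num_words, replace=False)].copy() for _ in range(num_dicts)]
    B = np.zeros((n, num_dicts), dtype=int)  # chosen codeword index per point and dictionary

    for _ in range(iters):
        # Assignment step: for each dictionary in turn, hold the others fixed
        # and pick the codeword closest to the residual, as in K-means assignment.
        for m in range(num_dicts):
            approx = sum(C[j][B[:, j]] for j in range(num_dicts) if j != m)
            residual = X - approx
            d2 = ((residual[:, None, :] - C[m][None, :, :]) ** 2).sum(axis=2)
            B[:, m] = d2.argmin(axis=1)
        # Update step: each codeword becomes the mean of the residuals assigned to it.
        for m in range(num_dicts):
            approx = sum(C[j][B[:, j]] for j in range(num_dicts) if j != m)
            residual = X - approx
            for k in range(num_words):
                mask = B[:, m] == k
                if mask.any():
                    C[m][k] = residual[mask].mean(axis=0)
    return C, B
```

As a usage sketch, `C, B = group_kmeans(np.random.randn(1000, 32), num_dicts=2, num_words=16)` returns the learned dictionaries and assignments; the approximation of point `i` is `sum(C[m][B[i, m]] for m in range(2))`. This mirrors only the single-dictionary-per-step variant of the scheme; jointly updating two selected dictionaries, as mentioned in the abstract, is omitted.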


