Semi-supervised knowledge distillation is a powerful training paradigm f...
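The snippet above is cut off, but the paradigm it names is standard enough to sketch: a student network optimizes a supervised loss on labeled examples plus a distillation loss against the teacher's temperature-softened predictions on unlabeled examples. A minimal sketch, assuming a PyTorch-style setup; the temperature `T`, the mixing weight `alpha`, and the function name are illustrative choices, not from the original.

```python
import torch
import torch.nn.functional as F

def semi_supervised_kd_loss(student_logits_l, labels,
                            student_logits_u, teacher_logits_u,
                            T=2.0, alpha=0.5):
    """Supervised loss on the labeled batch plus a distillation
    loss on the unlabeled batch (illustrative formulation)."""
    # Standard cross-entropy on labeled examples.
    ce = F.cross_entropy(student_logits_l, labels)
    # KL divergence between temperature-softened teacher and student
    # distributions on unlabeled examples; the T*T factor keeps
    # gradient magnitudes comparable across temperatures.
    kd = F.kl_div(
        F.log_softmax(student_logits_u / T, dim=1),
        F.softmax(teacher_logits_u / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kd
```

The `alpha` weight trades off the labeled and unlabeled terms; many variants instead anneal the temperature or filter the teacher's predictions by confidence before distilling.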
The current best approximation algorithms for k-median rely on first obt...
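This abstract is also truncated; whatever pipeline it goes on to describe, a useful point of reference is the classical single-swap local search, which is known to give a constant-factor approximation for k-median (Arya et al.). A minimal, unoptimized sketch under that reading; all names are illustrative and this is not the abstract's algorithm.

```python
import numpy as np

def kmedian_cost(points, centers):
    """Sum of distances from each point to its nearest chosen median.
    `centers` holds indices into `points`."""
    d = np.linalg.norm(points[:, None, :] - points[centers][None, :, :], axis=2)
    return d.min(axis=1).sum()

def kmedian_local_search(points, k, seed=0):
    """Single-swap local search: start from arbitrary medians and
    greedily swap one median for one non-median while the cost drops.
    (For polynomial running time one would only accept swaps that
    improve the cost by a (1 - eps/k) factor; plain improvement is
    used here for simplicity.)"""
    rng = np.random.default_rng(seed)
    n = len(points)
    centers = list(rng.choice(n, size=k, replace=False))
    best = kmedian_cost(points, centers)
    improved = True
    while improved:
        improved = False
        for i in range(k):          # median slot to swap out
            for j in range(n):      # candidate point to swap in
                if j in centers:
                    continue
                cand = centers.copy()
                cand[i] = j
                c = kmedian_cost(points, cand)
                if c < best:
                    centers, best, improved = cand, c, True
    return centers, best
```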
Distillation with unlabeled examples is a popular and powerful method fo...
Distilling knowledge from a large teacher model to a lightweight one is ...
We consider stochastic settings for clustering, and develop provably-goo...
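The sentence breaks off before the stochastic model is specified, so the following is only one plausible reading: a setting where each point materializes independently with a known probability, handled heuristically by weighting every point with its presence probability so that it contributes its expected mass to the objective. A speculative sketch in that spirit; the model, the guarantees, and all names are assumptions, not the paper's algorithm.

```python
import numpy as np

def weighted_kmeans(points, probs, k, iters=50, seed=0):
    """Lloyd-style clustering where point i is assumed to be present
    independently with probability probs[i]; centers are updated as
    probability-weighted means. Illustrative only."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest current center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # Recompute each center as the presence-probability-weighted mean.
        for c in range(k):
            mask = assign == c
            if mask.any():
                w = probs[mask]
                centers[c] = (points[mask] * w[:, None]).sum(0) / w.sum()
    return centers, assign
```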