Multi-task Transformation Learning for Robust Out-of-Distribution Detection
Detecting out-of-distribution (OOD) samples plays a key role in open-world and safety-critical applications such as autonomous systems and healthcare. Self-supervised representation learning techniques (e.g., contrastive learning and pretext-task learning) are well suited for learning representations that can identify OOD samples. In this paper, we propose a simple framework that leverages multi-task transformation learning to train effective representations for OOD detection, outperforming state-of-the-art methods in both detection performance and robustness on several image datasets. We empirically observe that OOD performance depends on the choice of data transformations, which in turn depends on the in-domain training set. To address this problem, we propose a simple mechanism for automatically selecting the transformations and modulating their effect on representation learning, without requiring any OOD training samples. We characterize the criteria for a desirable OOD detector in real-world applications and demonstrate the efficacy of our proposed technique against a diverse range of state-of-the-art OOD detection techniques.
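To make the general idea concrete, below is a minimal PyTorch sketch of transformation-prediction multi-task training with a confidence-based OOD score. The encoder architecture, the two pretext tasks (rotation and horizontal-flip prediction), and the fixed `task_weights` that stand in for the paper's automatic transformation selection and modulation mechanism are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's exact method): a shared encoder is
# trained to predict which transformation was applied to an input,
# with one classification head per pretext task.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskTransformNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional encoder (illustrative architecture).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One head per self-supervised transformation task.
        self.rotation_head = nn.Linear(64, 4)  # 0/90/180/270 degrees
        self.flip_head = nn.Linear(64, 2)      # original vs. mirrored

    def forward(self, x):
        z = self.encoder(x)
        return self.rotation_head(z), self.flip_head(z)

def rotate_batch(x):
    """Four rotated copies of the batch, labeled by rotation index."""
    rotated = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)])
    labels = torch.arange(4).repeat_interleave(x.size(0))
    return rotated, labels

def flip_batch(x):
    """Original and horizontally flipped copies, labeled 0/1."""
    flipped = torch.cat([x, torch.flip(x, dims=(3,))])
    labels = torch.arange(2).repeat_interleave(x.size(0))
    return flipped, labels

def training_step(model, x, task_weights=(1.0, 1.0)):
    # task_weights are fixed here for brevity; the paper instead
    # selects and modulates transformations automatically.
    x_rot, y_rot = rotate_batch(x)
    rot_logits, _ = model(x_rot)
    x_flip, y_flip = flip_batch(x)
    _, flip_logits = model(x_flip)
    return (task_weights[0] * F.cross_entropy(rot_logits, y_rot)
            + task_weights[1] * F.cross_entropy(flip_logits, y_flip))

@torch.no_grad()
def ood_score(model, x):
    """Higher score = more OOD-like: 1 minus the mean confidence that
    each rotated copy's rotation is correctly identified."""
    x_rot, y_rot = rotate_batch(x)
    rot_logits, _ = model(x_rot)
    probs = F.softmax(rot_logits, dim=1)
    conf = probs[torch.arange(len(y_rot)), y_rot]
    return 1.0 - conf.view(4, -1).mean(dim=0)  # one score per input

model = MultiTaskTransformNet()
x = torch.randn(8, 3, 32, 32)  # stand-in for a batch of images
loss = training_step(model, x)
scores = ood_score(model, x)
print(loss.item(), scores.shape)
```

The intuition behind such a score is that an encoder trained to recognize transformations of in-distribution images tends to be less confident on inputs drawn from a different distribution, so low transformation-prediction confidence serves as an OOD signal without requiring any OOD training samples.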