
Toward Learning Human-aligned Cross-domain Robust Models by Countering Misaligned Features

by   Haohan Wang, et al.

Machine learning has demonstrated remarkable prediction accuracy on i.i.d. data, but this accuracy often drops when a model is tested on data from a different distribution. In this paper, we offer another view of this problem: we assume the accuracy drop stems from the model's reliance on features that are not aligned with how a data annotator judges samples to be similar across the two datasets. We refer to these features as misaligned features. We extend the conventional generalization error bound to a new one for this setting, given knowledge of how the misaligned features are associated with the label. Our analysis yields a set of techniques for this problem, and these techniques are naturally linked to many previous methods in the robust machine learning literature. We also compare the empirical strengths of these methods and demonstrate the performance achieved when these techniques are combined.
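The premise above — that a model which latches onto features misaligned with the annotator's notion of the class will see its accuracy collapse under distribution shift — can be illustrated with a minimal synthetic sketch. Everything here (the two-feature setup, the correlation values, the toy "models") is hypothetical and chosen only to make the effect visible; it is not the paper's method:

```python
import random

random.seed(0)

def make_domain(n, spurious_corr):
    """Generate samples (core, spurious, label).

    The core feature genuinely predicts the label (75% of the time);
    the spurious (misaligned) feature matches the label with
    probability `spurious_corr`, which differs across domains.
    """
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        core = label if random.random() < 0.75 else 1 - label
        spurious = label if random.random() < spurious_corr else 1 - label
        data.append((core, spurious, label))
    return data

# In the training domain the misaligned feature is highly predictive;
# in the test domain its association with the label flips.
train = make_domain(5000, spurious_corr=0.95)
test = make_domain(5000, spurious_corr=0.05)

def accuracy(data, predict):
    return sum(predict(c, s) == y for c, s, y in data) / len(data)

# A model that relies on the misaligned feature...
spurious_model = lambda core, spurious: spurious
# ...versus one aligned with the annotator's notion of the class.
aligned_model = lambda core, spurious: core

print(accuracy(train, spurious_model))  # high in-distribution
print(accuracy(test, spurious_model))   # collapses under shift
print(accuracy(test, aligned_model))    # stable across domains
```

The misaligned model looks better on the training distribution (≈95% vs ≈75%), yet is far worse than chance once the spurious correlation reverses, which is exactly the accuracy drop the paper sets out to analyze.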



