research ∙ 05/28/2022
Contributor-Aware Defenses Against Adversarial Backdoor Attacks
Deep neural networks for image classification are well-known to be vulnerable...
research ∙ 05/28/2021
Rethinking Noisy Label Models: Labeler-Dependent Noise with Adversarial Awareness
Most studies on learning from noisy labels rely on unrealistic models of...
research ∙ 02/11/2021
OpinionRank: Extracting Ground Truth Labels from Unreliable Expert Opinions with Graph-Based Spectral Ranking
As larger and more comprehensive datasets become standard in contemporary...
research ∙ 02/17/2020
Targeted Forgetting and False Memory Formation in Continual Learners through Adversarial Backdoor Attacks
Artificial neural networks are well-known to be susceptible to catastrophic...
research ∙ 02/20/2018