Video data and algorithms have been driving advances in multi-object tracking...
Federated learning is vulnerable to poisoning attacks in which malicious...
Due to its distributed nature, federated learning is vulnerable to poisoning attacks...
Federated learning (FL) is vulnerable to model poisoning attacks, in which...
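The federated-learning poisoning snippets above are truncated; as general background only, the mechanism they describe relies on the server aggregating client updates, typically by averaging, so a crafted update from even a single malicious client can shift the aggregate. Below is a minimal toy sketch in Python, not the attack or defense from any paper listed here; every value is invented for illustration.

    import numpy as np

    def fedavg(updates):
        """Server-side FedAvg: average the clients' model updates."""
        return np.mean(updates, axis=0)

    rng = np.random.default_rng(0)

    # Honest clients send small updates that roughly agree on a direction.
    honest = [rng.normal(loc=1.0, scale=0.1, size=5) for _ in range(9)]

    # A single malicious client sends a large update in the opposite direction
    # (a simple "scale-and-flip" model poisoning strategy).
    malicious = -10.0 * np.ones(5)

    print("aggregate without attacker:", fedavg(honest).round(2))
    print("aggregate with attacker:   ", fedavg(honest + [malicious]).round(2))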
Motion and interaction of social insects (such as ants) have been studied...
Existing model poisoning attacks to federated learning assume that an attacker...
Local Differential Privacy (LDP) protocols enable an untrusted server to...
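The LDP snippet above is cut off; for context, the classic building block behind such protocols is randomized response: each client perturbs its own value locally, and the untrusted server debiases the aggregated reports. The following is a minimal sketch of generalized randomized response for frequency estimation, illustrative only and not the specific protocol studied in the paper; the domain and epsilon below are arbitrary.

    import numpy as np

    def krr_perturb(value, domain, epsilon, rng):
        """Client side: report the true value with prob p, else a uniform other value."""
        k = len(domain)
        p = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
        if rng.random() < p:
            return value
        return rng.choice([v for v in domain if v != value])

    def estimate_frequencies(reports, domain, epsilon):
        """Server side: unbiased frequency estimates from the noisy reports."""
        k, n = len(domain), len(reports)
        p = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
        q = (1 - p) / (k - 1)
        counts = np.array([sum(r == v for r in reports) for v in domain])
        return (counts / n - q) / (p - q)

    rng = np.random.default_rng(0)
    domain = ["A", "B", "C"]
    true_values = rng.choice(domain, size=10000, p=[0.6, 0.3, 0.1])
    reports = [krr_perturb(v, domain, epsilon=1.0, rng=rng) for v in true_values]
    print(dict(zip(domain, estimate_frequencies(reports, domain, epsilon=1.0).round(3))))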
Existing deepfake-detection methods focus on passive detection, i.e., th...
Agriculture is the foundation of human civilization. However, the rapid...
Deepfakes pose growing challenges to the trust of information on the Internet...
Federated learning enables clients to collaboratively learn a shared global model...
Byzantine-robust federated learning aims to enable a service provider to...
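As a generic illustration of what "Byzantine-robust" aggregation means (not the particular rule studied in the paper above), the server can replace the plain average with a robust statistic such as the coordinate-wise median, which a few arbitrarily bad updates cannot move far:

    import numpy as np

    def coordinate_wise_median(updates):
        """A common Byzantine-robust aggregation rule: median in each coordinate."""
        return np.median(np.stack(updates), axis=0)

    rng = np.random.default_rng(1)
    honest = [rng.normal(1.0, 0.1, size=4) for _ in range(9)]
    malicious = [1000.0 * np.ones(4)]  # an arbitrarily bad update

    print("mean  :", np.mean(honest + malicious, axis=0).round(2))        # dragged far away
    print("median:", coordinate_wise_median(honest + malicious).round(2))  # stays near 1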
Data poisoning attacks aim to corrupt a machine learning model via modif...
Top-k predictions are used in many real-world applications such as machine learning...
Graph neural networks (GNNs) have recently gained much attention for node classification...
In a data poisoning attack, an attacker modifies, deletes, and/or inserts...
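This snippet and the earlier one beginning "Data poisoning attacks aim to corrupt..." both concern tampering with training data. A hedged toy example of the simplest variant, label flipping, using scikit-learn (illustrative only; not the attack or defense studied in these papers, and the 30% poisoning rate is an arbitrary choice):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clean_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

    # Poison 30% of the training labels by flipping them (modify-style poisoning).
    rng = np.random.default_rng(0)
    idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
    y_poisoned = y_tr.copy()
    y_poisoned[idx] = 1 - y_poisoned[idx]

    poisoned_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned).score(X_te, y_te)
    print(f"clean accuracy: {clean_acc:.3f}, poisoned accuracy: {poisoned_acc:.3f}")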
Backdoor attacks are a severe security threat to deep neural networks (DNNs)...
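For context on the truncated backdoor snippet: a classic backdoor attack poisons a fraction of the training set with a fixed trigger pattern and a target label, so the trained model behaves normally on clean inputs but predicts the target class whenever the trigger appears. A small sketch on scikit-learn's 8x8 digits follows as a toy stand-in for the DNN setting; the trigger shape, poisoning rate, and target class are arbitrary assumptions, not values from any paper here.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def stamp_trigger(images):
        """Set a bright 2x2 patch in the corner of each 8x8 image (the backdoor trigger)."""
        images = images.copy().reshape(-1, 8, 8)
        images[:, -2:, -2:] = 16.0  # max pixel intensity in this dataset
        return images.reshape(len(images), -1)

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Poison 10% of the training set: add the trigger and relabel to target class 0.
    rng = np.random.default_rng(0)
    idx = rng.choice(len(X_tr), size=len(X_tr) // 10, replace=False)
    X_bd, y_bd = X_tr.copy(), y_tr.copy()
    X_bd[idx] = stamp_trigger(X_tr[idx])
    y_bd[idx] = 0

    model = LogisticRegression(max_iter=5000).fit(X_bd, y_bd)
    print("clean test accuracy:        ", model.score(X_te, y_te))
    print("triggered inputs -> class 0:", np.mean(model.predict(stamp_trigger(X_te)) == 0))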
Community detection plays a key role in understanding graph structure. However,...
It is well-known that classifiers are vulnerable to adversarial perturbations...
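As a worked illustration of adversarial perturbations, consider a generic FGSM-style sign attack on a linear model (not the method from the paper above): for logistic regression, the gradient of the loss with respect to the input x is (p - y) * w, so moving each feature by epsilon in the sign of that gradient increases the loss. The epsilon below is an arbitrary, untuned choice.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # Per-example input gradient of the logistic loss: (p - y) * w.
    w = model.coef_.ravel()
    p = model.predict_proba(X_te)[:, 1]
    grad = np.outer(p - y_te, w)
    X_adv = X_te + 0.5 * np.sign(grad)  # epsilon = 0.5

    print("accuracy on clean inputs:    ", model.score(X_te, y_te))
    print("accuracy on perturbed inputs:", model.score(X_adv, y_te))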
In federated learning, multiple client devices jointly learn a machine learning model...
Local Differential Privacy (LDP) protocols enable an untrusted data collector...
A deep neural network (DNN) classifier represents a model owner's intellectual property...
Deep neural networks (DNNs) have transformed several artificial intelligence...