
-
Sample Complexity of Adversarially Robust Linear Classification on Separated Data
We consider the sample complexity of learning with adversarial robustnes...
-
ShadowNet: A Secure and Efficient System for On-device Model Inference
On-device machine learning (ML) is getting more and more popular as fast...
-
Intertwining Order Preserving Encryption and Differential Privacy
Ciphertexts of an order-preserving encryption (OPE) scheme preserve the ...
-
Detecting Anomalous Inputs to DNN Classifiers By Joint Statistical Testing at the Layers
Detecting anomalous inputs, such as adversarial and out-of-distribution ...
-
Abstract Universal Approximation for Neural Networks
With growing concerns about the safety and robustness of neural networks...
-
Robust Learning against Logical Adversaries
Test-time adversarial attacks have posed serious challenges to the robus...
-
Continuous Release of Data Streams under both Centralized and Local Differential Privacy
In this paper, we study the problem of publishing a stream of real-value...
-
Towards Effective Differential Privacy Communication for Users' Data Sharing Decision and Comprehension
Differential privacy protects an individual's privacy by perturbing data...
-
Obliviousness Makes Poisoning Adversaries Weaker
Poisoning attacks have emerged as a significant security threat to machi...
-
Face-Off: Adversarial Face Obfuscation
Advances in deep learning have made face recognition increasingly feasib...
-
Analyzing Accuracy Loss in Randomized Smoothing Defenses
Recent advances in machine learning (ML) algorithms, especially deep neu...
-
CAUSE: Learning Granger Causality from Event Sequences using Attribution Methods
We study the problem of learning Granger causality between event types f...
-
Query-Efficient Physical Hard-Label Attacks on Deep Learning Visual Classification
We present Survival-OPT, a physical adversarial example algorithm in the...
-
Semantic Robustness of Models of Source Code
Deep neural networks are vulnerable to adversarial examples - small inpu...
-
Generating Semantic Adversarial Examples with Differentiable Rendering
Machine learning (ML) algorithms, especially deep neural networks, have ...
-
On Need for Topology Awareness of Generative Models
Manifold assumption in learning states that: the data lie approximately ...
-
On Need for Topology-Aware Generative Models for Manifold-Based Defenses
ML algorithms or models, especially deep neural networks (DNNs), have sh...
-
MURS: Practical and Robust Privacy Amplification with Multi-Party Differential Privacy
When collecting information, local differential privacy (LDP) alleviates...
-
Data-Dependent Differentially Private Parameter Learning for Directed Graphical Models
Directed graphical models (DGMs) are a class of probabilistic models tha...
-
Adversarially Robust Learning Could Leverage Computational Hardness
Over recent years, devising classification algorithms that are robust to...
-
Enhancing ML Robustness Using Physical-World Constraints
Recent advances in Machine Learning (ML) have demonstrated that neural n...
-
Robust Attribution Regularization
An emerging problem in trustworthy machine learning is to train models t...
-
Attribution-driven Causal Analysis for Detection of Adversarial Examples
Attribution methods have been developed to explain the decision of a mac...
-
Outis: Crypto-Assisted Differential Privacy on Untrusted Servers
Differential privacy has steadily become the de-facto standard for achie...
-
Privacy-Preserving Collaborative Prediction using Random Forests
We study the problem of privacy-preserving machine learning (PPML) for e...
-
Model Extraction and Active Learning
Machine learning is being increasingly used by individuals, research ins...
-
Adversarial Learning and Explainability in Structured Datasets
We theoretically and empirically explore the explainability benefits of ...
-
Explainable Black-Box Attacks Against Model-based Authentication
Establishing unique identities for both humans and end systems has been ...
-
Adversarial Binaries for Authorship Identification
Binary code authorship identification determines authors of a binary pro...
-
Neural-Augmented Static Analysis of Android Communication
We address the problem of discovering communication links between applic...
-
Improving Adversarial Robustness by Data-Specific Discretization
A recent line of research proposed (either implicitly or explicitly) gra...
-
Semantic Adversarial Deep Learning
Fueled by massive amounts of data, models produced by machine-learning (...
-
OEI: Operation Execution Integrity for Embedded Devices
We formulate a new security property, called "Operation Execution Integr...
-
The Manifold Assumption and Defenses Against Adversarial Perturbations
In the adversarial-perturbation problem of neural networks, an adversary...
-
The Unintended Consequences of Overfitting: Training Data Inference Attacks
Machine learning algorithms that are applied to sensitive data pose a di...
-
Analyzing the Robustness of Nearest Neighbors to Adversarial Examples
Motivated by applications such as autonomous vehicles, test-time attacks...
-
ROSA: R Optimizations with Static Analysis
R is a popular language and programming environment for data scientists....
-
Bolt-on Differential Privacy for Scalable Stochastic Gradient Descent-based Analytics
While significant progress has been made separately on analytics systems...
-
Practical Black-Box Attacks against Machine Learning
Machine learning (ML) models, e.g., deep neural networks (DNNs), are vul...
-
The Limitations of Deep Learning in Adversarial Settings
Deep learning takes advantage of large datasets and computationally effi...
-
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
Deep learning algorithms have been shown to perform extremely well on ma...