
- Adversarial Examples in Constrained Domains
  Machine learning algorithms have been shown to be vulnerable to adversar...
- MLSNet: A Policy Complying Multilevel Security Framework for Software Defined Networking
  Ensuring that information flowing through a network is secure from manip...
- IoTRepair: Systematically Addressing Device Faults in Commodity IoT (Extended Paper)
  IoT devices are decentralized and deployed in unstable environments, wh...
- Real-time Analysis of Privacy-(un)aware IoT Applications
  Users trust IoT apps to control and automate their smart devices. These ...
- Multi-User Multi-Device-Aware Access Control System for Smart Home
  In a smart home system, multiple users have access to multiple devices, ...
- How Relevant is the Turing Test in the Age of Sophisbots?
  Popular culture has contemplated societies of thinking machines for gene...
- IoTSan: Fortifying the Safety of IoT Systems
  Today's IoT systems include event-driven smart applications (apps) that ...
- Program Analysis of Commodity IoT Applications for Security and Privacy: Challenges and Opportunities
  Recent advances in Internet of Things (IoT) have enabled myriad domains ...
- Regulating Access to System Sensors in Cooperating Programs
  Modern operating systems such as Android, iOS, Windows Phone, and Chrome...
- Soteria: Automated IoT Safety and Security Analysis
  Broadly defined as the Internet of Things (IoT), the growth of commodity...
- Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning
  Deep neural networks (DNNs) enable innovative applications of machine le...
- Sensitive Information Tracking in Commodity IoT
  Broadly defined as the Internet of Things (IoT), the growth of commodity...
- Ensemble Adversarial Training: Attacks and Defenses
  Machine learning models are vulnerable to adversarial examples, inputs m...
- Extending Defensive Distillation
  Machine learning is vulnerable to adversarial examples: inputs carefully...
- The Space of Transferable Adversarial Examples
  Adversarial examples are maliciously perturbed inputs designed to mislea...
- On the (Statistical) Detection of Adversarial Examples
  Machine Learning (ML) models are applied in a variety of tasks such as n...
- Adversarial Perturbations Against Deep Neural Networks for Malware Classification
  Deep neural networks, like many other machine learning models, have rece...
- Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples
  Many machine learning models are vulnerable to adversarial examples: inp...
- Crafting Adversarial Input Sequences for Recurrent Neural Networks
  Machine learning models are frequently used to solve complex security pr...
- Extending Detection with Forensic Information
  For over a quarter century, security-relevant detection has been driven ...
- Practical Black-Box Attacks against Machine Learning
  Machine learning (ML) models, e.g., deep neural networks (DNNs), are vul...
- The Limitations of Deep Learning in Adversarial Settings
  Deep learning takes advantage of large datasets and computationally effi...
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
  Deep learning algorithms have been shown to perform extremely well on ma...