Deep Learning and Open Set Malware Classification: A Survey

04/08/2020 ∙ by Jingyun Jia, et al. ∙ Florida Institute of Technology

As the Internet has grown rapidly in recent years, malicious software, commonly referred to as malware, has become one of the most serious threats to Internet users. The dramatic increase in malware has motivated a research area that uses cutting-edge machine learning techniques not only to classify malware into known families but also to recognize unknown samples, a task related to the Open Set Recognition (OSR) problem in machine learning. Recent machine learning work has shed light on OSR from different perspectives. In the absence of unknown training samples, an OSR system should not only correctly classify the known classes, but also recognize the unknown class. This survey provides an overview of different deep learning techniques, a discussion of OSR and graph representation solutions, and an introduction to malware classification systems.




1. Introduction

Malware, software that "deliberately fulfills the harmful intent of an attacker" (Bayer et al., 2006), nowadays comes in a wide range of variations and families, and has become one of the major security threats on the Internet today. Rather than relying on traditional defenses, which typically use signature-based methods, an active research area applies machine learning-based techniques to the problem on two fronts:

  1. Classify known malware into their families, which is a standard multi-class classification problem;

  2. Recognize unknown malware, i.e., malware that is not present in the training set but appears in the test set.

One solution for malware classification is to convert each program into its function call graph (FCG), then classify samples into families according to a representation of the FCG, as in (Hassen and Chan, 2017). Graphs are an important data structure in machine learning tasks, and the challenge is to find a good way to represent them. Traditionally, feature extraction relied on user-defined heuristics; recent research has focused on using deep learning to automatically encode graph structure into low-dimensional embeddings.

Another problem is that, given the fast-growing diversity of malware families, it is unlikely that all classes can be labeled in the training samples, which makes the second task more important every day. Open set recognition (OSR) addresses exactly this setting. Traditional classification techniques focus on problems with labeled classes, while OSR pays attention to unknown classes: it requires the classifier to accurately classify known classes while also identifying unknown classes. Deep learning-based OSR solutions have become a flourishing research area in recent years.

In this survey, we will first review basic deep learning techniques. Then a brief categorization of current OSR techniques will be given in section 3. In section 4, we will cover methods for learning graph representations, followed by an introduction to state-of-the-art malware classification techniques in section 5. Finally, section 6 will conclude.

2. Deep learning basics

Conventional machine-learning techniques were limited in their ability to process natural data in raw form and hence required expertise in feature engineering (LeCun et al., 2015). Deep learning was introduced to discover intricate structure in high-dimensional data while requiring little engineering by hand.

In the following subsections, we will give an overview of different network architectures in different areas in recent years.


Figure 1. A CNN sequence to classify handwritten digits (Saha, (2018))

2.1. Convolutional neural networks

As a popular architecture of deep neural networks, Convolutional Neural Networks (CNNs) have achieved good performance in the computer vision area. As shown in Figure 1, a typical CNN usually consists of an input layer, convolutional layers, pooling layers and an output layer. The convolutional and pooling layers can be repeated several times. In most cases, the ReLU function is used as the activation in the convolutional layers and max-pooling is used in the pooling layers. During the learning process, filters are learned and feature maps are generated, which is the output of representation learning. The output is usually followed by a fully connected network for classification problems. The architectures differ in various aspects: layers and connections ((Cao et al., 2015) (He et al., 2016)), loss functions ((Wen et al., 2016) (He et al., 2018) (Deng et al., 2019)), etc.

2.1.1. Feedback Network

Cao et al. (2015) proposed the Feedback Network to develop a computational feedback mechanism which can help better visualize and understand how deep neural networks work, and capture visual attention on expected objects, even in images with cluttered backgrounds and multiple objects.

The Feedback Network introduces a feedback layer, which contains another set of binary neuron activation variables. The feedback layer is stacked upon each ReLU layer, and together they compose a hybrid control unit that activates neuron responses in both bottom-up and top-down manners: bottom-up inherits the selectivity from the ReLU layers, so the dominant features are passed to upper layers; top-down is controlled by the feedback layers, which propagate high-level semantics and global information back to the image representations. Only those gates related to particular target neurons are activated.

2.1.2. Center loss

Wen et al. proposed center loss as a new supervision signal (objective function) for face recognition tasks in (Wen et al., 2016). To separate the features of different classes and achieve better performance in classification tasks, center loss tries to minimize the intra-class variation. Let $c_{y_i}$ denote the center of the embeddings of the $y_i$-th class; the loss function is:

$$L_C = \frac{1}{2} \sum_{i=1}^{m} \| x_i - c_{y_i} \|_2^2$$

where $x_i$ is the deep feature of the $i$-th sample and $m$ is the mini-batch size.
To make the computation more efficient, center loss uses a mini-batch updating method: the centers are updated by the feature means of the corresponding classes after each iteration. The paper showed that under the joint supervision of softmax loss and center loss, a CNN can obtain inter-class dispersion and intra-class compactness as much as possible.
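To make the loss concrete, here is a minimal pure-Python sketch of the center-loss computation and the mini-batch center update (the function names and the update rate `alpha` are our own illustration, not from the paper):

```python
def center_loss(embeddings, labels, centers):
    """Center loss: half the sum of squared distances between each
    embedding and the center of its class (Wen et al., 2016)."""
    total = 0.0
    for x, y in zip(embeddings, labels):
        c = centers[y]
        total += sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return 0.5 * total

def update_centers(embeddings, labels, centers, alpha=0.5):
    """Mini-batch center update: move each class center toward the
    mean of that class's features in the batch (simplified sketch)."""
    new_centers = {}
    for cls, c in centers.items():
        members = [x for x, y in zip(embeddings, labels) if y == cls]
        if not members:
            new_centers[cls] = c          # class absent from this batch
            continue
        mean = [sum(col) / len(members) for col in zip(*members)]
        new_centers[cls] = [ci + alpha * (mi - ci) for ci, mi in zip(c, mean)]
    return new_centers
```

Embeddings that sit exactly on their class centers incur zero loss; in joint training, this term is added to the softmax loss with a balancing weight.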

2.1.3. Triplet-center loss

Inspired by triplet loss and center loss, He et al. introduced triplet-center loss to further enhance the discriminative power of learned features for 3D object retrieval in (He et al., 2018). Triplet loss intends to find an embedding space where the distances between instances of different classes are greater than those from the same class. Center loss tries to find an embedding space where the deep features from the same class are more compact and closer to the corresponding center. Triplet-center loss combines the two: instead of comparing the distances between pairs of instances as in triplet loss, it computes the distances between each instance and the class centers:

$$L_{TC} = \sum_{i=1}^{M} \max\left( D(f_i, c_{y_i}) + m - \min_{j \neq y_i} D(f_i, c_j),\; 0 \right)$$

where $D(\cdot,\cdot)$ is a distance function and $m$ is a margin value. By setting up a margin, the loss function ensures that different classes are pushed at least $m$ apart.
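The hinge form of the loss can be sketched in a few lines of plain Python (the Euclidean distance helper and function names are our own; the paper's training additionally updates the centers by gradient descent):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_center_loss(features, labels, centers, margin=1.0, dist=euclidean):
    """Triplet-center loss sketch: each feature should be at least
    `margin` closer to its own class center than to the nearest
    center of any other class."""
    total = 0.0
    for f, y in zip(features, labels):
        d_pos = dist(f, centers[y])                              # own center
        d_neg = min(dist(f, c) for cls, c in centers.items() if cls != y)
        total += max(d_pos + margin - d_neg, 0.0)                # hinge on gap
    return total
```

When the nearest foreign center is already more than `margin` farther away than the feature's own center, the sample contributes nothing to the loss.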

2.1.4. Arcface

In (Deng et al., 2019), Deng et al. proposed an additive angular margin loss (ArcFace) to obtain highly discriminative features for face recognition. It builds on the classic softmax loss:

$$L = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{e^{W_{y_i}^T x_i + b_{y_i}}}{\sum_{j=1}^{n} e^{W_j^T x_i + b_j}}$$

where $x_i$ denotes the embedding of the $i$-th sample. After normalizing $W_j$ and $x_i$ (so that $W_j^T x_i = \cos\theta_j$), ArcFace adds an additive angular margin penalty $m$ between $x_i$ and $W_{y_i}$ to simultaneously enhance the intra-class compactness and inter-class discrepancy:

$$L = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)} + \sum_{j \neq y_i} e^{s\cos\theta_j}}$$
The paper shows that ArcFace has a better geometric attribute as the angular margin has the exact correspondence to the geodesic distance.
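The effect of the angular margin on the target logit can be sketched in plain Python (this illustrates only the logit modification, not a full training loss; `s` and `m` follow the paper's scale and margin notation):

```python
import math

def arcface_logits(cosines, label, s=64.0, m=0.5):
    """Apply the additive angular margin: replace cos(theta_y) with
    cos(theta_y + m) for the ground-truth class, then scale all
    logits by s."""
    out = []
    for j, cos_j in enumerate(cosines):
        if j == label:
            theta = math.acos(max(-1.0, min(1.0, cos_j)))
            out.append(s * math.cos(theta + m))     # penalized target logit
        else:
            out.append(s * cos_j)
    return out

def softmax(logits):
    mx = max(logits)
    exps = [math.exp(v - mx) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

Because cos(θ + m) < cos(θ) for θ in (0, π − m), the margin lowers the target-class probability during training, which forces embeddings of the same class into a tighter angular region.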

2.1.5. ResNets

The training of deeper neural networks faces a degradation problem: as network depth increases, accuracy gets saturated and then degrades rapidly. The degradation problem indicates that not all systems are similarly easy to optimize. Under the hypothesis that it is easier to optimize the residual mapping than the original unreferenced mapping, He et al. (2016) presented a residual learning framework, ResNets, to ease the training of the network. ResNets consist of residual blocks as in Figure 2: instead of hoping that every few stacked layers directly fit a desired underlying mapping, ResNets explicitly let these layers fit a residual mapping.


Figure 2. A building block of resNet (He et al., (2016))

Specifically, instead of fitting the desired underlying mapping $H(x)$ directly, ResNet makes the stacked nonlinear layers fit another mapping $F(x) := H(x) - x$. The original mapping is then recast into $F(x) + x$. In the extreme case, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping with a stack of nonlinear layers.
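The recast mapping is easy to express directly; here is a toy sketch with a vector input, where the layer function `f` stands in for the stacked nonlinear layers:

```python
def relu(v):
    return [max(0.0, x) for x in v]

def residual_block(x, f):
    """Residual block: the layers compute F(x), the identity shortcut
    adds x, and the block outputs relu(F(x) + x)."""
    fx = f(x)                                       # residual mapping F(x)
    return relu([a + b for a, b in zip(fx, x)])     # F(x) + x

# If the optimal mapping is the identity, the layers only have to
# drive F(x) toward zero instead of learning the identity from scratch:
out = residual_block([1.0, 2.0], lambda v: [0.0, 0.0])
```

With a zero residual, non-negative inputs pass through the block unchanged, which is exactly the easy-to-learn identity case motivating the design.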


Figure 3. A standard RNN contains a single layer (Olah, (2015))

2.2. Recurrent neural networks

Another popular type of deep neural network architecture is Recurrent Neural Networks (RNNs), which handle sequential inputs such as speech and language. RNNs process an input sequence one element at a time, while maintaining a state vector that implicitly contains all the historical information. An unfolded RNN (Figure 3) can be considered a deep multi-layer network. Just like CNNs, there are multiple variants of RNNs. In particular, they have been widely used in machine translation tasks ((Cho et al., 2014) (Bahdanau et al., 2015) (Luong et al., 2015)).


Figure 4. An LSTM contains four interacting layers (Olah, (2015))

2.2.1. LSTM

As standard RNNs cannot store information for very long, "Long Short-Term Memory" (LSTM) was proposed to solve the problem. Chen (2016) gave a gentle tutorial on the basics of backpropagation in recurrent neural networks (RNN) and long short-term memory (LSTM). An LSTM (Figure 4) includes four gates: the input modulation gate, input gate, forget gate (Gers et al. (1999)) and output gate, along with their corresponding weights. An LSTM also contains a special unit called the memory cell, which acts like an accumulator or a gated leaky neuron. There are also other RNNs augmented with a memory module, such as the "Neural Turing Machine" and "memory networks". These models are used for tasks that require reasoning and symbol manipulation.


Figure 5. An illustration of the RNN Encoder-Decoder (Cho et al., (2014))

2.2.2. RNN Encoder-Decoder

Cho et al. (2014) proposed a neural network architecture called the RNN Encoder-Decoder (Figure 5), which can be used to generate a target sequence as additional features in a statistical machine translation (SMT) system, and can also be used to score a given pair of input and output sequences. The architecture learns to encode a variable-length sequence into a fixed-length vector representation and to decode a given fixed-length vector representation back into a variable-length sequence. The encoder is an RNN that reads each symbol of an input sequence x sequentially. The decoder is another RNN that is trained to generate the output sequence by predicting the next symbol given the hidden state.

In addition to a novel model architecture, the paper also proposed a new gated hidden unit inspired by the LSTM, which includes an update gate and a reset gate. The update gate selects whether the hidden state is to be updated with a new hidden state, while the reset gate decides whether the previous hidden state is ignored.
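The two gates can be sketched for a one-dimensional state (the weight names in `w` are our own illustration; which branch the update gate assigns to the old state vs. the candidate is a sign convention that differs between papers):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_step(h_prev, x, w):
    """One step of the gated hidden unit: the reset gate r controls
    how much of the previous state feeds the candidate h_tilde, and
    the update gate z blends the previous state with the candidate."""
    z = sigmoid(w["wz"] * x + w["uz"] * h_prev)      # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h_prev)      # reset gate
    h_tilde = math.tanh(w["wh"] * x + w["uh"] * (r * h_prev))
    return (1.0 - z) * h_prev + z * h_tilde
```

With r near 0, the candidate ignores the past entirely; with z near 0, the unit simply copies its previous state forward, which is what lets gradients survive long sequences.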


Figure 6. An illustration of the RNNsearch (Bahdanau et al., (2015))

2.2.3. RNNsearch

In (Bahdanau et al., 2015), Bahdanau et al. proposed a new architecture for machine translation by adding an alignment model to the basic RNN Encoder-Decoder. Like a traditional machine translation model, the proposed architecture consists of an encoder and a decoder. The encoder reads the input sentence and converts it into a vector, and the decoder emulates searching through a source sentence while decoding a translation. As shown in Figure 6, the alignment model learns a weight for each annotation, scoring how well the inputs around position j and the output at position i match. The score is based on the RNN hidden state and the j-th annotation of the input sentence.

2.2.4. Attentional mechanism

In (Luong et al., 2015), Luong et al. examined two classes of attentional mechanisms to improve neural machine translation (NMT): a global approach which always attends to all source words, and a local one that only looks at a subset of source words at a time. Based on LSTMs, they introduced a variable-length alignment vector for the two kinds of attentional mechanisms. The global attention model is based on the global context, and the size of the alignment vector equals the number of time steps on the source side; the local attention model is based on a window context, where the size of the alignment vector equals the window size.


Figure 7. Overview of the framework of GANs (Silva, (2018))

2.3. Generative adversarial networks

Deep learning has achieved great performance in supervised learning with discriminative models. However, deep generative models have had less of an impact because:

  • It is difficult to approximate the computations in maximum likelihood estimation

  • It is difficult to leverage the benefits of piecewise linear units in the generative context

Goodfellow et al. (2014) proposed a new generative model, generative adversarial nets (GANs), to avoid these difficulties. The proposed GANs architecture includes two components: a generator G and a discriminator D. To learn the generator's distribution p_g over data x, GANs define a prior p_z(z) on input noise variables. The framework corresponds to a "min-max two-player game" (discriminator vs. generator) with value function V(D, G):

$$\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

The generator G maps noise samples from the prior distribution to data space, and the discriminator D(x) represents the probability that x comes from the target dataset rather than from the generator. Hence the target is to train the discriminator D to maximize the probability of assigning the correct label to both training examples and samples from G, and meanwhile to train the generator G to minimize log(1 - D(G(z))), i.e., to generate samples resembling real examples so as to "fool" the discriminator. In practice, the procedure alternates k optimization steps of D with one step of G.

2.3.1. DCGANs

Radford et al. (2016) proposed deep convolutional generative adversarial networks (DCGANs) to bridge the gap between supervised and unsupervised learning in CNNs, which makes GAN training more stable. The architecture guidelines for stable deep convolutional GANs:

  • Replace any pooling layers with strided convolutions (discriminator) and fractional-strided convolutions (generator).

  • Use batch norm in both the generator and the discriminator.

  • Remove fully connected hidden layers for deeper architectures.

  • Use ReLU activation in the generator for all layers except for the output, which uses Tanh.

  • Use LeakyReLU activation in the discriminator for all layers.

2.3.2. AAE

Makhzani et al. (2015)

proposed a new inference algorithm Adversarial Autoencoder (AAE), which uses the GANs framework which could better deal with applications such as semi-supervised classification, disentangling style and content of images, unsupervised clustering, dimensionality reduction, and data visualization. The algorithm aims to find a representation for graphs that follows a certain type of distribution. And it consists of two phases: the reconstruction phase and the regularization phase. In the reconstruction phase, encoder and decoder are updated to minimize reconstruction error. In the regularization phase, the discriminator is updated to distinguish true prior samples from generated samples, and the generator is updated to fool the discriminator. Reconstruction phase and regularization phase are referred to as the generator and discriminator in GANs. And the method could be used in semi-supervised learning and unsupervised clustering. For semi-supervised learning, there is a semi-supervised classification phase besides the reconstruction phase and regularization phase. And labeled data would be trained at this stage. which is an aggregated categorical distribution. The architecture of unsupervised clustering is similar to semi-supervised learning, the difference is that the semi-supervised classification stage is removed and thus no longer train the network on any labeled mini-batch.

2.4. Representation learning

Representation learning allows a machine to be fed with raw data and to automatically discover the representations (embeddings) needed for detection or classification (LeCun et al., 2015). Those raw data could be images, videos, texts, etc. An image comes in the form of an array of pixel values and texts come in the form of word sequences. Motivated by different objectives, a set of representative features would be generated through deep neural networks.

2.4.1. Skip-gram

The Skip-gram model has achieved good performance in learning high-quality vector representations of words from large amounts of unstructured text data, and it does not require dense matrix multiplications. The training objective of the Skip-gram model is to find word representations that are useful for predicting the surrounding words in a sentence or a document. Given a sequence of training words $w_1, w_2, \ldots, w_T$, the objective of the Skip-gram model is to maximize the average log probability:

$$\frac{1}{T} \sum_{t=1}^{T} \sum_{-c \le j \le c,\, j \neq 0} \log p(w_{t+j} \mid w_t)$$
where $c$ is the size of the training context. The basic Skip-gram formulation defines $p(w_{t+j} \mid w_t)$ using the softmax function:

$$p(w_O \mid w_I) = \frac{\exp({v'_{w_O}}^T v_{w_I})}{\sum_{w=1}^{W} \exp({v'_w}^T v_{w_I})}$$
where $v_w$ and $v'_w$ are the "input" and "output" vector representations of $w$, and $W$ is the number of words in the vocabulary. Based on the Skip-gram algorithm, Mikolov et al. (2013) present some extensions to improve its performance: hierarchical softmax, negative sampling and subsampling. They show that word vectors can be meaningfully combined using just simple vector addition. Specifically, hierarchical softmax uses a binary tree to represent the output layer, rather than a flat output over all the words, reducing the output dimension and making the computation more efficient. An alternative to hierarchical softmax is negative sampling, inspired by Noise Contrastive Estimation (NCE). The basic idea is to sample one "accurate" data point and $k$ noise data points; the objective is to maximize their conditional log-likelihood:

$$\log \sigma({v'_{w_O}}^T v_{w_I}) + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)} \left[ \log \sigma(-{v'_{w_i}}^T v_{w_I}) \right]$$
This objective replaces every $\log p(w_O \mid w_I)$ term in the Skip-gram objective, and the task becomes to distinguish the target word $w_O$ from draws from the noise distribution $P_n(w)$ using logistic regression, where there are $k$ negative samples for each data sample. The paper also suggested a simple subsampling approach to address the imbalance between rare and frequent words: each word $w_i$ in the training set is discarded with probability computed by the formula

$$P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}}$$

where $f(w_i)$ is the frequency of word $w_i$ and $t$ is a chosen threshold.
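The subsampling rule is a one-liner; here is a sketch using the threshold value t = 1e-5 suggested in the paper:

```python
import math

def discard_prob(freq, t=1e-5):
    """Probability of discarding a word with relative frequency
    `freq`: P(w) = 1 - sqrt(t / f(w)). Words with f(w) <= t are
    never discarded (the formula would go negative, so clamp to 0)."""
    return max(0.0, 1.0 - math.sqrt(t / freq))
```

A very frequent word (say f(w) = 0.1, like "the") is discarded 99% of the time, while rare words are always kept, which rebalances the training signal toward informative words.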

2.4.2. GloVe

To better deal with word representation tasks such as word analogy, word similarity, and named entity recognition, Pennington et al. (2014) constructed a new model, GloVe (for Global Vectors), which captures global corpus statistics. GloVe combines count-based methods and prediction-based methods for the unsupervised learning of word representations, proposing the cost function

$$J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^T \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2$$

where $V$ is the size of the vocabulary, $X_{ij}$ counts the co-occurrences of words $i$ and $j$, and $f$ is a weighting function. $w$ are word vectors and $\tilde{w}$ are separate context word vectors; training multiple instances of the network and then combining the results can help reduce overfitting and noise and generally improve results. $b_i$ and $\tilde{b}_j$ are the corresponding biases for $w_i$ and $\tilde{w}_j$.

2.5. Meta-learning and interpretability


Figure 8. Diagram of the MAML (Finn et al., (2017))

2.5.1. Meta-Learning

Deep neural networks generally perform poorly on few-shot learning tasks, as a classifier has to generalize quickly after seeing very few examples from each class. Ravi and Larochelle (2016) proposed an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network classifier in the few-shot regime. The meta-learner captures both short-term knowledge within a task and long-term knowledge common among all the tasks. Finn et al. (2017) proposed an algorithm called model-agnostic meta-learning (MAML), which is compatible with any model trained with gradient descent and with different learning problems such as classification, regression and reinforcement learning. The goal of the meta-learning is to prepare the model for fast adaptation. In general, it consists of two steps:

  1. sample a batch of tasks, learn the gradient update for each of them, then combine their results;

  2. take the result of step 1 as a starting point when learning a specific task.

As diagrammed in Figure 8, MAML optimizes for a representation that can quickly adapt to new tasks.
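The two steps above can be sketched on toy 1-D quadratic tasks L_i(θ) = (θ − c_i)², where both the inner and the outer gradients are analytic (the task family, step sizes, and function names are our own illustration, not from the paper):

```python
def maml_outer_step(theta, task_centers, alpha=0.1, beta=0.1):
    """One MAML meta-update. Inner loop: adapt theta to each task
    with a single gradient step. Outer loop: update theta with the
    averaged gradient of the post-adaptation losses, differentiated
    through the inner step (chain-rule factor 1 - 2*alpha here)."""
    grad = lambda x, c: 2.0 * (x - c)       # dL/dx for L = (x - c)^2
    meta_grad = 0.0
    for c in task_centers:
        theta_i = theta - alpha * grad(theta, c)        # step 1: per-task adaptation
        meta_grad += (1.0 - 2.0 * alpha) * grad(theta_i, c)
    meta_grad /= len(task_centers)
    return theta - beta * meta_grad                     # step 2: combined update
```

For tasks whose optima are symmetric around the current parameter, the meta-gradient cancels and θ stays put, which is exactly the "equally fast to adapt everywhere" solution MAML seeks.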

2.5.2. LIME

As machine learning techniques develop rapidly, many models remain mostly black boxes. To make predictions more interpretable to non-experts regardless of the model used (model-agnostic), so that people can make better decisions, Ribeiro et al. (2016) proposed Local Interpretable Model-agnostic Explanations (LIME), which identifies an interpretable model over an interpretable representation that is locally faithful to the classifier. It introduced sparse linear explanations, weighting each perturbed instance by its similarity (distance) to the instance being explained. The paper also suggested a submodular pick algorithm (SP-LIME) to better select representative instances, picking the most important features based on the explanation matrix learned by LIME.

2.5.3. Large-Scale evolution

To minimize human participation in neural network design, Real et al. (Real et al., 2017) employed evolutionary algorithms to discover network architectures automatically. The method uses tournament selection: the better of a randomly sampled pair of individuals is chosen to be a parent, using pairwise comparisons instead of operations over the whole population.

In the proposed method, individual architectures are encoded as a graph. In the graph, the vertices represent rank-3 tensors or activations. The graph's edges represent identity connections or convolutions and contain the mutable numerical parameters defining the convolution's properties. A child is similar but not identical to its parent because of the action of a mutation. Mutation operations include "ALTER-LEARNING-RATE", "RESET-WEIGHTS", "INSERT-CONVOLUTION", "REMOVE-CONVOLUTION", etc.

3. Open Set Recognition

Although deep learning has achieved great success in object classification, it is unlikely that all classes can be labeled in the training samples, and the unlabeled classes, the so-called open set, become a problem. Different from the traditional closed-set problem, which only requires correctly classifying the labeled data, open set recognition (OSR) should also handle the unlabeled classes. Geng et al. stated four categories of recognition problems as follows (Geng et al., 2018):

  1. known known classes: labeled distinctive positive classes, available in training samples;

  2. known unknown classes: labeled negative classes, available in training samples;

  3. unknown known classes: training samples not available, but some side-information such as semantic/attribute information is available;

  4. unknown unknown classes: neither training samples nor side-information available, completely unseen.

Traditional classification techniques focus on problems with labeled classes, which include known known classes and known unknown classes, while open set recognition (OSR) pays attention to the latter ones: unknown known classes and unknown unknown classes. It requires the classifier to accurately classify known known classes while also identifying unknown unknown classes.

In general, the techniques can be categorized into three classes according to the composition of the training set, as shown in Table 1.

Training Set | Papers
Borrowing Additional Data | (Shu et al., 2018a) (Saito et al., 2018) (Shu et al., 2018b) (Hendrycks et al., 2019) (Dhamija et al., 2018) (Perera and Patel, 2019)
Generating Additional Data | (Jo et al., 2018) (Neal et al., 2018) (Ge et al., 2017) (Yu et al., 2017) (Lee et al., 2018)
No Additional Data | (Bendale and Boult, 2016) (Hassen and Chan, 2018a) (Júnior et al., 2017) (Mao et al., 2018) (Wang et al., 2018) (Schultheiss et al., 2017) (Zhang and Patel, 2016) (Liang et al., 2018) (Shu et al., 2017)
Table 1. OSR Techniques Categorization

3.1. Borrowing additional data

To better discriminate known classes from unknown classes, some techniques introduce unlabeled data in training ((Shu et al., 2018a) (Saito et al., 2018)). In addition, (Shu et al., 2018b) requires several manual annotations for unknown classes in its workflow.

3.1.1. Open set domain adaptation by backpropagation

Saito et al. (2018) proposed a method which marks unlabeled target samples as unknown, then mixes them with labeled source samples to train a feature generator and a classifier. The classifier attempts to draw a boundary between source and target samples, whereas the generator attempts to push target samples far from the boundary. The idea is to extract features that separate known and unknown samples. Based on the feature generator, a test sample is either aligned to a known class or rejected as unknown.

3.1.2. Unseen class discovery in open-world classification


Figure 9. Overall framework of OCN+PCN+HC (Shu et al., 2018)

Shu et al. (2018a) introduced a framework to solve the open set problem which involves unlabeled data, using an autoencoder network to avoid overfitting. Besides the autoencoder, it contains another two networks in the training process: an Open Classification Network (OCN) and a Pairwise Classification Network (PCN). Only the OCN participates in the testing phase, making predictions on a test set that includes unlabeled examples from both seen and unseen classes. A clustering phase follows: based on the predictions of the PCN, hierarchical (bottom-up, merge) clustering is used to cluster the rejected examples. The overall framework is shown in Figure 9.
3.1.3. ODN

Manually labeled unknown data is used in the Open Deep Network (ODN) proposed by Shu et al. (2018b), which needs only a few manual annotations. Specifically, ODN adds a new column corresponding to the unknown category to the weight matrix. The new column is initialized from the weight columns of the known categories; in addition, since similar categories should play a more critical role in this initialization, ODN adds another term that emphasizes the known weight columns with the highest activation values. The resulting column is concatenated to the transfer weight matrix to support the new category. This initialization method is called Emphasis Initialization.

ODN also introduces multi-class triplet thresholds to identify new categories: an accept threshold, a reject threshold and a distance-threshold. Specifically, a sample is accepted as a labeled class if and only if its top confidence value is greater than the accept threshold. A sample is considered unknown if all of its confidence values are below the reject threshold. Samples between the accept threshold and the reject threshold are also accepted as a labeled class if the distance between the top and second-highest confidence values is larger than the distance-threshold.
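The triplet-threshold rule can be sketched directly from this description (the function name and the threshold values in the usage below are our own hypothetical choices):

```python
def odn_decide(confidences, accept_t, reject_t, distance_t):
    """Triplet-threshold decision rule as described for ODN (Shu et
    al., 2018b): accept the top class outright above accept_t,
    reject as unknown when every confidence is below reject_t, and
    in between accept only if the top-2 gap exceeds distance_t."""
    ranked = sorted(range(len(confidences)),
                    key=lambda i: confidences[i], reverse=True)
    top, second = ranked[0], ranked[1]
    if confidences[top] > accept_t:
        return top                                    # confident known class
    if all(c < reject_t for c in confidences):
        return None                                   # unknown category
    if confidences[top] - confidences[second] > distance_t:
        return top                                    # accepted by margin
    return None                                       # ambiguous: treat as unknown
```

Returning `None` marks the sample as a candidate new category; a borderline sample with no clear winner among known classes falls through to the same outcome.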

3.1.4. Outlier Exposure

Hendrycks et al. (2019) proposed Outlier Exposure (OE) to distinguish between anomalous and in-distribution examples. OE borrows data from other datasets as "out-of-distribution" (OOD) samples, denoted $D_{out}^{OE}$, while target samples are "in-distribution", denoted $D_{in}$. The model is then trained to discover signals and learn heuristics to detect which dataset a query is sampled from. Given a model $f$ and the original learning objective $L$, the objective function of OE is:

$$\mathbb{E}_{(x,y) \sim D_{in}} \left[ L(f(x), y) + \lambda\, \mathbb{E}_{x' \sim D_{out}^{OE}} \left[ L_{OE}(f(x'), f(x), y) \right] \right]$$

where $D_{out}^{OE}$ is the outlier exposure dataset. The equation indicates that the model tries to minimize the objective for data from both the in-distribution set ($D_{in}$) and the out-of-distribution set ($D_{out}^{OE}$). With the maximum softmax probability baseline detector, $L_{OE}$ is the cross-entropy from the model's posterior to the uniform distribution; when labels are not available, $L_{OE}$ is set to a margin ranking loss on the log probabilities of $f(x')$ and $f(x)$. However, the performance of this method depends on the chosen OOD dataset.

3.1.5. Objectosphere Loss

Dhamija et al. (2018) proposed the Entropic Open-Set and Objectosphere losses for open set recognition, which train networks using negative samples from some classes. The method reduces the deep feature magnitude and maximizes the entropy of the softmax scores of unknown samples in order to separate them from known samples. The idea of the Entropic Open-Set loss is to maximize entropy when an input is unknown. Formally, let $S_c(x)$ denote the softmax score of sample $x$ for known class $c$ and let $C$ be the number of known classes; the Entropic Open-Set loss can be defined as:

$$J_E(x) = \begin{cases} -\log S_c(x) & \text{if } x \text{ is from known class } c \\ -\frac{1}{C} \sum_{c=1}^{C} \log S_c(x) & \text{if } x \in D_u \end{cases}$$
where $D_u$ denotes the out-of-distribution (unknown) samples. To further separate known and unknown samples, the paper pushes known samples into the "objectosphere", where they have large feature magnitude and low entropy, using the Objectosphere loss:

$$J_R(x) = J_E(x) + \lambda \begin{cases} \max(\xi - \|F(x)\|, 0)^2 & \text{if } x \text{ is known} \\ \|F(x)\|^2 & \text{if } x \in D_u \end{cases}$$

where $F(x)$ is the deep feature vector and $\xi$ a target margin. The Objectosphere loss penalizes known classes if their feature magnitude is inside the boundary $\xi$ of the objectosphere, and unknown classes if their magnitude is greater than zero.
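Both losses can be sketched per-sample in plain Python (a sketch under our own conventions: unknown samples are marked with `label=None`, and λ and ξ play the weighting and margin roles described above):

```python
import math

def entropic_open_set_loss(softmax_scores, label):
    """Entropic Open-Set loss: cross-entropy for knowns; for
    unknowns (label=None), the average negative log score, which is
    minimized by a uniform (maximum-entropy) softmax output."""
    if label is not None:
        return -math.log(softmax_scores[label])
    c = len(softmax_scores)
    return -sum(math.log(s) for s in softmax_scores) / c

def objectosphere_loss(softmax_scores, label, feat_norm, xi=1.0, lam=0.1):
    """Objectosphere loss: adds a feature-magnitude penalty pushing
    knowns to magnitude at least xi and unknowns toward zero."""
    je = entropic_open_set_loss(softmax_scores, label)
    if label is not None:
        penalty = max(xi - feat_norm, 0.0) ** 2    # inside the objectosphere
    else:
        penalty = feat_norm ** 2                   # unknowns should vanish
    return je + lam * penalty
```

A known sample with a confident correct score and feature magnitude above ξ incurs zero loss, while an unknown sample is driven toward a uniform softmax and a zero-magnitude feature.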

3.1.6. DOC

Perera and Patel (2019) proposed a deep learning-based solution for one-class classification (DOC) feature extraction. The objective of one-class classification is to recognize normal and abnormal classes using only samples from the normal class. The proposed method accepts two inputs, one from the target dataset and one from the reference dataset, and produces two losses through a pre-trained reference network and a secondary network. The reference dataset is the dataset used to train the reference network, and the target dataset contains samples of the class for which one-class learning is used. During training, two image batches, one from the reference dataset and one from the target dataset, are simultaneously fed into the input layers of the reference network and the secondary network. At the end of the forward pass, the reference network generates a descriptiveness loss ($l_D$), which is the same as the cross-entropy loss, and the secondary network generates a compactness loss ($l_C$). The composite loss ($L$) of the network is defined as:

$$L(r, t) = l_D(r \mid W) + \lambda\, l_C(t \mid W)$$

where $r$ and $t$ are the training data from the reference dataset and the target dataset respectively, $W$ is the shared weights of both networks, and $\lambda$ is a constant.

The overview of the proposed method is shown in Figure 10, with feature extraction sub-networks followed by classification sub-networks. The compactness loss assesses the compactness of the class under consideration in the learned feature space, while the descriptiveness loss is assessed using an external multi-class dataset.


Figure 10. Overview of the Deep Feature for One-Class Classification framework (Perera and Patel, (2019))

All the above methods borrow some dataset as the unknown class during training. (Saito et al., 2018) borrows target samples (the test set) as unknown classes and utilizes adversarial learning: a classifier is trained to make a boundary between the source and the target samples, whereas a generator is trained to make target samples far from the boundary. (Shu et al., 2018a) also borrows unlabeled examples from the dataset, and uses an autoencoder to learn the representations. (Shu et al., 2018b) uses several manual annotations during Emphasis Initialization. (Hendrycks et al., 2019) introduces outlier exposure datasets on top of in-distribution datasets. (Dhamija et al., 2018) also introduces unknown datasets, and utilizes the difference in feature magnitudes between known and unknown samples as part of the objective function. Different from these multi-class classification problems, (Perera and Patel, 2019) presents a one-class classification problem from anomaly detection, with an additional reference dataset for transfer learning. In general, borrowing and annotating additional data makes OSR an easier problem; however, the retrieval and selection of the additional datasets becomes another problem.

3.2. Generating additional data

As adversarial learning, and GANs in particular, has achieved great success, several approaches use GANs to generate unknown samples before training.

3.2.1. G-OpenMax

Ge et al. (2017) designed a network based on OpenMax and GANs. Their approach provides explicit modeling and decision scores for synthesized images of novel categories. Like OpenMax, the proposed method has two stages: pre-network training and score calibration. During the pre-network training stage, unlike OpenMax, it first generates unknown-class samples (synthetic samples) and then feeds them, along with known samples, into the network for training. A modified conditional GAN is employed in G-OpenMax to synthesize unknown classes. In a conditional GAN, random noise $z$ is fed to the generator together with a one-hot vector $y$ representing a desired class. Meanwhile, the discriminator learns faster if the input image is supplied together with the class it belongs to. Thus, the optimization of a conditional GAN with class labels can be formulated as:

$$\min_{\theta_G} \max_{\theta_D} \; \mathbb{E}_{x, y \sim p_{data}}[\log D(x, y)] + \mathbb{E}_{z \sim p_z,\, y \sim p_y}[\log(1 - D(G(z, y), y))]$$

where $\theta_G$ and $\theta_D$ are the trainable parameters of the generator $G$ and the discriminator $D$, and the generator inputs $z$ and $y$ are latent variables drawn from their prior distributions $p_z$ and $p_y$. For each generated sample, if the class with the highest value from the pre-trained classifier differs from the conditioning class, the sample is marked as "unknown". Finally, a final classifier provides an explicit estimated probability for the generated unknown classes.
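The selection rule for generated samples can be sketched as a small helper; the function name and array layout are illustrative assumptions:

```python
import numpy as np

def mark_unknown(classifier_probs, cond_labels):
    """G-OpenMax selection rule (sketch): a sample generated with
    conditioning class y is kept as 'unknown' training data when the
    pre-trained classifier's top prediction disagrees with y."""
    preds = classifier_probs.argmax(axis=1)
    return preds != cond_labels
```

For example, a sample conditioned on class 0 but classified as class 1 would be marked unknown.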

3.2.2. Adversarial sample generation

Yu et al. (2017) proposed Adversarial Sample Generation (ASG) as a data augmentation technique for the OSR problem. The idea is to generate points that are close to, but different from, the training instances and label them as unknown, making it straightforward to train an open-category classifier to separate seen from unseen classes. Different from the min-max strategy of GANs, ASG generates samples based on distances and distributions; the generated unknown samples are:

  1. close to the seen class data

  2. scattered around the known/unknown boundary

3.2.3. Counterfactual image generation

Different from standard GANs, Neal et al. (2018) proposed a dataset augmentation technique called counterfactual image generation, which adopts an encoder-decoder architecture to generate synthetic images that are close to real images but do not belong to any known class; these are then treated as the unknown class. The architecture consists of three components:

  • An encoder network: maps from images to a latent space.

  • A generator: maps from latent space back to an image.

  • A discriminator: discriminates generated images from real images.

3.2.4. Gan-Mdfm

Jo et al. (2018) presented a new method to generate fake data for unknown unknowns. They proposed the Marginal Denoising Autoencoder (MDAE) technique, which models the noise distribution of the known classes in feature space with a margin, so as to generate data similar to, but not the same as, the known classes. The model contains a classifier, a generator, and an autoencoder. The classifier calculates the entropy of the membership probabilities instead of explicitly discriminating generated data from real data; a threshold on this entropy is then used to identify unknown classes. The generator models a distribution away from the known classes.

3.2.5. Confident classifier

In order to distinguish in-distribution and out-of-distribution samples, Lee et al. (2018) suggested two additional terms added to the original cross-entropy loss: the first (confidence loss) forces the classifier to be less confident on out-of-distribution samples, while the second (adversarial generator) generates the most effective training samples for the first. Specifically, the proposed confidence loss minimizes the Kullback-Leibler (KL) divergence from the predictive distribution on out-of-distribution samples to the uniform distribution, in order to achieve less confident predictions on out-of-distribution samples; meanwhile, the in- and out-of-distributions are expected to become more separable. Then, an adversarial generator is introduced to generate the most effective out-of-distribution samples. Unlike the original generative adversarial network (GAN), which generates samples similar to the in-distribution, the proposed generator generates "boundary" samples in the low-density area of the in-distribution, acting as out-of-distribution samples. Finally, a joint training scheme was designed to minimize both loss functions alternately, and the paper showed that the proposed GAN implicitly encourages training a more confident classifier.

Instead of borrowing data from other datasets, data generation methods generate unknown samples from the knowns. Most are based on GANs. (Ge et al., 2017) introduces a conditional GAN to generate unknown samples, followed by the OpenMax open set classifier. (Yu et al., 2017) also uses the min-max strategy of GANs, generating data around the decision boundary between known and unknown samples to act as unknowns. (Neal et al., 2018) adds an encoder network to the traditional GAN to map from images to a latent space. (Jo et al., 2018) generates unknown samples with a marginal denoising autoencoder that provides a target distribution away from the distribution of known samples. (Lee et al., 2018) generates "boundary" samples in the low-density area of the in-distribution to act as unknown samples, and jointly trains a confident classifier and an adversarial generator so that the two models improve each other. Generating unknown samples for the OSR problem achieves strong performance, but it requires more complex network architectures.


Figure 11. Network Architecture of ii-loss (Hassen and Chan, 2018)

3.3. No additional data

The OSR techniques not requiring additional data in training can be divided into DNN-based approaches ((Bendale and Boult, 2016), (Hassen and Chan, 2018a), (Mao et al., 2018), (Zhang and Patel, 2016), (Liang et al., 2018)) and traditional ML-based approaches ((Júnior et al., 2017)).

3.3.1. Extreme Value Signatures

Schultheiss et al. (2017) investigated class-specific activation patterns to leverage CNNs for novelty detection tasks. They introduced the "extreme value signature", which specifies which dimensions of the deep neural activations have the largest values, and assumed that a semantic category can be described by its signature. Thereby, a test example is considered novel if it differs from the extreme-value signatures of all known categories. They applied extreme value signatures on top of existing models, which allows "upgrading" arbitrary classification networks to jointly estimate novelty and class membership.

3.3.2. OpenMax

Bendale and Boult (2016) proposed OpenMax, which replaces the softmax layer in DNNs with an OpenMax layer so that the model estimates the probability of an input being from an unknown class. The model adopts Extreme Value Theory (EVT) meta-recognition calibration in the penultimate layer of the network. For each instance, the activation vector is revised according to its distance to the mean activation vector (MAV) of each class, and part of the activation mass is redistributed to act as the activation of the unknown class. Finally, the redistributed activation vectors are used to compute the probabilities of both known and unknown classes.
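The recalibration step can be sketched as follows. This is a simplified illustration: the per-class weights are assumed to be precomputed from the EVT (Weibull) fit, and the single-weight-per-class form is an assumption (OpenMax only revises the top-ranked classes).

```python
import numpy as np

def openmax_recalibrate(v, w):
    """OpenMax sketch: v is the penultimate activation vector and w the
    per-class weights (1 minus the Weibull CDF of the distance to each
    class MAV, assumed precomputed). Activation mass is redistributed to
    a synthetic 'unknown' class before the softmax."""
    v_hat = v * w                        # revised known-class activations
    v_unknown = (v * (1.0 - w)).sum()    # mass moved to the unknown class
    z = np.append(v_hat, v_unknown)
    e = np.exp(z - z.max())
    return e / e.sum()                   # probabilities over K+1 classes
```

When the weights are all zero (the instance is far from every MAV), all mass flows to the unknown class.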

3.3.3. ii-loss

Hassen and Chan (2018a) proposed a distance-based loss function for DNNs in order to learn a representation for open set recognition. The idea is to maximize the distance between different classes (inter-class separation) and minimize the distance of an instance from its class mean (intra-class spread), so that in the learned representation, instances from the same class are close to each other while those from different classes are further apart. More formally, let $z_i$ be the projection (embedding) of the input vector $x_i$ of instance $i$. The intra-class spread is measured as the average distance of instances from their class means:

$$intra\_spread = \frac{1}{N} \sum_{j=1}^{K} \sum_{i=1}^{C_j} \| \mu_j - z_i \|_2^2$$

where $C_j$ is the number of training instances in class $j$, $N$ is the total number of training instances, and $\mu_j$ is the mean of class $j$. Meanwhile, the inter-class separation is measured as the distance between the closest two class means among all $K$ known classes:

$$inter\_separation = \min_{1 \le m < n \le K} \| \mu_m - \mu_n \|_2^2$$

The proposed ii-loss minimizes the intra-class spread and maximizes the inter-class separation:

$$ii\text{-}loss = intra\_spread - inter\_separation$$

The distance between an instance and the closest known class mean can then be used as a criterion for the unknown class: if the distance is above some threshold, the instance is recognized as belonging to an unknown class. The network architecture is shown in Figure 11.
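The batch computation of ii-loss and the threshold-based rejection rule can be sketched in NumPy (function names and the threshold value are illustrative):

```python
import numpy as np

def ii_loss(z, y, num_classes):
    """ii-loss sketch: intra-class spread minus inter-class separation
    computed on a batch of embeddings z with integer labels y."""
    means = np.stack([z[y == c].mean(axis=0) for c in range(num_classes)])
    intra = sum(((z[y == c] - means[c]) ** 2).sum()
                for c in range(num_classes)) / len(z)
    inter = min(((means[m] - means[n]) ** 2).sum()
                for m in range(num_classes) for n in range(m + 1, num_classes))
    return intra - inter

def is_unknown(x_embedding, means, threshold):
    """Reject an instance as unknown if its squared distance to the
    closest class mean exceeds the threshold."""
    d = ((means - x_embedding) ** 2).sum(axis=1).min()
    return d > threshold
```

A batch with zero spread and well-separated means yields a negative loss, which training drives lower still.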

3.3.4. Distribution networks

Mao et al. (2018) assumed that, through a certain mapping, all classes follow different Gaussian distributions. They proposed a distribution-parameter transfer strategy to detect and model unknown classes by estimating the parameters of the known classes. Formally, let $z = f(x)$ denote the embedding of $x$; they assume samples from class $c$ follow a probability distribution $P(z; \theta_c)$ with learnable parameters $\theta_c$ in the latent space. For class $c$, the log-likelihood is

$$L_c = \frac{1}{N_c} \sum_{i:\, y_i = c} \log P(z_i; \theta_c)$$

where $N_c$ is the number of training samples in class $c$. The training objective is to make samples more likely to belong to their labeled class, i.e., to maximize the log-likelihood of each class with respect to its samples. Hence the negative mean log-likelihood over the $K$ known classes is used as the loss function of the proposed distribution networks:

$$loss = -\frac{1}{K} \sum_{c=1}^{K} L_c$$

The method can not only detect novel samples but also differentiate and model unknown classes, and hence discover new patterns or even new knowledge in the real world.

3.3.5. Osnn

Besides DNNs, Júnior et al. (2017) introduced OSNN, an extension of the traditional Nearest Neighbors (NN) classifier. It applies the Nearest Neighbor Distance Ratio (NNDR) technique as a threshold on the ratio of similarity scores. Specifically, it measures the ratio of the distances between a sample and its nearest neighbors in two different known classes, and assigns the sample to the nearer class if the ratio is below a certain threshold. Samples that are ambiguous between classes (ratio above the threshold) and those far away from any known class are classified as unknown.
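The NNDR rule can be sketched as a small NumPy classifier; the threshold value here is illustrative, and -1 is used as an assumed marker for the unknown class:

```python
import numpy as np

def osnn_predict(x, train_x, train_y, threshold=0.8):
    """OSNN/NNDR sketch: ratio between the distance to the nearest
    neighbor and the distance to the nearest neighbor of a different
    class; the sample is rejected as unknown (-1) above the threshold."""
    d = np.linalg.norm(train_x - x, axis=1)
    order = np.argsort(d)
    c1 = train_y[order[0]]                    # class of the nearest neighbor
    other = order[train_y[order] != c1][0]    # nearest neighbor of another class
    ratio = d[order[0]] / d[other]
    return c1 if ratio <= threshold else -1
```

A point near one class gives a small ratio (accept); a point equidistant from two classes gives a ratio near 1 (reject as unknown).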

3.3.6. Rlcn

Wang et al. (2018) proposed a pairwise-constraint loss (PCL) function to achieve "intra-class compactness" and "inter-class separation" in order to address the OSR problem. They also developed a two-channel co-representation framework to detect novel classes over time, to which they added a Frobenius regularization term to avoid over-fitting. Their model also applies a binary classification error (BCE) at the final output layer to form the entire loss function. Moreover, they applied temperature scaling and a t-distribution assumption to find the optimal threshold, which requires fewer parameters. The two-channel co-representation framework is shown in Figure 12.


Figure 12. Overview of the RLCN Framework (Wang et al., 2018)

3.3.7. Srosr

Zhang and Patel (2016) proposed a generalized Sparse Representation-based Classification (SRC) algorithm for open set recognition. The algorithm uses class reconstruction errors for classification. It models the tails of the two error distributions using statistical Extreme Value Theory (EVT), thereby reducing the open set recognition problem to a set of hypothesis testing problems. Figure 13 gives an overview of the proposed SROSR algorithm.


Figure 13. Overview of the SROSR framework (Zhang and Patel, 2016)

The algorithm consists of two main stages. In the first stage, given training samples, SROSR models the tail parts of the matched reconstruction error distribution and of the sum of non-matched reconstruction errors using EVT. In the second stage, the modeled distributions and the matched and non-matched reconstruction errors are used to calculate confidence scores for the test samples; these scores are then fused to obtain the final score for recognition.

3.3.8. Odin

Liang et al. (2018) proposed ODIN, an out-of-distribution detector for detecting out-of-distribution images with neural networks. The proposed method does not require any change to a pre-trained neural network. The detector is based on two components: temperature scaling and input pre-processing. Specifically, ODIN sets a temperature scaling parameter $T$ in the original softmax output for each class $i$:

$$S_i(x; T) = \frac{\exp(f_i(x)/T)}{\sum_{j=1}^{N} \exp(f_j(x)/T)}$$

ODIN uses the maximum softmax probability as its score; temperature scaling pushes the softmax scores of in- and out-of-distribution images further apart, making the out-of-distribution images distinguishable. Meanwhile, small perturbations are added to the input during pre-processing to make in-distribution and out-of-distribution images more separable.
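The temperature-scaled score can be sketched in a few lines; a large $T$ flattens the softmax, so confident and unconfident inputs are scored on a finer scale (the value $T = 1000$ is one of the settings explored in the paper):

```python
import numpy as np

def odin_score(logits, T=1000.0):
    """ODIN sketch: maximum temperature-scaled softmax probability;
    inputs scoring below a chosen threshold are flagged as
    out-of-distribution."""
    z = logits / T
    e = np.exp(z - z.max())
    return (e / e.sum()).max()
```

Note how raising the temperature shrinks the maximum score toward the uniform value.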

3.3.9. Doc

To address the open classification problem, Shu et al. proposed the Deep Open Classification (DOC) method (Shu et al., 2017). DOC builds a multi-class classifier with a 1-vs-rest final layer of sigmoids rather than softmax to reduce the open space risk, as shown in Figure 14. Specifically, the 1-vs-rest layer contains one sigmoid function for each class, and the objective function is the summation of the log losses of all sigmoid functions:

$$Loss = \sum_{i=1}^{m} \sum_{j=1}^{N} -I(y_j = l_i) \log p(y_j = l_i) - I(y_j \ne l_i) \log\left(1 - p(y_j = l_i)\right)$$

where $I$ is the indicator function and $p(y_j = l_i)$ is the probability output of the $i$th sigmoid function (the $i$th class) for the $j$th document.


Figure 14. Overview of DOC framework (Shu et al., 2017)

DOC also borrows the idea of outlier detection from statistics to further reduce the open space risk for rejection, by tightening the decision boundaries of the sigmoid functions with Gaussian fitting. It fits the predicted probabilities of all training data of each class, then estimates the standard deviation to find a separate classification threshold for each class.
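The Gaussian-fitting threshold can be sketched as follows; the mirroring of each probability around 1 and the default $\alpha$ value follow the description in the paper, but the helper itself is an illustrative assumption:

```python
import numpy as np

def doc_threshold(probs, alpha=3.0):
    """DOC threshold sketch: mirror each predicted probability p of a
    class's training data around 1 (adding the point 2 - p), estimate the
    standard deviation of the resulting Gaussian with mean fixed at 1,
    and set the class threshold to max(0.5, 1 - alpha * std)."""
    mirrored = np.concatenate([probs, 2.0 - probs])
    std = np.sqrt(((mirrored - 1.0) ** 2).mean())
    return max(0.5, 1.0 - alpha * std)
```

Tightly clustered, confident predictions give a high threshold (strict rejection); spread-out predictions fall back to the default 0.5.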

The above papers manage to solve the OSR problem without additional datasets, and some of them adopt similar ideas, as summarized in Table 2. (Schultheiss et al., 2017), (Bendale and Boult, 2016) and (Zhang and Patel, 2016) utilize EVT to distinguish the unknown class from known classes. (Hassen and Chan, 2018a) and (Wang et al., 2018) design different distance-based loss functions to achieve "intra-class compactness" and "inter-class separation". Some techniques are applied in different ways: (Wang et al., 2018) uses temperature scaling to find the outlier threshold, while (Liang et al., 2018) uses temperature scaling in the softmax output. (Mao et al., 2018) assumes all classes follow different Gaussian distributions, while (Shu et al., 2017) tightens the decision boundaries of sigmoid functions with Gaussian fitting. In general, not using an additional dataset requires the network to generate more precise representations of the known classes. Other than DNNs, (Júnior et al., 2017) introduces an extension of the Nearest Neighbors classifier.

Ideas (Schultheiss et al., 2017) (Bendale and Boult, 2016) (Hassen and Chan, 2018a) (Mao et al., 2018) (Júnior et al., 2017) (Wang et al., 2018) (Zhang and Patel, 2016) (Liang et al., 2018) (Shu et al., 2017)
DNN x x x x x x x x
EVT x x x
Distance-based activation vector x
Distance-based loss function x x
Gaussian distribution x x
Temperature scaling x x
Input perturbations x
1-vs-rest x
Nearest neighbors x
Table 2. Similarities and Differences of OSR Techniques without Additional Data

4. Learning graph representation

Hamilton et al. (2017b) provided a review of techniques for representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph neural networks.

The paper introduced methods for vertex embedding and subgraph embedding. Vertex embedding can be viewed, from an encoder-decoder perspective, as encoding nodes into a latent space. The goal of subgraph embedding is to encode a set of nodes and edges into a continuous vector representation.

4.1. Vertex embedding

Vertex embedding can be organized as an encoder-decoder framework: an encoder maps each node to a low-dimensional vector, or embedding, and a decoder reconstructs structural information about the graph from the learned embeddings. Adopting the encoder-decoder perspective, there are four methodological components in the various node embedding methods (Hamilton et al., 2017b):

  • A pairwise similarity function, which measures the similarity between nodes

  • An encoder function, which generates the node embeddings

  • A decoder function, which reconstructs pairwise similarity values from the generated embeddings

  • A loss function, which determines how the quality of the pairwise reconstructions is evaluated to train the model

The majority of node embedding algorithms rely on shallow embedding, whose encoder function simply maps nodes to embedding vectors. However, shallow embeddings have some drawbacks.

  • No parameters are shared between nodes in the encoder, which makes it computationally inefficient.

  • Shallow embedding fails to leverage node attributes during encoding.

  • Shallow embedding can only generate embeddings for nodes that were present during the training phase; it cannot generate embeddings for previously unseen nodes without additional rounds of optimization.

Recently, several deep neural network-based approaches have been proposed to address the above issues. They used autoencoders to compress information about a node’s local neighborhood.

4.1.1. Gcn

In the work on Graph Convolutional Networks (GCN) (Kipf and Welling, 2017), Kipf and Welling encoded the graph structure directly using a neural network model trained on a supervised target. The adjacency matrix of the graph then allows the model to distribute gradient information from the supervised loss and enables it to learn representations of nodes both with and without labels.

The paper first introduces a simple and well-behaved layer-wise propagation rule for neural network models which operate directly on graphs:

$$H^{(l+1)} = \sigma\left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)} \right)$$

where $A$ is the adjacency matrix of the undirected graph, $I_N$ is the identity matrix, $\tilde{A} = A + I_N$ is the adjacency matrix with added self-connections, $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$ is the corresponding degree matrix, $W^{(l)}$ is a layer-specific trainable weight matrix, $\sigma(\cdot)$ denotes an activation function, and $H^{(l)}$ is the matrix of activations in the $l$th layer. Consider a two-layer GCN for semi-supervised node classification as an example. A pre-processing step calculates $\hat{A} = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$, and the forward model then takes the simple form:

$$Z = \mathrm{softmax}\left( \hat{A}\, \mathrm{ReLU}\left( \hat{A} X W^{(0)} \right) W^{(1)} \right)$$

Here, $W^{(0)}$ is an input-to-hidden weight matrix for a hidden layer, and $W^{(1)}$ is a hidden-to-output weight matrix. For semi-supervised multi-class classification, the cross-entropy error is evaluated over all labeled examples:

$$\mathcal{L} = -\sum_{l \in \mathcal{Y}_L} \sum_{f=1}^{F} Y_{lf} \ln Z_{lf}$$

where $\mathcal{Y}_L$ is the set of node indices that have labels.
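The two-layer forward pass can be sketched directly in NumPy (a minimal sketch, omitting training and dropout):

```python
import numpy as np

def normalize_adj(A):
    """Compute A_hat = D~^{-1/2} (A + I) D~^{-1/2}."""
    A_tilde = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(A, X, W0, W1):
    """Two-layer GCN forward pass: softmax(A_hat ReLU(A_hat X W0) W1)."""
    A_hat = normalize_adj(A)
    H = np.maximum(A_hat @ X @ W0, 0.0)       # hidden layer with ReLU
    Z = A_hat @ H @ W1
    e = np.exp(Z - Z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)   # row-wise softmax
```

Each row of the output is a class distribution for one node, so rows sum to one.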


Figure 15. Visual illustration of the GraphSAGE sample and aggregate approach (Hamilton et al., 2017a)

4.1.2. GraphSAGE

Hamilton et al. presented GraphSAGE (Hamilton et al., 2017a), a general inductive framework that leverages node feature information to efficiently generate node embeddings for previously unseen data. GraphSAGE learns a function that generates embeddings by sampling and aggregating features from a node's local neighborhood, as shown in Figure 15. Instead of training individual embeddings for each node, a set of aggregator functions is learned to aggregate feature information from a node's local neighborhood at different numbers of hops away from the node. For the $k$th aggregator function we have:

$$h_{\mathcal{N}(v)}^{k} = \mathrm{AGGREGATE}_k\left( \left\{ h_u^{k-1}, \forall u \in \mathcal{N}(v) \right\} \right)$$

where $h$ is the representation vector, $v$ is the input node, and $\mathcal{N}$ is the neighborhood function. GraphSAGE then concatenates the node's current representation, $h_v^{k-1}$, with the aggregated neighborhood vector, $h_{\mathcal{N}(v)}^{k}$, and this concatenated vector is fed through a fully connected layer with a nonlinear activation function:

$$h_v^{k} = \sigma\left( W^k \cdot \mathrm{CONCAT}\left( h_v^{k-1}, h_{\mathcal{N}(v)}^{k} \right) \right)$$

The learned aggregation functions are then applied to entirely unseen nodes to generate embeddings during the test phase.
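One layer of this scheme, using the mean aggregator, can be sketched as follows (a minimal dense implementation; real GraphSAGE samples a fixed-size neighborhood rather than using all neighbors):

```python
import numpy as np

def graphsage_layer(h, adj_list, W):
    """One GraphSAGE layer with a mean aggregator (sketch): average each
    node's neighbor representations, concatenate with the node's own
    representation, apply W and a ReLU, then L2-normalize."""
    out = []
    for v, nbrs in enumerate(adj_list):
        h_agg = h[nbrs].mean(axis=0)             # AGGREGATE_k
        h_cat = np.concatenate([h[v], h_agg])    # CONCAT(h_v, h_N(v))
        out.append(np.maximum(W @ h_cat, 0.0))   # sigma(W . concat)
    out = np.stack(out)
    norms = np.linalg.norm(out, axis=1, keepdims=True)
    return out / np.maximum(norms, 1e-12)
```

Stacking $K$ such layers lets each node aggregate information from its $K$-hop neighborhood.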

4.1.3. Line

Tang et al. proposed LINE, a method for Large-scale Information Network Embedding (Tang et al., 2015), which is suitable for undirected, directed, and/or weighted networks. The model optimizes an objective which preserves both the local and global network structures. The paper explores both first-order and second-order proximity between the vertices. Most existing graph embeddings are designed to preserve first-order proximity, which is represented by observed links, such as between vertices 6 and 7 in Figure 16. The objective function to preserve first-order proximity is:

$$O_1 = -\sum_{(i,j) \in E} w_{ij} \log p_1(v_i, v_j)$$

where the joint probability $p_1(v_i, v_j)$ between two vertices $v_i$ and $v_j$ is only defined for undirected edges $(i, j)$. In addition, LINE explores the second-order proximity between the vertices, which is determined not by the observed tie strength but by the shared neighborhood structures of the vertices; for example, vertices 5 and 6 should also be placed close together because they share similar neighbors. For second-order proximity, each vertex is treated as a specific "context", and vertices with similar distributions over the "contexts" are assumed to be similar. To preserve the second-order proximity, LINE minimizes the following objective function:

$$O_2 = -\sum_{(i,j) \in E} w_{ij} \log p_2(v_j \mid v_i)$$

where $p_2(v_j \mid v_i)$ is the probability of "context" $v_j$ being generated by vertex $v_i$ for each directed edge $(i, j)$.

The functions preserving first-order and second-order proximity are trained separately, and the embeddings produced by the two methods are concatenated for each vertex.
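The first-order objective can be sketched as follows, with $p_1$ the sigmoid of the dot product of the two vertex embeddings as in the paper (the helper names are illustrative):

```python
import numpy as np

def p1(u_i, u_j):
    """LINE first-order joint probability: sigmoid of the dot product of
    the two vertex embedding vectors."""
    return 1.0 / (1.0 + np.exp(-(u_i @ u_j)))

def first_order_objective(edges, weights, emb):
    """O_1 = -sum over edges (i, j) of w_ij * log p1(v_i, v_j)."""
    return -sum(w * np.log(p1(emb[i], emb[j]))
                for (i, j), w in zip(edges, weights))
```

Embeddings with large positive dot products along observed edges drive the objective toward zero.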


Figure 16. A toy example of an information network (Tang et al., 2015)

4.1.4. JK-Net

In order to overcome the limitations of neighborhood aggregation schemes, Xu et al. proposed the Jumping Knowledge (JK) Network in (Xu et al., 2018), a strategy that flexibly leverages different neighborhood ranges to enable better structure-aware representations for each node. The architecture selectively combines different aggregations at the last layer, i.e., the representations "jump" to the last layer.


Figure 17. Illustration of a 4-layer JK-Net. N.A. stands for neighborhood aggregation (Xu et al., 2018)

The main idea of JK-Net is illustrated in Figure 17: as in common neighborhood aggregation networks, each layer increases the size of the influence distribution by aggregating neighborhoods from the previous layer. At the last layer, for each node, JK-Net selects from all of the intermediate representations (which "jump" to the last layer), potentially combining a few. If this is done independently for each node, the model can adapt the effective neighborhood size for each node as needed, resulting in exactly the desired adaptivity. As a more general framework, JK-Net admits general layer-wise aggregation models and enables better structure-aware representations on graphs with complex structures.


Figure 18. Main components of DNGR: random surfing, PPMI and SDAE (Cao et al., 2016)

4.1.5. Dngr

In (Cao et al., 2016), Cao et al. adopted a random surfing model to capture graph structural information directly, instead of using a sampling-based method. As illustrated in Figure 18, the proposed DNGR model contains three major components: random surfing, calculation of the PPMI matrix, and feature reduction by SDAE. The random surfing model is motivated by the PageRank model and is used to capture graph structural information and generate a probabilistic co-occurrence matrix.

Random surfing first randomly orders the vertices in a graph and assumes there is a transition matrix that captures the transition probabilities between different vertices. The proposed random surfing model allows contextual information to be weighted differently based on its distance to the target. The generated co-occurrence matrix is then used to calculate the PPMI matrix (an improvement over pointwise mutual information, PMI; details in (Levy and Goldberg, 2014)). Next, since high-dimensional input data often contain redundant information and noise, a stacked denoising autoencoder (SDAE) is used to enhance the robustness of the DNN. The denoising autoencoder partially corrupts the input data before the training step: specifically, it corrupts each input sample $x$ by randomly assigning some of the entries in the vector to 0 with a certain probability.
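The PPMI transformation applied to the co-occurrence matrix can be sketched as:

```python
import numpy as np

def ppmi(M):
    """Positive pointwise mutual information of a co-occurrence matrix M:
    PPMI_ij = max(0, log(p(i, j) / (p(i) p(j))))."""
    total = M.sum()
    row = M.sum(axis=1, keepdims=True)
    col = M.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(M * total / (row * col))
    pmi[~np.isfinite(pmi)] = 0.0   # zero counts contribute nothing
    return np.maximum(pmi, 0.0)
```

Co-occurrences that exceed the independence baseline get positive scores; everything else is clipped to zero.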

4.2. Subgraph embedding

The goal of subgraph embedding is to encode a set of nodes and edges into a low-dimensional vector embedding. Representation learning on subgraphs is closely related to the design of graph kernels, which define a distance measure between subgraphs. According to (Hamilton et al., 2017b), some subgraph embedding techniques use the convolutional neighborhood-aggregation idea to generate embeddings for nodes and then use additional modules, such as sum-based or graph-coarsening approaches, to aggregate the sets of node embeddings into a subgraph embedding. Besides, there is some related work on "graph neural networks" (GNNs), which, instead of aggregating information from neighbors, use backpropagation to "pass information" between nodes.

5. Malware Classification

5.0.1. Fcg

Hassen and Chan (2017) proposed a linear-time function call graph (FCG) vector representation. It starts with an FCG extraction module, where an FCG is a directed graph representation of code in which the vertices correspond to functions and the directed edges represent the caller-callee relations between the function nodes. This module takes disassembled malware binaries and extracts FCG representations, presenting the caller-callee relation between functions as directed, unweighted edges. The next module is function clustering: the algorithm uses minhash, which approximates the Jaccard index, to cluster the functions of the given graph. The final module is vector extraction, which extracts a vector representation from an FCG labeled with the cluster ids. The representation consists of two parts, vertex weights and edge weights: a vertex weight specifies the number of vertices in each cluster for that FCG, and an edge weight counts the number of times an edge is found from one cluster to another. An example workflow is shown in Figure 19.



Figure 19. FCG Example (Hassen and Chan, 2018)
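The vertex- and edge-weight extraction described above can be sketched as follows (a minimal illustration assuming cluster ids have already been assigned to functions):

```python
import numpy as np

def fcg_vector(func_clusters, call_edges, num_clusters):
    """FCG vector sketch: vertex weights count functions per cluster, and
    edge weights count caller->callee edges between cluster pairs; both
    parts are flattened into a single fixed-length vector."""
    vertex_w = np.zeros(num_clusters)
    for c in func_clusters:
        vertex_w[c] += 1
    edge_w = np.zeros((num_clusters, num_clusters))
    for caller, callee in call_edges:
        edge_w[func_clusters[caller], func_clusters[callee]] += 1
    return np.concatenate([vertex_w, edge_w.ravel()])
```

The fixed length (num_clusters + num_clusters²) makes FCGs of different sizes comparable by standard classifiers.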

5.0.2. Cow, Cow Pc

Based on the work in (Hassen and Chan, 2017), Hassen and Chan (2018b) further introduced two new features. The first is the maximum predicted class probability for an instance $x$:

$$P_{max}(x) = \max_{c} P(y = c \mid x)$$

The second is the entropy of the probability distribution over the classes:

$$H(x) = -\sum_{c} P(y = c \mid x) \log P(y = c \mid x)$$
The paper also introduced two algorithms: Classification in an Open World (COW) and COW PC. Both consist of two classifiers, an outlier detector and a multi-class classifier. The difference is that in COW, the outlier detector is trained on all classes; during testing, a test instance first goes through the outlier detector and, if it is recognized as not being an outlier, is then sent to the multi-class classifier. COW PC, in contrast, has class-specific outlier detectors, i.e., each class has its own outlier detector: a test instance goes through the multi-class classifier first and is then sent to the corresponding outlier detector.
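The two features above can be computed in a few lines; the function name is an illustrative assumption:

```python
import numpy as np

def cow_features(probs):
    """COW feature sketch: the maximum predicted class probability and
    the entropy of the predicted class distribution."""
    p_max = probs.max()
    p = probs[probs > 0]                 # skip zeros to avoid log(0)
    entropy = float(-(p * np.log(p)).sum())
    return float(p_max), entropy
```

A confident one-hot prediction gives (1.0, 0.0); a uniform prediction over $K$ classes gives the maximum entropy $\log K$.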

5.0.3. Random projections

Malware classifiers often use sparse binary features, and there can be hundreds of millions of potential features. In (Dahl et al., 2013), Dahl et al. used random projections to reduce the dimensionality of the original input space of neural networks. They first extracted three types of features, including null-terminated patterns observed in the process' memory, tri-grams of system API calls, and distinct combinations of a single system API call and one input parameter; after feature selection this produced over 179 thousand sparse binary features. To make the problem more manageable, they projected each input vector into a much lower-dimensional space using a sparse projection matrix whose entries are sampled i.i.d. from a distribution over $\{-1, 0, 1\}$: entries of 1 and -1 are equiprobable, and $P(x_{ij} \ne 0) = 1/\sqrt{d}$, where $d$ is the original input dimensionality. The lower-dimensional data then serves as input to the neural network.
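Such a sparse projection matrix can be sketched as follows (illustrative dimensions; a real deployment would use a sparse matrix format rather than a dense array):

```python
import numpy as np

def sparse_projection_matrix(d, k, seed=0):
    """Sparse random projection sketch: a d x k matrix with entries drawn
    i.i.d. from {-1, 0, 1}, where +1 and -1 are equiprobable and the
    probability of a nonzero entry is 1/sqrt(d)."""
    rng = np.random.default_rng(seed)
    p = 1.0 / np.sqrt(d)
    return rng.choice([-1.0, 0.0, 1.0], size=(d, k),
                      p=[p / 2.0, 1.0 - p, p / 2.0])
```

Projecting an input is then a single matrix product, `x @ R`, mapping a $d$-dimensional sparse binary vector to $k$ dimensions.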

6. Conclusions

In this survey, we have provided a brief introduction to several deep neural network structures, an overview of existing OSR techniques, and a discussion of learning graph representations and malware classification. These topics are advancing and profiting from each other in different areas. Despite their great success, serious challenges and great potential remain.


  • [1] (2017) 5th international conference on learning representations, ICLR 2017, toulon, france, april 24-26, 2017, conference track proceedings. External Links: Link Cited by: T. N. Kipf and M. Welling (2017).
  • [2] (2018) 6th international conference on learning representations, ICLR 2018, vancouver, bc, canada, april 30 - may 3, 2018, conference track proceedings. External Links: Link Cited by: K. Lee, H. Lee, K. Lee, and J. Shin (2018), S. Liang, Y. Li, and R. Srikant (2018).
  • [3] (2019) 7th international conference on learning representations, ICLR 2019, new orleans, la, usa, may 6-9, 2019. External Links: Link Cited by: D. Hendrycks, M. Mazeika, and T. G. Dietterich (2019).
  • D. Bahdanau, K. Cho, and Y. Bengio (2015) Neural machine translation by jointly learning to align and translate. See 3rd international conference on learning representations, ICLR 2015, san diego, ca, usa, may 7-9, 2015, conference track proceedings, Bengio and LeCun, External Links: Link Cited by: Figure 6, §2.2.3, §2.2.
  • U. Bayer, A. Moser, C. Kruegel, and E. Kirda (2006) Dynamic analysis of malicious code. Journal in Computer Virology 2 (1), pp. 67–77. Cited by: §1.
  • A. Bendale and T. E. Boult (2016) Towards open set deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1563–1572. Cited by: §3.3.2, §3.3.9, §3.3, Table 1, Table 2.
  • Y. Bengio and Y. LeCun (Eds.) (2015) 3rd international conference on learning representations, ICLR 2015, san diego, ca, usa, may 7-9, 2015, conference track proceedings. External Links: Link Cited by: D. Bahdanau, K. Cho, and Y. Bengio (2015).
  • Y. Bengio and Y. LeCun (Eds.) (2016) 4th international conference on learning representations, ICLR 2016, san juan, puerto rico, may 2-4, 2016, conference track proceedings. External Links: Link Cited by: A. Radford, L. Metz, and S. Chintala (2016).
  • [9] (2017) British machine vision conference 2017, BMVC 2017, london, uk, september 4-7, 2017. BMVA Press. External Links: Link Cited by: Z. Ge, S. Demyanov, and R. Garnavi (2017).
  • C. Cao, X. Liu, Y. Yang, Y. Yu, J. Wang, Z. Wang, Y. Huang, L. Wang, C. Huang, W. Xu, et al. (2015) Look and think twice: capturing top-down visual attention with feedback convolutional neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2956–2964. Cited by: §2.1.1, §2.1.
  • S. Cao, W. Lu, and Q. Xu (2016) Deep neural networks for learning graph representations. In Thirtieth AAAI Conference on Artificial Intelligence. Cited by: Figure 18, §4.1.5.
  • G. Chen (2016) A gentle tutorial of recurrent neural network with error backpropagation. arXiv preprint arXiv:1610.02583. Cited by: §2.2.1.
  • K. Cho, B. van Merrienboer, Ç. Gülçehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio (2014) Learning phrase representations using RNN encoder-decoder for statistical machine translation. See Proceedings of the 2014 conference on empirical methods in natural language processing, EMNLP 2014, october 25-29, 2014, doha, qatar, A meeting of sigdat, a special interest group of the ACL, Moschitti et al., pp. 1724–1734. External Links: Link Cited by: Figure 5, §2.2.2, §2.2.
  • G. E. Dahl, J. W. Stokes, L. Deng, and D. Yu (2013) Large-scale malware classification using random projections and neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3422–3426. Cited by: §5.0.3.
  • J. Deng, J. Guo, N. Xue, and S. Zafeiriou (2019) ArcFace: additive angular margin loss for deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4690–4699. Cited by: §2.1.4, §2.1.
  • A. R. Dhamija, M. Günther, and T. Boult (2018) Reducing network agnostophobia. In Advances in Neural Information Processing Systems, pp. 9157–9168. Cited by: §3.1.5, §3.1.6, Table 1.
  • J. G. Dy and A. Krause (Eds.) (2018) Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018. Proceedings of Machine Learning Research, Vol. 80, PMLR. External Links: Link Cited by: K. Xu, C. Li, Y. Tian, T. Sonobe, K. Kawarabayashi, and S. Jegelka (2018).
  • C. Finn, P. Abbeel, and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1126–1135. Cited by: Figure 8, §2.5.1.
  • Z. Ge, S. Demyanov, and R. Garnavi (2017) Generative OpenMax for multi-class open set classification. See 9, External Links: Link Cited by: §3.2.1, §3.2.5, Table 1.
  • C. Geng, S. Huang, and S. Chen (2018) Recent advances in open set recognition: a survey. arXiv preprint arXiv:1811.08581. Cited by: §3.
  • F. A. Gers, J. Schmidhuber, and F. Cummins (1999) Learning to forget: continual prediction with LSTM. Cited by: §2.2.1.
  • I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §2.3.
  • W. Hamilton, Z. Ying, and J. Leskovec (2017a) Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pp. 1024–1034. Cited by: Figure 15, §4.1.2.
  • W. L. Hamilton, R. Ying, and J. Leskovec (2017b) Representation learning on graphs: methods and applications. IEEE Data Eng. Bull. 40 (3), pp. 52–74. External Links: Link Cited by: §4.1, §4.2, §4.
  • M. Hassen and P. K. Chan (2017) Scalable function call graph-based malware classification. In Proceedings of the Seventh ACM on Conference on Data and Application Security and Privacy, pp. 239–248. Cited by: §1, §5.0.1, §5.0.2.
  • M. Hassen and P. K. Chan (2018a) Learning a neural-network-based representation for open set recognition. arXiv preprint arXiv:1802.04365. Cited by: Figure 11, §3.3.3, §3.3.9, §3.3, Table 1, Table 2.
  • M. Hassen and P. K. Chan (2018b) Learning to identify known and unknown classes: a case study in open world malware classification. In The Thirty-First International Flairs Conference, Cited by: Figure 19, §5.0.2.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: Figure 2, §2.1.5, §2.1.
  • X. He, Y. Zhou, Z. Zhou, S. Bai, and X. Bai (2018) Triplet-center loss for multi-view 3d object retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1945–1954. Cited by: §2.1.3, §2.1.
  • D. Hendrycks, M. Mazeika, and T. G. Dietterich (2019) Deep anomaly detection with outlier exposure. See 3, External Links: Link Cited by: §3.1.4, §3.1.6, Table 1.
  • I. Jo, J. Kim, H. Kang, Y. Kim, and S. Choi (2018) Open set recognition by regularising classifier with fake data generated by generative adversarial networks. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2686–2690. Cited by: §3.2.4, §3.2.5, Table 1.
  • P. R. M. Júnior, R. M. de Souza, R. d. O. Werneck, B. V. Stein, D. V. Pazinato, W. R. de Almeida, O. A. Penatti, R. d. S. Torres, and A. Rocha (2017) Nearest neighbors distance ratio open-set classifier. Machine Learning 106 (3), pp. 359–386. Cited by: §3.3.5, §3.3.9, §3.3, Table 1, Table 2.
  • T. N. Kipf and M. Welling (2017) Semi-supervised classification with graph convolutional networks. See 1, External Links: Link Cited by: §4.1.1.
  • Y. LeCun, Y. Bengio, and G. Hinton (2015) Deep learning. Nature 521 (7553), pp. 436–444. Cited by: §2.4, §2.
  • K. Lee, H. Lee, K. Lee, and J. Shin (2018) Training confidence-calibrated classifiers for detecting out-of-distribution samples. See 2, External Links: Link Cited by: §3.2.5, §3.2.5, Table 1.
  • O. Levy and Y. Goldberg (2014) Neural word embedding as implicit matrix factorization. In Advances in neural information processing systems, pp. 2177–2185. Cited by: §4.1.5.
  • S. Liang, Y. Li, and R. Srikant (2018) Enhancing the reliability of out-of-distribution image detection in neural networks. See 2, External Links: Link Cited by: §3.3.8, §3.3.9, §3.3, Table 1, Table 2.
  • T. Luong, H. Pham, and C. D. Manning (2015) Effective approaches to attention-based neural machine translation. See Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, Màrquez et al., pp. 1412–1421. External Links: Link Cited by: §2.2.4, §2.2.
  • A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey (2015) Adversarial autoencoders. arXiv preprint arXiv:1511.05644. Cited by: §2.3.2.
  • C. Mao, L. Yao, and Y. Luo (2018) Distribution networks for open set learning. External Links: 1809.08106 Cited by: §3.3.4, §3.3.9, §3.3, Table 1, Table 2.
  • L. Màrquez, C. Callison-Burch, J. Su, D. Pighin, and Y. Marton (Eds.) (2015) Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015. The Association for Computational Linguistics. External Links: Link, ISBN 978-1-941643-32-7 Cited by: T. Luong, H. Pham, and C. D. Manning (2015).
  • T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119. Cited by: §2.4.1.
  • A. Moschitti, B. Pang, and W. Daelemans (Eds.) (2014) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, a meeting of SIGDAT, a Special Interest Group of the ACL. ACL. External Links: Link, ISBN 978-1-937284-96-1 Cited by: K. Cho, B. van Merrienboer, Ç. Gülçehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio (2014).
  • L. Neal, M. Olson, X. Fern, W. Wong, and F. Li (2018) Open set learning with counterfactual images. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 613–628. Cited by: §3.2.3, §3.2.5, Table 1.
  • C. Olah (2015) Understanding LSTM networks. External Links: Link Cited by: Figure 3, Figure 4.
  • M. Palmer, R. Hwa, and S. Riedel (Eds.) (2017) Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017. Association for Computational Linguistics. External Links: Link, ISBN 978-1-945626-83-8 Cited by: L. Shu, H. Xu, and B. Liu (2017).
  • J. Pennington, R. Socher, and C. Manning (2014) GloVe: global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Cited by: §2.4.2.
  • P. Perera and V. M. Patel (2019) Learning deep features for one-class classification. IEEE Transactions on Image Processing. Cited by: Figure 10, §3.1.6, §3.1.6, Table 1.
  • A. Radford, L. Metz, and S. Chintala (2016) Unsupervised representation learning with deep convolutional generative adversarial networks. See 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, Bengio and LeCun, External Links: Link Cited by: §2.3.1.
  • S. Ravi and H. Larochelle (2016) Optimization as a model for few-shot learning. Cited by: §2.5.1.
  • E. Real, S. Moore, A. Selle, S. Saxena, Y. L. Suematsu, J. Tan, Q. V. Le, and A. Kurakin (2017) Large-scale evolution of image classifiers. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2902–2911. Cited by: §2.5.3.
  • M. T. Ribeiro, S. Singh, and C. Guestrin (2016) "Why should I trust you?": explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144. Cited by: §2.5.2.
  • S. Saha (2018) A comprehensive guide to convolutional neural networks: the ELI5 way. Towards Data Science. External Links: Link Cited by: Figure 1.
  • K. Saito, S. Yamamoto, Y. Ushiku, and T. Harada (2018) Open set domain adaptation by backpropagation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 153–168. Cited by: §3.1.1, §3.1.6, §3.1, Table 1.
  • A. Schultheiss, C. Käding, A. Freytag, and J. Denzler (2017) Finding the unknown: novelty detection with extreme value signatures of deep neural activations. In German Conference on Pattern Recognition, pp. 226–238. Cited by: §3.3.1, §3.3.9, Table 1, Table 2.
  • L. Shu, H. Xu, and B. Liu (2017) DOC: deep open classification of text documents. See Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, Palmer et al., pp. 2911–2916. External Links: Link Cited by: Figure 14, §3.3.9, §3.3.9, Table 1, Table 2.
  • L. Shu, H. Xu, and B. Liu (2018a) Unseen class discovery in open-world classification. arXiv preprint arXiv:1801.05609. Cited by: Figure 9, §3.1.2, §3.1.6, §3.1, Table 1.
  • Y. Shu, Y. Shi, Y. Wang, Y. Zou, Q. Yuan, and Y. Tian (2018b) ODN: opening the deep network for open-set action recognition. In 2018 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6. Cited by: §3.1.3, §3.1.6, §3.1, Table 1.
  • C. Sierra (Ed.) (2017) Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017. External Links: Link, ISBN 978-0-9992411-0-3 Cited by: Y. Yu, W. Qu, N. Li, and Z. Guo (2017).
  • T. Silva (2018) An intuitive introduction to generative adversarial networks (gans). External Links: Link Cited by: Figure 7.
  • J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, and Q. Mei (2015) LINE: large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pp. 1067–1077. Cited by: Figure 16, §4.1.3.
  • Z. Wang, Z. Kong, H. Tao, S. Chandra, and L. Khan (2018) Co-representation learning for classification and novel class detection via deep networks. arXiv preprint arXiv:1811.05141. Cited by: Figure 12, §3.3.6, §3.3.9, Table 1, Table 2.
  • Y. Wen, K. Zhang, Z. Li, and Y. Qiao (2016) A discriminative feature learning approach for deep face recognition. In European conference on computer vision, pp. 499–515. Cited by: §2.1.2, §2.1.
  • K. Xu, C. Li, Y. Tian, T. Sonobe, K. Kawarabayashi, and S. Jegelka (2018) Representation learning on graphs with jumping knowledge networks. See Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, Dy and Krause, pp. 5449–5458. External Links: Link Cited by: Figure 17, §4.1.4.
  • Y. Yu, W. Qu, N. Li, and Z. Guo (2017) Open category classification by adversarial sample generation. See Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, Sierra, pp. 3357–3363. External Links: Link, Document Cited by: §3.2.2, §3.2.5, Table 1.
  • H. Zhang and V. M. Patel (2016) Sparse representation-based open set recognition. IEEE transactions on pattern analysis and machine intelligence 39 (8), pp. 1690–1696. Cited by: Figure 13, §3.3.7, §3.3.9, §3.3, Table 1, Table 2.