Enriching Visual with Verbal Explanations for Relational Concepts – Combining LIME with Aleph

10/04/2019 · by Johannes Rabold, et al.

With the increasing number of deep learning applications, there is a growing demand for explanations. Visual explanations provide information about which parts of an image are relevant for a classifier's decision. However, highlighting of image parts (e.g., an eye) cannot capture the relevance of a specific feature value for a class (e.g., that the eye is wide open). Furthermore, highlighting cannot convey whether the classification depends on the mere presence of parts or on a specific spatial relation between them. Consequently, we present an approach that is capable of explaining a classifier's decision in terms of logic rules obtained by the Inductive Logic Programming system Aleph. The examples and the background knowledge needed for Aleph are based on the explanation generation method LIME. We demonstrate our approach with images of a blocksworld domain. First, we show that our approach is capable of identifying a single relation as an important explanatory construct. Afterwards, we present the more complex relational concept of towers. Finally, we show how the generated relational rules can be explicitly related to the input image, resulting in richer explanations.



1 Introduction

Explainable Artificial Intelligence (XAI) mostly refers to visual highlighting of information which is relevant for the classification decision of a given instance [9, 19]. In general, the mode of an explanation can be visual, but also verbal or example-based [13]. Visual explanations have been introduced to make black-box classifiers such as (deep) neural networks more transparent [9, 19, 18]. In the context of white-box machine learning approaches, such as decision trees or Inductive Logic Programming (ILP) [16], it is argued that these models are already transparent and interpretable by humans [16]. In the context of ILP it has been shown that a local verbal explanation can easily be generated from symbolic rules with a template-based approach [20].

For image classification tasks, it is rather obvious that visual explanations are helpful for technical as well as for domain experts: Information about what pixels or patches of pixels most strongly contribute to a class decision can help to detect model errors which might have been caused by non-representative sampling. Highlighting can also support domain experts in assessing the validity of a learned model [18]. In general, an explanation can be characterized as useful if it meets the principles of cooperative conversations [13]. These pragmatic aspects of communication are described in the Gricean maxims [8] which encompass the following four categories: (1) quality – explanations should be based on truth or empirical evidence; (2) quantity – be as informative as required; (3) relation – explanations should communicate only relevant information; (4) manner – avoidance of obscurity and ambiguity. We argue that visual explanations cannot, in general, avoid obscurity and ambiguity since they cannot, or only partially, capture the following kinds of information:

  • Feature values: Visual highlighting can explain that a specific aspect of an entity is informative for a specific class – e.g., that an emotion is expressed near the eye. However, the relevant information is whether the eye is wide open or the lid is tightened [21].

  • Negation: While approaches like LRP [19] make it possible to visualize which pixels have a negative contribution to the classification, it is not generally possible to indicate that the absence of a feature or object is relevant. E.g., it might be relevant to explain that a person is not classified as a terrorist because he or she does not hold a weapon (but a flower).

  • Relations: If two parts of an image are highlighted, it is not possible to discriminate whether the conjunction (e.g., there is a green block and a blue block) or a more specific relation (e.g., the green block is on the blue block) is relevant.

ILP approaches [14] can capture all three kinds of information because the models are expressed as first-order Horn clauses. Relational concepts such as grandparent(X,Y) [15] or mutagenicity of chemical structures [23] can be induced. Furthermore, classes involving relations, such as the Michalski Train Domain [15], can be learned. Here, the decision whether a train is east- or west-bound depends on relational information of arbitrary complexity, e.g., that a wagon with six wheels needs to be followed by a wagon with an open top.
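To illustrate why such concepts are out of reach for a flat feature representation, consider a minimal sketch (not from the paper; facts and names are invented for illustration): grandparent(X,Y) holds exactly when two parent/2 facts compose, a condition over pairs of entities rather than over individual feature values.

```python
# Hypothetical parent/2 facts as a set of tuples.
parent = {("alice", "bob"), ("bob", "carol"), ("bob", "dave")}

def grandparent(x, y):
    """grandparent(X, Y) :- parent(X, Z), parent(Z, Y)."""
    entities = {p for pair in parent for p in pair}
    return any((x, z) in parent and (z, y) in parent for z in entities)

print(grandparent("alice", "carol"))  # True
print(grandparent("alice", "bob"))    # False
```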

Several deep learning approaches to tackle relational concepts have been proposed recently, such as the differentiable neural computer [7], RelNNs [10], or RelNet [2]. In contrast to ILP, these approaches depend on very large sets of training examples and the resulting models are black-box. A helpful explanation interface should be able to take into account visual/image-based domains as well as abstract/graph-based domains. The model-agnostic approach of LIME [18] provides linear explanations based on sets of super-pixels or words. This is not sufficient when more expressive relational explanations are necessary. The current focus of our work is to provide relational explanations for black-box, end-to-end classifiers in image-based domains. We believe that for image-based domains, a combination of visual and verbal explanations is most informative with respect to the Gricean maxims. Psychological experiments also give evidence that humans strongly profit from a combination of visual and verbal explanations [12].

(a) A house, because three windows left of each other.
(b) A tower, because three windows on top of each other.
Figure 1: Combining visual and symbolic explanations for house in contrast to tower. Photo of house by Pixasquare, photo of lighthouse by Joshua Hibbert, both on Unsplash

In a previous study [17], we could show that relational symbolic explanations (Prolog rules) can be generated by combining the ILP approach Aleph [22] with LIME [18]. However, simple visual concepts had been pre-defined and used as input to Aleph rather than extracted automatically. In the following, we present an extension of [17] covering end-to-end image classification with a convolutional neural network (CNN) [11], partitioning images into sub-structures, as well as automatic extraction of visual attributes and spatial relations. Local symbolic explanations are learned with Aleph, providing logical descriptions of original and perturbed images. Finally, local symbolic explanations are related to visual highlighting of informative parts of the image to provide a combined visual-symbolic explanation. The symbolic explanation can be transformed into a verbal one with a template-based approach as demonstrated in [20]. An illustrative example is given in Figure 1. Here the concept house is explained by the fact that three windows are next to each other. This information is given by identifying three relevant parts of the image, labeling them as windows (which might be done by another automatic image classification or by the user), and stating the spatial relation between the objects using the left_of relation. This example also demonstrates an important aspect of symbolic explanations: which attributes and relations are useful to explain why some object belongs to some class depends on the contrasting class [5].

In the next section, we introduce the core concepts of our approach. Then we present a significantly extended version of the LIME-Aleph algorithm [17]. We demonstrate the approach on images of a blocksworld domain. In a first experiment we show that LIME-Aleph is capable of identifying a single relation (left_of(block1, block2)) as relevant for the learned concept. In a second experiment, we demonstrate that more complex relational concepts such as tower can be explained. Finally, we show how the fusion of visual and symbolic explanations might be realized.

2 Explaining Relational Concepts with LIME-Aleph

2.1 Core Concepts

2.1.1 The ILP System Aleph.

To find symbolic explanations for relational concepts we use Aleph [22]. Aleph infers a logic theory given a set of positive (E+) and negative (E−) examples. An example is represented by the target predicate (e.g., stack(e1). or not stack(e2).) together with additional predicates (e.g., contains(b1, e1).) as background knowledge (BK). Predicates in the BK are used to build the preconditions for the target rules. Aleph is based on a specific-to-general refinement search. It finds rules covering as many positive examples as possible while avoiding covering negative ones. The search is guided by modes which impose a language bias. The general algorithm is [22]:

  1. As long as positive examples exist, select one. Otherwise halt.

  2. Construct the most-specific clause that entails the selected example and is within the language constraints.

  3. Find a more general clause which is a subset of the current literals in the clause.

  4. Remove all positive examples covered by the current clause.

  5. Repeat from step 1.

An example of a rule in Prolog is stack(Stack) :- contains(Block1, Stack), contains(Block2, Stack), on(Block2, Block1). denoting that a stack is defined by one block on top of another.
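The sequential covering loop described in steps 1–5 can be sketched as follows. This is a minimal illustration, not Aleph itself: the clause construction of steps 2–3 is abstracted into a hypothetical learn_one_rule function, and "rules" in the toy usage are plain predicates.

```python
def covering_loop(positives, negatives, learn_one_rule):
    """Repeat: select a positive example, learn a clause covering it while
    avoiding negatives, then remove the positives the clause covers."""
    theory, remaining = [], set(positives)
    while remaining:
        seed = next(iter(remaining))            # step 1: select a positive example
        rule = learn_one_rule(seed, negatives)  # steps 2-3: build and generalize a clause
        theory.append(rule)
        remaining -= {e for e in remaining if rule(e)}  # step 4: remove covered
    return theory

# Toy usage: examples are integers; the learned "clause" tests evenness.
pos, neg = {2, 4, 6}, {3, 5}
theory = covering_loop(pos, neg, lambda seed, neg: (lambda e: e % 2 == 0))
print(all(any(r(e) for r in theory) for e in pos))  # True
```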

2.1.2 LIME’s Identification of Informative Super-Pixels.

LIME (Local Interpretable Model-Agnostic Explanations) is an approach to explain the decision result of any learned model [18]. Explanations state the parts of an instance that are positively or negatively correlated with the class. It works by creating a simpler, local surrogate model around the instance to be explained. In the case of an image, the explanation is a set of connected pixel patches called super-pixels.

Let x be an image and z' ∈ {0, 1}^d be the binary vector that states whether the d super-pixels of x are switched on or off (see below). LIME finds a sparse linear model g that locally approximates the unknown decision function f represented by a black-box classifier. It effectively finds the coefficients w for the super-pixel indicator variables of the simplified linear model. This is done by generating a pool of perturbed examples: elements of the original super-pixel representation are randomly selected in a uniformly distributed fashion. That way, images z are obtained with some super-pixels still original and some altered according to a transform function that effectively removes the information they contained (switching them off). Each sample (the binary vector z' indicating which super-pixels are switched off in this sample) is stored in a sample pool Z along with the classifier result f(z) and a distance measure π_x(z) that expresses the distance of the perturbed example z to the original image x. For images this can be the Mean Squared Error. The distance is needed for the linear model to be locally faithful to the original function and thus has to be minimized. The “un-faithfulness” L of the model g with respect to the black-box model f and the distance measure π_x is expressed with the following formula [18]:

L(f, g, π_x) = Σ_{z ∈ Z} π_x(z) · (f(z) − g(z'))²

The goal is to find the coefficients w of g that minimize this un-faithfulness L. The coefficients ultimately translate back to weights for the super-pixels. LIME uses K-Lasso to find the weights [3].
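A minimal sketch of this surrogate fit is given below. It substitutes weighted ridge regression for LIME's K-Lasso, and the toy black-box f and the proximity weighting are assumptions made for illustration; only the overall scheme (perturb, query, weight, fit a linear model) follows the description above.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
f_blackbox = lambda z: float(z[0] and z[2])   # toy classifier: needs pixels 0 and 2

Z = rng.integers(0, 2, size=(200, d))          # perturbed on/off vectors z'
y = np.array([f_blackbox(z) for z in Z])       # black-box outputs f(z)
pi = np.exp(-np.sum(1 - Z, axis=1))            # proximity: fewer flips = closer to x

# Weighted least squares: minimize sum_z pi(z) * (f(z) - w . z')^2,
# with a tiny ridge term for numerical stability.
W = np.diag(pi)
w = np.linalg.solve(Z.T @ W @ Z + 1e-6 * np.eye(d), Z.T @ W @ y)
print(np.argsort(w)[-2:])  # indices of the two most informative super-pixels
```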

The original LIME uses the algorithm Quick Shift [25] to find super-pixels. It imposes an irregular pixel mask over the input image that segments it in terms of pixel similarity. The segmentation is performed in a 5D space consisting of the image space and the color space. Quick Shift is only one of several segmentation algorithms available for LIME. They all share the attribute of imposing an irregular mask over an image. In many domains, this irregularity is not wanted. For the domain used in this paper it is preferable to use a segmentation algorithm that divides an image into a regular grid with square cells.
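Such a regular-grid segmentation is straightforward; a sketch (cell size and image dimensions here are arbitrary example values, not the paper's):

```python
import numpy as np

def grid_segments(height, width, cell):
    """Assign every pixel the index of its square grid cell."""
    rows = np.arange(height) // cell
    cols = np.arange(width) // cell
    n_cols = -(-width // cell)                     # cells per row (ceiling division)
    return rows[:, None] * n_cols + cols[None, :]  # (height, width) label map

seg = grid_segments(8, 8, 4)   # a 2x2 grid of 4x4 cells
print(seg[0, 0], seg[0, 7], seg[7, 7])  # 0 1 3
```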

(a) All positions in the image where a block has to be located in order to be left, right, top, or bottom of block a.
(b) Valid positions for the relations on and under, illustrated with on(b,a), on(c,d), resp. under(a,b) and under(d,c).
Figure 2: Diagrams to show the different concepts of the relations used.

2.2 Extraction of Image Parts and Their Relations

Based on image segmentation into a grid of super-pixels with domain-specific cell size, a set of attributes for cells and spatial relations between cells can be automatically extracted. Attributes are taken from a pool of attributes A. An example for an attribute in A is the mean color of a super-pixel sp in the RGB color space. To find a human-comprehensible name, the nearest color according to the Euclidean distance in a pool of commonly known color names is assigned. Other extractable attributes are the size or the general location in the image. The coordinates of the center point of sp are stored for spatial reasoning. Extracted attributes are converted into predicates for the BK. The attribute that a given super-pixel SP is blue is represented as has_color(SP, blue).
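The color-attribute extraction can be sketched as follows; the color pool shown is an assumption for illustration, not the paper's exact list of color names.

```python
import numpy as np

# Assumed pool of commonly known color names with RGB reference values.
COLOR_POOL = {"red": (0.8, 0.0, 0.0), "green": (0.0, 0.8, 0.0),
              "blue": (0.0, 0.0, 0.8), "cyan": (0.0, 0.8, 0.8)}

def color_attribute(cell_pixels):
    """Mean RGB color of a super-pixel, mapped to the nearest named color
    by Euclidean distance in RGB space."""
    mean = np.asarray(cell_pixels, dtype=float).reshape(-1, 3).mean(axis=0)
    return min(COLOR_POOL, key=lambda n: np.linalg.norm(mean - COLOR_POOL[n]))

cell = [[(0.0, 0.75, 0.05)] * 4] * 4   # a greenish 4x4 super-pixel
print(color_attribute(cell))           # the value for has_color(SP, Color)
```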

Spatial relations can be defined between pairs of super-pixels. To restrict the number of pairs, we need a pre-selection of super-pixels that might be relevant for the concept. LIME's weights describe the magnitude of relevance for either the true classification (positive weight) or the counter-class (negative weight). By introducing a user-defined constant k, we restrict how many super-pixels the selection S should contain, taking the k super-pixels with the highest weights. Spatial relations are drawn from a pre-defined pool R. For this work, we use the relations left_of, right_of, top_of, bottom_of as well as relations that represent an immediate adjacency in the regular grid mentioned earlier, namely on and under. Relations are defined with respect to the center coordinates of the super-pixels in S. Figure 2 sketches the underlying semantics of these relations. It is possible to include additional relations as long as they are automatically extractable and their inverses are defined in the image space. In domains with super-pixels that differ in size, a larger relation between super-pixels could be defined. Also, a not_equal relation can be considered.
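A sketch of relation extraction from center points, following the Figure 2 semantics: left_of/top_of compare center coordinates (image-space convention, y growing downwards), while on additionally requires immediate grid adjacency. The cell size and the exact predicates are assumptions for illustration.

```python
CELL = 32  # assumed grid cell size in pixels

def relations(name_a, a, name_b, b):
    """Yield relation facts between two cell centers a = (ax, ay), b = (bx, by)."""
    facts = []
    if a[0] < b[0]:
        facts.append(f"left_of({name_a},{name_b})")
    if a[1] < b[1]:
        facts.append(f"top_of({name_a},{name_b})")
    if a[0] == b[0] and b[1] - a[1] == CELL:
        facts.append(f"on({name_a},{name_b})")   # a is directly above b
    return facts

print(relations("sp1", (32, 32), "sp2", (32, 64)))
```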

2.3 Learning Rules for Relational Concepts via Aleph

To generate symbolic relational explanations for visual domains, we combine LIME's super-pixel weighting with Aleph's theory generation. The input to LIME-Aleph is an image x and a model f returning class probability estimations f(x). Currently our approach is only applicable for explaining one class in contrast to all other classes, effectively re-framing the original classification as a concept learning problem. The output of LIME-Aleph is a theory T of logic rules describing the relations between the super-pixels that lead to the class decision.

1: Require: Instance x
2: Require: Classifier f, Selection size k, Threshold θ
3: Require: Attribute pool A, Relation pool R
4:    S ← LIME(x, f, k)    Selection of most important super-pixels.
5:    A' ← extract_attribute_values(S)    Find all attribute values for all sp ∈ S.
6:    R' ← extract_relations(S)    Find all relations between all sp ∈ S.
7:    E+ ← {x}
8:    E− ← ∅
9:    for each r ∈ R' do
10:       x' ← flip_in_image(x, r)    Flip the super-pixels in the image space.
11:       r' ← flip(r)    Obtain new predicate for the BK by flipping parameters.
12:       R'' ← (R' \ {r}) ∪ {r'}    All relations in the BK; also the altered one.
13:       R'' ← update(R'')    Re-calculate relations that are affected by the flipped relation.
14:       c ← f(x')    Obtain new estimator for the perturbed image.
15:      if c ≥ θ do    If estimator reaches threshold, add new positive example.
16:         E+ ← E+ ∪ {x'}
17:      else    Else, add negative example.
18:         E− ← E− ∪ {x'}
19:   end for
20:    T ← Aleph(E+, E−, BK)    Obtain theory T with Aleph.
21: return T
Algorithm 1 Explanation Generation with LIME-Aleph.

LIME’s explanation relies on the linear surrogate model, which contains the set of super-pixels with the highest positive weights for the true class. When asking which relations contribute most to the classification, identifying the most informative super-pixels has to be replaced by identifying the most informative pairs of super-pixels. Instead of turning super-pixels on and off, LIME-Aleph inverts extracted relations between super-pixels and observes the effects on the classification. Algorithm 1 shows our approach. Given the selection S of super-pixels together with the extracted attributes, our approach first finds all relations that hold between them. For every relation r, a new perturbed example is created by flipping the two super-pixels participating in r in the image space. To generate a new example for Aleph, the resulting perturbed image is first put through the classifier f. If the estimator exceeds a threshold θ for the class we want to explain, a new positive example is declared. Otherwise, the example is declared negative. All relations holding for the perturbed image are written into the BK characterizing this example. The initial positive example for Aleph is always generated from the unaltered constellation.
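The perturbation loop of Algorithm 1 can be condensed as in the following sketch; flip_in_image, the classifier, and the toy image encoding are hypothetical stand-ins for the real components.

```python
def lime_aleph_examples(image, relations, flip_in_image, f, theta):
    """For every extracted relation, flip it in image space, re-classify,
    and sort the perturbed image into positives or negatives by threshold."""
    positives, negatives = [image], []       # the unaltered image seeds E+
    for r in relations:
        perturbed = flip_in_image(image, r)  # swap the two super-pixels of r
        (positives if f(perturbed) >= theta else negatives).append(perturbed)
    return positives, negatives

# Toy usage: "images" are tuples, flipping reverses them, and the
# classifier only accepts the original order.
pos, neg = lime_aleph_examples((1, 2), ["left_of"],
                               lambda img, r: tuple(reversed(img)),
                               lambda img: 1.0 if img == (1, 2) else 0.0, 0.8)
print(len(pos), len(neg))  # 1 1
```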

3 Experiments and Results

We investigate the applicability of LIME-Aleph in a blocksworld domain consisting of differently colored squares that can be placed in a regular-grid world. For a first investigation, we decided to focus on artificially generated images rather than real-world domains.

3.1 An Artificial Dataset for Relational Concepts

We implemented a generator to create a large variety of positive and negative example images for different blocksworld concepts. All generated images are of fixed size and consist of a single-colored red background containing constellations of equally sized colored squares. The squares are single-colored (excluding red) with color-channel values set to either 0.0 or 0.8. The squares are placed into the image according to a uniform grid. Positive examples are generated by first randomly placing a reference square. Then, the other squares are placed randomly following the relation conventions shown in Figure 2. For the experiments we restricted the attribute pool to {color} with attribute values in {cyan, green, blue} and the relation pool to {left_of, right_of, top_of, bottom_of, on, under}.
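A sketch of such a generator is given below; the cell size, grid dimensions, and exact channel values are assumptions where the paper's values are not recoverable from this text.

```python
import numpy as np

CELL, GRID = 8, 4   # assumed cell size in pixels and grid side length
COLORS = {"cyan": (0.0, 0.8, 0.8), "green": (0.0, 0.8, 0.0),
          "blue": (0.0, 0.0, 0.8)}

def render(placements):
    """Render {(row, col): color_name} placements on a red background."""
    img = np.zeros((GRID * CELL, GRID * CELL, 3))
    img[..., 0] = 0.8                      # single-colored red background
    for (r, c), name in placements.items():
        img[r*CELL:(r+1)*CELL, c*CELL:(c+1)*CELL] = COLORS[name]
    return img

# A positive example for "green left_of blue" in the same grid row:
img = render({(1, 0): "green", (1, 2): "blue"})
print(img.shape)  # (32, 32, 3)
```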

3.2 Training a Black-Box Model

To obtain a black-box model for image classification, we used a small convolutional neural network [11] which we trained from scratch with commonly known best-practice hyper-parameters. The network consists of two convolution layers with a fixed kernel size and ReLU activations. Each layer learns 16 filters to be able to robustly recognize the colored squares. After flattening the output, the convolution layers are followed by two fully connected layers, each with a ReLU activation. The first layer consists of 256 neurons, the second one of 128 neurons. A small amount of dropout is applied after each layer to cope with potential overfitting [24]. The network does not contain a pooling layer. That way, less location information is lost in aggregation during the learning process, which we believe is crucial for preserving spatial relationships (see [6], p. 331). For the experiments, we generated perfectly balanced datasets with 7,000 training, 2,000 validation, and 1,000 test images for both the concept and the counter-examples. We trained the networks for a maximum of 10 epochs with early stopping if the validation loss did not decrease for 5 epochs.

3.3 Experiment 1: Single Relation Concept

Figure 3: Positive (a, b) and negative (c) examples for the first experiment.

The concept for the first experiment can be described by the single relation that a green square is left of a blue square. Figure 3 shows two positive examples (a, b) and one negative example (c). After the full 10 epochs, the accuracy on the validation set reached 93.47%. For image 3.a the classifier estimated the probability of the concept at 89.83%; for image 3.b the estimate was 94.18%. The estimator output for 3.c belonging to the concept was 0.28%, showing that the network is able to discriminate the positive and negative examples. To generate explanations for these three images, each image is separately fed into LIME. The number k of kept super-pixels is set to 3. We chose this value for k because we were aware that there are two squares in the image that are distinguishable from the background. One additional super-pixel was taken to generate a richer pool for the selection, containing also some background. In general, for many domains it is not easy to estimate good values for k, so in most cases it is preferable to over-estimate the value in order not to lose information for the explanation.

Finally, a symbolic explanation is generated with LIME-Aleph. We describe the procedure for the positive example 3.a. First, Algorithm 1 extracts the colors of the selected super-pixels and the relations between them. Then, all the relations are flipped one after the other to produce the example set and BK for Aleph. The original example 3.a is used as the seed for a set of perturbed versions of the image. The threshold θ indicates whether a perturbed example is classified positive or negative; it was set based on the final validation accuracy of the trained CNN and on the estimator for the original image 3.a. For example 3.a, 3 positive and 4 negative examples were created. From these 7 examples, Aleph induced a theory consisting of a single rule with an accuracy of 100%:

concept(A) :- contains(B,A), has_color(B,green), contains(C,A), has_color(C,blue), left_of(B,C).

The learned rule accurately resembles the construction rule of the wanted concept: a green square has to be left of a blue square in an example A. Also, this explanation matches the input image.
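The induced rule can be read as a conjunctive query over the background-knowledge facts of one example. A sketch with a hand-written fact set (assumed for illustration):

```python
# BK facts for one example, as (predicate, arg1, arg2) tuples.
facts = {("contains", "b1", "e1"), ("contains", "b2", "e1"),
         ("has_color", "b1", "green"), ("has_color", "b2", "blue"),
         ("left_of", "b1", "b2")}

def concept(example):
    """concept(A) :- contains(B,A), has_color(B,green),
                     contains(C,A), has_color(C,blue), left_of(B,C)."""
    blocks = {b for (p, b, e) in facts if p == "contains" and e == example}
    return any(("has_color", b, "green") in facts and
               ("has_color", c, "blue") in facts and
               ("left_of", b, c) in facts
               for b in blocks for c in blocks)

print(concept("e1"))  # True
```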

For image 3.b we used the same hyper-parameters. Again, Aleph reached an accuracy of 100% with a rule that is structurally different but conveys the same concept as the first rule:

concept(A) :- contains(B,A), has_color(B, blue), contains(C,A), has_color(C, green), left_of(C,B).

3.4 Experiment 2: Tower Concept

In the second experiment, we investigated a specific concept of towers. Positive examples consist of three differently colored blocks with a given restriction on their stacking order. An example belongs to the concept tower if a blue square is present as a foundation. Directly on the foundation (one grid cell above) there has to be a square of either cyan or green color. Directly on that square has to be the remaining square (green or cyan). Figure 4 gives a positive and two negative examples.

Figure 4: Positive (a) and negative (b, c) examples for the tower experiment.

We again trained the CNN for 10 epochs. The final validation accuracy was 98.70%. The estimator for the original example 4.a exceeded the threshold. We first set k = 3, the smallest selection size that can contain a tower of three blocks. Using the same threshold setting, LIME-Aleph came up with 5 positive and 6 negative examples (accuracy 81.82%) and the following rule:

concept(A) :- contains(B,A), has_color(B,cyan), contains(C,A), on(B,C).

This rule expresses the fact that the cyan square cannot be the foundation. When increasing the selection size by one, we let an additional background super-pixel be part of the selection S. The resulting rule is:

concept(A) :- contains(B,A), has_color(B,cyan), contains(C,A), has_color(C,blue), top_of(B,C).

This rule captures the fact that a cyan block has to be above a blue one. The generated explanations are only partial representations of the intended concept: the symbolic explanations capture relevant aspects, but are too general.

4 Bringing Together Visual and Symbolic Explanations

The generated rules give explanations in symbolic form which can be re-written into verbal statements. We postulate that helpful explanations for images should relate the highlighting of relevant parts of the image with explicit symbolic information about attributes and relations. In this section we give an example of what this fusion might look like. Let us take the tower example from Section 3.4. In Figure 5, the output of standard LIME is given, with the 3 most important super-pixels matching the expected region in the image. Additionally, the relation from the instantiated rule from the experiment is given. Since the cyan square is the only square mentioned in the rule, we take it as a reference. The relation on links the cyan square to another, unknown square below. This relation is shown explicitly in the image by connecting the two squares and writing out the instantiated relation.

Figure 5: An example for the combination of visual and verbal explanations. Here it is explained why and where this particular image shows evidence for belonging to the concept tower.

5 Conclusion and Further Work

We proposed an approach to extract symbolic rules from images which can be used to explain classifier decisions in a more expressive way than visual highlighting alone. For a simple artificial domain we gave a proof of concept for our approach LIME-Aleph. The work presented here significantly extends [17] by providing a method for the automated extraction of visual attributes and spatial relations from images. As a next step we want to have the explanatory power of the approach evaluated by humans. We also plan to cover real-world image datasets, for example explaining differences between towers and houses as shown in Figure 1. The challenge here is to come up with arbitrarily placeable segmentations that are easily interchangeable. While our algorithm relies on a regular grid, in an image “in the wild” the semantic borders of sub-objects can be irregular in shape and cannot easily be flipped in order to test for different relations. One idea to cope with these problems is to use relevance information from inner layers of a CNN (e.g., with LRP [19]) to first pinpoint small important regions and sub-objects, then super-impose a standardized selection shape (square, circle, etc.) over the pixel values to find interchangeable super-pixels for filling the selection S.

In general, it might be useful to consider a variety of explanation formats to accommodate specific personal preferences and situational contexts. For example, visual highlighting is a quick way to communicate what is important, while verbal explanations convey more details. Likewise, examples which prototypically represent a class and near-miss counter-examples could be used to make system decisions more transparent [1]. Explanations might also not be a one-way street. In many domains, it is an illusion that the labeling of the training data is really a ground truth. For example, in medical diagnosis, there are many cases where not even experts agree. Therefore, for many practical applications, learning should be interactive [4]. To constrain model adaptation, the user could mark up those parts of an explanation which are irrelevant or wrong. Such a cooperative approach might improve the joint performance of the human-machine partnership.

References

  • [1] Adadi, A., Berrada, M.: Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138–52160 (2018)
  • [2] Bansal, T., Neelakantan, A., McCallum, A.: Relnet: End-to-end modeling of entities & relations. In: NIPS Workshop on Automated Knowledge Base Construction (AKBC) (2017)
  • [3] Efron, B., Hastie, T., Johnstone, I., Tibshirani, R.: Least angle regression. The Annals of Statistics 32(2), 407–499 (2004)
  • [4] Fails, J.A., Olsen Jr, D.R.: Interactive machine learning. In: Proceedings of the 8th International Conference on Intelligent User Interfaces. pp. 39–45. ACM (2003)
  • [5] Gentner, D., Markman, A.B.: Structural alignment in comparison: No difference without similarity. Psychological Science 5(3), 152–158 (1994)
  • [6] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT press (2016)
  • [7] Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A., Colmenarejo, S.G., Grefenstette, E., Ramalho, T., Agapiou, J., et al.: Hybrid computing using a neural network with dynamic external memory. Nature 538, 471–476 (2016)
  • [8] Grice, H.P.: Logic and conversation. In: Cole, P., Morgan, J. (eds.) Syntax & Semantics, vol. 3, pp. 41–58. Academic Press (1975)
  • [9] Gunning, D.: Explainable artificial intelligence (XAI) (2017), https://www.darpa.mil/attachments/XAIProgramUpdate.pdf
  • [10] Kazemi, S.M., Poole, D.: RelNN: A deep neural model for relational learning. In: McIlraith, S.A., Weinberger, K.Q. (eds.) Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18, New Orleans, Louisiana, USA, Feb.  2-7). pp. 6367–6375. AAAI Press (2018)
  • [11] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: 26th Annual Conference on Neural Information Processing Systems (NIPS, Lake Tahoe, NV, December 3-6). pp. 1106–1114 (2012)
  • [12] Mayer, R.E., Sims, V.K.: For whom is a picture worth a thousand words? Extensions of a dual-coding theory of multimedia learning. Journal of Educational Psychology 86(3), 389–401 (1994)
  • [13] Miller, T.: Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267, 1–38 (2019)
  • [14] Muggleton, S., De Raedt, L.: Inductive logic programming: Theory and methods. Journal of Logic Programming, Special Issue on 10 Years of Logic Programming 19-20, 629–679 (1994)
  • [15] Muggleton, S.H., Lin, D., Tamaddoni-Nezhad, A.: Meta-interpretive learning of higher-order dyadic datalog: Predicate invention revisited. Machine Learning 100(1), 49–73 (2015)
  • [16] Muggleton, S.H., Schmid, U., Zeller, C., Tamaddoni-Nezhad, A., Besold, T.: Ultra-strong machine learning: comprehensibility of programs learned with ILP. Machine Learning 107(7), 1119–1140 (2018)
  • [17] Rabold, J., Siebers, M., Schmid, U.: Explaining black-box classifiers with ILP - empowering LIME with aleph to approximate non-linear decisions with relational rules. In: Riguzzi, F., Bellodi, E., Zese, R. (eds.) 28th International Conference, (ILP, Ferrara, Italy, September 2-4). LNCS, vol. 11105, pp. 105–117. Springer (2018)
  • [18] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should I trust you?: Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 1135–1144. ACM (2016)
  • [19] Samek, W., Wiegand, T., Müller, K.R.: Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. ITU Journal: ICT Discoveries - Special Issue 1 - The Impact of Artificial Intelligence (AI) on Communication Networks and Services 1(1), 39–48 (2018)
  • [20] Siebers, M., Schmid, U.: Please delete that! why should I? – Explaining learned irrelevance classifications of digital objects. KI 33(1), 35–44 (2019)
  • [21] Siebers, M., Schmid, U., Seuß, D., Kunz, M., Lautenbacher, S.: Characterizing facial expressions by grammars of action unit sequences–a first investigation using abl. Information Sciences 329, 866–875 (2016)
  • [22] Srinivasan, A.: The Aleph Manual. http://www.cs.ox.ac.uk/activities/machinelearning/Aleph/ (2004)
  • [23] Srinivasan, A., Muggleton, S.H., Sternberg, M.J., King, R.D.: Theories for mutagenicity: A study in first-order and feature-based induction. Artificial Intelligence 85(1-2), 277–299 (1996)
  • [24] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1), 1929–1958 (2014)
  • [25] Vedaldi, A., Soatto, S.: Quick shift and kernel methods for mode seeking. In: European Conference on Computer Vision. pp. 705–718. Springer (2008)