Trained neural networks are usually viewed as black boxes, in that they give no direct indication why an output (e.g., a prediction) was produced. The reason for this lies in the distributed nature of the information encoded in the weighted connections of the network. For many applications, e.g., safety-critical ones, this is an unsatisfactory situation. Methods are therefore sought to explain how the outputs of trained neural networks are reached.
This topic of explaining trained neural networks is not a new one; in fact, there is already quite a bit of tradition and literature on rule extraction from such networks (see, e.g., [2, 9, 16]), which pursued very similar goals. Rule extraction, however, utilized propositional rules as the target logic for generating explanations, and as such remained very limited in terms of human-understandable explanations. Novel deep learning architectures attempt to retrieve explanations as well, but often only for computer vision tasks such as object or scene recognition. Moreover, explanations in this context actually encode greater detail about the images provided as input, rather than explaining why or how the neural network was able to recognize a particular object or scene.
The Semantic Web field is concerned with data sharing, discovery, integration, and reuse. As a field, it does not only target data on the World Wide Web; its methods are also applicable to knowledge management and other tasks off the Web. Central to the field is the use of knowledge graphs (usually expressed using the W3C standard Resource Description Framework, RDF) and type logics attached to these graphs, which are called ontologies and are usually expressed using the W3C standard Web Ontology Language, OWL.
This paper introduces a new paradigm for explaining neural network behavior. It goes beyond the limited propositional paradigm and directly targets the problem of explaining neural network activity rather than the qualities of the input. The paradigm leverages advances in knowledge representation on the World Wide Web, more precisely from the field of Semantic Web technologies. In particular, it utilizes the fact that methods, tools, and structured data in the mentioned formats are now widely available, and that the amount of such structured data on the Web is in fact constantly growing [5, 18]. Prominent examples of large-scale datasets include Wikidata and data coming from the schema.org effort, which is driven by major Web search engine providers. We will utilize this available data as background knowledge, on the hypothesis that background knowledge will make it possible to obtain more concise explanations. This addresses an issue in propositional rule extraction: extracted rulesets are often large and complex, and due to their size difficult for humans to understand. While the paper only attempts to explain input-output behavior, the authors are actively exploring ways to also explain internal node activations.
An illustrative example
Let us consider the following very simple example which is taken from . Assume that the input-output mapping of the neural network without background knowledge could be extracted as
Now assume furthermore that we also have background knowledge in form of the rules
The background knowledge then makes it possible to obtain the simplified input-output mapping , as
The simplification through the background knowledge is caused by acting as a “generalization” of both and . For the rest of the paper it may be beneficial to think of , and as classes or concepts, which are hierarchically related, e.g., being “oak,” being “maple,” and being “tree.”
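Although the concrete formulas from the example did not survive in this rendering, the mechanism can be checked with a small truth-table computation. The propositional symbols below (p for "oak", q for "maple", r for "tree", s for the network output) are illustrative stand-ins, not the paper's original symbols:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Background knowledge: every oak is a tree, every maple is a tree.
def background(p, q, r):
    return implies(p, r) and implies(q, r)

# Extracted input-output mapping without background knowledge:
# "oak implies output" and "maple implies output".
def extracted(p, q, r, s):
    return implies(p, s) and implies(q, s)

# Simplified mapping using background knowledge: "tree implies output".
def simplified(p, q, r, s):
    return implies(r, s)

# On every valuation satisfying the background knowledge, the simplified
# rule entails the extracted mapping.
assert all(
    implies(simplified(p, q, r, s), extracted(p, q, r, s))
    for p, q, r, s in product([False, True], repeat=4)
    if background(p, q, r)
)
print("simplified rule entails extracted mapping under background knowledge")
```

Note that the entailment only holds relative to the background knowledge; without it, the simplified rule and the extracted mapping are incomparable.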
Yet this example is confined to propositional logic. (How to go beyond the propositional paradigm in neural-symbolic integration is one of the major challenges in the field.) In the following, we show how we can bring structured (non-propositional) Semantic Web background knowledge to bear on the problem of explanation generation for trained neural networks, and how we can utilize Semantic Web technologies in order to generate non-propositional explanations. This work is at a very early stage, i.e., we will only present the conceptual architecture of the approach and minimal experimental results which are encouraging for continuing the effort.
The rest of the paper is structured as follows. In Section 2 we introduce notation as needed, in particular regarding the description logics which underlie the OWL standard, and briefly introduce the DL-Learner tool, which features prominently in our approach. In Section 3 we present the conceptual and experimental setup for our approach and report on some first experiments. In Section 4 we conclude and discuss avenues for future work.
Description logics are a major paradigm in knowledge representation, a subfield of artificial intelligence. At the same time, they play a very prominent role in the Semantic Web field, since they are the foundation for one of the central Semantic Web standards, namely the W3C Web Ontology Language OWL [11, 12].
Technically speaking, a description logic is a decidable fragment of first-order predicate logic (sometimes with equality or other extensions) using only unary and binary predicates. The unary predicates are called atomic classes (or atomic concepts), the binary ones are referred to as roles (or properties), and constants are referred to as individuals. In the following, we formally define the fundamental description logic known as ALC, which will suffice for this paper. OWL is a proper superset of ALC.
Description logics allow for a simplified syntax (compared to first-order predicate logic), and we will introduce ALC in this simplified syntax. A translation into first-order predicate logic is provided further below.
Let be a finite set of atomic classes, be a finite set of roles, and be a finite set of individuals. Then class expressions (or simply, classes) are defined recursively using the following grammar, where denotes atomic classes from and denotes roles from . The symbols and denote conjunction and disjunction, respectively.
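In standard notation, with A ranging over atomic classes and R over roles, the grammar for class expressions of this description logic reads:

```latex
C, D ::= A \mid \top \mid \bot \mid \neg C \mid C \sqcap D \mid C \sqcup D \mid \exists R.C \mid \forall R.C
```

Here ⊤ and ⊥ denote the universal and the empty class, and the existential and universal restrictions ∃R.C and ∀R.C quantify over role successors.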
A TBox is a set of statements, called (general class inclusion) axioms, of the form , where and are class expressions – the symbol can be understood as a type of subset inclusion, or alternatively, as a logical implication. An ABox is a set of statements of the forms or , where is an atomic class, is a role, and are individuals. A description logic knowledge base consists of a TBox and an ABox. The notion of ontology is used in different ways in the literature; sometimes it is used as equivalent to TBox, sometimes as equivalent to knowledge base. We will adopt the latter usage.
We characterize the semantics of knowledge bases by giving a translation into first-order predicate logic. If is a TBox axiom of the form , then is defined inductively as in Figure 1, where is a class name. ABox axioms remain unchanged.
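The standard translation, which Figure 1 presumably instantiates, maps each class expression C to a first-order formula with one free variable x:

```latex
\begin{aligned}
\pi(C \sqsubseteq D) &= \forall x\,\bigl(\pi_x(C) \to \pi_x(D)\bigr) \\
\pi_x(A) &= A(x) \\
\pi_x(\neg C) &= \neg\,\pi_x(C) \\
\pi_x(C \sqcap D) &= \pi_x(C) \wedge \pi_x(D) \\
\pi_x(C \sqcup D) &= \pi_x(C) \vee \pi_x(D) \\
\pi_x(\exists R.C) &= \exists y\,\bigl(R(x,y) \wedge \pi_y(C)\bigr) \\
\pi_x(\forall R.C) &= \forall y\,\bigl(R(x,y) \to \pi_y(C)\bigr)
\end{aligned}
```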
For purposes of illustrating DL-Learner, Figure 2 shows two sets of trains: the positive examples are on the left, the negative ones on the right. Following , we use a simple encoding of the trains as a knowledge base: each train is an individual and has cars attached to it using the hasCar property, and each car then falls into different categories, e.g., the top leftmost car would fall into the classes Open, Rectangular, and Short, and would also have information attached to it regarding the symbol carried (in this case, a square) and how many of them (in this case, one).
Given these examples and knowledge base, DL-Learner comes up with the class
which indeed is a simple class expression such that all positive examples fall under it, while no negative example does.
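DL-Learner's actual search proceeds via refinement operators over class expressions [17]; its core acceptance test, that a candidate class covers all positive and no negative examples, can be sketched as follows (the data and the restriction to conjunctions of atomic features are simplifying assumptions, not the paper's train encoding):

```python
# Coverage test applied to a candidate class expression, here simplified
# to a conjunction of atomic features represented as a set.
def covers(candidate, example):
    """candidate: set of required features; example: set of features."""
    return candidate <= example

# Hypothetical encodings of trains as feature sets.
positives = [{"short", "closed"}, {"short", "closed", "two_symbols"}]
negatives = [{"long", "open"}, {"long", "closed"}]

def acceptable(candidate):
    # Accept iff every positive example falls under the candidate class
    # while no negative example does.
    return (all(covers(candidate, e) for e in positives)
            and not any(covers(candidate, e) for e in negatives))

assert acceptable({"short", "closed"})
assert not acceptable({"closed"})   # also covers a negative example
```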
3 Approach and Experiments
In this paper, we follow the lead of the propositional rule extraction work mentioned in the introduction, with the intent of improving on it in several ways.
- We generalize the approach by going significantly beyond the propositional rule paradigm, utilizing description logics.
- We include significantly sized and publicly available background knowledge in our approach in order to arrive at explanations which are more concise.
More concretely, we use DL-Learner as the key tool for arriving at explanations. Figure 3 depicts our conceptual architecture: the trained artificial neural network (connectionist system) acts as a classifier. Its inputs are mapped to a background knowledge base, and according to the network's classification, positive and negative examples are distinguished. DL-Learner is then run on the example sets and provides explanations for the classifications based on the background knowledge.
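This pipeline can be summarized in pseudocode; all function and method names below (classify, individual_for, learn) are hypothetical placeholders, not actual DL-Learner APIs:

```python
# Conceptual pipeline sketch (hypothetical interfaces):
# 1. the trained network classifies each input;
# 2. each input is mapped to an individual in the background knowledge base;
# 3. DL-Learner searches for class expressions separating the two sets.

def explain(network, inputs, knowledge_base, dl_learner):
    positives, negatives = [], []
    for x in inputs:
        individual = knowledge_base.individual_for(x)  # mapping step
        if network.classify(x):
            positives.append(individual)
        else:
            negatives.append(individual)
    # DL-Learner returns class expressions covering all positives and
    # none of the negatives, phrased in terms of the background knowledge.
    return dl_learner.learn(knowledge_base, positives, negatives)
```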
In the following, we report on preliminary experiments we have conducted using our approach. Their sole purpose is to provide first and very preliminary insights into the feasibility of the proposed method. All experimental data is available from http://daselab.org/projects/human-centered-big-data.
We utilize the ADE20K dataset [23, 24]. It contains 20,000 images which have been pre-classified regarding the scenes depicted; i.e., we assume that the classification is done by a trained neural network. (Strictly speaking, this is not true for the training subset of the ADE20K dataset, but that does not really matter for our demonstration.) For our initial test, we used six images, three of which have been classified as “outdoor warehouse” scenes (our positive examples), and three of which have not been classified as such (our negative examples). In fact, for simplicity, we took the negative examples from among the images which had been classified as “indoor warehouse” scenes. The images are shown in Figure 4.
The ADE20K dataset furthermore provides annotations for each image which identify objects detected in the image. The annotations are in fact richer than that, also covering the number of objects, whether they are occluded, and more, but for our initial experiment we only used the presence or absence of an object. To keep the initial experiment simple, we furthermore only used those detected objects which could easily be mapped to our chosen background knowledge, the Suggested Upper Merged Ontology (SUMO, http://www.adampease.org/OP/). Table 1 shows, for each image, the objects we kept.
image : road, window, door, wheel, sidewalk, truck, box, building
image : tree, road, window, timber, building, lumber
image : hand, sidewalk, clock, steps, door, face, building, window, road
image : shelf, ceiling, floor
image : box, floor, wall, ceiling, product
image : ceiling, wall, shelf, floor, product
The Suggested Upper Merged Ontology was chosen because it contains many common terms, about 25,000, covering a wide range of domains. At the same time, the ontology arguably structures the terms in a relatively straightforward manner, which seemed to simplify matters for our initial experiment.
In order to connect the annotations to SUMO, we used a single role called “contains.” Each image was made an individual in the knowledge base. Furthermore, for each of the object-identifying terms in Table 1, we either identified a matching SUMO class or created one and added it to SUMO by inserting it at an appropriate place within SUMO's class hierarchy. We furthermore created individuals for each of the object-identifying terms in Table 1, including duplicates, and added them to the knowledge base by typing them with the corresponding class. Finally, we related each image individual to each corresponding object individual via the “contains” role.
To exemplify – for the image we added individuals road1, window1, door1, wheel1, sidewalk1, truck1, box1, building1, declared Road(road1), Window(window1), etc., and finally added the ABox statements , , etc., to the knowledge base. For the image , we added , , etc. as well as the corresponding type declarations Tree(tree2), Road(road2), etc.
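The ABox construction just described can be sketched as follows; the triples are built as plain tuples, the individual names follow the naming in the text (road1, window1, ...), and the image identifier "image1" is an illustrative assumption:

```python
# Build ABox statements for one image: a type assertion per object
# individual, e.g. Road(road1), plus a contains(image1, road1) assertion
# linking the image individual to the object individual.
abox = set()
objects_image1 = ["road", "window", "door", "wheel",
                  "sidewalk", "truck", "box", "building"]
for name in objects_image1:
    individual = name + "1"                            # e.g. "road1"
    abox.add((individual, "type", name.capitalize()))  # Road(road1)
    abox.add(("image1", "contains", individual))       # contains(image1, road1)

# Two ABox statements per annotated object.
assert len(abox) == 2 * len(objects_image1)
```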
The mapping of the image annotations to SUMO is of course very simple, and this was done deliberately in order to show that a straightforward approach already yields interesting results. As our work progresses, we do of course anticipate that we will utilize more complex knowledge bases and will need to generate more complex mappings from picture annotations (or features) to the background knowledge.
Finally, we ran DL-Learner on the knowledge base, with the positive and negative examples as indicated. DL-Learner returns 10 solutions, which are listed in Figure 5. Of these, some are straightforward from the image annotations, such as (1), (5), (8), (9), and (10). Others, such as (2), (4), (6), and (7), are much more interesting, as they provide solutions in terms of the background knowledge without using any of the terms from the original annotations. Solution (3) looks odd at first sight but is meaningful in the context of the SUMO ontology: SelfConnectedObject is an abstract class which is a direct child of the class Object in SUMO's class hierarchy. Its natural-language definition is given as “A SelfConnectedObject is any Object that does not consist of two or more disconnected parts.” As such, the class is a superclass of the class Road, which explains why (3) is indeed a solution in terms of the SUMO ontology.
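The reasoning behind solution (3) can be replayed in a few lines. The hierarchy fragment below follows the SUMO relationships just described; the image annotation is a hypothetical positive example containing a road:

```python
# Minimal subclass reasoning behind solution (3): Road is (transitively)
# a subclass of SelfConnectedObject, so an image containing a road falls
# under the class "contains some SelfConnectedObject".
superclass = {"Road": "SelfConnectedObject", "SelfConnectedObject": "Object"}

def ancestors(cls):
    while cls in superclass:
        cls = superclass[cls]
        yield cls

def contains_some(image_objects, target_cls):
    """Does the image contain an object typed with target_cls or a subclass?"""
    return any(target_cls == c or target_cls in ancestors(c)
               for c in image_objects)

image = ["Road", "Window", "Door"]   # hypothetical annotation
assert contains_some(image, "SelfConnectedObject")
```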
We have conducted four additional experiments along the same lines as described above. We briefly describe them below – the full raw data and results are available from http://daselab.org/projects/human-centered-big-data.
In the second experiment, we chose four workroom pictures as positive examples, and eight warehouse pictures (indoors and outdoors) as negative examples. An example explanation DL-Learner came up with is
One of the outdoor warehouse pictures indeed shows timber. DurableGoods in SUMO include furniture, machinery, and appliances.
In the third experiment, we chose the same four workroom pictures as negative examples, and the same eight warehouse pictures (indoors and outdoors) as positive examples. An example explanation DL-Learner came up with is
i.e., “contains neither furniture nor industrial supply”. IndustrialSupply in SUMO includes machinery. Indeed, it turns out that furniture alone is insufficient for distinguishing between the positive and negative examples, because “shelf” is not classified as furniture in SUMO. This shows the dependency of the explanations on the conceptualizations encoded in the background knowledge.
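In the description logic notation of Section 2, the paraphrase “contains neither furniture nor industrial supply” corresponds to a class expression of the form (a reconstruction from the paraphrase, not a verbatim copy of DL-Learner's output):

```latex
\neg\,(\exists\, \text{contains}.\text{Furniture}) \;\sqcap\; \neg\,(\exists\, \text{contains}.\text{IndustrialSupply})
```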
In the fourth experiment, we chose eight market pictures (indoors and outdoors) as positive examples, and eight warehouse pictures (indoors and outdoors) as well as four workroom pictures as negative examples. An example explanation DL-Learner came up with is
And indeed it turns out that people are shown on all the market pictures. There is actually also a man shown on one of the warehouse pictures, driving a forklift; however, “man” or “person” was not among the annotations used for that picture. This example indicates how our approach could be utilized: a human monitor inquiring with an interactive system about the reasons for a certain classification may notice that the man was missed by the software on that particular picture, and can opt to interfere with the decision and attempt to correct it.
In the fifth experiment, we chose four mountain pictures as positive examples, and eight warehouse pictures (indoors and outdoors) as well as four workroom pictures as negative examples. An example explanation DL-Learner came up with is
Indeed, it turns out that all mountain pictures in the example set show either a river or a lake. Similar to the previous example, a human monitor may be able to catch that some misclassifications may occur because presence of a body of water is not always indicative of presence of a mountain.
4 Conclusions and Further Work
We have laid out a conceptual sketch of how to approach the issue of explaining artificial neural networks' classification behavior using Semantic Web background knowledge and technologies, in a non-propositional setting. We have also reported on some very preliminary experiments in support of our concepts.
The sketch already indicates where to go from here: We will need to incorporate more complex and more comprehensive background knowledge, and if readily available structured knowledge turns out to be insufficient, then we foresee using state-of-the-art knowledge graph generation and ontology learning methods [13, 19] to obtain suitable background knowledge. We will need to use automatic methods for mapping network input features to the background knowledge [7, 21], while the features to be mapped may have to be generated from the input in the first place, e.g., using object recognition software in the case of images. Finally, we also intend to apply the approach to sets of hidden neurons in order to understand what their activations indicate.
Acknowledgements. This work was supported by the Ohio Federal Research Network project Human-Centered Big Data.
-  Baader, F., Calvanese, D., McGuinness, D., Nardi, D., Patel-Schneider, P.F. (eds.): The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press, 2nd edn. (2010)
-  Bader, S., Hitzler, P.: Dimensions of neural-symbolic integration – A structured survey. In: Artëmov, S.N., Barringer, H., d’Avila Garcez, A.S., Lamb, L.C., Woods, J. (eds.) We Will Show Them! Essays in Honour of Dov Gabbay, Volume One. pp. 167–194. College Publications (2005)
-  Beckett, D., Berners-Lee, T., Prud’hommeaux, E., Carothers, G.: RDF 1.1. Turtle – Terse RDF Triple Language. W3C Recommendation (25 February 2014), available at http://www.w3.org/TR/turtle/
-  Berners-Lee, T., Hendler, J., Lassila, O.: The Semantic Web. Scientific American 284(5), 34–43 (May 2001)
-  Bizer, C., Heath, T., Berners-Lee, T.: Linked Data – The Story So Far. International Journal on Semantic Web and Information Systems 5(3), 1–22 (2009)
-  Bühmann, L., Lehmann, J., Westphal, P.: DL-Learner – A framework for inductive learning on the semantic web. Journal of Web Semantics 39, 15–24 (2016)
-  Euzenat, J., Shvaiko, P.: Ontology Matching, Second Edition. Springer (2013)
-  Garcez, A., Besold, T., de Raedt, L., Földiak, P., Hitzler, P., Icard, T., Kühnberger, K.U., Lamb, L., Miikkulainen, R., Silver, D.: Neural-symbolic learning and reasoning: Contributions and challenges. In: Gabrilovich, E., Guha, R., McCallum, A., Murphy, K. (eds.) Proceedings of the AAAI 2015 Spring Symposium on Knowledge Representation and Reasoning: Integrating Symbolic and Neural Approaches. Technical Report, vol. SS-15-03. AAAI Press, Palo Alto, CA (2015)
-  d’Avila Garcez, A.S., Zaverucha, G.: The connectionist inductive learning and logic programming system. Applied Intelligence 11(1), 59–77 (1999)
-  Guha, R.V., Brickley, D., Macbeth, S.: Schema.org: evolution of structured data on the web. Commun. ACM 59(2), 44–51 (2016)
-  Hitzler, P., Krötzsch, M., Parsia, B., Patel-Schneider, P.F., Rudolph, S. (eds.): OWL 2 Web Ontology Language Primer (Second Edition). W3C Recommendation (11 December 2012), http://www.w3.org/TR/owl2-primer/
-  Hitzler, P., Krötzsch, M., Rudolph, S.: Foundations of Semantic Web Technologies. CRC Press/Chapman & Hall (2010)
-  Ji, H., Grishman, R.: Knowledge base population: Successful approaches and challenges. In: Lin, D., Matsumoto, Y., Mihalcea, R. (eds.) The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, 19-24 June, 2011, Portland, Oregon, USA. pp. 1148–1158. The Association for Computer Linguistics (2011)
-  Labaf, M., Hitzler, P., Evans, A.B.: Propositional rule extraction from neural networks under background knowledge. In: Proceedings of the Twelfth International Workshop on Neural-Symbolic Learning and Reasoning, NeSy’17, London, UK, July 2017 (2017), to appear
-  Larson, J., Michalski, R.S.: Inductive inference of VL decision rules. SIGART Newsletter 63, 38–44 (1977)
-  Lehmann, J., Bader, S., Hitzler, P.: Extracting reduced logic programs from artificial neural networks. Applied Intelligence 32(3), 249–266 (2010)
-  Lehmann, J., Hitzler, P.: Concept learning in description logics using refinement operators. Machine Learning 78(1-2), 203–250 (2010)
-  Lehmann, J., Isele, R., Jakob, M., Jentzsch, A., Kontokostas, D., Mendes, P.N., Hellmann, S., Morsey, M., van Kleef, P., Auer, S., Bizer, C.: DBpedia – A large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web 6(2), 167–195 (2015)
-  Lehmann, J., Völker, J.: Perspectives on Ontology Learning, Studies on the Semantic Web, vol. 18. IOS Press (2014)
-  Muggleton, S., Raedt, L.D.: Inductive logic programming: Theory and methods. Journal of Logic Programming 19/20, 629–679 (1994)
-  Uren, V.S., Cimiano, P., Iria, J., Handschuh, S., Vargas-Vera, M., Motta, E., Ciravegna, F.: Semantic annotation for knowledge management: Requirements and a survey of the state of the art. J. Web Sem. 4(1), 14–28 (2006)
-  Vrandecic, D., Krötzsch, M.: Wikidata: a free collaborative knowledgebase. Commun. ACM 57(10), 78–85 (2014)
-  Zhou, B., Zhao, H., Puig, X., Fidler, S., Barriuso, A., Torralba, A.: Semantic understanding of scenes through the ADE20K dataset. arXiv preprint arXiv:1608.05442 (2016)
-  Zhou, B., Zhao, H., Puig, X., Fidler, S., Barriuso, A., Torralba, A.: Scene parsing through ADE20K dataset. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017)