
Interval Temporal Logic Decision Tree Learning

Decision trees are simple, yet powerful, classification models used to classify categorical and numerical data, and, despite their simplicity, they are commonly used in operations research and management, as well as in knowledge mining. From a logical point of view, a decision tree can be seen as a structured set of logical rules written in propositional logic. Since knowledge mining is rapidly evolving towards temporal knowledge mining, and since in many cases temporal information is best described by interval temporal logics, propositional logic decision trees may evolve towards interval temporal logic decision trees. In this paper, we define the problem of interval temporal logic decision tree learning, and propose a solution that generalizes classical decision tree learning.


1 Introduction

It is commonly recognized that modern decision trees are of primary importance among classification models [30]. They owe their popularity mainly to the fact that they can be trained and applied efficiently even on big datasets, and that they are easily interpretable, meaning that they are not only useful for prediction per se, but also for understanding the reasons behind the predictions. Interpretability is of extreme importance in domains in which understanding the classification process is as important as the accuracy of the classification itself, such as in the case of production business systems or the computer-aided medicine domain. A typical decision tree is constructed in a recursive manner, following the traditional Top Down Induction of Decision Trees (TDIDT) approach [26]: starting from the root, at each node the attribute that best partitions the training data, according to a predefined score, is chosen as a test to guide the partitioning of instances into child nodes. The process continues until a sufficiently high degree of purity (with respect to the target class), or a minimum cardinality constraint (with respect to the number of instances reaching the node), is achieved in the generated partitions. This is the case of the well-known decision tree learning algorithm ID3 [26], which is the precursor of the commonly used C4.5 [27]. A decision tree can be seen as a structured set of rules: every node of the tree can be thought of as a decision point, and, in this way, each branch becomes a conjunction of such conditional statements, that is, a rule, whose right-hand part is the class. A conditional statement may have many forms: it can be a yes/no statement (for binary categorical attributes), a categorical value statement (for non-binary categorical attributes), or a splitting value statement (for numerical attributes); the arity of the resulting tree is two if all attributes are binary or numerical, or more if there are categorical attributes with more than two categories. Each statement can be equivalently represented with propositional letters, so that a decision tree can also be seen as a structured set of propositional logic rules.
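For instance (a purely illustrative rule, not taken from any specific dataset), a branch that first tests a numerical attribute and then a binary categorical one can be read as the propositional rule

$$(\text{temperature} \ge 38) \wedge \text{cough} \rightarrow \text{flu},$$

where the split condition temperature ≥ 38 and the attribute cough play the role of propositional letters, and flu is the class at the leaf.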

Temporal classification: static solutions. Just focusing on the static aspects of data is not always adequate for classification; for example, in the medical domain, one may want to take into account which symptoms a patient was experiencing at the same time, or whether two symptoms were overlapping. That is, in some application domains, the temporal aspects of the information may be essential to an accurate prediction. Within static decision tree learning, temporal information may be aggregated in order to circumvent the absence of explicit tools for dealing with temporal information (for example, a patient can be labelled with a natural number describing how many times he/she has been running a fever during the observation period); the ability of a decision tree to perform a precise classification based on such processed data, however, strongly depends on how well data are prepared, and therefore on how well the underlying domain is understood. Alternatively, decision trees have been proposed that use frequent patterns [15, 19, 22] in nodes, considering the presence/absence of a frequent pattern as a categorical attribute [13, 18]. Nevertheless, despite being the most common approach to (explicit) temporal data classification, frequent patterns in sequences or series have a limited expressive power, as they are characterized by being existential and by intrinsically representing temporal information with instantaneous events.

Our approach: interval temporal logic decision trees. A different approach to temporal classification is mining temporal logic formulas, and since temporal databases universally adopt an interval-based representation of time, the ideal choice to represent temporal information in data is interval temporal logic. The most representative propositional interval temporal logic is Halpern and Shoham’s Modal Logic of Allen’s Relations [20], also known as HS. Its language encompasses one modal operator for each interval-to-interval relation, such as meets or before, and the computational properties of HS and its fragments have been studied in the recent literature (see, e.g., [10, 11, 12]). The very high expressive power of HS, as well as its versatility, makes HS the ideal candidate to serve as the basis of a temporal decision tree learning algorithm. Based on these premises, we propose in this paper a decision tree learning algorithm that produces HS-based trees. Our proposal, Temporal ID3, is a direct generalization of the ID3 algorithm [26], founded on the logical interpretation of tree nodes, and focuses on data representation and node generation; we borrow other aspects, such as splitting based on information gain and the overall learning process, from the original algorithm. The accuracy of a decision tree and its resilience to over-fitting also depend on the stopping criterion and possible post-pruning operations, but we do not discuss these aspects here.

Existing approaches to temporal logic decision trees. Learning temporal logic decision trees is an emerging field in the analysis of physical systems, and, among the most influential approaches, we mention the learning of automata [3] and the learning of Signal Temporal Logic (STL) formulas [6, 14, 24, 28]. In particular, STL is a point-based temporal logic with until that encompasses certain metric capabilities, and learning formulas of STL has focused both on the fine tuning of the metric parameters of a predefined formula and on learning the innermost structure of a formula; among others, decision trees have been used to this end [8]. Compared with STL decision tree learning, our approach has the advantage of learning formulas written in a well-known, highly expressive interval-based temporal logic language; because of the nature of the underlying language and of interval temporal logic models, certain application domains fit naturally into this approach. Moreover, since our solution generalizes the classical decision tree learning algorithm ID3, and, particularly, the notion of information gain, it is not limited to binary classification. In addition, in [7] a first-order framework for TDIDT is presented with the aim of making such a paradigm more attractive to inductive logic programming (ILP). Such a framework provides a sound basis for logical decision tree induction; in contrast, we employ the framework to represent modal, instead of first-order, relational data. Additionally, our approach should not be confused with that of [23], in which the term interval indicates an uncertain numerical value (e.g., the patient has a fever of 38 Celsius versus the patient has a fever between 37.5 and 38.5 Celsius), and in which an algorithm for inducing decision trees on such uncertain data, based on the so-called Kolmogorov-Smirnov criterion, is presented; the data that are the object of that study, however, are not necessarily temporal, and the produced trees do not employ any temporal (logical) relation. In [4, 29] and [21], the authors present two other approaches to a temporal generalization of decision tree learning. In the former, the authors provide a general method for building point-based temporal decision trees, but with no particular emphasis on any supporting formal language. In the latter, the constructed trees can be seen as real-time algorithms that have the ability to make decisions even if the entire description of the instance is not yet available. Finally, in [16], a generalization of the decision tree model is presented in which nodes are possibly labelled with a timestamp to indicate when a certain condition should be checked.

Summarizing, our approach is essentially different from those presented in the literature in several aspects. As a matter of fact, by giving a logical perspective to decision tree learning, we effectively generalize the learning model to a temporal one, instead of introducing a new paradigm. In this way, instances that present some temporal component are naturally seen as timelines, and, thanks to the expressive power provided by HS, our algorithm can learn a decision tree based on the temporal relations between values, instead of the static information carried by the values.

2 Preliminaries

Decision trees. Decision tree induction is based on the following simple concepts [27]. Given a set of observable values $v_1, \ldots, v_n$, with associated probabilities $p_1, \ldots, p_n$, the information conveyed by them (or entropy) is defined as:

$$E(p_1, \ldots, p_n) = -\sum_{i=1}^{n} p_i \log(p_i).$$

Assume that a dataset $\mathcal{T}$ contains $m$ instances, each characterized by the attributes $A_1, \ldots, A_l$, and distributed over the classes $C_1, \ldots, C_k$. Each class $C_i$ can be seen as the subset of $\mathcal{T}$ composed of precisely those instances classified as $C_i$, so that the information needed to identify the class of an element of $\mathcal{T}$ is:

$$Info(\mathcal{T}) = E\!\left(\frac{|C_1|}{|\mathcal{T}|}, \ldots, \frac{|C_k|}{|\mathcal{T}|}\right).$$

Intuitively, the entropy is inversely proportional to the purity degree of $\mathcal{T}$ with respect to the class values. Splitting, which is the main operation in decision tree learning, is performed over a specific attribute $A$. If $A$ is categorical and its domain consists of $s$ distinct values $a_1, \ldots, a_s$, we can split $\mathcal{T}$ into $\mathcal{T}_1, \ldots, \mathcal{T}_s$, each $\mathcal{T}_j$ being characterized by having precisely the value $a_j$ for $A$. The information of a categorical split, therefore, is:

$$Info_A(\mathcal{T}) = \sum_{j=1}^{s} \frac{|\mathcal{T}_j|}{|\mathcal{T}|} Info(\mathcal{T}_j).$$

If, on the other hand, $A$ is numerical, then the set of actual values for $A$ that are present in $\mathcal{T}$ gives rise to as many possible splits, all of them binary, and the information conveyed by each possible split is, then, a function not only of the attribute but also of the chosen value $a$:

$$Info_{A,a}(\mathcal{T}) = \frac{|\mathcal{T}_1|}{|\mathcal{T}|} Info(\mathcal{T}_1) + \frac{|\mathcal{T}_2|}{|\mathcal{T}|} Info(\mathcal{T}_2),$$

where $\mathcal{T}_1$ (respectively, $\mathcal{T}_2$) encompasses all and only those instances with $A \le a$ (respectively, $A > a$). The information conveyed by an attribute $A$ can consequently be defined as the information of its categorical split or, in the numerical case, the minimum of $Info_{A,a}(\mathcal{T})$ over the candidate values $a$, and the information gain brought by $A$ is defined as:

$$Gain(A) = Info(\mathcal{T}) - Info_A(\mathcal{T}).$$

The information gain, which can also be seen as the reduction of the expected entropy when the attribute $A$ has been chosen, is used to drive the splitting process, that is, to decide over which attribute (and how) the next split is performed. The underlying principle of decision tree building consists of recursively splitting the dataset over the attribute that guarantees the greatest information gain, until a certain stopping criterion is met. Each split can be seen as a propositional condition: if the condition holds, one branch is followed, else the other. When splitting is performed over a numerical attribute, e.g., $A \le a$, then the corresponding propositional statement is simply the condition itself (in our example, $A \le a$ acts as a propositional letter); when it is performed over a categorical attribute with values $a_1, a_2, \ldots$, then each statement $A = a_j$ is a propositional statement on its own.
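The following minimal Python sketch illustrates these classical definitions on a toy dataset; the representation of instances as (attribute dictionary, class) pairs, and all names, are our own illustrative choices, not part of the original algorithm.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Information (entropy) of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def info_numerical(rows, attr, threshold):
    """Weighted entropy of the binary split 'attr <= threshold' vs. 'attr > threshold'."""
    left = [c for v, c in rows if v[attr] <= threshold]
    right = [c for v, c in rows if v[attr] > threshold]
    total = len(rows)
    return sum(len(part) / total * entropy(part) for part in (left, right) if part)

def gain_numerical(rows, attr, threshold):
    """Information gain of splitting on attr at the given threshold."""
    return entropy([c for _, c in rows]) - info_numerical(rows, attr, threshold)

# toy usage: four instances, one numerical attribute, two classes
rows = [({"temp": 39.0}, "flu"), ({"temp": 36.5}, "healthy"),
        ({"temp": 38.2}, "flu"), ({"temp": 37.0}, "healthy")]
print(gain_numerical(rows, "temp", 37.5))  # 1.0: this threshold separates the classes perfectly
```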

Figure 1: Allen’s interval relations and HS modalities.

Interval temporal logic. Let $\mathbb{D} = \langle D, < \rangle$ be a linear order. In the strict interpretation, an interval over $\mathbb{D}$ is an ordered pair $[x, y]$, where $x, y \in D$ and $x < y$, and we denote by $\mathbb{I}(\mathbb{D})$ the set of all intervals over $\mathbb{D}$. If we exclude the identity relation, there are 12 different Allen’s relations between two intervals in a linear order [1]: the six relations $R_A$ (adjacent to), $R_L$ (later than), $R_B$ (begins), $R_E$ (ends), $R_D$ (during), and $R_O$ (overlaps), depicted in Fig. 1, and their inverses, that is, $\bar{R}_X$ for each $X \in \{A, L, B, E, D, O\}$. Halpern and Shoham’s modal logic of temporal intervals (HS) is defined from a set of propositional letters $\mathcal{AP}$, by associating a universal modality $[X]$ and an existential one $\langle X \rangle$ to each Allen’s relation $R_X$. Formulas of HS are obtained by the grammar:

$$\varphi ::= p \mid \neg \varphi \mid \varphi \vee \varphi \mid \langle X \rangle \varphi \mid \langle \bar{X} \rangle \varphi,$$

where $p \in \mathcal{AP}$ and $X \in \{A, L, B, E, D, O\}$. The other Boolean connectives and the logical constants, e.g., $\wedge$ and $\top$, as well as the universal modalities $[X]$, can be defined in the standard way. For each $X$, the modality $\langle \bar{X} \rangle$ (corresponding to the inverse relation $\bar{R}_X$ of $R_X$) is said to be the transpose of the modality $\langle X \rangle$, and vice versa. The semantics of HS formulas is given in terms of timelines $T = \langle \mathbb{I}(\mathbb{D}), V \rangle$ (we deliberately use the symbol $T$ to indicate both a timeline and an instance in a dataset), where $\mathbb{D}$ is a linear order and $V : \mathcal{AP} \rightarrow 2^{\mathbb{I}(\mathbb{D})}$ is a valuation function which assigns to each atomic proposition $p \in \mathcal{AP}$ the set of intervals $V(p)$ on which $p$ holds. The truth of a formula $\varphi$ on a given interval $[x, y]$ in an interval model $T$ is defined by structural induction on formulas as follows:

  • $T, [x, y] \Vdash p$ if and only if $[x, y] \in V(p)$, for $p \in \mathcal{AP}$;
  • $T, [x, y] \Vdash \neg \psi$ if and only if $T, [x, y] \not\Vdash \psi$;
  • $T, [x, y] \Vdash \psi_1 \vee \psi_2$ if and only if $T, [x, y] \Vdash \psi_1$ or $T, [x, y] \Vdash \psi_2$;
  • $T, [x, y] \Vdash \langle X \rangle \psi$ if and only if there exists an interval $[w, z]$ such that $[x, y] \, R_X \, [w, z]$ and $T, [w, z] \Vdash \psi$ (and analogously for $\langle \bar{X} \rangle \psi$ with $\bar{R}_X$).

HS is a very general interval temporal language and its satisfiability problem is undecidable [20]. Our purpose here, however, is to study the problem of formula induction in the form of decision trees, and not of formula deduction, and therefore the computational properties of the satisfiability problem can be ignored at this stage.
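As a concrete reading of the semantics above, the following Python sketch checks the truth of an existential HS formula ⟨X⟩p on an interval of a finite timeline; the encoding of timelines as dictionaries of interval sets and the relation names are our own illustrative choices, covering only the six direct relations and their inverses under the strict semantics.

```python
# Allen's relations between intervals [x, y] and [w, z] (strict semantics: x < y, w < z);
# letters follow the HS modalities: A (adjacent to), L (later), B (begins), E (ends),
# D (during), O (overlaps); inverse relations are prefixed with 'i'.
ALLEN = {
    "A": lambda x, y, w, z: y == w,
    "L": lambda x, y, w, z: y < w,
    "B": lambda x, y, w, z: x == w and z < y,
    "E": lambda x, y, w, z: y == z and x < w,
    "D": lambda x, y, w, z: x < w and z < y,
    "O": lambda x, y, w, z: x < w < y < z,
}
ALLEN.update({"i" + name: (lambda f: lambda x, y, w, z: f(w, z, x, y))(rel)
              for name, rel in list(ALLEN.items())})

def holds_diamond(timeline, interval, relation, p):
    """Truth of <X>p on `interval`: some interval labelled with p is related to it by X."""
    x, y = interval
    return any(ALLEN[relation](x, y, w, z) for (w, z) in timeline.get(p, set()))

# toy timeline: a valuation mapping each propositional letter to the intervals where it holds
patient = {"fever": {(3, 5)}, "headache": {(2, 4)}}
print(holds_diamond(patient, (2, 4), "O", "fever"))  # True: [2,4] overlaps the fever interval [3,5]
```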

3 Motivations

Figure 2: Example of static and temporal treatment of information in the medical domain.

In this section, we present some realistic scenarios in which learning a temporal decision tree may be convenient, and, then, we discuss aspects of data preprocessing related to the temporal component.

Learning. There are several application domains in which learning a temporal decision tree may be useful. Consider, for example, a medical scenario in which we consider a dataset of classified patients, each one characterized by his/her medical history, as in Fig. 2, top. Assume, first, that we are interested in learning a static (propositional) classification model. The history of our patients, that is, the collection of all relevant pieces of information about tests, results, symptoms, and hospitalizations of the patient that occurred during the entire observation period, must be processed so that temporal information is subsumed in propositional letters. For instance, if some patient has been running a fever during the observation period, we may use a proposition fever, with positive values for those patients that have had fever, and negative values for the others (as in Fig. 2, bottom, left). Depending on the specific case, we may, instead, use the actual temperature of each patient, and a static decision tree learning system may split over a condition such as temperature ≥ t, for some threshold temperature t, effectively introducing a new propositional letter, and therefore a binary split. Either way, the temporal information is lost in the preprocessing. For example, we can no longer take into account whether fever occurred before, after, or while the patient was experiencing headache, which may be relevant information for a classification model. By generating, instead, the timeline of each patient (as in Fig. 2, bottom, right), we keep all events and their relative qualitative relations. By learning a decision tree on a preprocessed dataset such as the one in Fig. 2 (bottom, left), we see that one of the two attributes has zero variance, and therefore zero predictive capability; then, we are forced to build a decision tree using the other attribute alone, which results in a classifier with 75% accuracy. On the contrary, by using the temporal information in the learning process, we are able to distinguish the two classes: one of them is characterized by presenting both fever and headache, but not overlapping, and this classifier achieves, in this toy example, 100% accuracy. In this example, the term accuracy refers to the training set accuracy (we do not consider independent training and test data), that is, the ability of the classification system to discern among classes on the data used to train the system itself; it should not be confused with test set accuracy, which measures the real classification performance that can be expected on future, real-life examples.

Alternatively, consider a problem in the natural language processing domain. In this scenario, a timeline may represent a conversation between two individuals. It is known that, in automatic processing of conversations, it is sometimes interesting to label each interval of time with one or more contexts, that is, particular topics that are being discussed [2, 5, 25], in order to discover the existence of unexpected or interesting temporal relations among them. Suppose, for example, that a certain company wants to analyze conversations between selling agents and potential customers: the agents contact the customers with the aim of selling a certain product, and it is known that certain contexts, such as the price of the product (price), its known advantages (advantages) over other products, and its possible minor defects (disadvantages), are interesting. Assume that each conversation has been previously classified between those that have been successful and those that ended without the product being acquired. Now, we want to learn a model able to predict such an outcome. By using only static information, nearly every conversation would be labelled with all three contexts, effectively hiding the underlying knowledge, if it exists. By keeping the relative temporal relations between contexts, instead, we may learn useful information such as, for example: if price and disadvantages are not discussed together, the conversation will likely be successful.

Preprocessing. Observe, now, how switching from static to temporal information influences data preparation. First, in a context such as the one described in our first example, numerical attributes may become less interesting: for instance, the information on how many times a certain symptom occurred, or its frequency, is not needed anymore, considering that each occurrence is taken into account in the timeline. Moreover, since the focus is on the attributes’ relative temporal positions, even categorical attributes may be ignored in some contexts: for instance, in our scenario, we may be interested in establishing the predictive value of the relative temporal position of fever and headache regardless of the sex or age of the patient. It is also worth underlining that propositional attributes over intervals allow us to express a variety of situations, and sometimes propositional labelling may result in gaining information, instead of losing it. Consider, again, the case of fever, and suppose that a certain patient is experiencing low fever in an interval, say, a given day, and that during just one hour of that day, that is, over an interval strictly contained in the first one, he/she has an episode of high fever. A natural choice is to represent such a situation by labelling the whole interval with low fever and its sub-interval with high fever. On the other hand, representing the same pieces of information as three consecutive intervals respectively labelled with low fever, high fever, and low fever, which would be the case with a point-based representation (or with an interval-based representation under the homogeneity assumption), would be unnatural, and it would entail hiding a potentially important piece of information such as: “the patient presented low fever during the entire day, except for a brief episode of high fever”. Building on such considerations, our approach in the rest of this paper is based on propositional, non-numerical attributes only.

4 Learning Interval Temporal Logic Decision Trees

In this section we describe a generalization of the algorithm ID3 that is capable of learning a binary decision tree over a temporal dataset, as in the examples of the previous section; as in classical decision trees, every branch of a temporal decision tree can be read as a logical formula, but instead of classical propositional logic we use the temporal logic HS. To this end, we generalize the notion of information gain, while, at this stage, we do not discuss pre-pruning, post-pruning, and purity degree of a sub-tree [9, 27].

Data preparation and presentation. We assume that the input dataset contains timelines as instances. For the sake of simplicity, we also assume that all timelines are based on the same finite domain of length $N$ (from $0$ to $N-1$). The dataset $\mathcal{T}$ can be seen as an array of structures; $\mathcal{T}[i]$ represents the $i$-th timeline of the dataset, and it can be thought of as an interval model. Given a dataset $\mathcal{T}$, we denote by $\mathcal{AP}(\mathcal{T})$ the set of all propositional letters that occur in it.
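Concretely, and only as an illustration of this representation (the encoding and field names are ours, not prescribed by the algorithm), a temporal dataset can be rendered as a list of (timeline, class) pairs, where each timeline is the valuation of its interval model:

```python
# a toy temporal dataset over the shared domain {0, ..., N-1}
N = 7
dataset = [
    ({"fever": {(3, 5)}, "headache": {(2, 4)}}, "pos"),
    ({"fever": {(4, 6)}, "headache": {(2, 4)}}, "neg"),
]

def letters(dataset):
    """The set of all propositional letters occurring in the dataset."""
    return {p for timeline, _ in dataset for p in timeline}
```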

Temporal information. We are going to design the learning process based on the same principles of classical decision tree learning. This means that we need to define a notion of splitting as well as a notion of information conveyed by a split, and, to this end, we shall use the truth relation $\Vdash$ as defined in Section 2, applied to a timeline. Unlike the atemporal case, splits are not performed over attributes, but, instead, over propositional letters. Splitting is defined relatively to an interval $[x, y]$, and it can be local, if it is applied on $[x, y]$ itself, or temporal, in which case it depends on the existence of an interval $[w, z]$ related to $[x, y]$ by a particular relation $R_X$ such that $[x, y] \, R_X \, [w, z]$ (or the other way around). A local split of $\mathcal{T}$ into $\mathcal{T}_1$ and $\mathcal{T}_2$, where $[x, y]$ is the reference interval of $\mathcal{T}$ and $p$ is the propositional letter over which the split takes place, is defined by:

$$\mathcal{T}_1 = \{ T \in \mathcal{T} \mid T, [x, y] \Vdash p \}, \qquad \mathcal{T}_2 = \{ T \in \mathcal{T} \mid T, [x, y] \not\Vdash p \}. \qquad (1)$$

On the contrary, a temporal split, in the same situation, over the temporal relation $R_X$, is defined by:

$$\mathcal{T}_1 = \{ T \in \mathcal{T} \mid T, [x, y] \Vdash \langle X \rangle p \}, \qquad \mathcal{T}_2 = \{ T \in \mathcal{T} \mid T, [x, y] \not\Vdash \langle X \rangle p \}. \qquad (2)$$

Consequently, the local information gain of a propositional letter $p$ is defined as:

$$LocalGain(p) = Info(\mathcal{T}) - \left( \frac{|\mathcal{T}_1|}{|\mathcal{T}|} Info(\mathcal{T}_1) + \frac{|\mathcal{T}_2|}{|\mathcal{T}|} Info(\mathcal{T}_2) \right),$$

where $\mathcal{T}_1$ and $\mathcal{T}_2$ are defined as in (1), while the temporal information gain of a propositional letter $p$ is defined as:

$$TemporalGain(p) = \max_{X} \left\{ Info(\mathcal{T}) - \left( \frac{|\mathcal{T}_1|}{|\mathcal{T}|} Info(\mathcal{T}_1) + \frac{|\mathcal{T}_2|}{|\mathcal{T}|} Info(\mathcal{T}_2) \right) \right\},$$

where $\mathcal{T}_1$ and $\mathcal{T}_2$ are defined as in (2) and depend on the relation $R_X$. Therefore, the information gain of a propositional letter $p$ becomes:

$$Gain(p) = \max \{ LocalGain(p), TemporalGain(p) \},$$

and, at each step, we aim to find the letter $p$ that maximizes $Gain(p)$.
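Putting the pieces together, the following is a minimal sketch of local and temporal splitting and of the corresponding gain, reusing the entropy, holds_diamond, dataset, and letters helpers sketched above; it reflects our reading of the definitions and is not the authors’ reference implementation.

```python
def split(dataset, ref, p, relation=None):
    """Split on letter p at reference interval ref: local if relation is None,
    temporal over <relation> otherwise; returns the satisfying and falsifying parts."""
    def sat(timeline):
        if relation is None:                                 # local: p must hold on ref itself
            return ref in timeline.get(p, set())
        return holds_diamond(timeline, ref, relation, p)     # temporal: <X>p must hold on ref
    t1 = [(t, c) for t, c in dataset if sat(t)]
    t2 = [(t, c) for t, c in dataset if not sat(t)]
    return t1, t2

def gain(dataset, ref, p, relation=None):
    """Information gain of the (local or temporal) split, as in classical ID3."""
    t1, t2 = split(dataset, ref, p, relation)
    total = len(dataset)
    before = entropy([c for _, c in dataset])
    after = sum(len(part) / total * entropy([c for _, c in part])
                for part in (t1, t2) if part)
    return before - after

# best (gain, letter, relation) triple found at the reference interval (2, 4) of the toy dataset
best = max(((gain(dataset, (2, 4), p, r), p, r)
            for p in letters(dataset)
            for r in [None] + list(ALLEN)),
           key=lambda t: t[0])
print(best)
```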

Figure 3: The algorithm Temporal ID3.

The algorithm. Let us analyze the code in Fig. 3. At the beginning, the timelines in $\mathcal{T}$ are not assigned any reference interval, and we say that the dataset is unanchored. The procedure FindBestUnanchoredSplit systematically explores every possible reference interval of an unanchored dataset, and, for each one of them, calls FindBestAnchoredSplit, which, in turn, tries every propositional letter (and, implicitly, every temporal relation) in the search for the best split. This procedure returns the best possible triple consisting of an interval relation (which has no value if the best split is local), a propositional letter, and the corresponding information gain. Temporal ID3 first creates a root node, and then calls Learn. The latter, in turn, first checks possible stopping conditions, and then finds the best split into two datasets $\mathcal{T}_1$ and $\mathcal{T}_2$. Of these, the former is now anchored (to the reference interval returned by FindBestUnanchoredSplit), while the latter is still unanchored. During a recursive call, when $\mathcal{T}_1$ is analyzed to find its best split, the procedure for this task will be FindBestAnchoredSplit, called directly, instead of passing through FindBestUnanchoredSplit. So, in our learning model, all splits are binary. Given a node, the ‘lefthand’ outgoing edge is labeled with the chosen $\langle X \rangle p$ (or just $p$, when the split is local), whereas the corresponding ‘righthand’ edge is labeled with its negation; also, the node is labeled with a new reference interval if its corresponding dataset is unanchored. After a split, every timeline in $\mathcal{T}_1$ (the existential dataset, which is now certainly anchored) is associated with a new witnessing interval: in fact, those instances satisfy $\langle X \rangle p$ on the reference interval, and, for each one of them, there is a possibly distinct witness. Witnesses are assigned by the function Split; while the witnessing interval of an instance may change during the process, its reference interval is set only once.
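A compact skeleton of the learning loop, as we read it from the description above (stopping criterion, tie-breaking, and witness bookkeeping are deliberately simplified; the function names only mirror those in Fig. 3, and the sketch reuses split, gain, and letters from the previous blocks):

```python
def find_best_anchored(dataset, ref):
    """Best (gain, letter, relation) split for a dataset anchored to ref."""
    return max(((gain(dataset, ref, p, r), p, r)
                for p in letters(dataset)
                for r in [None] + list(ALLEN)),
               key=lambda t: t[0])

def find_best_unanchored(dataset, intervals):
    """Try every candidate reference interval and keep the overall best split."""
    return max(((find_best_anchored(dataset, ref), ref) for ref in intervals),
               key=lambda t: t[0][0])

def learn(dataset, intervals, ref=None, depth=0, max_depth=3):
    classes = [c for _, c in dataset]
    if len(set(classes)) <= 1 or depth >= max_depth:         # simplified stopping criterion
        return ("leaf", max(set(classes), key=classes.count) if classes else None)
    if ref is None:                                          # unanchored: also pick a reference interval
        (g, p, r), ref = find_best_unanchored(dataset, intervals)
    else:
        g, p, r = find_best_anchored(dataset, ref)
    t1, t2 = split(dataset, ref, p, r)
    if not t1 or not t2:                                     # degenerate split: stop here
        return ("leaf", max(set(classes), key=classes.count))
    # t1 (the existential part) stays anchored to ref; t2 is treated as unanchored again
    return ("node", ref, r, p,
            learn(t1, intervals, ref, depth + 1, max_depth),
            learn(t2, intervals, None, depth + 1, max_depth))
```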

Consider, now, the function AssignReferenceIntervals and the example shown in Figure 4. As can be seen, neglecting the temporal dimension, one may classify the instances with just a single split based on the presence of the symptom fever (or headache). On the contrary, given the temporal dataset on its original domain, it is not possible to discriminate the classes within a single step. A natural solution consists of augmenting the domain in such a way as to simulate the behaviour of an infinite-domain model: with extra points added before the original domain, a single split may be based on a rule anchored to a reference interval that lies outside the original domain (or, equivalently, on its inverse formulation). Thus, the function AssignReferenceIntervals, while searching all possible reference intervals, takes into consideration two extra points at each side of the domain. Although it is possible to obtain a similar result by adding fewer than four points (in our example, $-2$ and $-1$ suffice), this is no longer true if we include the possibility that Temporal ID3 is called on a subset of HS modalities, for example, for computational efficiency reasons. Adding four points, on the other hand, guarantees that the most discriminative split can always be found.
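For instance, a hypothetical helper enumerating the candidate reference intervals over the extended domain (two extra points on each side, as discussed above) could look as follows; the name and signature are our own, illustrative choices.

```python
def candidate_reference_intervals(N, extra=2):
    """All strict intervals over the domain {0, ..., N-1} extended with `extra`
    additional points on each side, mirroring the search performed by AssignReferenceIntervals."""
    points = range(-extra, N + extra)
    return [(x, y) for x in points for y in points if x < y]

# e.g., on a domain of length 7 the search also considers intervals such as (-2, -1)
tree = learn(dataset, candidate_reference_intervals(N))
```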

Figure 4: Example of a problematic dataset.

Analysis. We now analyze the computational complexity of Temporal ID3. To this end, we first compute the cost of finding the best split. Since the cardinality of the domain of each timeline is $N$, there are $O(N^2)$ possible intervals. This means that, fixed a propositional letter and a relation $R_X$, computing $\mathcal{T}_1$ and $\mathcal{T}_2$ costs $O(N^2 m)$, where $m$ is the number of timelines. Therefore, the cost of FindBestAnchoredSplit is obtained by multiplying the cost of a single (tentative) split by the number of propositional letters and the number of temporal relations (plus one, to take into account the local split), which sums up to $O(N^2 m |\mathcal{AP}|)$. The cost of FindBestUnanchoredSplit increases by a factor of $N^2$, as its for cycle ranges over all possible reference intervals, and therefore it becomes $O(N^4 m |\mathcal{AP}|)$. We can increase the efficiency of the implementation by suitably pre-computing the truth value of $\langle X \rangle p$ for each temporal relation, each propositional letter, and each interval, thus eliminating a factor of $N^2$ from both costs.

If we consider $|\mathcal{AP}|$ as fixed, and $N$ as a constant, the cost of finding the best split becomes $O(m)$, and, under such (reasonable) assumptions, we can analyze the complexity of an execution of Learn in terms of the number of timelines. Two cases are particularly interesting. In the worst case, every binary split leads to a very unbalanced partition of the dataset, with $|\mathcal{T}_1| = 1$ and $|\mathcal{T}_2| = m - 1$ (or the other way around). The recurrence that describes such a situation is:

$$T(m) = T(m-1) + T(1) + O(m),$$

which can be immediately solved to obtain $T(m) = O(m^2)$. However, computing the worst case has only a theoretical value; we can reasonably expect Temporal ID3 to behave like a randomized divide-and-conquer algorithm, and its computational complexity to tend towards the average case. In the average case, every binary split leads to a non-unbalanced partition, but we cannot foresee the relative cardinality of each side of the partition. Assuming that every partition is equally probable, the recurrence that describes this situation is:

$$T(m) = \frac{1}{m-1} \sum_{i=1}^{m-1} \big( T(i) + T(m-i) \big) + O(m) = \frac{2}{m-1} \sum_{i=1}^{m-1} T(i) + O(m).$$

We want to prove that $T(m) = O(m \log m)$. To this end, we first prove a useful bound for the expression $\sum_{i=1}^{m-1} i \log i$, as follows:

$$\sum_{i=1}^{m-1} i \log i \;\le\; \int_{1}^{m} x \log x \, dx \;\le\; \frac{1}{2} m^2 \log m - \frac{1}{8} m^2.$$

Now, we prove, by induction, that $T(m) \le a\, m \log m + b$ for some positive constants $a$ and $b$: assuming the claim for every smaller cardinality and substituting it into the recurrence, the bound above allows us to absorb the lower-order terms by choosing $a$ large enough, which closes the induction and yields $T(m) = O(m \log m)$.

Figure 5: A decision tree learned by Temporal ID3 on the example in Fig. 2.

Example of execution. Consider our initial example of Fig. 2, with four timelines distributed over two classes. Since this is a toy example, there are many different combinations of intervals, relations, and propositional letters that give the same information gain. Fig. 5 gives one possible outcome, which seems to indicate that, looking at the entire history, one of the two classes is characterized by presenting headache and overlapping fever, or no fever at all.

There are several running parameters that can be modulated for an execution of Temporal ID3, and further analysis is required to understand how they influence the final result, and, particularly, the properties of the resulting classifier. The most important ones are: (i) how to behave in case of two splits with the same information gain; (ii) how to behave in case of more than one possible witness interval for a given timeline; (iii) how to behave in case of more than one optimal reference interval for a given unanchored temporal dataset.

If we allow, in all such cases, a random choice, the resulting learning algorithm is no longer deterministic, and different executions may result in different decision trees. This is a typical situation in machine learning (e.g., in algorithms such as k-means clustering or random forests), and it requires some experience in order to meaningfully assess the results.

5 Conclusions

Classical decision trees, which are a popular class of learning algorithms, are designed to interpret categorical and numerical attributes. In decision trees, every node can be seen as a propositional letter; therefore, a decision tree can be seen as a structured set of propositional logic rules, the right-hand part of which is a class. Since classifying based on the static aspects of data is not always adequate, and since decision tree learning cannot deal with temporal knowledge in an explicit manner, we considered the problem of learning a classification model capable of combining propositional knowledge with qualitative temporal information. Towards its solution, we showed how temporal data can be prepared in an optimal way for a temporal decision tree to be learned, and presented a generalization of the classical decision tree learning algorithm ID3 that is able to split the dataset based on temporal, instead of static, information, using the well-known temporal logic HS. Future work includes testing our method on real data, improving the capabilities of Temporal ID3 by enriching the underlying language, and studying the effect of different pruning and stopping conditions. Moreover, it would be interesting to study how to adapt ID3 to other logical languages, although this may require re-designing some key elements, such as the representation of temporal datasets, or the process that underlies the splitting algorithm.

Machine learning is generally focused on non-logical approaches to knowledge representation. However, when learning should take into account the temporal aspects of data, a logical approach can be associated with classical methods, and, besides decision tree learning, interval temporal logics have already been proposed as possible tools, for example, for temporal rule extraction [17]. Focusing these approaches on fragments of interval temporal logics whose satisfiability problem is decidable (and tractable) may result in integrated systems that pair induction and deduction of formulas, intelligent elimination of redundant rules, and automatic verification of induced knowledge against formal requirements. Also, using a logical approach in learning may require non-standard semantics for logical formulas (e.g., fuzzy semantics, or multi-valued propositional semantics); these, in turn, pose original and interesting questions on the theoretical side concerning the computational properties of the problems associated with these logics (i.e., satisfiability), generating, de facto, a cross-feeding effect between the two fields.

References

  • [1] J. F. Allen (1983) Maintaining knowledge about temporal intervals. Communications of the ACM 26 (11), pp. 832–843. Cited by: §2.
  • [2] R. Alluhaibi (2015) Simple interval temporal logic for natural language assertion descriptions. In Proc. of the 11th International Conference on Computational Semantics (IWCS), pp. 283–293. Cited by: §3.
  • [3] D. Angluin (1987) Learning regular sets from queries and counterexamples. Information and Computation 75 (2), pp. 87–106. Cited by: §1.
  • [4] S. Antipov and M. Fomina (2011) A method for compiling general concepts with the use of temporal decision trees. Scientific and Technical Information Processing 38 (6), pp. 409–419. External Links: ISSN 1934-8118 Cited by: §1.
  • [5] R.A. Baeza-Yates (2004) Challenges in the interaction of information retrieval and natural language processing. In Proc. of the 5th International on Computational Linguistics and Intelligent Text Processing (CICLing), pp. 445–456. Cited by: §3.
  • [6] E. Bartocci, L. Bortolussi, and G. Sanguinetti (2014) Data-driven statistical learning of temporal logic properties. In Proc. of the 12th International Conference on Formal Modeling and Analysis of Timed Systems, pp. 23–37. Cited by: §1.
  • [7] H. Blockeel and L. De Raedt (1998) Top-down induction of first-order logical decision trees. Artificial Intelligence 101 (1-2), pp. 285–297. External Links: ISSN 0004-3702 Cited by: §1.
  • [8] G. Bombara, C.I. Vasile, F. Penedo, H. Yasuoka, and C. Belta (2016) A decision tree approach to data classification using signal temporal logic. In Proc. of the 19th International Conference on Hybrid Systems: Computation and Control, pp. 1–10. Cited by: §1.
  • [9] L. Breiman, J. Friedman, R. Olshen, and C. Stone (1984) Classification and regression trees. Wadsworth and Brooks, Monterey, CA. External Links: ISBN 0-534-98053-8 Cited by: §4.
  • [10] D. Bresolin, D. Della Monica, V. Goranko, A. Montanari, and G. Sciavicco (2013) Metric propositional neighborhood logics on natural numbers. Software and System Modeling 12 (2), pp. 245–264. Cited by: §1.
  • [11] D. Bresolin, D. Della Monica, A. Montanari, P. Sala, and G. Sciavicco (2014) Interval temporal logics over strongly discrete linear orders: expressiveness and complexity. Theoretical Computer Science 560, pp. 269–291. Cited by: §1.
  • [12] D. Bresolin, P. Sala, and G. Sciavicco (2012) On begins, meets, and before. International Journal of Foundations of Computer Science 23 (3), pp. 559–583. Cited by: §1.
  • [13] A. Brunello, E. Marzano, A. Montanari, and G. Sciavicco (2018) J48S: A sequence classification approach to text analysis based on decision trees. In Proc. of the 24th International Conference on Information and Software Technologies, (ICIST), pp. 240–256. Cited by: §1.
  • [14] S. Bufo, E. Bartocci, G. Sanguinetti, M. Borelli, U. Lucangelo, and L. Bortolussi (2014) Temporal logic based monitoring of assisted ventilation in intensive care patients. In Proc. of the 6th International Symposium on Leveraging Applications of Formal Methods, Verification and Validation, pp. 391–403. Cited by: §1.
  • [15] H. Cheng, X. Yan, J. Han, and C.W. Hsu (2007) Discriminative frequent pattern analysis for effective classification. In Proc. of the 23rd International Conference on Data Engineering (ICDE), pp. 716–725. Cited by: §1.
  • [16] L. Console, C. Picardi, and D.T. Dupré (2003) Temporal decision trees: model-based diagnosis of dynamic systems on-board. J. Artif. Intell. Res. 19, pp. 469–512. Cited by: §1.
  • [17] D. Della Monica, D. de Frutos-Escrig, A. Montanari, A. Murano, and G. Sciavicco (2017) Evaluation of temporal datasets via interval temporal logic model checking. In Proc. of the 24th International Symposium on Temporal Representation and Reasoning (TIME), pp. 11:1–11:18. Cited by: §5.
  • [18] W. Fan, K. Zhang, H. Cheng, J. Gao, X. Yan, J. Han, P. Yu, and O. Verscheure (2008) Direct mining of discriminative and essential frequent patterns via model-based search tree. In Proc. of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 230–238. Cited by: §1.
  • [19] P. Fournier-Viger, A. Gomariz, M. Šebek, and M. Hlosta (2014) VGEN: fast vertical mining of sequential generator patterns. In Proc. of the 16th International Conference on Data Warehousing and Knowledge Discovery (DaWaK), pp. 476–488. Cited by: §1.
  • [20] J.Y. Halpern and Y. Shoham (1991) A propositional modal logic of time intervals. Journal of the ACM 38 (4), pp. 935–962. Cited by: §1, §2.
  • [21] K. Karimi and H. J. Hamilton (2001) Temporal rules and temporal decision trees: A C4.5 approach. Technical report Technical Report CS-2001-02, Department of Computer Science, University of Regina. Cited by: §1.
  • [22] W. Lin and M.A. Orgun (2000) Temporal data mining using hidden periodicity analysis. In Proc. of the 12th International Symposium on Foundations of Intelligent Systems (ISMIS), pp. 49–58. Cited by: §1.
  • [23] C. Mballo and E. Diday (2005) Decision trees on interval valued variables. Symbolic Data Analysis 3 (1), pp. 8–18. External Links: ISSN 1723-5081 Cited by: §1.
  • [24] L.V. Nguyen, J. Kapinski, X. Jin, J.V. Deshmukh, K. Butts, and T.T. Johnson (2017) Abnormal data classification using time-frequency temporal logic. In Proc. of the 20th International Conference on Hybrid Systems: Computation and Control, pp. 237–242. Cited by: §1.
  • [25] I. Pratt-Hartmann (2005) Temporal prepositions and their logic. Artificial Intelligence 166 (1–2), pp. 1–36. Cited by: §3.
  • [26] J.R. Quinlan (1986) Induction of decision trees. Machine Learning 1, pp. 81–106. Cited by: §1, §1.
  • [27] J.R. Quinlan (1999) Simplifying decision trees. International Journal of Human-Computer Studies 51 (2), pp. 497–510. Cited by: §1, §2, §4.
  • [28] A. Rajan (2006) Automated requirements-based test case generation. SIGSOFT Software Engeneering Notes 31 (6), pp. 1–2. Cited by: §1.
  • [29] V. Vagin, O. Morosin, M. Fomina, and S. Antipov (2018) Temporal decision trees in diagnostics systems. In 2018 International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD), Vol. , pp. 1–10. Cited by: §1.
  • [30] I.H. Witten, E. Frank, M.A. Hall, and C.J. Pal (2016) Data mining: practical machine learning tools and techniques. Morgan Kaufmann. Cited by: §1.