The relatively recent discipline of Data Mining involves a wide spectrum of techniques, inherited from different origins such as Statistics, Databases, or Machine Learning. Among them, Association Rule Mining is a prominent conceptual tool and, possibly, a cornerstone notion of the field, if there is one. Currently, the amount of available knowledge regarding association rules has grown to the extent that the tasks of creating complete surveys, and of maintaining websites with pointers to the related literature, have become daunting. Surveys with plenty of references exist, together with additional materials; see also the references and the discussions in the introductory sections of the works cited there.
Given an agreed general set of “items”, association rules are defined with respect to a dataset that consists of “transactions”, each of which is, essentially, a set of items. Association rules are customarily written as X → Y, for sets of items X and Y, and they hold in the given dataset with a specific “confidence” quantifying how often Y appears among the transactions in which X appears.
A close relative of the notion of association rule, namely, that of exact implication in the standard propositional logic framework, or, equivalently, association rule that holds in 100% of the cases, has been studied in several guises. Exact implications are equivalent to conjunctions of definite Horn clauses: the fact, well-known in logic and knowledge representation, that Horn theories are exactly those closed under bitwise intersection of propositional models leads to a strong connection with Closure Spaces, which are characterized by closure under intersection. Implications are also very closely related to functional dependencies in databases. Indeed, implications, as well as functional dependencies, enjoy analogous, clear, robust, hardly disputable notions of redundancy that can be defined equivalently both in semantic terms and through the same syntactic calculus. Specifically, for the semantic notion of entailment, an implication X → Y is entailed from a set R of implications if every dataset in which all the implications of R hold must also satisfy X → Y; and, syntactically, it is known that this happens if and only if X → Y is derivable from R via the Armstrong axiom schemes, namely, Reflexivity (X → Y for Y ⊆ X), Augmentation (if X → Y and X′ → Y′ then XX′ → YY′, where juxtaposition denotes union), and Transitivity (if X → Y and Y → Z then X → Z).
Also, such studies have provided a number of ways to find implications (or functional dependencies) that hold in a given dataset, and to construct small subsets of a large set of implications, or of functional dependencies, from which the whole set can be derived; in Closure Spaces and in Data Mining these small sets are usually called “bases”, whereas in Dependency Theory they are called “covers”, and they are closely related to deep topics such as hypergraph theory. Associated natural notions of minimality (when no implication can be removed), minimum size, and canonicity of a cover or basis do exist; again, it is inappropriate to try to give a complete set of references here, but see, for instance, the surveys mentioned above and the references therein.
However, it has long been acknowledged that, often, it is inappropriate to search only for absolute implications in the analysis of real-world datasets. Partial rules are defined in relation to their “confidence”: for a given rule X → Y, the ratio of how often X and Y are seen together to how often X is seen. Many other alternative measures of intensity of implication exist; we keep our focus on confidence because, besides being among the most common ones, it has a natural interpretation for educated users through its correspondence with the observed conditional probability.
The idea of restricting the exploration for association rules to frequent itemsets, with respect to a support threshold, gave rise to the most widely discussed and applied algorithm, called Apriori, and to an intense research activity. Already with full-confidence implications, the output of an association mining process often consists of large sets of rules, and a well-known difficulty in applied association rule mining lies in that, on large datasets, and for sensible settings of the confidence and support thresholds and other parameters, huge amounts of association rules are often obtained. Therefore, besides the interesting progress in the topic of how to organize and query the rules discovered, one research topic that has been worthy of attention is the identification of patterns that indicate redundancy of rules, and ways to avoid that redundancy; and each proposed notion of redundancy opens up a major research problem, namely, to provide a general method for constructing bases of minimum size with respect to that notion of redundancy.
For partial rules, the Armstrong schemes are not valid anymore. Reflexivity does hold, but Transitivity takes a different form that affects the confidence of the rules: if the rule X → Y (or X → XY, which is equivalent) and the rule Y → Z both hold with confidence at least γ, we still know nothing about the confidence of X → Z; even the fact that both X → Y and XY → Z hold with confidence at least γ only gives us a confidence lower bound of γ² for X → Z (assuming γ < 1). Augmentation does not hold at all; indeed, enlarging the antecedent of a rule of confidence at least γ may give a rule with much smaller confidence, even zero: think of a case where most of the times X appears it comes with Z, but it only comes with Y when Z is not present; then the confidence of X → Z may be high whereas the confidence of XY → Z may be null. Similarly, if the confidence of X → YZ is high, it means that Y and Z appear together in most of the transactions having X, whence the confidences of X → Y and X → Z are also high; but, with respect to the converse, the fact that both Y and Z appear in fractions at least γ of the transactions having X does not inform us that they show up together at a similar ratio of these transactions: only a ratio of 2γ − 1 is guaranteed as a lower bound. In fact, if we look only for association rules with singletons as consequents (as in some existing analyses, in the “basic association rules”, or even in the traditional approach to association rules and the useful apriori implementation of Borgelt available on the web) we are almost certain to lose information. As a consequence of these failures of the Armstrong schemes, the canonical and minimum-size cover construction methods available for implications or functional dependencies are not appropriate for partial association rules.
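These failures are easy to observe on toy data. The following sketch (the dataset and helper names are ours, not the paper's) checks the Augmentation failure and the γ² bound numerically:

```python
# Toy illustration (dataset and helper names are ours, not the paper's) of
# the confidence bounds discussed above.

def support(dataset, itemset):
    """Number of transactions that include every item of `itemset`."""
    return sum(1 for t in dataset if itemset <= t)

def confidence(dataset, x, y):
    """c(X -> Y) = s(XY) / s(X), with c = 1 when s(X) = 0 (vacuous truth)."""
    s_x = support(dataset, x)
    return 1.0 if s_x == 0 else support(dataset, x | y) / s_x

# Augmentation fails: a -> z holds with confidence 0.75, yet the enlarged
# antecedent ab drops the confidence to zero, because b accompanies a only
# when z is absent.
D = [frozenset("az"), frozenset("az"), frozenset("az"), frozenset("ab")]
assert confidence(D, frozenset("a"), frozenset("z")) == 0.75
assert confidence(D, frozenset("ab"), frozenset("z")) == 0.0

# The squared lower bound: both x -> y and xy -> z hold with confidence 0.5,
# and x -> z has exactly 0.5 * 0.5 = 0.25.
D2 = [frozenset("x"), frozenset("x"), frozenset("xy"), frozenset("xyz")]
assert confidence(D2, frozenset("x"), frozenset("y")) == 0.5
assert confidence(D2, frozenset("xy"), frozenset("z")) == 0.5
assert confidence(D2, frozenset("x"), frozenset("z")) == 0.25
```

Both examples are tight: the second dataset attains the γ² bound exactly.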
On the semantic side, a number of formalizations of the intuition of redundancy among association rules exist in the literature, often with proposals for defining irredundant bases. All of these are weaker than the notion that we would consider natural by comparison with implications (of which we start the study in the last section of this paper). We observe here that one may wish to fulfill two different roles with a basis, and that both appear (somewhat mixed) in the literature: as a computer-supported data structure from which confidences and supports of rules are computed (a role for which we use the closures lattice instead) or, in our choice, as a means of providing the user with a smallish set of association rules for examination and, if convenient, posterior enumeration of the rules that follow from each rule in the basis. That is, we will not assume to have available, nor to wish to compute, exact values for the confidence, but only to discern whether it stays above a certain user-defined threshold. We compute actual confidences out of the closure lattice only at the time of writing out rules for the user.
This paper focuses mainly on several such notions of redundancy, defined in a rather general way, by resorting to confidence and support inequalities: essentially, a rule is redundant with respect to another if it has at least the same confidence and support as the latter for every dataset. We also discuss variants of this proposal and other existing definitions given in set-theoretic terms. For the most basic notion of redundancy, we provide formal proofs of the so far unstated equivalence among several published proposals, including a syntactic calculus and a formal proof of the fact, also previously unknown, that the existing basis known as the Essential Rules or the Representative Rules is of absolutely minimum size.
It is natural to wish further progress in reducing the size of the basis. Our theorems indicate that, in order to reduce the size further without losing information, more powerful notions of redundancy must be deployed. We consider for this role the proposal of handling separately, to a given extent, full-confidence implications from lower-than-1-confidence rules, in order to profit from their very different combinatorics. This separation is present in many constructions of bases for association rules. We discuss corresponding notions of redundancy and completeness, and prove new properties of these notions; we give a sound and complete deductive calculus for this redundancy; and we refine the existing basis constructions up to a point where we can prove again that we attain the limit of the redundancy notion.
Next, we discuss yet another potential for strengthening the notion of redundancy. So far, all the notions have just related one partial rule to another, possibly in the presence of full implications. Is it possible to combine two partial rules, of confidence at least γ, and still obtain a partial rule obeying that confidence level? Whereas the intuition is that these confidences will combine together to yield a confidence lower than γ, we prove that there is a specific case where a rule of confidence at least γ is nontrivially entailed by two of them. We fully characterize this case and obtain from the characterization yet another deduction scheme. We hope that further progress along the notion of a set of partial rules entailing a partial rule will be made in the coming years.
Our notation and terminology are quite standard in the Data Mining literature. All our developments take place in the presence of a “universe” set U of atomic elements called items; their presence or absence in sets of items plays the same role as binary-valued attributes of a relational table. Subsets of U are called itemsets. A dataset D is assumed to be given; it consists of transactions, each of which is an itemset labeled by a unique transaction identifier. The identifiers allow us to distinguish among transactions even if they share the same itemset. Upper-case, often subscripted, letters from the end of the alphabet, like X or Y₁, denote itemsets. Juxtaposition denotes union of itemsets, as in XY; X ⊂ Y denotes proper subsets, whereas X ⊆ Y is used for the usual subset relationship with potential equality.
For a transaction t and an itemset X, we write X ⊆ t to denote the fact that X is a subset of the itemset corresponding to t, that is, the transaction satisfies the minterm corresponding to X in the propositional logic sense.
From the given dataset D we obtain a notion of support of an itemset: s_D(X) is the cardinality of the set of transactions that include it, {t ∈ D | X ⊆ t}; sometimes, abusing language slightly, we also refer to that set of transactions itself as support. Whenever D is clear, we drop the subindex: s(X). Observe that s(X) ≥ s(Y) whenever X ⊆ Y; this is immediate from the definition. Note that many references resort to a normalized notion of support by dividing by the dataset size. We chose not to, but there is no essential issue here. Often, research work in Data Mining assumes that a threshold on the support has been provided and that only sets whose support is above the threshold (then called “frequent”) are to be considered. We will require this additional constraint occasionally for the sake of discussing the applicability of our developments.
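The support function, as just defined, takes one line of code; this sketch (toy dataset and names are ours) also checks the antimonotonicity observation:

```python
# The support function above, as code (toy dataset and helper name are ours).
def s(D, x):
    return sum(1 for t in D if x <= t)

D = [frozenset("ab"), frozenset("abc"), frozenset("bc"), frozenset("abc")]
assert s(D, frozenset("a")) == 3
assert s(D, frozenset("abc")) == 2
# antimonotonicity: X included in Y implies s(X) >= s(Y)
assert s(D, frozenset("ab")) >= s(D, frozenset("abc"))
```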
We immediately obtain, by standard means, a notion of closed itemsets, namely, those that cannot be enlarged while maintaining the same support. The function that maps each itemset to the smallest closed set that contains it is known to be monotonic, extensive, and idempotent, that is, it is a closure operator. This notion will be reviewed in more detail later on. Closed sets whose support is above the support threshold, if given, are usually termed closed frequent sets.
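A minimal sketch of the closure operator just described (the helper name and the convention for unsupported sets are ours): the smallest closed set containing X can be computed as the intersection of all transactions that include X.

```python
# Sketch (helper name is ours) of the closure operator: the smallest closed
# set containing X is the intersection of all transactions that include X;
# when no transaction includes X we return, by convention, the set of all
# items occurring in the dataset.
from functools import reduce

def closure(dataset, x):
    covering = [t for t in dataset if x <= t]
    if not covering:
        return reduce(frozenset.union, dataset, frozenset())
    return reduce(frozenset.intersection, covering)

D = [frozenset("abc"), frozenset("ab"), frozenset("c")]
assert closure(D, frozenset("a")) == frozenset("ab")              # s(a) = s(ab) = 2
assert frozenset("a") <= closure(D, frozenset("a"))               # extensive
assert closure(D, closure(D, frozenset("a"))) == frozenset("ab")  # idempotent
```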
Association rules are pairs of itemsets, denoted as X → Y for itemsets X and Y. Intuitively, they suggest the fact that Y occurs particularly often among the transactions in which X occurs. More precisely, each such rule has a confidence associated: the confidence of an association rule X → Y in a dataset D is c_D(X → Y) = s_D(XY)/s_D(X). As with support, often we drop the subindex D. The support in D of the association rule X → Y is s_D(XY).
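In code, the two quantities of a rule read as follows (toy dataset and helper names are ours):

```python
# Confidence and rule support as just defined (toy dataset, our names):
# c(X -> Y) = s(XY) / s(X); the support of the rule X -> Y is s(XY).
def s(D, x):
    return sum(1 for t in D if x <= t)

def conf(D, x, y):
    return s(D, x | y) / s(D, x)   # the zero-denominator convention is discussed later

def rule_support(D, x, y):
    return s(D, x | y)

D = [frozenset("ab"), frozenset("ab"), frozenset("a"), frozenset("b")]
assert conf(D, frozenset("a"), frozenset("b")) == 2 / 3
assert rule_support(D, frozenset("a"), frozenset("b")) == 2
```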
We can switch rather freely between right-hand sides that include the left-hand side and right-hand sides that don’t:
Rules X₀ → Y₀ and X₁ → Y₁ are equivalent by reflexivity if X₀ = X₁ and X₀Y₀ = X₁Y₁.
Clearly, c_D(X → Y) = c_D(X → XY) and, likewise, s_D(X → Y) = s_D(X → XY) for any dataset D; that is, the support and confidence of rules that are equivalent by reflexivity always coincide. A minor notational issue that we must point out is that, in some references, the left-hand side of a rule is required to be a subset of the right-hand side, whereas many others require the left- and right-hand sides of an association rule to be disjoint. Both the rules whose left-hand side is a subset of the right-hand side, and the rules that have disjoint sides, may act as canonical representatives for the rules equivalent to them by reflexivity. We state explicitly one version of this immediate fact for later reference:
If rules X₀ → Y₀ and X₁ → Y₁ are equivalent by reflexivity, X₀ ∩ Y₀ = ∅, and X₁ ∩ Y₁ = ∅, then they are the same rule: X₀ = X₁ and Y₀ = Y₁.
In general, we do allow, along our development, rules where the left-hand side, or a part of it, appears also at the right-hand side, because by doing so we will be able to simplify the mathematical arguments. We will assume here that, at the time of printing out the rules found, that is, for user-oriented output, the items in the left-hand side are removed from the right-hand side; accordingly, we sometimes write our rules as X → Y − X to recall this convention.
Also, many references require the right-hand side of an association rule to be nonempty, or even both sides. However, empty sets can be handled with no difficulty and do give meaningful, albeit uninteresting, rules. A partial rule with an empty right-hand side, X → ∅, is equivalent by reflexivity to X → X, or to X → X′ for any X′ ⊆ X, and all of these rules have always confidence 1. A partial rule with an empty left-hand side, as employed in some references, actually gives the normalized support of the right-hand side as confidence value:
In a dataset D of n transactions, c_D(∅ → Y) = s_D(Y)/n.
Again, these sorts of rules could be omitted from user-oriented output, but considering them conceptually valid simplifies the mathematical development. We also resort to the convention that, if s(X) = 0 (which implies that s(XY) = 0 as well), we redefine the undefined confidence s(XY)/s(X) as 1, since the intuitive expression “all transactions having X do have also Y” becomes vacuously true. This convention is irrespective of whether Y = ∅.
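Both conventions are one branch each in code (toy dataset and helper names are ours):

```python
# The two conventions above, in code: an empty antecedent yields the
# normalized support, and a zero-support antecedent yields confidence 1
# (the rule is vacuously true).
def s(D, x):
    return sum(1 for t in D if x <= t)

def conf(D, x, y):
    sx = s(D, x)
    return 1.0 if sx == 0 else s(D, x | y) / sx

D = [frozenset("ab"), frozenset("ab"), frozenset("b"), frozenset("c")]
assert conf(D, frozenset(), frozenset("b")) == 3 / 4   # s(b) / number of transactions
assert conf(D, frozenset("z"), frozenset("a")) == 1.0  # s(z) = 0: vacuously true
```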
Throughout the paper, “implications” are association rules of confidence 1, whereas “partial rules” are those having a confidence below 1. When the confidence could be 1 or could be less, we say simply “rule”.
3. Redundancy Notions
We start our analysis from one of the notions of redundancy defined formally in the previous literature. The notion is employed also, generally with no formal definition, in several papers on association rules, which subsequently formalize and study just some particular cases of redundancy; thus, we have chosen to qualify this redundancy as “standard”. We propose also a small variation, seemingly less restrictive; we have not found that variant explicitly defined in the literature, but it is quite natural.
Rule X₀ → Y₀ has standard redundancy with respect to rule X₁ → Y₁ if the confidence and support of X₀ → Y₀ are larger than or equal to those of X₁ → Y₁, in all datasets.
Rule X₀ → Y₀ has plain redundancy with respect to rule X₁ → Y₁ if the confidence of X₀ → Y₀ is larger than or equal to the confidence of X₁ → Y₁, in all datasets.
Generally, we will be interested in applying these definitions only to rules X₀ → Y₀ where X₀Y₀ ≠ X₀ since, otherwise, c_D(X₀ → Y₀) = 1 for all datasets D and the rule is trivially redundant. We state and prove separately, for later use, the following new technical claim:
Assume that rule X₀ → Y₀ is plainly redundant with respect to rule X₁ → Y₁, and that X₀Y₀ ≠ X₀. Then X₀Y₀ ⊆ X₁Y₁.
Assume X₀Y₀ ⊈ X₁Y₁, to argue the contrapositive. Then, we can consider a dataset consisting of one transaction X₀ and, say, n transactions X₁Y₁. No transaction includes X₀Y₀, therefore c_D(X₀ → Y₀) = 0; however, c_D(X₁ → Y₁) is either 1 or n/(n+1), which can be pushed up as much as desired by simply increasing n. Then, plain redundancy does not hold, because it requires c_D(X₀ → Y₀) ≥ c_D(X₁ → Y₁) to hold for all datasets whereas, for this particular dataset, the inequality fails.∎
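The counterexample dataset of the proof is easy to instantiate; in the following sketch the concrete items are ours:

```python
# The proof's counterexample, instantiated (concrete items are ours):
# X0 -> Y0 is a -> b and X1 -> Y1 is a -> d, so that X0Y0 = ab is not
# included in X1Y1 = ad; one transaction X0 plus n transactions X1Y1.
def s(D, x):
    return sum(1 for t in D if x <= t)

def conf(D, x, y):
    sx = s(D, x)
    return 1.0 if sx == 0 else s(D, x | y) / sx

n = 9
D = [frozenset("a")] + [frozenset("ad")] * n
assert conf(D, frozenset("a"), frozenset("b")) == 0.0          # no transaction has ab
assert conf(D, frozenset("a"), frozenset("d")) == n / (n + 1)  # here, the n/(n+1) case
```

Here X₁ ⊆ X₀, so the lone X₀-transaction counts towards s(X₁) and the second confidence is n/(n+1); choosing X₁ ⊄ X₀ instead yields the value 1.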
The first use of this lemma is to show that plain redundancy is not, actually, weaker than standard redundancy.
Consider any two rules X₀ → Y₀ and X₁ → Y₁ where X₀Y₀ ≠ X₀. Then X₀ → Y₀ has standard redundancy with respect to X₁ → Y₁ if and only if X₀ → Y₀ has plain redundancy with respect to X₁ → Y₁.
Standard redundancy clearly implies plain redundancy by definition. Conversely, plain redundancy implies, first, c_D(X₀ → Y₀) ≥ c_D(X₁ → Y₁) by definition and, further, X₀Y₀ ⊆ X₁Y₁ by Lemma 3; this implies in turn s_D(X₀Y₀) ≥ s_D(X₁Y₁), for all datasets, and standard redundancy holds.∎
The same reference also provides two more direct definitions of redundancy:
if X₁ ⊂ X₀ and X₀Y₀ = X₁Y₁, rule X₀ → Y₀ is simply redundant with respect to X₁ → Y₁.
if X₁ ⊆ X₀ and X₀Y₀ ⊂ X₁Y₁, rule X₀ → Y₀ is strictly redundant with respect to X₁ → Y₁.
Simple redundancy in that reference is explained as a potential connection between rules that come from the same frequent set, in our case X₀Y₀ = X₁Y₁. The formal definition is not identical to our rendering: in its original statement, rule XY → Z is simply redundant with respect to rule X → YZ. The reason is that, in that reference, rules are always assumed to have disjoint sides, and then both formalizations are clearly equivalent. We do not impose disjointness, so that the natural formalization of their intuitive explanation is as we have just stated in Definition 3. The following is very easy to see (and is formally proved in the same reference).
Both simple and strict redundancies imply standard redundancy.
Note that, in principle, there could possibly be many other ways of being redundant beyond simple and strict redundancies: we show below, however, that, in essence, this is not the case. We can relate these notions also to the cover operator:
Rule X₁ → Y₁ covers rule X₀ → Y₀ when X₁ ⊆ X₀ and X₀Y₀ ⊆ X₁Y₁.
Here, again, the original definition, according to which rule X₁ → Y₁ covers rule X₀ → Y₀ if X₁ ⊆ X₀ and Y₀ ⊆ Y₁ (plus some disjointness and nonemptiness conditions that we omit), is appropriate for the case of disjoint sides. The formalization we give is stated also in the literature as a property that characterizes covering. Both simple and strict redundancies become thus merged into a single definition. We observe as well that the same notion is also employed, without an explicit name, elsewhere.
Again, it should be clear that, in Definition 3, the covered rule is indeed plainly redundant: whatever the dataset, changing from X₁ → Y₁ to X₀ → Y₀ the confidence stays equal or increases since, in the quotient that defines the confidence of a rule, the numerator cannot decrease from s(X₁Y₁) to s(X₀Y₀), whereas the denominator cannot increase from s(X₁) to s(X₀). Also, the proposals in Definition 3 and 3 are clearly equivalent:
Rule X₁ → Y₁ covers rule X₀ → Y₀ if and only if rule X₀ → Y₀ is either simply redundant or strictly redundant with respect to X₁ → Y₁, or they are equivalent by reflexivity.
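The confidence inequality behind covering can be spot-checked exhaustively on small random datasets; the test harness below is ours:

```python
# Exhaustive spot-check (our own test harness) of the covering property:
# whenever X1 -> Y1 covers X0 -> Y0, the confidence of X0 -> Y0 is at least
# that of X1 -> Y1, on a batch of small random datasets.
from itertools import combinations, product
import random

def s(D, x):
    return sum(1 for t in D if x <= t)

def conf(D, x, y):
    sx = s(D, x)
    return 1.0 if sx == 0 else s(D, x | y) / sx

def covers(x1, y1, x0, y0):
    """X1 -> Y1 covers X0 -> Y0 when X1 <= X0 and X0Y0 <= X1Y1."""
    return x1 <= x0 and (x0 | y0) <= (x1 | y1)

items = "abc"
sets = [frozenset(c) for r in range(len(items) + 1)
        for c in combinations(items, r)]
random.seed(0)
for _ in range(20):
    D = [random.choice(sets) for _ in range(6)]
    for x1, y1, x0, y0 in product(sets, repeat=4):
        if covers(x1, y1, x0, y0):
            assert conf(D, x0, y0) >= conf(D, x1, y1)
```

The check also exercises the zero-support convention: if s(X₁) = 0 then, by X₁ ⊆ X₀, s(X₀) = 0 as well, and both confidences equal 1.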
It turns out that all these notions are, in fact, fully equivalent to plain redundancy; indeed, the following converse statement is a main new contribution of this section:
Assume rule X₀ → Y₀ is plainly redundant with respect to rule X₁ → Y₁, where X₀Y₀ ≠ X₀. Then rule X₁ → Y₁ covers rule X₀ → Y₀.
By Lemma 3, X₀Y₀ ⊆ X₁Y₁. To see the other inclusion, X₁ ⊆ X₀, assume to the contrary that X₁ ⊈ X₀. Then we can consider a dataset in which one transaction consists of X₁Y₁ and, say, n transactions consist of X₀. Since X₁ ⊈ X₀, these transactions do not count towards the supports of X₁ or X₁Y₁, so that the confidence of X₁ → Y₁ is 1; also, X₀ is not adding to the support of X₀Y₀ since X₀Y₀ ≠ X₀. As X₀ ⊆ X₀Y₀ ⊆ X₁Y₁, all n+1 transactions include X₀ but exactly one transaction includes X₀Y₀, so that c_D(X₀ → Y₀) = 1/(n+1), which can be made as low as desired. This would contradict plain redundancy. Hence, plain redundancy implies the two inclusions in the definition of cover.∎
Combining the statements so far, we obtain the following characterization:
Consider any two rules X₀ → Y₀ and X₁ → Y₁ where X₀Y₀ ≠ X₀. The following are equivalent:
X₁ ⊆ X₀ and X₀Y₀ ⊆ X₁Y₁ (that is, rule X₁ → Y₁ covers rule X₀ → Y₀);
rule X₀ → Y₀ is either simply redundant or strictly redundant with respect to rule X₁ → Y₁, or they are equivalent by reflexivity;
rule X₀ → Y₀ is plainly redundant with respect to rule X₁ → Y₁;
rule X₀ → Y₀ is standard redundant with respect to rule X₁ → Y₁.
Marginally, we note here an additional strength of the proofs given. One could consider attempts at weakening the notion of plain redundancy by allowing for a “margin” or “slack”, appropriately bounded, but whose value is independent of the dataset, upon comparing confidences. The slack could be additive or multiplicative: conditions such as c_D(X₀ → Y₀) ≥ c_D(X₁ → Y₁) − δ or c_D(X₀ → Y₀) ≥ β · c_D(X₁ → Y₁), for all datasets D and for δ or β independent of D, could be considered. However, such approaches do not define different redundancy notions: they result in formulations actually equivalent to plain redundancy. This is due to the fact that the proofs in Lemma 3 and Theorem 3 show that the gap between the confidences of rules that do not exhibit redundancy can be made as large as desired within [0, 1]. Likewise, if we fix a confidence threshold γ beforehand and use it to define redundancy as “c_D(X₁ → Y₁) ≥ γ implies c_D(X₀ → Y₀) ≥ γ, for all datasets D”, again an equivalent notion is obtained, independently of the concrete value of γ < 1; whereas, for γ = 1, this is, instead, a characterization of Armstrong derivability.
3.1. Deduction Schemes for Plain Redundancy
From the characterization just given, we extract now a sound and complete deductive calculus. It consists of three inference schemes: right-hand Reduction (rR), where the consequent is diminished; right-hand Augmentation (rA), where the consequent is enlarged; and left-hand Augmentation (lA), where the antecedent is enlarged:

(rR)  from X → Y and Z ⊆ Y, derive X → Z;
(rA)  from X → Y, derive X → XY;
(lA)  from X → Y and Z ⊆ Y, derive XZ → Y.

As customary in logic calculi, our rendering of each scheme means that, if the facts it starts from are already derived, we can immediately derive the fact it produces.
We also allow always to state trivial rules: X → ∅, for arbitrary itemset X.
Clearly, scheme (lA) could be stated equivalently with XZ → Y − Z as the derived rule, by equivalence by reflexivity.
In fact, in that form (lA) is exactly the simple redundancy from Definition 3; equivalence by reflexivity is itself derivable since, in either direction, (rA) followed by (rR) transforms one of the equivalent rules into the other. The Reduction Scheme (rR) allows us to “lose” information from the right-hand side; it corresponds to strict redundancy.
As further alternative options, it is easy to see that we could also join (rR) and (lA) into a single scheme: from X → Y, and for Z ⊆ Y and W ⊆ Y, derive XZ → W;
but we consider that this option does not really simplify, rather obscures a bit, the proof of our Corollary 3.1 below. Also, we could allow as trivial rules all the rules X → Y with Y ⊆ X, which include the case of X → ∅; all such rules also follow from the calculus given, by combining the trivial rules with (rA) and (rR).
The following can be derived now from Corollary 3:
The calculus given is sound and complete for plain redundancy; that is, rule X₀ → Y₀ is plainly redundant with respect to rule X₁ → Y₁ if and only if X₀ → Y₀ can be derived from X₁ → Y₁ using the inference schemes (rR), (rA), and (lA).
Soundness, that is, the fact that all rules derived are plainly redundant, is simple to argue by checking that, in each of the inference schemes, the confidence of the derived rule is greater than or equal to the confidence of the rule it comes from: these facts are actually the known statements that each of equivalence by reflexivity, simple redundancy, and strict redundancy implies plain redundancy. Also, trivial rules with empty right-hand side always hold. To show completeness, assume that rule X₀ → Y₀ is plainly redundant with respect to rule X₁ → Y₁. If X₀Y₀ = X₀, apply the trivial rule X₀ → ∅ and use (rA) to copy X₀ into the right-hand side and, if necessary, (rR) to leave just Y₀ there. If X₀Y₀ ≠ X₀, by Corollary 3, we know that this implies that X₁ ⊆ X₀ and X₀Y₀ ⊆ X₁Y₁. Now, to infer X₀ → Y₀ from X₁ → Y₁, we chain up applications of our schemes as follows:
X₁ → Y₁ ⊢(rA) X₁ → X₁Y₁ ⊢(lA) X₀ → X₁Y₁ ⊢(rR) X₀ → Y₀, where the second step makes use of the inclusion X₀ ⊆ X₁Y₁ (which follows from X₁ ⊆ X₀ and X₀Y₀ ⊆ X₁Y₁), and the last step makes use of the inclusion Y₀ ⊆ X₁Y₁. Here, the standard derivation symbol ⊢ denotes derivability by application of the scheme indicated as a subscript.∎
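The chain above can be replayed mechanically; in the following sketch, the rule representation and helper names are ours:

```python
# The three schemes on rules represented as pairs (lhs, rhs) of frozensets.
# Each function checks its side condition and returns the derived rule.

def rA(rule):
    """(rA): from X -> Y derive X -> XY."""
    x, y = rule
    return (x, x | y)

def lA(rule, z):
    """(lA): from X -> Y, and Z a subset of Y, derive XZ -> Y."""
    x, y = rule
    assert z <= y, "lA only adds right-hand-side items to the antecedent"
    return (x | z, y)

def rR(rule, new_rhs):
    """(rR): from X -> Y, and Y' a subset of Y, derive X -> Y'."""
    x, y = rule
    assert new_rhs <= y, "rR can only diminish the consequent"
    return (x, new_rhs)

# Replaying the completeness chain for X1 -> Y1 = a -> bc and
# X0 -> Y0 = ab -> c (here X1 is a subset of X0, and X0Y0 of X1Y1):
x1, y1 = frozenset("a"), frozenset("bc")
x0, y0 = frozenset("ab"), frozenset("c")
r = rA((x1, y1))       # a  -> abc
r = lA(r, x0 - x1)     # ab -> abc, uses the inclusion X0 <= X1Y1
r = rR(r, y0)          # ab -> c,   uses the inclusion Y0 <= X1Y1
assert r == (x0, y0)
```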
We note here that one reference proposes a simpler calculus that consists, essentially, of (lA) (called there “weak left augmentation”) and (rR) (called there “decomposition”). The point is that these two schemes are sufficient to prove completeness of the “representative basis” as given in that reference, due to the fact that, in that version, the rules of the representative basis include the left-hand side as part of the right-hand side; but such a calculus is incomplete with respect to plain redundancy because it offers no rule to move items from left to right.
3.2. Optimum-Size Basis for Plain Redundancy
A basis is a way of providing a shorter list of rules for a given dataset, with no loss of information, in the following sense:
Given a set of rules R, a set of rules B is a complete basis for R if every rule of R is plainly redundant with respect to some rule of B.
Bases are analogous to covers in functional dependencies, and we aim at constructing bases with properties that correspond to minimum size and canonical covers. The solutions for functional dependencies, however, are not valid for partial rules due to the failure of the Armstrong schemes.
In all practical applications, R is the set of all the rules “mined from” a given dataset D at a confidence threshold. That is, the basis is a set of rules holding with confidence at least γ in D, and such that each rule holds with confidence at least γ in D if and only if it is plainly redundant with respect to some rule of the basis; equivalently, each such rule can be inferred from the basis through the corresponding deductive calculus. All along this paper, such a confidence threshold is denoted γ, and always 0 < γ < 1. We will employ two simple but useful definitions.
Fix a dataset D. Given itemsets X and Y, X is a γ-antecedent for Y if c_D(X → Y) ≥ γ, that is, s_D(XY) ≥ γ · s_D(X).
Note that we allow X = Y, that is, the set itself as its own γ-antecedent; this is just to simplify the statement of the following rather immediate lemma:
If X is a γ-antecedent for Y and X ⊆ Z ⊆ Y, then Z is a γ-antecedent for Y and X is a γ-antecedent for Z.
From X ⊆ Z ⊆ Y we have s(X) ≥ s(Z) ≥ s(Y), so that both s(Y)/s(Z) ≥ s(Y)/s(X) ≥ γ and s(Z)/s(X) ≥ s(Y)/s(X) ≥ γ. The lemma follows.∎
We account for proper antecedents as part of the next notion:
Fix a dataset D. Given itemsets Y and X ⊂ Y (proper subset), X is a valid γ-antecedent for Y if the following holds:
X is a γ-antecedent of Y,
no proper subset of X is a γ-antecedent of Y, and
no proper superset of Y has X as a γ-antecedent.
The basis we will focus on now is constructed from each itemset Y and each valid antecedent X of Y; we consider that this is the clearest way to define and study it, and we explain below why it is essentially identical to two existing, independent proposals.
Fix a dataset D and a confidence threshold γ. The representative rules for D at confidence γ are all the rules X → Y for all itemsets Y and for all valid γ-antecedents X of Y.
In the following, we will say “let X → Y be a representative rule” to mean “let Y be a set having valid γ-antecedents, and let X be one of them”; the parameter γ will always be clear from the context. Note that some sets Y may not have valid antecedents, and then they do not generate any representative rules.
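For small universes, the definition can be applied by sheer enumeration. The following sketch (our own, purely illustrative, and exponential in the number of items) tests the three conditions on valid γ-antecedents directly:

```python
# Brute-force enumeration (illustrative only) of the representative rules.
from itertools import combinations

def s(D, x):
    return sum(1 for t in D if x <= t)

def is_gamma_antecedent(D, x, y, g):
    """c(X -> Y) >= g, written without division: s(XY) >= g * s(X)."""
    return s(D, x | y) >= g * s(D, x)

def representative_rules(D, items, g):
    sets = [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]
    rules = []
    for y in sets:
        for x in sets:
            if not (x < y and is_gamma_antecedent(D, x, y, g)):
                continue
            if any(x2 < x and is_gamma_antecedent(D, x2, y, g) for x2 in sets):
                continue   # some proper subset of X is already a gamma-antecedent
            if any(y < y2 and is_gamma_antecedent(D, x, y2, g) for y2 in sets):
                continue   # X is a gamma-antecedent of a proper superset of Y
            rules.append((x, y))
    return rules

D = [frozenset("ab"), frozenset("ab"), frozenset("a"), frozenset("b")]
rules = representative_rules(D, "ab", 0.6)
# four representative rules at gamma = 0.6; printed out, a -> ab reads a -> b
assert (frozenset("a"), frozenset("ab")) in rules
assert (frozenset("b"), frozenset("ab")) in rules
assert len(rules) == 4
```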
By the conditions on valid antecedents in representative rules, the following relatively simple but crucial property holds; beyond the use of our Corollary 3, the argument follows closely that of previously known related facts:
Let rule X₀ → Y₀ be among the representative rules for D at confidence γ. Assume that it is plainly redundant with respect to rule X₁ → Y₁, also of confidence at least γ; then, they are equivalent by reflexivity and, in case X₁ ∩ Y₁ = ∅, X₁ → Y₁ coincides with the user-oriented output form X₀ → Y₀ − X₀ of the representative rule.
Let X₀ → Y₀ be a representative rule, so that X₀ is a valid γ-antecedent of Y₀ and X₀ ⊂ Y₀ = X₀Y₀. By Corollary 3, X₁ → Y₁ must cover X₀ → Y₀: X₁ ⊆ X₀ and Y₀ ⊆ X₁Y₁. As c_D(X₁ → Y₁) ≥ γ, X₁ is a γ-antecedent of X₁Y₁. We first show that Y₀ = X₁Y₁; assume Y₀ ⊂ X₁Y₁, and apply Lemma 3.2 to X₁ ⊆ Y₀ ⊆ X₁Y₁: X₁ is also a γ-antecedent of Y₀, and the minimality condition on valid γ-antecedents gives us X₁ = X₀. X₀ is, thus, a γ-antecedent of X₁Y₁, which properly includes Y₀, contradicting the third property of valid antecedents.
Hence, Y₀ = X₁Y₁, so that X₁ is a γ-antecedent of Y₀; but again X₀ is a minimal γ-antecedent of Y₀, so that necessarily X₁ = X₀, which, together with Y₀ = X₁Y₁, proves equivalence by reflexivity. Under the additional condition X₁ ∩ Y₁ = ∅, both rules coincide as per Proposition 2, once X₀ is removed from the right-hand side of the representative rule.∎
It easily follows that our definition is equivalent to a previously proposed one, except for a support bound that we will explain later; indeed, we will show in Section 4.5 that all our results carry over when a support bound is additionally enforced.
Fix a dataset D and a confidence threshold γ. Let X ⊂ Y. The following are equivalent:
Rule X → Y is among the representative rules for D at confidence γ;
c_D(X → Y) ≥ γ and there does not exist any other rule X′ → Y′ with X′ ⊂ Y′, of confidence at least γ in D, that covers X → Y.
Let rule X → Y be among the representative rules for D at confidence γ, and let rule X′ → Y′ cover it, while being also of confidence at least γ and with X′ ⊂ Y′. Then, by Corollary 3, X′ → Y′ makes X → Y plainly redundant, and by Proposition 3.2 they are equivalent by reflexivity; since both have their left-hand side included in the right-hand side, they must coincide. To show the converse, we must see that X → Y is a representative rule under the conditions given. The fact that c_D(X → Y) ≥ γ gives that X is a γ-antecedent of Y, and we must see its validity. Assume that a proper subset X′ ⊂ X is also a γ-antecedent of Y: then the rule X′ → Y would be a different rule of confidence at least γ covering X → Y, which cannot be. Similarly, assume that X is a γ-antecedent of some Y′ with Y ⊂ Y′: then the rule X → Y′ would be a different rule of confidence at least γ covering X → Y, which cannot be either.∎
Similarly, and with the same proviso regarding support, our definition is equivalent to that of the “essential rules”. There, the set of minimal γ-antecedents of a given itemset is termed its “boundary”. The following statement is also easy to prove:
Fix a dataset D and a confidence threshold γ. Let X ⊂ Y. The following are equivalent:
Rule X → Y is among the representative rules for D at confidence γ;
X is in the boundary of Y but is not in the boundary of any proper superset of Y; that is, X is a minimal γ-antecedent of Y but is not a minimal γ-antecedent of any itemset strictly containing Y.
If X → Y is among the representative rules, X must be a minimal γ-antecedent of Y by the conditions on valid antecedents; also, X is not a γ-antecedent at all (and, thus, not a minimal γ-antecedent) of any Y′ properly including Y. Conversely, assume that X is in the boundary of Y but is not in the boundary of any proper superset of Y; first, X must be a minimal γ-antecedent of Y, so that the first two conditions of valid γ-antecedents hold. Assume that X → Y is not among the representative rules; the third property must fail, and X must be a γ-antecedent of some Y′ with Y ⊂ Y′. Our hypotheses tell us that X is not a minimal γ-antecedent of Y′. That is, there is a proper subset X′ ⊂ X that is also a γ-antecedent of Y′. It suffices to apply Lemma 3.2 to X′ ⊆ Y ⊆ Y′ to reach a contradiction, since it implies that X′ is a γ-antecedent of Y and, therefore, X would not be a minimal γ-antecedent of Y.∎
The representative rules are indeed a basis:
all the representative rules hold with confidence at least γ;
all the rules of confidence at least γ in D are plainly redundant with respect to the representative rules.
The first part follows directly from the use of γ-antecedents as left-hand sides of representative rules. For the second part, also almost immediate, suppose c_D(X → Y) ≥ γ, and let Z = XY; since X is now a γ-antecedent of Z, it must contain a minimal γ-antecedent X′ of Z. Let Z′ be a maximal superset of Z such that X′ is still a γ-antecedent of Z′. Thus, X′ → Z′ is among the representative rules and covers X → Y. Small examples of the construction of representative rules can be found in the same references; we also provide one below.
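The basis property can be verified mechanically on a toy dataset; the harness below is ours and simply combines the brute-force enumeration of representative rules with the covering test:

```python
# Toy verification (our own harness) that the representative rules form a
# complete basis: on a small dataset, every rule of confidence >= gamma is
# covered by some representative rule.
from itertools import combinations

def s(D, x):
    return sum(1 for t in D if x <= t)

def conf(D, x, y):
    sx = s(D, x)
    return 1.0 if sx == 0 else s(D, x | y) / sx

def ante(D, x, y, g):
    return s(D, x | y) >= g * s(D, x)

def covers(x1, y1, x0, y0):
    return x1 <= x0 and (x0 | y0) <= (x1 | y1)

items = "abc"
sets = [frozenset(c) for r in range(len(items) + 1)
        for c in combinations(items, r)]
D = [frozenset("ab"), frozenset("abc"), frozenset("abc"), frozenset("c")]
g = 0.7

reps = [(x, y) for y in sets for x in sets
        if x < y and ante(D, x, y, g)
        and not any(x2 < x and ante(D, x2, y, g) for x2 in sets)
        and not any(y < y2 and ante(D, x, y2, g) for y2 in sets)]

for x in sets:
    for y in sets:
        if conf(D, x, y) >= g:
            assert any(covers(xr, yr, x, y) for xr, yr in reps), (x, y)
```

On this dataset the check goes through with four representative rules.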
An analogous fact is proved elsewhere through an incomplete deductive calculus consisting of the schemes that we have called (lA) and (rR); the statement is that every rule of confidence at least γ can be inferred from the representative rules by application of these two inference schemes. Since representative rules in that formulation have a right-hand side that includes the left-hand side, this inference process does not need to employ (rA).
Now we can state and prove the most interesting novel property of this basis, which again follows from our main result in this section, Corollary 3. As indicated, representative rules were known to be irredundant with respect to simple and strict redundancy or, equivalently, with respect to covering. But, for standard redundancy, in principle there was actually the possibility that some other basis, constructed in an altogether different form, could have fewer rules. We can state and prove now that this is not so: there is absolutely no other way of constructing a basis smaller than this one while preserving completeness with respect to plain redundancy, because it has absolutely minimum size among all complete bases. Therefore, in order to find smaller bases, a notion of redundancy more powerful than plain (or standard) redundancy is unavoidably necessary.
Fix a dataset D, and let R be the set of rules that hold with confidence at least γ in D. Let B ⊆ R be an arbitrary basis, complete in the sense that all the rules in R are plainly redundant with respect to B. Then, B must have at least as many rules as there are representative rules. Moreover, if the rules in B are such that antecedents and consequents are disjoint, then all the representative rules belong to B.
By the assumed completeness of B, each representative rule X → Y must be redundant with respect to some rule X′ → Y′ of B. By Corollary 3, X′ → Y′ covers X → Y. Then Proposition 3.2 applies: the two rules are equivalent by reflexivity. This means X′ = X and X′ ∪ Y′ = X ∪ Y; hence, each rule of B uniquely identifies which representative rule it covers, if any, so that B needs at least as many rules as there are representative rules. Moreover, as stated also in Proposition 3.2, if the disjointness condition holds, then both rules coincide.∎
We consider a small example consisting of 12 transactions which, however, involve only 7 distinct itemsets, some of them repeated across several transactions. We can simplify our study as follows: if Z is not a closed set for the dataset, that is, if it has some proper superset with the same support, then clearly it has no valid γ-antecedents (see also Fact 4 below); thus, we concentrate on closed sets. Figure 1 shows the example dataset and the corresponding (semi)lattice of closures, depicted as a Hasse diagram (that is, transitive edges have been removed to clarify the drawing); edges stand for the inclusion relationship.
For this example, the implications can be summarized by six rules, which are also the representative rules at confidence 1. At confidence γ = 0.75, we find that, first, the left-hand sides of the six implications are still valid γ-antecedents even at this lower confidence, so that the implications still belong to the representative basis. Then, we see that two of the closures have one additional valid γ-antecedent each, whereas a third has two, yielding four further rules. These four rules, jointly with the six implications indicated, constitute exactly the ten representative rules at confidence 0.75.
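A brute-force computation can serve as a cross-check of such examples. The sketch below uses an illustrative dataset of 12 transactions over 7 distinct itemsets (our own choice, not necessarily that of Figure 1) and applies the covering characterization directly; real implementations restrict the search to closures and minimal generators instead:

```python
from itertools import combinations

# Brute-force representative rules on an illustrative dataset
# (12 transactions, 7 distinct itemsets; our own example, not the paper's).
DATA = [frozenset(t) for t in
        ["abc", "abc", "abd", "ab", "ab", "acd",
         "acd", "cd", "cd", "cd", "ad", "b"]]
ITEMS = sorted(set().union(*DATA))

def support(s):
    return sum(1 for t in DATA if s <= t)

def conf(x, y):
    return support(x | y) / support(x)

def subsets(base):
    for n in range(len(base) + 1):
        yield from (frozenset(c) for c in combinations(sorted(base), n))

def rules(gamma):
    """All rules x -> y, y nonempty and disjoint from x, confidence >= gamma."""
    return [(x, y)
            for x in subsets(ITEMS) if support(x) > 0
            for y in subsets(set(ITEMS) - x) if y and conf(x, y) >= gamma]

def representative(gamma):
    """Rules of confidence >= gamma not covered by a different such rule,
    where x' -> y' covers x -> y when x' <= x and x | y <= x' | y'."""
    rs = rules(gamma)
    return [(x, y) for (x, y) in rs
            if not any((xp, yp) != (x, y) and xp <= x and x | y <= xp | yp
                       for (xp, yp) in rs)]

for x, y in representative(0.75):
    print("".join(sorted(x)) or "{}", "->", "".join(sorted(y)))
```

Raising γ to 1 in the same function yields the representative rules at confidence 1, that is, the irredundant implications.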
4. Closure-Based Redundancy
Theorem 3.2 in the previous section tells us that, for plain redundancy, the absolute limit of a basis at any given confidence threshold is reached by the set of representative rules. Several studies have put forward a different notion of redundancy; namely, they give a separate role to the full-confidence implications, often through their associated closure operator. In this way, one gets a stronger notion of redundancy and, therefore, the possibility of constructing smaller bases.
Indeed, implications can be summarized better, because Transitivity and Augmentation apply to them in order to find redundancies; moreover, they can be combined with partial rules in certain forms of transitivity. As a simple example, if c(X → Y) ≥ γ and c(Y → Z) = 1, that is, if a fraction γ or more of the support of X has Y and all the transactions containing Y do have Z as well, clearly this implies that c(X → Z) ≥ γ. Observe, however, that the directionality is relevant: from c(X → Y) = 1 and c(Y → Z) ≥ γ we infer nothing about c(X → Z), since the high confidence of Y → Z might be due to a large number of transactions that do not include X.
We will need some notation about closures. Given a dataset D, the closure operator associated to D maps each itemset X to the largest itemset cl(X) that contains X and has the same support as X in D: X ⊆ cl(X), s(cl(X)) = s(X), and cl(X) is as large as possible under these conditions. It is known, and easy to prove, that cl(X) exists and is unique. Implications that hold in the dataset correspond to the closure operator: the rule X → Y holds with confidence 1 in D if and only if Y ⊆ cl(X). Equivalently, the closure of itemset X is the intersection of all the transactions that contain X; this is because X ⊆ cl(X) implies that all transactions counted for the support of cl(X) are counted as well for the support of X; hence, if the support counts coincide, they must count exactly the same transactions.
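The intersection characterization gives a one-line implementation of the closure operator. The sketch below uses a small illustrative dataset of our own and checks, as in the argument just given, that the closure keeps the support unchanged:

```python
# Closure of an itemset as the intersection of all transactions containing it
# (illustrative five-transaction dataset; items are single characters).
DATA = [frozenset(t) for t in ("abc", "abd", "ab", "acd", "cd")]

def support(s):
    return sum(1 for t in DATA if s <= t)

def closure(x):
    """Largest superset of x with the same support: intersect every
    transaction that contains x (x must occur in the dataset)."""
    containing = [t for t in DATA if x <= t]
    out = frozenset.intersection(*containing)
    assert support(out) == support(x)  # the same transactions are counted
    return out

print(sorted(closure(frozenset("a"))))  # -> ['a']
print(sorted(closure(frozenset("b"))))  # -> ['a', 'b'], i.e., b implies a
```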
Along this section, we denote full-confidence implications using the standard logic notation X ⇒ Y; thus, X ⇒ Y holds in D if and only if c(X → Y) = 1.
A basic fact from the theory of Closure Spaces is that closure operators are characterized by three properties: extensivity (X ⊆ cl(X)), idempotency (cl(cl(X)) = cl(X)), and monotonicity (if X ⊆ Y then cl(X) ⊆ cl(Y)). As an example of the use of these properties, we note the following simple consequence for later use:
cl(X ∪ Y) = cl(cl(X) ∪ Y), and cl(X ∪ Y) = cl(cl(X) ∪ cl(Y)).
We omit the immediate proof. A set is closed if it coincides with its closure. Usually we speak of the lattice of closed sets (technically, it is just a semilattice, but it allows for a standard transformation into a lattice). When cl(X) = Y we also say that X is a generator of Y; if the closures of all proper subsets of X are different from Y, we say that X is a minimal generator. Note that some references use the term "generator" to mean our "minimal generator"; we prefer to make the minimality condition explicit in the name. In some works, often database-inspired, minimal generators are sometimes termed "keys". In other works, often matroid-inspired, they are also termed "free sets". Our definition says explicitly that s(X) = s(cl(X)). We will make liberal use of this fact, which is also easy to check with the other existing alternative definitions of the closure operator. Several quite good algorithms exist to find the closed sets and their supports.
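Minimal generators can be enumerated naively from this definition. The following sketch (illustrative dataset; brute force rather than one of the good algorithms alluded to above) lists each closed set together with its minimal generators:

```python
from itertools import combinations

# Enumerating minimal generators by brute force (illustrative dataset;
# real algorithms are far more efficient than this exhaustive sketch).
DATA = [frozenset(t) for t in ("abc", "abd", "ab", "acd", "cd")]
ITEMS = sorted(set().union(*DATA))

def subsets(base):
    for n in range(len(base) + 1):
        yield from (frozenset(c) for c in combinations(base, n))

def closure(x):
    containing = [t for t in DATA if x <= t]
    return frozenset.intersection(*containing) if containing else frozenset(ITEMS)

def minimal_generators(closed):
    """Sets whose closure is `closed`, no proper subset having that closure."""
    gens = [g for g in subsets(ITEMS) if closure(g) == closed]
    return [g for g in gens if not any(h < g for h in gens)]

closed_sets = {closure(g) for g in subsets(ITEMS)}
for c in sorted(closed_sets, key=sorted):
    gens = minimal_generators(c)
    print("".join(sorted(c)) or "{}", "<-",
          [("".join(sorted(g)) or "{}") for g in gens])
```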
Given a dataset and the corresponding closure operator, two partial rules X → Y and X′ → Y′ such that cl(X) = cl(X′) and cl(X ∪ Y) = cl(X′ ∪ Y′) have the same support and the same confidence.
The rather immediate reason is that s(X) = s(cl(X)) = s(cl(X′)) = s(X′), and s(X ∪ Y) = s(cl(X ∪ Y)) = s(cl(X′ ∪ Y′)) = s(X′ ∪ Y′). Therefore, groups of rules sharing the same closure of the antecedent, and the same closure of the union of antecedent and consequent, give cases of redundancy. On account of these properties, there are several proposals of basis constructions from closed sets in the literature, reviewed below. But the first fact that we must mention to relate the closure operator with our explanations so far is the following:
The proof is direct from the definitions of the previous section and can be found in the literature. These references employ this property to improve on the earlier algorithms to compute the representative rules, which considered all the frequent sets, by restricting the exploration to closures and minimal generators. Later authors do the same, seemingly unaware that an earlier algorithm already works just with closed itemsets. Fact 4 may cast doubt on whether closure-based redundancy can actually lead to smaller bases. We prove that this is sometimes the case, due to the fact that the redundancy notion itself changes and allows for a form of Transitivity which, as we show, can again take the form of a deductive calculus. Then, we will be able to refine the notion of valid antecedent of the previous section and provide a basis for which we can prove that it has the smallest possible size among the bases for partial rules, with respect to closure-based completeness. That is, we will reach the limit of closure-based redundancy in the same manner as we did for standard redundancy in the previous section.
4.1. Characterizing Closure-Based Redundancy
Let B be the set of implications that hold in the dataset D; alternatively, B can be any of the bases already known for implications in a dataset. In our empirical validations below, we have used as B the Guigues-Duquenne basis, or GD-basis, which has been proved to be of minimum size. An apparently popular and interesting alternative, rediscovered over and over in different guises, is the so-called iteration-free basis, which coincides with the exact min-max basis (also sometimes called the generic basis); because of Fact 4, it also coincides exactly with the representative rules at confidence 1, that is, implications that are not plainly redundant with respect to any other implication according to Definition 3. Also, it coincides with the "closed-key basis" for frequent sets, which in principle is not intended as a basis for rules and has a different syntactic sugar, but differs in essence from the iteration-free basis only in that the support of each rule is explicitly recorded together with it.
Closure-based redundancy takes B into account as follows:
Let B be a set of implications. Partial rule r has closure-based redundancy relative to B with respect to rule r′, denoted r′ ⊨_B r, if, in every dataset in which all the implications of B hold with confidence 1, the confidence of r is at least that of r′.
In some cases, it might happen that the dataset at hand does not satisfy any nontrivial rule with confidence 1; then, this notion cannot go beyond plain redundancy. However, it is usual that some full-confidence rules do hold and, in these cases, as we shall see, closure-based redundancy may give more economical bases. More generally, all our results only depend on the implications of B indeed reaching full confidence in the dataset; B is not required to capture all of them: the implications in B (with their consequences according to the Armstrong schemes) could constitute just a part of the full-confidence rules of the dataset. In particular, plain redundancy reappears by choosing B = ∅, whether or not the dataset satisfies any full-confidence implication.
We continue our study by showing a necessary and sufficient condition for closure-based redundancy, along the same lines as the one in the previous section.
Let B be a set of exact rules, with associated closure operator cl, and let X₁ → Y₁ be a rule not implied by B, that is, where Y₁ ⊄ cl(X₁). Then, for any rule X₀ → Y₀, the following are equivalent: that X₀ → Y₀ ⊨_B X₁ → Y₁, and that both X₀ ⊆ cl(X₁) and X₁ ∪ Y₁ ⊆ cl(X₀ ∪ Y₀) hold.
The direct proof is simple: in every dataset where the implications of B hold, the inclusions given imply that s(X₁) = s(cl(X₁)) ≤ s(X₀) and s(X₁ ∪ Y₁) ≥ s(cl(X₀ ∪ Y₀)) = s(X₀ ∪ Y₀); then c(X₁ → Y₁) ≥ c(X₀ → Y₀).
Conversely, we argue that, if either of the two inclusions fails, then there is a dataset where B holds with confidence 1 and X₀ → Y₀ holds with high confidence, but the confidence of X₁ → Y₁ is low.
We observe first that, in order to satisfy B, it suffices to make sure that all the transactions in the dataset we are to construct are closed sets according to the closure operator corresponding to B.
Assume now that X₀ ⊄ cl(X₁): then a dataset consisting only of one or more transactions with itemset cl(X₁) satisfies X₀ → Y₀ (vacuously) with confidence 1 but, given that Y₁ ⊄ cl(X₁), leads to confidence zero for X₁ → Y₁. It is also possible to argue without resorting to vacuous satisfaction: simply take one transaction consisting of cl(X₀ ∪ Y₀) and, in case this transaction happens to satisfy X₁, obtain as low a confidence as desired for X₁ → Y₁ by adding as many transactions cl(X₁) as necessary; these will not change the confidence of X₀ → Y₀, since X₀ ⊄ cl(X₁).
Then consider the case where X₀ ⊆ cl(X₁), whence the other inclusion fails: X₁ ∪ Y₁ ⊄ cl(X₀ ∪ Y₀). Consider a dataset of, say, n + 1 transactions, where one transaction consists of the itemset cl(X₁) and n transactions consist of the itemset cl(X₀ ∪ Y₀). The confidence of X₀ → Y₀ is at least n/(n + 1), which can be made as close to 1 as desired by increasing n, whereas the presence of at least one transaction cl(X₁) and no transaction at all containing X₁ ∪ Y₁ gives confidence zero to X₁ → Y₁. Thus, in either case, we see that redundancy does not hold.∎
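The second counterexample can be replayed numerically. In the sketch below, with hypothetical items chosen only to satisfy the stated conditions, one transaction plays the role of cl(X₁) = {a} and n transactions play cl(X₀ ∪ Y₀) = {a, b}; the premise rule a → b approaches confidence 1 while a → c, whose consequent lies outside both closures, stays at confidence zero:

```python
# Numeric replay of the counterexample: one transaction {a} plays cl(X1),
# n transactions {a, b} play cl(X0 union Y0). The premise a -> b approaches
# confidence 1 while a -> c stays at confidence 0, so no redundancy holds.
# Items and sets are hypothetical, chosen only to instantiate the proof.

def confidence(data, x, y):
    covering = [t for t in data if x <= t]
    return sum(1 for t in covering if y <= t) / len(covering)

for n in (1, 10, 1000):
    data = [frozenset("a")] + [frozenset("ab")] * n
    print(n,
          confidence(data, frozenset("a"), frozenset("b")),  # -> n / (n + 1)
          confidence(data, frozenset("a"), frozenset("c")))  # -> 0.0
```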
4.2. Deduction Schemes for Closure-Based Redundancy
We now provide a stronger calculus that is sound and complete for this more general case of closure-based redundancy. For clarity, we choose to avoid the closure operator in our deduction schemes, writing each implication explicitly instead.
Our calculus for closure-based redundancy consists of four inference schemes, each of which reaches a partial rule from premises that include a partial rule. Two of the schemes correspond to variants of Augmentation, one for enlarging the antecedent, the other for enlarging the consequent. The other two correspond to composition with an implication, one in the antecedent and one in the consequent: a form of controlled transitivity. Their names (rA), (lA), (rI), and (lI) indicate whether they operate at the right- or left-hand side and whether their effect is Augmentation or composition with an Implication.
Again we allow stating rules with an empty right-hand side directly:
Alternatively, we could state trivial rules with a subset of the left-hand side at the right-hand side. Note that this opens the door to using (rI) with an empty consequent, which allows us to "downgrade" an implication into the corresponding partial rule. The connection with the simpler calculus in Section 3.1 should be easy to understand: if implications are not considered separately, the closure operator trivializes to the identity, cl(X) = X for every X, and the only cases where we know that X ⇒ Y are those where Y ⊆ X. In that case, the augmentation schemes only differ from their counterparts in Section 3.1 on cases of equivalence by reflexivity, whereas (lI) becomes fully trivial, since cl(X) becomes X and, together with Y ⊆ X, this means that the partial rules above and below the line coincide.
Similarly to the plain case, there exists an alternative, more compact deduction system whose equivalence with our four schemes is rather easy to see. It consists of just two forms of combining a partial rule with an implication:
However, in our opinion, the use of these schemes in our further developments is less intuitive, so we keep working with the four schemes above.
In the remainder of this section, we write r′ ⊢_B r to denote the fact that, in the presence of the implications in the set B, rule r can be derived from rule r′ using zero or more applications of the four deduction schemes; along such a derivation, any implication of B (or derivable from B via the Armstrong schemes) can be used whenever an implication is required.
4.3. Soundness and Completeness
We can characterize the deductive power of this calculus as follows: it is sound and complete with respect to the notion of closure-based redundancy; that is, all the rules it can prove are redundant, and all the redundant rules can be proved:
Let B consist of implications. Then, r′ ⊢_B r if and only if rule r has closure-based redundancy relative to B with respect to rule r′: r′ ⊨_B r.
Soundness corresponds to the fact that every rule derived is redundant: it suffices to prove it individually for each scheme; the essentials of some of these arguments are also found in the literature. For the first scheme, the closure properties stated above prove that the partial rules above and below the line have the same confidence. For the second, the corresponding inclusion shows that the confidence of the rule below the line is at least that of the one above. The third scheme is unchanged from the previous section. Finally, for the fourth, the closure properties again yield the two inclusions needed, so that, once more, the confidence of the rule below the line is at least the confidence of the one above.
Now we can write a derivation in our calculus, taking into account these inclusions, as follows:
Thus, indeed the redundant rule is derivable, which proves completeness.∎
4.4. Optimum-Size Basis for Closure-Based Redundancy
In a similar way as we did for plain redundancy, we study here bases corresponding to closure-based redundancy.
Since the implications become “factored out” thanks to the stronger notion of redundancy, we can focus on the partial rules. A formal definition of completeness for a basis is, therefore, as follows:
Given a set R of partial rules and a set B of implications, closure-based completeness of a set B′ ⊆ R of partial rules holds if every partial rule of R has closure-based redundancy relative to B with respect to some rule of B′.
Again, R is intended to be the set of all the partial rules "mined from" a given dataset D at a confidence threshold γ, whereas B is intended to be the subset of rules in R that hold with confidence 1 in D or, rather, a basis for these implications (note that implications clear any confidence threshold, so they always belong to R). There exist several proposals for constructing bases while taking into account the implications and their closure operator. We use the same intuitions and modus operandi to add a new proposal which, conceptually, departs only slightly from the existing ones. Its main merit is not the conceptual novelty of the basis itself but the mathematical proof that it achieves the minimum possible size for a basis with respect to closure-based redundancy; it is, therefore, at most as large as any alternative basis and, in many cases, smaller than the existing ones.
Our new basis is constructed as follows. For each closed set Y, we will consider a number of closed sets properly included in Y as candidates to act as antecedents:
Fix a dataset D, and consider the closure operator corresponding to the implications that hold in D with confidence 1. For each closed set Y, a closed proper subset X ⊊ Y is a basic γ-antecedent of Y if the following holds:
X is a γ-antecedent of Y: c(X → Y) ≥ γ;
no proper closed subset of X is a γ-antecedent of Y, and
no proper closed superset of Y has X as a γ-antecedent.
Basic antecedents follow essentially the same pattern as the valid antecedents (Definition 3.2), but restricted to closed sets only, that is, instead of minimal antecedents, we pick just minimal closed antecedents. Then we can use them as before:
Fix a dataset D and a confidence threshold γ.
The basis B*(γ) consists of all the rules X → Y for all closed sets Y and all basic γ-antecedents X of Y.
A minmax variant of the basis is obtained by replacing each left-hand side in B*(γ) by a minimal generator: that is, for a closed set Y, each rule X → Y becomes Z → Y for one minimal generator Z of the (closed) basic γ-antecedent X.
A minmin variant of the basis is obtained by replacing both the left-hand and the right-hand sides in B*(γ) by minimal generators: for each closed set Y and each basic γ-antecedent X of Y, the rule X → Y becomes Z → Z′, where Z is chosen a minimal generator of X and Z′ is chosen a minimal generator of Y.
The variants are defined only for the purpose of discussing the relationship to previous works along the next few paragraphs; in general, we will use only the first version of the basis. Note the following: in a minmax variant, at the time of substituting a generator for the left-hand side closure, in case we consider a rule whose left-hand side has several minimal generators, only one of them is to be used. Also, the whole left-hand side closure (and not only the generator chosen) can be removed from the right-hand side, since one of our deduction schemes can be used to recover it.
The basis B*(γ) is uniquely determined by the dataset and the confidence threshold, but the variants can be constructed, in general, in several ways, because each closed set in a rule may have several minimal generators, and even several different generators of minimum size. We can see the variants as applications of our deduction schemes. The result of substituting a generator for the left-hand side of a rule is equivalent to the rule itself: in one direction it is exactly scheme (lI), and in the other a chained application of (rI) to add the closure to the right-hand side and (lA) to put it back in the left-hand side. Substituting a generator for the right-hand side corresponds to scheme (rI) in both directions.
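The definition of basic γ-antecedents is directly executable by brute force over the closed sets. The sketch below (illustrative dataset and threshold; names such as `basic_antecedents` are ours, and a realistic implementation would traverse the closure lattice rather than enumerate subsets) builds the closed sets and then the basis rules:

```python
from itertools import combinations

# Brute-force sketch of the closure-based basis: for each closed set y, keep
# the basic gamma-antecedents x: minimal *closed* proper subsets of y with
# support(y) >= gamma * support(x), such that no closed proper superset of y
# admits x as well. Dataset, threshold, and names are illustrative.
DATA = [frozenset(t) for t in
        ["abc", "abc", "abd", "ab", "ab", "acd",
         "acd", "cd", "cd", "cd", "ad", "b"]]
ITEMS = sorted(set().union(*DATA))
GAMMA = 0.6

def support(s):
    return sum(1 for t in DATA if s <= t)

def closure(x):
    containing = [t for t in DATA if x <= t]
    return frozenset.intersection(*containing) if containing else frozenset(ITEMS)

def subsets(base):
    for n in range(len(base) + 1):
        yield from (frozenset(c) for c in combinations(sorted(base), n))

CLOSED = {closure(x) for x in subsets(ITEMS) if support(x) > 0}

def is_antecedent(x, y):
    # x is a gamma-antecedent of the closed superset y: conf(x -> y) >= gamma.
    return x < y and support(y) >= GAMMA * support(x)

def basic_antecedents(y):
    cands = [x for x in CLOSED if is_antecedent(x, y)]
    minimal = [x for x in cands if not any(z < x for z in cands)]
    return [x for x in minimal
            if not any(y < yp and is_antecedent(x, yp) for yp in CLOSED)]

basis = [(x, y) for y in CLOSED for x in basic_antecedents(y)]
for x, y in sorted(basis, key=lambda r: (sorted(r[1]), sorted(r[0]))):
    print("".join(sorted(x)) or "{}", "->", "".join(sorted(y)))
```

The brute force only serves to make the three conditions of the definition concrete; both sides of each emitted rule are closed sets, as the definition requires.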
The use of generators instead of closed sets in the rules is discussed in several references. Following one of these styles, we would consider a minmax variant, which allows one to show the user minimal sets of antecedents together with all their nontrivial consequents. Following another, we would consider a minmin variant, thus reducing the total number of symbols if minimum-size generators are used, since we can pick any generator. Each of these known bases incurs the risk of picking, as left-hand sides of rules with the same closure of the right-hand side, more than one minimal generator of the same closure: this is where they may be (and, in actual cases, have been empirically found to be) larger than B*(γ), because, in a sense, they would keep all the variants in the basis. Facts analogous to the corollaries of the previous section hold as well if the closure condition is added throughout, and provide further alternative definitions of the same basis. We use one of them in our experimental setting, described in Section 4.6. We now see that this set of rules entails exactly the rules that reach the corresponding confidence threshold in the dataset:
Fix a dataset D and a confidence threshold γ. Let B be any basis for the implications that hold with confidence 1 in D.
All the rules in B*(γ) hold with confidence at least γ.
B*(γ) is a complete basis for the partial rules under closure-based redundancy.
All the rules in B*(γ) must indeed hold because all the left-hand sides are actually γ-antecedents. To prove that all the partial rules that hold are entailed by rules in