For a concrete example of how we shall define information summarization, consider the following weather report.
|Typhoon||✓|
|High winds||✓|
|Heavy rain||✓|
|Low visibility||✓|
|Snow||✗|
|Smog||✗|
|High UV index||✓|
Our stance is that such a report is overly detailed, and we wish to design a system that produces summaries such as the following.
|Typhoon||✓|
|Smog||✗|
|High UV index||✓|
In this example it is important to note that a typhoon already implies high winds, heavy rain, and low visibility, and strongly implies the absence of snow. At the same time, the presence of a typhoon does not generally indicate a high UV index or the absence of smog; these events should still be reported.
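As a toy illustration of this extractive behavior, the following Python sketch drops any event already implied by another reported event. The implication map is an assumption made for this example, not part of the model.

```python
# Toy extractive summarizer for the weather example: keep an event only if
# no other reported event already implies it. IMPLIES is an assumed map.

IMPLIES = {
    "typhoon": {"high winds", "heavy rain", "low visibility", "no snow"},
}

def extractive_summary(report):
    """Keep the events of `report` that no other reported event implies."""
    summary = []
    for event in report:
        implied = any(event in IMPLIES.get(other, set())
                      for other in report if other != event)
        if not implied:
            summary.append(event)
    return summary

report = ["typhoon", "high winds", "heavy rain", "low visibility",
          "no snow", "no smog", "high UV index"]
print(extractive_summary(report))  # → ['typhoon', 'no smog', 'high UV index']
```

Only the typhoon, the smog indication, and the UV index survive, matching the summary above.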
In the abstract, the goal of summarization is to reduce the dimension of data without excessive loss of “information.” This abstract notion is very similar to that of compression and rate-distortion theory [1, Chapters 5 and 10]; summarization is distinguished in two important ways. First, unlike compression and rate distortion, which feature both an encoder to construct the efficient representation and a decoder to interpret it, a summarizer consists of a single element, the output of which should be ready for immediate consumption. Second, we must take into account the importance of the underlying information, as opposed to simply its likelihood. For instance, smog may be less likely than a typhoon, but the typhoon is more essential to include given that both occur.
Despite these similarities to established concepts in information theory, to the best of our knowledge summarization has never been considered from the information-theoretic perspective. Instead, most of the literature lies within the natural language processing and machine learning communities; see [2, 3, 4] and references therein. The approach of these communities is to directly engage the general problem, searching for efficient practical solutions which provide empirically good results (for discussion of this viewpoint, see Simeone [5, Chapter 1]). This differs from a traditional information-theoretic approach, where a simplified model is established and analyzed in order to determine fundamental limits on the operational parameters, and to gain insight into how to achieve those limits (for discussion of this viewpoint, see Han [6, Preface]).
To simplify the model, we shall make the following assumptions. First, the data to be summarized is a length-$m$ binary sequence, which may have an arbitrary (not necessarily independent) probability distribution relating its symbols; while this distribution is arbitrary, every such sequence the summarizer observes is independent and identically distributed. Second, we assume that the summarizer's output must be “extractive,” meaning that the summarizer can only produce a subset of what is input, as in the weather example. Finally, we assume the existence of an arbitrary set function that can be used to measure the “semantic information” of a random variable. This last assumption will be further justified in Section III via example, but it is worth mentioning that, as shown by Yeung [7], Shannon's measure of (non-semantic) information (entropy) has such a representation. Spurred by this, Lin and Bilmes [3, 4] have recently argued for the use of submodular functions in an effort to axiomatically define a notion of semantic information. Regardless, we will make no assumptions on this function other than that it exists and is finite and positive.
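To make the first two assumptions concrete, a report over $m$ possible events can be represented interchangeably as an $m$-symbol binary sequence or as a subset of event indices. The sketch below (with hypothetical event labels) shows the correspondence.

```python
# Correspondence between the two report representations: an m-symbol binary
# sequence and a subset of event indices {0, ..., m-1}. Labels are assumed.

def to_subset(bits):
    """Indices of the events that occurred."""
    return {j for j, b in enumerate(bits) if b == 1}

def to_bits(subset, m):
    """Binary-sequence form of a set of occurred events."""
    return tuple(1 if j in subset else 0 for j in range(m))

m = 3                 # e.g. events 0: typhoon, 1: smog, 2: high UV index
report = (1, 0, 1)    # typhoon and high UV index occurred
assert to_subset(report) == {0, 2}
assert to_bits({0, 2}, m) == report
```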
Random variables (RVs) will be written in upper case, constants in lower case, and sets in calligraphic font. For example, $X$ can take on value $x$ from the set $\mathcal{X}$. An $n$-length sequence of random variables, constants, or sets will be denoted with the superscript $n$, such as $X^n = (X_1, \dots, X_n)$. Among sets, $\mathcal{P}(\mathcal{X}|\mathcal{Y})$ will hold special meaning as the set of conditional probability distributions on the set $\mathcal{X}$ when given a value from the set $\mathcal{Y}$. That is, if $p \in \mathcal{P}(\mathcal{X}|\mathcal{Y})$, then $p(\cdot|y) \in \mathcal{P}(\mathcal{X})$ for all $y \in \mathcal{Y}$. For convenience, given $\mathcal{A} \subseteq \mathcal{X}$, $\mathcal{P}(\mathcal{A})$ denotes the set of distributions in $\mathcal{P}(\mathcal{X})$ where only symbols in $\mathcal{A}$ have non-zero probability. The symbol $\sim$ will be used to relate probability distributions and random variables. For example, if $X \sim p$ and $p \in \mathcal{P}(\mathcal{X})$, then $\Pr(X = x) = p(x)$ for each $x \in \mathcal{X}$. When a set is used in a probability distribution, such as $p(\mathcal{A})$ for some $p \in \mathcal{P}(\mathcal{X})$ and $\mathcal{A} \subseteq \mathcal{X}$, it means $p(\mathcal{A}) = \sum_{x \in \mathcal{A}} p(x)$.
We shall use $\hat{p}_{x^n}$ to denote the empirical distribution of a sequence $x^n \in \mathcal{X}^n$, that is, $\hat{p}_{x^n}(a) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}\{x_i = a\}$ for each $a \in \mathcal{X}$. The set $\mathcal{T}_{x^n}$ denotes the set of $n$-length sequences with the same empirical distribution as $x^n$, that is, $\mathcal{T}_{x^n} = \{\tilde{x}^n \in \mathcal{X}^n : \hat{p}_{\tilde{x}^n} = \hat{p}_{x^n}\}$. It will be important to note that all sequences in $\mathcal{T}_{x^n}$ are equiprobable under any i.i.d. distribution.
Next, $\mathcal{P}_n(\mathcal{X})$ denotes the set of valid empirical distributions for $n$-length sequences of symbols from $\mathcal{X}$. Again it is important to note that $|\mathcal{P}_n(\mathcal{X})| \leq (n+1)^{|\mathcal{X}|}$, which grows only polynomially in $n$.
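The empirical-distribution and type-class notions above can be sketched as follows; `type_class_size` computes the multinomial coefficient counting the sequences that share a given type, a standard method-of-types quantity.

```python
from collections import Counter
from math import factorial, prod

# Empirical distribution (type) of a sequence and the size of its type class.

def empirical(seq):
    """Empirical distribution: fraction of positions holding each symbol."""
    n = len(seq)
    return {a: c / n for a, c in Counter(seq).items()}

def type_class_size(seq):
    """Number of sequences sharing seq's empirical distribution."""
    n = len(seq)
    counts = Counter(seq).values()
    return factorial(n) // prod(factorial(c) for c in counts)

x = (0, 1, 1, 0, 1)
assert empirical(x) == {0: 0.4, 1: 0.6}
assert type_class_size(x) == 10   # choose positions of the two zeros: C(5,2)
```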
Finally, $\mathbb{1}\{\cdot\}$ is the indicator function, equal to $1$ when its argument is true and $0$ otherwise, and $\mathbb{E}$ is the expected value operator, where the expectation is taken over all random variables.
III System Model and Justification
The objects to be summarized will be referred to as reports. A sequence of $k$ reports will be denoted by a sequence of RVs $X^k = (X_1, \dots, X_k)$, where $X_i \in \mathcal{X}$ and $\mathcal{X}$ is the finite set of possible reports. Without loss of generality we will assume $|\mathcal{X}| = 2^m$ for some positive integer $m$, and accordingly will refer to each $X_i$ as an $m$-symbol binary sequence, with $X_{i,j} = 1$ denoting possible event $j$ occurring for report $i$. From here forward we identify $\mathcal{X}$ with $\{0,1\}^m$, or equivalently the power set of $\{1, \dots, m\}$, for convenience.
Although given $k$ reports, the summarizer only needs to summarize the first, as shown in Figure 2; to summarize is to produce a subset of possible events and indicate whether or not each of them occurred. Formally, the summarizer produces a summary $(\mathcal{S}, V)$, where $\mathcal{S} \subseteq \{1, \dots, m\}$ is a subset of possible events and $V$ is the indication of whether or not the possible events in $\mathcal{S}$ occurred. Because the summarizer only needs to summarize the first report, we will refer to $X_1$ as the current report, and $X_2, \dots, X_k$ as the report history. Note that finding the optimal summary for $X_1$ also yields the optimal summary algorithm for every other report, since the reports are identically distributed.
Notice that a summary does not necessarily provide all of the information about the report, and moreover there are multiple reports for which a given summary may be the representative. A specific summary $(\mathcal{S}, v)$ may be the summary for any report that agrees with $v$ on the events in $\mathcal{S}$. (We write $x \rightarrow (\mathcal{S}, v)$ if summary $(\mathcal{S}, v)$ could be produced from report $x$; that is, $x \rightarrow (\mathcal{S}, v)$ if and only if $x_j = v_j$ for all $j \in \mathcal{S}$.)
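The consistency relation just described can be sketched directly: a summary is compatible with every report that agrees with its indications on the reported coordinates. The function name and encoding below are illustrative choices.

```python
from itertools import product

# All m-bit reports consistent with a summary (S, v): S is a set of event
# indices, v maps each index in S to its reported indication.

def consistent_reports(S, v, m):
    """All m-bit reports that summary (S, v) could have come from."""
    return [x for x in product((0, 1), repeat=m)
            if all(x[j] == v[j] for j in S)]

m = 3
S, v = {0, 2}, {0: 1, 2: 0}   # event 0 occurred, event 2 did not
print(consistent_reports(S, v, m))  # → [(1, 0, 0), (1, 1, 0)]
```

Coordinate 1 is unconstrained, so exactly two reports are consistent with this summary.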
Clearly, for a given summarization algorithm, not every report consistent with a summary necessarily generates that summary.
To relate the output of the summarizer to the input we introduce a conditional probability distribution called the summary interpretation,
The summary interpretation is the probability of a report given a summary, averaged uniformly over all possible summarizers. In this way it represents an end user who has absolutely no information about the summarizing algorithm, but knows perfectly the distribution of what is being summarized.
Having related the report and the summary, our goal will be to produce summaries which capture most of the “semantic information” of the current report. To capture “semantic information,” the model will include a set function denoting semantic weights. The semantic weights assign to each pair of report $x$ and subset $\mathcal{A} \subseteq \mathcal{X}$ a number representative of the importance that the summary convey $x \in \mathcal{A}$ (if $x \notin \mathcal{A}$, the weight is $0$). The motivation behind the semantic weights is best viewed through the weather example. With each report there are a number of different possible implications; for instance, the report might imply extra layers of clothing are needed, or that it is a nice day to go outside, or that a serious meteorological event is occurring. Each of these implications can be considered semantic information, as each has some intuitive meaning. Furthermore, each of these implications is valid for a large number of possible reports. In that sense, each set $\mathcal{A}$ is representative of some semantic meaning shared by the collective of reports in the set, and the weight represents how important this semantic meaning is to the report $x$.
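A minimal sketch of semantic weights for a two-event version of the weather example follows; the particular sets and numeric weights are assumptions for illustration only.

```python
# Hypothetical semantic weights: each report x is weighted over subsets A
# (semantic meanings); reports outside A get weight zero, as in the text.

# Represent reports as 2-bit tuples: (typhoon, high UV index).
SERIOUS_WEATHER = frozenset({(1, 0), (1, 1)})  # "a serious event is occurring"
NICE_DAY = frozenset({(0, 0)})                  # "a nice day to go outside"

def weight(x, A):
    if x not in A:
        return 0.0       # the weight is zero when x is not in A
    if A == SERIOUS_WEATHER:
        return 5.0       # conveying a serious event matters most (assumed)
    if A == NICE_DAY:
        return 1.0
    return 0.0

assert weight((1, 1), SERIOUS_WEATHER) == 5.0
assert weight((0, 0), SERIOUS_WEATHER) == 0.0   # x not in A
```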
Having defined all aspects endemic to the model, we now move to discussing the operational parameters. To aggregate and measure the performance of the summarizer, we shall use the semantic loss, which is the summarization analog of a distortion criterion in rate-distortion theory.
The semantic loss of summarizer $p_{S|X^k}$ with respect to reports $X^k$ and semantic weights $w$ is
$$\mathbb{E}\left[ \sum_{\mathcal{A} \subseteq \mathcal{X}} w(X_1, \mathcal{A}) \min_{q \in \mathcal{P}(\mathcal{A})} \frac{1}{2} \sum_{x \in \mathcal{X}} \left| p_{X_1|S}(x \mid S) - q(x) \right| \right],$$
where $p_{X_1|S}$ is the summary interpretation.
Consider the semantic loss when there is a single $\mathcal{A}$ with positive weight. In this case the semantic loss is the variational distance between the summary interpretation and the closest distribution under which only reports in $\mathcal{A}$ occur. Clearly, if only reports in $\mathcal{A}$ were possible given a particular summary, then this summary would losslessly convey that a report in $\mathcal{A}$ occurred. Using an $f$-divergence (see [8, Chapter 4]), namely variational distance, gives us a well-studied way to measure the distance between the summary interpretation and the convex set of distributions which perfectly convey $\mathcal{A}$. This distance is then averaged over all semantic meanings according to the semantic weights.
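The following sketch checks numerically the fact underlying this interpretation: for a single semantic meaning $\mathcal{A}$, the variational distance from a distribution $p$ to the closest distribution supported on $\mathcal{A}$ equals the mass $p$ places outside $\mathcal{A}$.

```python
# Variational (total variation) distance from p to the nearest distribution
# supported on A equals p's mass outside A. Toy values throughout.

def tv(p, q, support):
    """Total variation distance between two distributions."""
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

support = ["a", "b", "c", "d"]
p = {"a": 0.5, "b": 0.2, "c": 0.2, "d": 0.1}
A = {"a", "b"}

mass_outside = sum(p[x] for x in support if x not in A)  # 0.3

# Optimal q: keep p on A and move the outside mass anywhere inside A.
# Optimality: any q supported on A has tv(p, q) >= p(A^c) - q(A^c) = 0.3.
q = {"a": p["a"] + mass_outside, "b": p["b"]}
assert abs(tv(p, q, support) - mass_outside) < 1e-12
```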
We conclude the section with a more formal definition of a summarizer. For the purpose of easily specifying operational parameters, we shall refer to a summarizer by the conditional probability distribution $p_{S|X^k}$ relating the summary $S$ and the reports $X^k$.
A summarizer $p_{S|X^k}$ has length $a$ if every summary $(\mathcal{S}, V)$ it produces satisfies $|\mathcal{S}| \leq a$ with probability $1$, and has semantic loss $\ell$ for reports $X^k$ and semantic weights $w$ if its semantic loss, as defined above, is at most $\ell$.
III-A Universal summarization
In the universal setting, the summarizer is no longer aware of the distribution by which the reports are generated. Since we still assume the end user is aware of this distribution, the summary interpretation remains unchanged. But, as our results demonstrate, knowing the summary interpretation is of vital importance to cultivating good summarizers. Since the summary interpretation is no longer known, the summarizer must adapt itself based upon the report history.
To measure performance in this case, we will consider the semantic loss averaged uniformly over all possible distributions of the reports. In this way, we can ensure that the set of distributions for which the summarizer performs poorly is relatively small.
A summarizer has $\ell$-uniform average semantic loss for semantic weights $w$ if its semantic loss, averaged over report distributions drawn uniformly from the set of possible distributions, is at most $\ell$.
Our objective is to find optimal, or close to optimal, summarization algorithms. To this end, we first characterize the semantic loss of a given summarizer, and then use that characterization to determine which summarizer attains the smallest value.
Summarizer $p_{S|X^k}$ has a semantic loss for reports $X^k$ and semantic weights $w$ of
$$\mathbb{E}\left[ \sum_{\mathcal{A} \subseteq \mathcal{X}} w(X_1, \mathcal{A})\, p_{X_1|S}(\mathcal{X} \setminus \mathcal{A} \mid S) \right],$$
where $p_{X_1|S}$ is the summary interpretation.
The minimum semantic loss for reports $X^k$ and semantic weights $w$ is
$$\mathbb{E}\left[ \min_{s} \sum_{\mathcal{A} \subseteq \mathcal{X}} w(X_1, \mathcal{A})\, p_{X_1|S}(\mathcal{X} \setminus \mathcal{A} \mid s) \right],$$
where the minimization is over valid summaries of $X_1$.
See Appendix B for proof.
Lemma 4 demonstrates that the semantic loss is the weighted average of the summary interpretation's concentration outside each $\mathcal{A}$; that is, the semantic loss is the weighted average of the various semantic meanings being false under the summary interpretation. Corollary 5 in turn suggests a summarization algorithm to achieve it. In particular, given reports $x^k$, the summarizer selects the summary $s$ which minimizes $\sum_{\mathcal{A} \subseteq \mathcal{X}} w(x_1, \mathcal{A})\, p_{X_1|S}(\mathcal{X} \setminus \mathcal{A} \mid s)$.
To do so, though, requires that the summarizer know the distribution of the reports a priori.
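The selection rule suggested above can be sketched as a brute-force search over candidate summaries. The candidate summaries, interpretation, and weights below are toy values; in practice the interpretation must be known (or, in the universal setting, estimated).

```python
# Brute-force summary selection: pick the summary whose interpretation
# leaves the least weighted probability mass outside each semantic meaning.
# interp[s] is a toy distribution over reports for summary s; `weights`
# returns hypothetical (meaning, weight) pairs for the current report.

def semantic_objective(x1, s, interp, weights):
    total = 0.0
    for A, w in weights(x1):
        outside = sum(p for x, p in interp[s].items() if x not in A)
        total += w * outside
    return total

def best_summary(x1, candidates, interp, weights):
    return min(candidates, key=lambda s: semantic_objective(x1, s, interp, weights))

# Toy instance: two candidate summaries for reports in {0, 1, 2}.
interp = {"s0": {0: 0.7, 1: 0.3}, "s1": {1: 0.1, 2: 0.9}}
weights = lambda x1: [({0, 1}, 2.0), ({2}, 1.0)]

print(best_summary(0, ["s0", "s1"], interp, weights))  # → s0
```

Here "s0" wins because its interpretation concentrates on the heavily weighted meaning {0, 1}.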
When moving to the universal setting, the distribution of the reports is unknown and instead must be inferred from the report history. Here we seek to derive the uniform average semantic loss for semantic weights $w$, and then find the summarization algorithm that optimizes it.
Summarizer has a uniform average semantic loss for semantic weights of
See Appendix C for proof.
For positive integers such that ,
See Appendix D for proof.
See Appendix E for proof.
Note that the correction terms vanish as the report history grows, and thus Theorem 8 shows that a summarizer that seeks to minimize the expression above will, asymptotically in the length of the report history, minimize the uniform average semantic loss for semantic weights $w$. Hence, regardless of the set function used to characterize semantic meaning, the optimal summarizer still treats the underlying summary interpretation as if it were computed from the empirical distribution of the report history.
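A minimal sketch of this plug-in behavior: estimate the report distribution by the empirical distribution of the report history, then use that estimate wherever the true distribution would appear.

```python
from collections import Counter

# Plug-in estimate for the universal setting: with the true distribution
# unknown, use the empirical distribution of the report history in place of
# the report distribution when selecting summaries.

def plug_in_distribution(history):
    """Empirical distribution of the report history."""
    n = len(history)
    return {x: c / n for x, c in Counter(history).items()}

history = [(1, 0), (1, 0), (0, 1), (1, 0)]
print(plug_in_distribution(history))  # → {(1, 0): 0.75, (0, 1): 0.25}
```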
Going forward it will be important to derive representations for the semantic weights which are practical and perform well in practice. Indeed, one aspect not previously mentioned is that for any “optimal” summary, regardless of how optimality is defined, there is a set of semantic weights under which it is also the optimal summary in our model. To see this, consider an optimal (deterministic) summarizer defined by a mapping $f$ from reports to summaries, and recognize that it is also optimal in our model for semantic weights that, for each report $x$, place all weight on the set of reports consistent with the summary $f(x)$.
While the above construction is not particularly edifying, it does demonstrate the generality of our model. Nevertheless, determining simple semantic weights that perform well in practice would further validate the presented model.
-  T. M. Cover and J. A. Thomas, Elements of information theory. New York, NY, USA: Wiley-Interscience, 2nd ed., 2006.
-  C.-Y. Lin, G. Cao, J. Gao, and J.-Y. Nie, “An information-theoretic approach to automatic evaluation of summaries,” in Proc. NAACL-HLT, Stroudsburg, PA, USA, pp. 463–470, Association for Computational Linguistics, 2006.
-  H. Lin and J. Bilmes, “Multi-document summarization via budgeted maximization of submodular functions,” in Proc. NAACL-HLT, pp. 912–920, Association for Computational Linguistics, 2010.
-  H. Lin and J. Bilmes, “Learning mixtures of submodular shells with application to document summarization,” in Proc. UAI, Arlington, VA, USA, pp. 479–490, AUAI Press, 2012.
-  O. Simeone, “A brief introduction to machine learning for engineers,” CoRR, vol. abs/1709.02840, 2017.
-  T. S. Han, Information-Spectrum Methods in Information Theory. Applications of mathematics, Springer, 2003.
-  R. W. Yeung, “A new outlook on Shannon's information measures,” IEEE Trans. Inf. Theory, vol. 37, no. 3, pp. 466–474, 1991.
-  I. Csiszár and P. C. Shields, “Information theory and statistics: A tutorial,” Foundations and Trends in Communications and Information Theory, vol. 1, no. 4, pp. 417–528, 2004.
Appendix A Lemmas
for all non-negative integers and real number .
First observe that
The final integral can be found in , but we include it for completeness. Specifically,
by using integration by parts (setting and ). Thus
from recursion. ∎
Distribution , where , is the uniform distribution over
, is the uniform distribution over.
First note that is a convex set
where Hence if
is the uniform probability density function over , then there exists a positive real constant such that and
for all positive integers and real numbers .
First note that is convex in since , and thus
For positive real numbers,
For any positive real number , by definition
Hence we must show that is a monotonically increasing function of for all , since then
This derivative is always positive, since the function for all . Indeed since
For positive integers , and positive integer ,
That , where follows directly from
for all positive values of . ∎
For positive integers such that
For the lemma follows because and for all
Appendix B Proof of Lemma 4
Appendix C Proofs of main results
To begin the proof, note the uniform average semantic loss for a given summarizer can be written
and is uniform over , due to the linearity of integration. The proof proceeds by evaluating and specifically showing
To help in evaluating the integrals, assume that , and let
for all Of importance throughout the proof will be that
for all integers and that
Two notable values are and . Also, without loss of generality assume that , and
With this new notation
where , since by Corollary 10 and is the convex set
Of the two integrals in (16) we shall only show
follows similarly. At this point note that Equation (13) directly follows from Equations (16), (17) and (18), so validating Equation (17) would finish the proof. We shall prove Equation (17) through a rather tedious recursion process. To aid in this recursion we shall, in an abuse of notation, write to denote the convex set
and use to denote the differential sequence
Write the LHS of (17)
where (20) follows via Equation (14), this time applied to the remaining term. This process of using Lemma 9 to evaluate the integral, and then using Equation (14) to put the result into a form which can be evaluated using Lemma 9, can be repeated to evaluate the integrals over the remaining variables; doing so yields
At this point the recursion no longer directly applies since the next variable of integration, , is contained in the denominator of the fraction. To address this, use the Taylor series expansion of , specifically as follows