# An information theoretic model for summarization, and some basic results

A basic information theoretic model for summarization is formulated. Here summarization is considered as the process of taking a report of $v$ binary objects and producing from it a $j$-element subset that captures most of the important features of the original report, with importance defined via an arbitrary set function endemic to the model. The loss of information is then measured by a weighted average of variational distances, which we term the semantic loss. Our results cover both the case where the probability distribution generating the $v$-length reports is known and the case where it is unknown. When the distribution is known, our results demonstrate how to construct summarizers which minimize the semantic loss. When it is unknown, we show how to construct summarizers whose semantic loss, averaged uniformly over all possible distributions, converges to the minimum.


## I Introduction

For a concrete example of how we shall define information summarization, consider the following weather report.

Phenomena:

- High winds
- High UV index
- Heavy rain
- Snow
- Low visibility
- Smog
- Typhoon

Our stance is that such a report is overly detailed, and we wish to design a system that produces summaries such as the following.

Phenomena:

- High UV index
- Typhoon

or

Phenomena:

- High UV index
- Smog
- Typhoon

In this example it is important to note that a typhoon already implies high winds, heavy rain, and low visibility, and heavily implies the absence of snow. At the same time, the presence of a typhoon does not generally indicate a high UV index or the lack of smog; these events should still be reported.

In the abstract, the goal of summarization is to reduce the dimension of data without excessive loss of “information.” This abstract notion is very similar to that of compression and rate-distortion theory [1, Chapters 5 and 10]; summarization is distinguished in two important ways. First, unlike compression and rate distortion, which feature both an encoder to construct the efficient representation and a decoder to interpret this representation, a summarizer has only a single element, the output of which should be ready for immediate consumption. Second, we must take into account the importance of the underlying information, as opposed to simply its likelihood. For instance, smog may be less likely than a typhoon, but the typhoon is more essential to include given that both occur.

Despite similarities to established concepts in information theory, to the best of our knowledge summarization has never been considered from the information theoretic perspective. Instead, most of the literature exists within the natural language processing and machine learning communities; see [2, 3, 4] and references therein. The approach of these communities is to engage the general problem directly, searching for efficient practical solutions which provide empirically good results (for discussion on this viewpoint, see Simeone [5, Chapter 1]). This differs from a traditional information theoretic approach, where a simplified model is established and analyzed in order to determine fundamental limits of the operational parameters, and to gain insight into how to achieve those limits (for discussion on this viewpoint, see Han [6, Preface]).

To simplify this model, we shall make the following assumptions. First, the data to be summarized is a length-$v$ binary sequence, which has an arbitrary (not necessarily independent) probability distribution relating each symbol. While the distribution over a single length-$v$ binary sequence is arbitrary, the length-$v$ sequences the summarizer observes are independent and identically distributed. Second, we assume that the summarizer's output must be “extractive,” meaning that the summarizer can only produce a subset of what is input, as in the weather example. Finally, we assume the existence of an arbitrary set function that can be used to measure the “semantic information” of a random variable. This last assumption will be further justified in Section III via example, but it is worth mentioning that, as shown by Yeung [7], Shannon's measure of (nonsemantic) information (entropy) has such a representation. Spurred by this, Lin and Bilmes [3, 4] have recently argued for the use of submodular functions in an effort to axiomatically define a notion of semantic information. Regardless, we will make no assumptions on this function other than that it exists and is finite and positive.

## II Notation

Random variables (RVs) will be written in upper case, constants in lower case, and sets in calligraphic font; for example, RV $X$ takes values $x$ from the set $\mathcal{X}$. A length-$n$ sequence of random variables, constants, or sets will be denoted with the power $n$, such as $X^n$. Among sets, $\mathcal{P}(\mathcal{Y}|\mathcal{X})$ will hold special meaning as the set of conditional probability distributions on the set $\mathcal{Y}$ when given a value from the set $\mathcal{X}$; that is, if $q \in \mathcal{P}(\mathcal{Y}|\mathcal{X})$, then $q_x \in \mathcal{P}(\mathcal{Y})$ for all $x \in \mathcal{X}$. For convenience, given $\mathcal{W} \subseteq \mathcal{Y}$, $\mathcal{P}(\mathcal{W}|\mathcal{X})$ denotes the subset of $\mathcal{P}(\mathcal{Y}|\mathcal{X})$ in which only symbols in $\mathcal{W}$ have non-zero probability. The symbol $\sim$ will be used to relate probability distributions and random variables; for example, $X \sim p$ means that RV $X$ is distributed according to $p$. When a set is used in a probability distribution, such as $p(\mathcal{A})$ for some $\mathcal{A} \subseteq \mathcal{X}$, it means $p(\mathcal{A}) = \sum_{x \in \mathcal{A}} p(x)$.

We shall use $\pi(x^n)$ to denote the empirical distribution of a sequence $x^n$. The set $T_n(x^n)$ denotes the set of $n$-length sequences with the same empirical distribution as $x^n$. It will be important to note that

$$|T_n(x^n)| = \binom{n}{n\pi(x^n)(1),\,\dots,\,n\pi(x^n)(|\mathcal{X}|)}.$$

Next, $P_n(\mathcal{X})$ denotes the set of valid empirical distributions for $n$-length sequences of symbols from $\mathcal{X}$. Again it is important to note that

$$|P_n(\mathcal{X})| = \binom{n+|\mathcal{X}|-1}{|\mathcal{X}|-1}.$$
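As a quick sanity check of these two counting formulas, the sketch below (function and variable names are our own) computes $|T_n(x^n)|$ as a multinomial coefficient and $|P_n(\mathcal{X})|$ as a binomial coefficient, and verifies by enumeration that the number of distinct empirical distributions of binary length-$n$ sequences matches $|P_n(\mathcal{X})|$.

```python
import math
from collections import Counter
from itertools import product

def type_class_size(xn):
    # |T_n(x^n)|: multinomial coefficient n! / prod_a (n*pi(x^n)(a))!
    n = len(xn)
    size = math.factorial(n)
    for count in Counter(xn).values():
        size //= math.factorial(count)
    return size

def num_types(n, alphabet_size):
    # |P_n(X)| = C(n + |X| - 1, |X| - 1): number of empirical distributions
    return math.comb(n + alphabet_size - 1, alphabet_size - 1)

# Enumerate all binary sequences of length n and collect their types;
# the number of distinct types should be exactly |P_n(X)|.
n, alphabet = 5, (0, 1)
types_seen = {tuple(sorted(Counter(xn).items())) for xn in product(alphabet, repeat=n)}
assert len(types_seen) == num_types(n, len(alphabet))
```

For instance, `type_class_size((0, 0, 1))` counts the three arrangements of two zeros and a one.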

Next, $\mathbb{1}_{\hat{\mathcal{X}}}$ is the indicator function,

$$\mathbb{1}_{\hat{\mathcal{X}}}(x) = \begin{cases} 1 & \text{if } x \in \hat{\mathcal{X}} \\ 0 & \text{otherwise}, \end{cases}$$

and $\mathbb{E}$ is the expected value operator, where the expectation is taken over all random variables.

## III System Model and Justification

The objects to be summarized will be referred to as reports. A sequence of $n$ reports will be denoted by a sequence of RVs $X^n = X(1), \dots, X(n)$, where each $X(i)$ takes values in $\mathcal{X}$, the finite set of possible reports. Without loss of generality we will assume $|\mathcal{X}| = 2^v$ for some positive integer $v$, and accordingly will refer to a report as a $v$-symbol binary sequence, with symbol $a$ equal to $1$ denoting possible event $a$ occurring for that report. From here forward, $\mathcal{V}$ will denote the set of $v$ possible events, and $2^{\mathcal{V}}$ its power set, for convenience.

Although given $n$ reports, the summarizer only needs to summarize the first, as shown in Figure 2; to summarize is to produce a subset of possible events and indicate whether or not they each occurred. Formally, the summarizer produces $Y = (\mathcal{E}, c^j)$, where $\mathcal{E} \subseteq \mathcal{V}$ is a subset containing $j$ possible events and $c^j \in \{0,1\}^j$ is the indication of whether or not the possible events in $\mathcal{E}$ occurred. Because the summarizer only needs to summarize the first report, we will refer to $X(1)$ as the current report, and $X(2), \dots, X(n)$ as the report history. Note that finding the optimal summary for $X(1)$ also finds the optimal summary algorithm for the other reports, since they are identically distributed.

Notice that a summary does not necessarily provide all of the information about the report, and moreover there are multiple reports for which a given summary may be the representative. A specific summary $y$ may be the summary for any report in (writing $y \subset x$ if summary $y$ could lead to report $x$, that is, if the indications in $y$ match $x$ on every event $y$ reports)

$$\mathcal{X}(y) \triangleq \{x \in \mathcal{X} : y \subset x\}.$$

Clearly, for a given summarization algorithm, each $x \in \mathcal{X}(y)$ does not necessarily generate summary $y$.

To relate the output of the summarizer to the input we introduce a conditional probability distribution called the summary interpretation,

$$i_y(x) \triangleq \mathbb{1}_{\mathcal{X}(y)}(x)\,\frac{p(x)}{p(\mathcal{X}(y))}.$$

The summary interpretation is equal to the probability of a report for a given summary when averaged uniformly over all possible summarizers. In this way it represents an end user which has absolutely no information about the summarizing algorithm, but knows perfectly the distribution of what is being summarized.

Having related the report and the summary, our goal will be to produce summaries which capture most of the “semantic information” of the current report. In order to capture “semantic information,” included in the model will be a set function $u$ denoting semantic weights. The semantic weights assign to each pair of report $x \in \mathcal{X}$ and subset $\mathcal{W} \subseteq \mathcal{X}$ a number $u(x,\mathcal{W})$ representative of the importance that the summary convey $\mathcal{W}$ (if $x \notin \mathcal{W}$ then $u(x,\mathcal{W}) = 0$). The motivation behind the semantic weights is best viewed through the weather example. With each report there are a number of different possible implications; for instance, the report might imply extra layers of clothing are needed, or that it is a nice day to go outside, or that a serious meteorological event is occurring. Each of these implications can be considered semantic information, as it has some intuitive meaning. Furthermore, each of these implications is valid for a large number of possible reports. In that sense, each set $\mathcal{W}$ is representative of some semantic meaning shared between the collective of reports in the set, and $u(x,\mathcal{W})$ represents how important this semantic meaning is to the report $x$.
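To make the semantic-weights idea concrete, here is a toy encoding of the weather example. All event names, the chosen meaning, and the weight value are our own illustrative assumptions, not part of the model: a semantic meaning is simply a set of reports sharing an implication, weighted by its importance.

```python
from itertools import combinations

# Reports are frozensets of occurring events (illustrative event names).
EVENTS = ("high_winds", "high_uv", "heavy_rain", "snow",
          "low_visibility", "smog", "typhoon")

def all_reports():
    # every subset of EVENTS is a possible report
    for r in range(len(EVENTS) + 1):
        for subset in combinations(EVENTS, r):
            yield frozenset(subset)

# Semantic meaning W: "a serious meteorological event is occurring" --
# the set of all reports containing a typhoon.
SERIOUS = frozenset(x for x in all_reports() if "typhoon" in x)

def u(x, W):
    # Toy semantic weight: importance 5.0 for conveying "serious event"
    # when it is true of report x, 0 otherwise (u(x, W) = 0 when x not in W).
    if W == SERIOUS and x in W:
        return 5.0
    return 0.0
```

Here `u` assigns weight only to the pair (report containing a typhoon, "serious event" meaning); a richer model would sum over many such meanings.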

Having defined all aspects endemic to the model, we now move to discussing the operational parameters. To aggregate and measure the performance of the summarizer, we shall use the semantic loss, which is the summarizer analog of a distortion criterion in rate distortion theory.

###### Definition 1.

The semantic loss of $X$ to $Y$ with semantic weights $u$ is

$$\ell(X;Y|u) \triangleq \mathbb{E}\left[\sum_{\mathcal{W}\subseteq\mathcal{X}} \inf_{q\in\mathcal{P}(\mathcal{W}|\mathcal{X})} u(X,\mathcal{W}) \sum_{x\in\mathcal{X}} \frac{|q_X(x)-i_Y(x)|}{2}\right].$$

Consider the semantic loss when there is a single $\mathcal{W}$ with positive semantic weight. In this case the semantic loss is the variational distance between the summary interpretation and the closest distribution under which only reports in $\mathcal{W}$ occur. Clearly, if only reports in $\mathcal{W}$ were possible given a particular summary, then this summary would losslessly convey that a report in $\mathcal{W}$ occurred. Using an $f$-divergence (see [8, Chapter 4]), namely variational distance, gives us a well studied way to measure the distance between the summary interpretation and the convex set of distributions which perfectly convey $\mathcal{W}$. This distance is then averaged over all semantic meanings according to the semantic weights.
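The single-$\mathcal{W}$ case can be checked numerically. The sketch below (the distribution and the set $\mathcal{W}$ are randomly generated illustrations) builds an explicit distribution supported on $\mathcal{W}$ and confirms that its variational distance to the summary interpretation equals the closed form $1-i_y(\mathcal{W})$.

```python
import random

def variational_distance(p, q, support):
    # (1/2) * sum_x |p(x) - q(x)|
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def closest_in_W(i_y, W):
    # A distribution supported on W achieving distance 1 - i_y(W):
    # keep i_y's mass on W and spread the missing mass evenly over W.
    deficit = 1.0 - sum(i_y.get(x, 0.0) for x in W)
    return {x: i_y.get(x, 0.0) + deficit / len(W) for x in W}

random.seed(0)
support = list(range(6))
weights = [random.random() for _ in support]
total = sum(weights)
i_y = {x: w / total for x, w in zip(support, weights)}  # summary interpretation
W = {0, 2, 3}                                           # a semantic meaning

q = closest_in_W(i_y, W)
achieved = variational_distance(i_y, q, support)
lower = 1.0 - sum(i_y[x] for x in W)  # the closed form 1 - i_y(W)
assert abs(achieved - lower) < 1e-12
```

This mirrors the two-sided argument of Appendix B: no distribution on $\mathcal{W}$ can do better, and this one meets the bound.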

We conclude the section with a more formal definition of a summarizer. For the purpose of easily specifying operational parameters, we shall refer to a summarizer by the conditional probability distribution $s$ relating the summary $Y$ to the reports $X^n$.

###### Definition 2.

For each positive integer $j$ and $n$, a summarizer $s$ has length $j$ if

$$\mathcal{Y} = \binom{\mathcal{V}}{j} \times \{0,1\}^j$$

and has semantic loss $\delta$ for reports $X^n$ and semantic weights $u$ if

$$\ell(X(1);Y|u) \leq \delta$$

for $Y \sim s_{X^n}$.

### III-A Universal summarization

In the universal setting, the summarizer is no longer aware of the distribution by which reports are generated. Since we still assume the end user is aware of this distribution, the summary interpretation remains unchanged. But, as our results demonstrate, knowing the summary interpretation is of vital importance to cultivating good summarizers. Since the summary interpretation is no longer known, the summarizer must be able to adapt itself based upon the report history.

To measure the performance in this case, we will consider the semantic loss averaged uniformly over all possible distributions of the reports. In that way, we can ensure that the number of distributions for which the summarizer performs poorly is relatively small.

###### Definition 3.

A summarizer has $\delta$-uniform average semantic loss for semantic weights $u$ if

$$\int_{\mathcal{P}(\mathcal{X})} \ell(X(1);Y|u)\,dr \leq \delta$$

for $X(1) \sim r$ with $r$ uniform over $\mathcal{P}(\mathcal{X})$.

## IV Results

Our objective is to find optimal, or close to optimal, summarization algorithms. To this end, we first characterize the semantic loss of a summarizer, and then use that characterization to determine which summarizer produces the smallest value.

###### Lemma 4.

Summarizer $s$ has a semantic loss for reports $X^n$ and semantic weights $u$ of

$$\sum_{x\in\mathcal{X},\,y\in\mathcal{Y}} p(x)\,s_x(y) \sum_{\mathcal{W}\subseteq\mathcal{X}} u(x,\mathcal{W})\,\big(1-i_y(\mathcal{W})\big),$$

where $s_x(y)$ denotes the probability the summarizer outputs $y$ when the current report is $x$.

###### Corollary 5.

The minimum semantic loss for reports $X^n$ and semantic weights $u$ is

$$\sum_{x} p(x) \min_{y\in\mathcal{Y}} \sum_{\mathcal{W}\subseteq\mathcal{X}} u(x,\mathcal{W})\,\big(1-i_y(\mathcal{W})\big).$$

See Appendix B for proof.

Lemma 4 demonstrates that the semantic loss is the weighted average of the summary interpretation's concentration outside the sets $\mathcal{W}$; that is, the semantic loss is the weighted average of the various semantic meanings being false under the summary interpretation. Corollary 5 also suggests a summarization algorithm to achieve it. In particular, given reports $x^n$, the summarizer selects the summary $y$ which minimizes

$$\sum_{\mathcal{W}} u(x(1),\mathcal{W})\left(1-\frac{p(\mathcal{W}\cap\mathcal{X}(y))}{p(\mathcal{X}(y))}\right).$$

Doing so, though, requires that the summarizer know $p$ a priori.
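When $p$ is known, Corollary 5's minimizer can be implemented by brute force for tiny alphabets. The sketch below is a hypothetical two-event world with an illustrative distribution and two semantic meanings; it enumerates all length-1 extractive summaries consistent with the current report and picks the one minimizing $\sum_{\mathcal{W}} u(x(1),\mathcal{W})(1-i_y(\mathcal{W}))$.

```python
from itertools import combinations

V = ("storm", "smog")  # possible events (illustrative)
reports = [frozenset(s) for r in range(3) for s in combinations(V, r)]
# A toy known distribution p over reports (illustrative values)
p = {frozenset(): 0.4, frozenset({"storm"}): 0.3,
     frozenset({"smog"}): 0.2, frozenset({"storm", "smog"}): 0.1}

def consistent(y, x):
    # y = (events, indications); x must agree with y on the reported events
    events, bits = y
    return all((e in x) == b for e, b in zip(events, bits))

def i_y_mass(y, W):
    # summary interpretation mass on W: p(W ∩ X(y)) / p(X(y))
    Xy = [x for x in reports if consistent(y, x)]
    denom = sum(p[x] for x in Xy)
    return sum(p[x] for x in Xy if x in W) / denom

# Two illustrative semantic meanings: "a storm occurred", "smog occurred"
meanings = [frozenset(x for x in reports if "storm" in x),
            frozenset(x for x in reports if "smog" in x)]

def u(x, W):
    return 1.0 if x in W else 0.0

def best_summary(x1, j=1):
    # enumerate all j-event extractive summaries consistent with x1
    candidates = [(ev, tuple(e in x1 for e in ev)) for ev in combinations(V, j)]
    return min(candidates,
               key=lambda y: sum(u(x1, W) * (1.0 - i_y_mass(y, W))
                                 for W in meanings))

y = best_summary(frozenset({"storm", "smog"}))
```

With these numbers the summarizer reports smog rather than the storm, since the rarer event pins down the interpretation better, echoing the typhoon/smog intuition from the introduction.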

When moving to the universal setting the value of $p$ is unknown, and instead the distribution has to be inferred from the reports. Here we seek to derive the uniform average semantic loss for semantic weights $u$, and then find the summarization algorithm to optimize it.

###### Theorem 6.

Summarizer $s$ has a uniform average semantic loss for semantic weights $u$ of

$$\sum_{x^n\in\mathcal{X}^n,\,y\in\mathcal{Y},\,\mathcal{W}\subseteq\mathcal{X}} s_{x^n}(y)\,u(x(1),\mathcal{W})\,\frac{1-\eta_{x^n,y,\mathcal{W}}}{|T_n(x^n)|\,|P_n(\mathcal{X})|},$$

where

$$\eta_{x^n,y,\mathcal{W}} \triangleq q^{(x^n)}(\mathcal{W}\cap\mathcal{X}(y)) \cdot \sum_{k=0}^{\infty} \frac{\big(n+|\mathcal{X}|-(n+|\mathcal{X}|+1)\hat q^{(x^n)}(\mathcal{X}(y))+k\big)!\;\big(n+|\mathcal{X}|\big)!}{\big(n+|\mathcal{X}|-(n+|\mathcal{X}|+1)\hat q^{(x^n)}(\mathcal{X}(y))\big)!\;\big(n+|\mathcal{X}|+k\big)!},$$

$$q^{(x^n)}(a) \triangleq \frac{n\pi(x^n)(a)+1}{n+|\mathcal{X}|} \quad \forall a\in\mathcal{X},$$

$$\hat q^{(x^n)}(a) \triangleq \begin{cases} \dfrac{n\pi(x^n)(a)+2}{n+|\mathcal{X}|+1} & \text{if } a=x(1) \\[2mm] \dfrac{n\pi(x^n)(a)+1}{n+|\mathcal{X}|+1} & \text{else} \end{cases} \quad \forall a\in\mathcal{X}.$$

See Appendix C for proof.

Theorem 6, though, unlike Lemma 4, is not a closed-form solution. In order to assuage this malady the following approximation is provided.

###### Lemma 7.

For positive integers $b$ and $c$ such that $b < c$,

$$\frac{c+1}{c-b} \leq \sum_{t=0}^{\infty} \frac{(b+t)!\,c!}{(c+t)!\,b!} \leq \frac{c+1}{c-b}\big(1+\varepsilon(c-b)\big)$$

where

$$\varepsilon(a) = 3\,\frac{1+\ln(a)}{a} + 4e^{\frac{1}{12}} \cdot 2^{-\frac{a}{2}}.$$

See Appendix D for proof.
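Lemma 7's bounds can be spot-checked numerically. The sketch below truncates the series (safe here since the terms decay polynomially of order $c-b$) and compares the partial sum against both bounds for a few sample values of $b < c$.

```python
import math

def eps(a):
    # ε(a) = 3(1 + ln a)/a + 4 e^{1/12} 2^{-a/2}
    return (3.0 * (1.0 + math.log(a)) / a
            + 4.0 * math.exp(1.0 / 12.0) * 2.0 ** (-a / 2.0))

def series(b, c, terms=20000):
    # partial sum of sum_{t>=0} (b+t)! c! / ((c+t)! b!);
    # successive term ratio is (b+t+1)/(c+t+1), so terms decay like t^{-(c-b)}
    total, term = 0.0, 1.0
    for t in range(terms):
        total += term
        term *= (b + t + 1) / (c + t + 1)
    return total

for b, c in [(3, 10), (5, 20), (10, 50)]:
    s = series(b, c)
    lower = (c + 1) / (c - b)
    upper = lower * (1.0 + eps(c - b))
    assert lower <= s + 1e-9 <= upper + 1e-9
```

The lower bound is quite tight; the $\varepsilon$ correction in the upper bound shrinks quickly as $c-b$ grows.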

Using Theorem 6 and Lemma 7 will allow us to construct a theorem analogous to Corollary 5.

###### Theorem 8.

The minimum uniform average semantic loss for semantic weights $u$ is equal to

$$-\lambda_n + \sum_{x^n\in\mathcal{X}^n} \frac{\mu(x^n)}{|T_n(x^n)|\,|P_n(\mathcal{X})|},$$

where

$$\mu(x^n) \triangleq \min_{y\in\mathcal{Y}} \sum_{\mathcal{W}\subseteq\mathcal{X}} u(x(1),\mathcal{W})\left(1 - \frac{q^{(x^n)}(\mathcal{W}\cap\mathcal{X}(y))}{\hat q^{(x^n)}(\mathcal{X}(y))}\right),$$

with $q^{(x^n)}$ and $\hat q^{(x^n)}$ from Theorem 6, while $\lambda_n$ satisfies

$$0 < \lambda_n,$$

with the upper bound on $\lambda_n$ given in terms of $\varepsilon$ from Lemma 7.

See Appendix E for proof.

Note that $\lambda_n$ vanishes as $n$ grows, and thus Theorem 8 shows that a summarizer that selects $y$ to minimize

$$\sum_{\mathcal{W}\subseteq\mathcal{X}} u(x(1),\mathcal{W})\left(1 - \frac{q^{(x^n)}(\mathcal{W}\cap\mathcal{X}(y))}{\hat q^{(x^n)}(\mathcal{X}(y))}\right)$$

will, asymptotically with the report history, minimize the uniform average semantic loss for semantic weights $u$. Hence, regardless of the set function used to characterize semantic meaning, the optimal summarizer still treats the underlying summary interpretation as one derived from the smoothed empirical distributions $q^{(x^n)}$ and $\hat q^{(x^n)}$.
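The estimators in Theorem 8 are add-one style smoothings of the empirical distribution, with $\hat q^{(x^n)}$ giving the current report $x(1)$ one extra count. A minimal sketch (the alphabet and report history are illustrative):

```python
from collections import Counter

def q_smoothed(xn, alphabet):
    # q^{(x^n)}(a) = (n*pi(x^n)(a) + 1) / (n + |X|): add-one (Laplace) smoothing
    n, counts = len(xn), Counter(xn)
    return {a: (counts[a] + 1) / (n + len(alphabet)) for a in alphabet}

def q_hat(xn, alphabet, current):
    # hat q^{(x^n)}(a): as above, but the current report x(1) gets one extra
    # count and the normalizer becomes n + |X| + 1
    n, counts = len(xn), Counter(xn)
    return {a: (counts[a] + (2 if a == current else 1))
               / (n + len(alphabet) + 1)
            for a in alphabet}

alphabet = ["00", "01", "10", "11"]  # |X| = 4 (illustrative, v = 2)
xn = ["11", "01", "11", "11", "00"]  # n = 5 reports; x(1) = "11"
q = q_smoothed(xn, alphabet)
qh = q_hat(xn, alphabet, xn[0])
assert abs(sum(q.values()) - 1.0) < 1e-12
assert abs(sum(qh.values()) - 1.0) < 1e-12
```

Both estimators assign positive mass to unseen reports, which keeps the ratio in Theorem 8 well defined for every candidate summary.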

## V Conclusion

Going forward it will be important to derive representations for the semantic weights which are practical and perform well in practice. Indeed, one aspect not previously mentioned is that for any “optimal” summary, regardless of how optimal is defined, there is a set of semantic weights such that it is also the optimal summary in our model. To see this, consider an optimal (deterministic) summarizer defined by the mapping $x \mapsto y_x$, and recognize that this is also an optimal summarizer in our model for semantic weights

$$u(x,\mathcal{W}) = \begin{cases} 1 & \text{if } \mathcal{W} = \mathcal{X}(y_x) \\ 0 & \text{otherwise}. \end{cases}$$

While the above is clearly not edifying, it does demonstrate the generality of our model. Nevertheless, determination of simple semantic weights that perform well in practice would validate the presented model.

## References

• [1] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York, NY, USA: Wiley-Interscience, 2nd ed., 2006.
• [2] C.-Y. Lin, G. Cao, J. Gao, and J.-Y. Nie, “An information-theoretic approach to automatic evaluation of summaries,” in NAACL-HLT, (Stroudsburg, PA, USA), pp. 463–470, Association for Computational Linguistics, 2006.
• [3] H. Lin and J. Bilmes, “Multi-document summarization via budgeted maximization of submodular functions,” in NAACL-HLT, pp. 912–920, Association for Computational Linguistics, 2010.
• [4] H. Lin and J. Bilmes, “Learning mixtures of submodular shells with application to document summarization,” in UAI, (Arlington, Virginia, United States), pp. 479–490, AUAI Press, 2012.
• [5] O. Simeone, “A brief introduction to machine learning for engineers,” CoRR, vol. abs/1709.02840, 2017.
• [6] T. S. Han, Information-Spectrum Methods in Information Theory. Applications of Mathematics, Springer, 2003.
• [7] R. W. Yeung, “A new outlook of Shannon's information measures,” IEEE Trans. Inf. Theory, vol. 37, no. 3, pp. 466–474, 1991.
• [8] I. Csiszár and P. C. Shields, “Information theory and statistics: A tutorial,” Foundations and Trends in Communications and Information Theory, vol. 1, no. 4, pp. 417–528, 2004.

## Appendix A Lemmas

###### Lemma 9.
$$\int_0^c (c-x)^a x^b \,dx = \frac{a!\,b!}{(a+b+1)!}\,c^{a+b+1}$$

for all non-negative integers $a$ and $b$ and real number $c > 0$.

###### Proof:

First observe that

$$\int_0^c (c-x)^a x^b\,dx = c^{a+b}\int_0^c \left(1-\frac{x}{c}\right)^a \left(\frac{x}{c}\right)^b dx = c^{a+b+1}\int_0^1 (1-t)^a t^b\,dt. \tag{1}$$

The final integral can be found in [1], but we include it for completeness. Specifically,

$$\int_0^1 (1-t)^a t^b\,dt = \left[\frac{t^{b+1}}{b+1}(1-t)^a\right]_0^1 + \frac{a}{b+1}\int_0^1 (1-t)^{a-1} t^{b+1}\,dt = \frac{a}{b+1}\int_0^1 (1-t)^{a-1} t^{b+1}\,dt,$$

by using integration by parts (setting $u = (1-t)^a$ and $dv = t^b\,dt$). Thus

$$\int_0^1 (1-t)^a t^b\,dt = \frac{a!\,b!}{(a+b+1)!},$$

from recursion. ∎
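Lemma 9 is the classical Beta-function identity; it can be spot-checked with simple numerical quadrature (composite Simpson's rule below, with illustrative values of $a$, $b$, $c$).

```python
import math

def simpson(f, lo, hi, steps=2000):
    # composite Simpson's rule; steps must be even
    h = (hi - lo) / steps
    total = f(lo) + f(hi)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(lo + i * h)
    return total * h / 3.0

def lemma9_rhs(a, b, c):
    # a! b! / (a + b + 1)! * c^{a+b+1}
    return (math.factorial(a) * math.factorial(b)
            / math.factorial(a + b + 1) * c ** (a + b + 1))

a, b, c = 3, 2, 1.5
numeric = simpson(lambda x: (c - x) ** a * x ** b, 0.0, c)
assert abs(numeric - lemma9_rhs(a, b, c)) < 1e-8
```

Since the integrand is a degree-$(a+b)$ polynomial, Simpson's rule with this many steps agrees with the closed form to well below the asserted tolerance.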

###### Corollary 10.

Distribution $r$, where $r(p) = (|\mathcal{X}|-1)!$ for all $p \in \mathcal{P}(\mathcal{X})$, is the uniform distribution over $\mathcal{P}(\mathcal{X})$.

###### Proof:

First note that $\mathcal{P}(\mathcal{X})$ is the convex set

$$\left\{(p_1,\dots,p_{|\mathcal{X}|})\in\mathbb{R}^{|\mathcal{X}|} : p_i\in[0,\bar p_i]\;\;\text{for } i\in\{1,\dots,|\mathcal{X}|-1\},\; p_{|\mathcal{X}|}=\bar p_{|\mathcal{X}|}\right\},$$

where $\bar p_i \triangleq 1-\sum_{k=1}^{i-1} p_k$. Hence if $r$ is the uniform probability density function over $\mathcal{P}(\mathcal{X})$, then there exists a constant positive real number $c$ such that $r(p) = c$ and

$$\int_0^1 \int_0^{\bar p_2} \cdots \int_0^{\bar p_{|\mathcal{X}|-1}} c\;dp_1 \cdots dp_{|\mathcal{X}|-1} = 1. \tag{2}$$

Using Lemma 9 to repeatedly evaluate the LHS of (2) yields

$$\int_0^1 \int_0^{\bar p_2} \cdots \int_0^{\bar p_{|\mathcal{X}|-1}} c\;dp_1 \cdots dp_{|\mathcal{X}|-1} = \frac{c}{(|\mathcal{X}|-1)!},$$

hence

$$r(p) = c = (|\mathcal{X}|-1)!.$$

###### Lemma 11.
$$\frac{(k+1)^{-(t-1)}}{t-1} \leq \sum_{j=k+1}^{\infty} j^{-t} \leq \frac{k^{-(t-1)}}{t-1}$$

for all positive integers $k$ and real numbers $t > 1$.

###### Proof:

First note that $x^{-t}$ is convex and decreasing in $x > 0$ since $t > 1$, and thus

$$\frac{(k+1)^{-t+1}}{t-1} = \int_{k+1}^{\infty} x^{-t}\,dx \leq \sum_{j=k+1}^{\infty} j^{-t} \leq \int_{k+1}^{\infty} (x-1)^{-t}\,dx = \frac{k^{-t+1}}{t-1}.$$

###### Lemma 12.

For positive real numbers $a \geq b$,

$$\left(\frac{a}{b}\right)^b e^{-(a-b)} \leq 1.$$
###### Proof:

For any positive real number $y$, by definition

$$\lim_{x\to\infty}\left(1+\frac{y}{x}\right)^x = e^y. \tag{3}$$

Hence it suffices to show that $f(x,y) = \left(1+\frac{y}{x}\right)^x$ is a monotonically increasing function of $x$ for all $y > 0$, since then

$$\left(\frac{a}{b}\right)^b = \left(1+\frac{a-b}{b}\right)^b \leq e^{a-b}.$$

Clearly though

$$\frac{\partial}{\partial x} f(x,y) = \left(1+\frac{y}{x}\right)^x \left[\ln\left(\frac{x+y}{x}\right) + \frac{x}{y+x} - 1\right]. \tag{4}$$

This derivative is always positive, since the function $g(u) \triangleq \ln(u) + \frac{1}{u} - 1 \geq 0$ for all $u \geq 1$ (here $u = \frac{x+y}{x} \geq 1$). Indeed, this holds since

$$\frac{d}{du} g(u) = \frac{u-1}{u^2} \geq 0 \quad \forall u \geq 1,$$

and $g(1) = 0$. ∎

###### Lemma 13.

For positive integers $b \leq a$ and positive integer $j$,

$$\frac{e^{-b}\left(1+\frac{b}{j}\right)^{b+j+\frac{1}{2}}}{e^{-a}\left(1+\frac{a}{j}\right)^{a+j+\frac{1}{2}}} \leq 1.$$
###### Proof:

That $f(b) \leq f(a)$, where $f(x) \triangleq e^{-x}\left(1+\frac{x}{j}\right)^{x+j+\frac{1}{2}}$, follows directly from

$$\frac{\partial}{\partial x} f(x) = f(x)\left[\frac{1}{2(x+j)} + \ln\left(1+\frac{x}{j}\right)\right] \geq 0$$

for all positive values of $x$. ∎

###### Lemma 14.

For positive integers $b \leq c$,

$$\sqrt{\frac{c}{b}} \leq 2^{\frac{c-b}{2}}.$$
###### Proof:

For $b = 1$ the lemma follows because $\sqrt{c} \leq 2^{\frac{c-1}{2}}$, i.e., $c \leq 2^{c-1}$, for all positive integers $c$.

For $b \geq 2$,

$$\log_2 c - \log_2 b \leq (c-b) \cdot \max_{x\in[b,c]} \frac{d\log_2 x}{dx} = \frac{c-b}{b\ln 2}$$

implies that

$$\sqrt{\frac{c}{b}} = 2^{\frac{1}{2}\left[\log_2 c - \log_2 b\right]} \leq 2^{\frac{c-b}{2}},$$

since $b \ln 2 \geq 1$ for $b \geq 2$. ∎

## Appendix B Proof of Lemma 4

###### Proof:

If

$$\inf_{q\in\mathcal{P}(\mathcal{W}|\mathcal{X})} \sum_{x\in\mathcal{X}} \frac{1}{2}\left|q_{\hat x}(x) - i_y(x)\right| = 1 - i_y(\mathcal{W}), \tag{5}$$

then clearly

$$\ell(X;Y|u) = \sum_{x\in\mathcal{X},\,y\in\mathcal{Y}} p(x)\,s_x(y) \sum_{\mathcal{W}\subseteq\mathcal{X}} u(x,\mathcal{W})\,\big(1-i_y(\mathcal{W})\big) \tag{6}$$

by definition.

To prove Equation (5), first obtain a lower bound to the LHS of Equation (5) via

$$\sum_{x\in\mathcal{X}} \frac{1}{2}\left|q_{\hat x}(x) - i_y(x)\right| = \max_{\tilde{\mathcal{X}}\subseteq\mathcal{X}}\; q_{\hat x}(\tilde{\mathcal{X}}) - i_y(\tilde{\mathcal{X}}) \tag{7}$$
$$\geq q_{\hat x}(\mathcal{W}) - i_y(\mathcal{W}) = 1 - i_y(\mathcal{W}), \tag{8}$$

where Equation (7) is an alternative expression for the variational distance, and the final equality holds since $q_{\hat x}(\mathcal{W}) = 1$ for any $q \in \mathcal{P}(\mathcal{W}|\mathcal{X})$. Next let $\tilde q$ be any distribution in $\mathcal{P}(\mathcal{W}|\mathcal{X})$ such that $\tilde q_{\hat x}(x) \geq i_y(x)$ for all $x \in \mathcal{W}$. Now obtain an upper bound to Equation (5) via

$$\inf_{q\in\mathcal{P}(\mathcal{W}|\mathcal{X})} \sum_{x\in\mathcal{X}} \frac{1}{2}\left|q_{\hat x}(x)-i_y(x)\right| \leq \sum_{x\in\mathcal{X}} \frac{1}{2}\left|\tilde q_{\hat x}(x) - i_y(x)\right| \tag{9}$$
$$= \sum_{x\in\mathcal{W}} \frac{\tilde q_{\hat x}(x) - i_y(x)}{2} + \sum_{x\in\mathcal{X}-\mathcal{W}} \frac{i_y(x)}{2} \tag{10}$$
$$= \frac{1-i_y(\mathcal{W})}{2} + \frac{i_y(\mathcal{X}-\mathcal{W})}{2} = 1 - i_y(\mathcal{W}). \tag{11}$$

∎

## Appendix C Proofs of main results

###### Proof:

To begin the proof, note that the uniform average semantic loss for a given summarizer can be written

$$\sum_{x^n\in\mathcal{X}^n,\,y\in\mathcal{Y},\,\mathcal{W}\subseteq\mathcal{X}} s_{x^n}(y)\,u(x(1),\mathcal{W})\,\alpha(\mathcal{W},x^n,y) \tag{12}$$

where $r$ is uniform over $\mathcal{P}(\mathcal{X})$, due to the integral being linear. The proof proceeds by evaluating $\alpha(\mathcal{W},x^n,y)$, specifically showing

$$\alpha(\mathcal{W},x^n,y) = \frac{1}{|T_n(x^n)|\,|P_n(\mathcal{X})|}\big(1-\eta_{x^n,y,\mathcal{W}}\big). \tag{13}$$

To help in evaluating the integrals, assume that $\mathcal{X} = \{1,\dots,|\mathcal{X}|\}$, and let

$$p_m \triangleq p(m), \qquad \bar p_m \triangleq 1-\sum_{k=1}^{m-1} p_k, \qquad t_m \triangleq n\pi(x^n)(m), \qquad \bar t_m \triangleq |\mathcal{X}|-m+\sum_{k=m}^{|\mathcal{X}|} t_k = |\mathcal{X}|-m+n-\sum_{k=1}^{m-1} t_k,$$

for all $m \in \{1,\dots,|\mathcal{X}|\}$. Of importance throughout the proof will be that

$$\bar p_m = \bar p_{m-1} - p_{m-1} \tag{14}$$

for all integers $m \in \{2,\dots,|\mathcal{X}|\}$, and that

$$\bar t_m = \bar t_{m+1} + t_m + 1. \tag{15}$$

Two notable values are $\bar p_1 = 1$ and $\bar t_1 = n+|\mathcal{X}|-1$. Also, without loss of generality, assume that $\mathcal{W}\cap\mathcal{X}(y) = \{1,\dots,w\}$ and $\mathcal{X}(y) = \{1,\dots,z\}$.

With this new notation

$$\alpha(\mathcal{W},x^n,y) = (|\mathcal{X}|-1)!\int_{\mathcal{P}(\mathcal{X})}\left(\prod_{m=1}^{|\mathcal{X}|} p_m^{t_m}\right)dp^n - (|\mathcal{X}|-1)!\int_{\mathcal{P}(\mathcal{X})}\left(\prod_{m=1}^{|\mathcal{X}|} p_m^{t_m}\right)\frac{\sum_{m=1}^{w} p_m}{\sum_{m=1}^{z} p_m}\,dp^n \tag{16}$$

where $w \triangleq |\mathcal{W}\cap\mathcal{X}(y)|$ and $z \triangleq |\mathcal{X}(y)|$, since $r(p) = (|\mathcal{X}|-1)!$ by Corollary 10 and $\mathcal{P}(\mathcal{X})$ is the convex set given there.

Of the two integrals in (16) we shall only show

$$\int_{\mathcal{P}(\mathcal{X})}\left(\prod_{m=1}^{|\mathcal{X}|} p_m^{t_m}\right)\frac{\sum_{m=1}^{w} p_m}{\sum_{m=1}^{z} p_m}\,dp^n = \frac{\eta_{x^n,y,\mathcal{W}}}{(|\mathcal{X}|-1)!\,|T_n(x^n)|\,|P_n(\mathcal{X})|} \tag{17}$$

since

$$\int_{\mathcal{P}(\mathcal{X})}\left(\prod_{m=1}^{|\mathcal{X}|} p_m^{t_m}\right)dp^n = \frac{1}{(|\mathcal{X}|-1)!\,|T_n(x^n)|\,|P_n(\mathcal{X})|} \tag{18}$$

follows similarly. At this point note that Equation (13) directly follows from Equations (16), (17) and (18), so validating Equations (17) and (18) would finish the proof. We shall prove Equation (17) through a rather tedious recursion process. To aid in this recursion we shall, in an abuse of notation, write $\tilde{\mathcal{P}}(k)$ to denote the convex set

$$\left\{(p_1,\dots,p_k)\in\mathbb{R}^k : p_m\in[0,\bar p_m]\;\;\text{for } m\in\{1,\dots,k-1\}\right\},$$

and use $dp^k$ to denote the differential sequence $dp_k\,dp_{k-1}\cdots dp_1$. Write the LHS of (17) as

$$\int_{\tilde{\mathcal{P}}(|\mathcal{X}|-2)}\left(\prod_{m=1}^{|\mathcal{X}|-2} p_m^{t_m}\right)\frac{\sum_{m=1}^{w}p_m}{\sum_{m=1}^{z}p_m}\cdot\left(\int_0^{\bar p_{|\mathcal{X}|-1}} p_{|\mathcal{X}|-1}^{t_{|\mathcal{X}|-1}}\big(\bar p_{|\mathcal{X}|-1}-p_{|\mathcal{X}|-1}\big)^{t_{|\mathcal{X}|}}\,dp_{|\mathcal{X}|-1}\right)dp^{|\mathcal{X}|-2}, \tag{19}$$

by using $p_{|\mathcal{X}|} = \bar p_{|\mathcal{X}|-1} - p_{|\mathcal{X}|-1}$, via Equation (14). The inner integration can be performed via Lemma 9, yielding

$$\cdots\left(\int_0^{\bar p_{|\mathcal{X}|-2}} p_{|\mathcal{X}|-2}^{t_{|\mathcal{X}|-2}}\big(\bar p_{|\mathcal{X}|-2}-p_{|\mathcal{X}|-2}\big)^{\bar t_{|\mathcal{X}|-1}}\,dp_{|\mathcal{X}|-2}\right)dp^{|\mathcal{X}|-3} \tag{20}$$

where (20) follows via Equation (14), this time used to produce the $\big(\bar p_{|\mathcal{X}|-2}-p_{|\mathcal{X}|-2}\big)^{\bar t_{|\mathcal{X}|-1}}$ term. This process of using Lemma 9 to evaluate the inner integral, and then using Equation (14) to put the result into a form which can be evaluated using Lemma 9, can be repeated to evaluate the integrals down to $p_{z+1}$; doing so yields

 (21)

At this point the recursion no longer directly applies since the next variable of integration, $p_z$, is contained in the denominator of the fraction. To address this, use the Taylor series expansion of $\frac{1}{\sum_{m=1}^{z}p_m}$, specifically

$$\frac{1}{\sum_{m=1}^{z} p_m} = \frac{1}{1-(\bar p_z - p_z)} = \sum_{k=0}^{\infty}\big(\bar p_z - p_z\big)^k. \tag{22}$$

Plugging (22) into (21) and exchanging the summations and integrals results in

 (23)

From here, evaluating all remaining integrals using the recursive process by which Equations (20) and (21) were obtained yields

 (24)

Then

$$\frac{1}{(|\mathcal{X}|-1)!}\,\frac{\prod_{m=1}^{|\mathcal{X}|} t_m!}{n!}\,\frac{n!\,(|\mathcal{X}|-1)!}{(n+|\mathcal{X}|-1)!} \tag{25}$$

follows from “simplifying” (24). Equation (17) is therefore verified, completing the proof, since

$$\frac{\prod_{m=1}^{|\mathcal{X}|} t_m!}{n!} = \frac{\prod_{m=1}^{|\mathcal{X}|}\big(n\pi(x^n)(m)\big)!}{n!} = \frac{1}{|T_n(x^n)|}, \qquad \frac{n!\,(|\mathcal{X}|-1)!}{(n+|\mathcal{X}|-1)!} = \frac{1}{|P_n(\mathcal{X})|}.$$