# Minimax Learning for Remote Prediction

The classical problem of supervised learning is to infer an accurate predictor of a target variable Y from a measured variable X by using a finite number of labeled training samples. Motivated by the increasingly distributed nature of data and decision making, in this paper we consider a variation of this classical problem in which the prediction is performed remotely based on a rate-constrained description M of X. Upon receiving M, the remote node computes an estimate Ŷ of Y. We follow the recent minimax approach to study this learning problem and show that it corresponds to a one-shot minimax noisy source coding problem. We then establish information theoretic bounds on the risk-rate Lagrangian cost and a general method to design a near-optimal descriptor-estimator pair, which can be viewed as a rate-constrained analog to the maximum conditional entropy principle used in the classical minimax learning problem. Our results show that a naive estimate-compress scheme for rate-constrained prediction is not in general optimal.


## I Introduction

The classical problem of supervised learning is to infer an accurate predictor of a target variable Y from a measured variable X on the basis of labeled training samples (x1, y1), …, (xn, yn) independently drawn from an unknown joint distribution PXY. The standard approach for solving this problem in statistical learning theory is empirical risk minimization (ERM). For a given set of allowable predictors and a loss function that quantifies the risk of each predictor, ERM chooses the predictor with minimal risk under the empirical distribution of samples. To avoid overfitting, the set of allowable predictors is restricted to a class with limited complexity.
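As a toy illustration of ERM (a sketch with an assumed class of threshold classifiers and 0-1 loss; the data and the class are illustrative assumptions, not from the paper):

```python
import numpy as np

def empirical_risk(f, xs, ys):
    # 0-1 loss averaged over the training samples
    return np.mean([f(x) != y for x, y in zip(xs, ys)])

def erm(predictor_class, xs, ys):
    # ERM: pick the predictor with minimal risk under the empirical distribution
    return min(predictor_class, key=lambda f: empirical_risk(f, xs, ys))

xs = np.array([0.1, 0.4, 0.6, 0.9])
ys = np.array([0, 0, 1, 1])
# assumed predictor class: threshold rules f_t(x) = 1{x >= t}
class_ = [lambda x, t=t: int(x >= t) for t in (0.25, 0.5, 0.75)]
f_hat = erm(class_, xs, ys)
print(empirical_risk(f_hat, xs, ys))  # the threshold 0.5 attains risk 0.0
```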

Recently, an alternative viewpoint has emerged which seeks distributionally robust predictors. Given the labeled training samples, this approach learns a predictor by minimizing its worst-case risk over an ambiguity distribution set centered at the empirical distribution of samples. In other words, instead of restricting the set of allowable predictors, it aims to avoid overfitting by requiring that the learned predictor performs well under any distribution in a chosen neighborhood of the empirical distribution. This minimax approach has been investigated under different assumptions on how the ambiguity set is constructed, e.g., by restricting the moments [1], or by forming f-divergence balls [2] or Wasserstein balls [3] (see also references therein).

In these previous works, the learning algorithm finds a predictor that acts directly on a fresh (unlabeled) sample X to predict the corresponding target variable Y. Often, however, the fresh sample may only be remotely available, and when designing the predictor it is desirable to also take into account the cost of communicating X. This is motivated by the fact that bandwidth and energy limitations on communication in networks and within multiprocessor systems often impose significant bottlenecks on the performance of algorithms. There is also an increasing number of applications in which data is generated in a distributed manner and it (or features of it) is communicated over bandwidth-limited links to a central processor to perform inference. For instance, applications such as Google Goggles and Siri process the locally collected data on clouds. It is thus important to study prediction in distributed and rate-constrained settings.

In this paper, we study an extension of the classical learning problem in which, given a finite set of training samples, the learning algorithm needs to infer a descriptor-estimator pair with a desired communication rate in between them. This is especially relevant when both X and Y come from a large alphabet or are continuous random variables as in regression problems, so neither the sample X nor the predicted value Ŷ of Y can be simply communicated in a lossless fashion. We adopt the minimax framework for learning the descriptor-estimator pair. Given a set of labeled training samples, our goal is to find a descriptor-estimator pair by minimizing their resultant worst-case risk over an ambiguity distribution set, where the risk now incorporates both the statistical risk and the communication cost. One of the important conclusions that emerge from the minimax approach to supervised learning in [1] is that the problem of finding the predictor with minimal worst-case risk over an ambiguity set can be broken into two smaller steps: (1) find the worst-case distribution in the ambiguity set that maximizes the (generalized) conditional entropy of Y given X, and (2) find the optimal predictor under this worst-case distribution. In this paper, we show that an analogous principle approximately holds for rate-constrained prediction. The descriptor-estimator pair with minimal worst-case risk can be found in two steps: (1) find the worst-case distribution in the ambiguity set that maximizes the risk-information Lagrangian cost, and (2) find the optimal descriptor-estimator pair under this worst-case distribution. We then apply our results to characterize the optimal descriptor-estimator pairs for two applications: rate-constrained linear regression and rate-constrained classification. While a simple scheme whereby we first find the optimal predictor ignoring the rate constraint, then compress and communicate the predictor output, is optimal for the linear regression application, we show via the classification application that such an estimate-compress approach is not optimal in general. When prediction is rate-constrained, the optimal descriptor aims to send sufficiently (but not necessarily maximally) informative features of the observed variable, which are at the same time easy to communicate. When applied to the case in which the ambiguity distribution set contains only a single distribution (for example, the true or empirical distribution of (X, Y)) and the loss function for the prediction is the logarithmic loss, our results provide a new one-shot operational interpretation of the information bottleneck problem. A key technical ingredient in our results is the strong functional representation lemma (SFRL) developed in [4], which we use to design the optimal descriptor-estimator pair for the worst-case distribution.

### Notation

We assume that log is base 2 and the entropy is in bits. The length of a variable-length description M is denoted as |M|. For random variables X, Y, we denote the joint distribution by PXY and the conditional distribution of Y given X by PY|X. For brevity we denote the distribution of Ŷ as P^Y. We write E for EP when the underlying distribution P is clear from the context.

## Ii Problem Formulation

We begin by reviewing the minimax approach to the classical learning problem [1].

### Ii-a Minimax Approach to Supervised Learning

Let X and Y be jointly distributed random variables. The problem of statistical learning is to design an accurate predictor of the target variable Y from the measured variable X on the basis of a number of independent training samples drawn from an unknown joint distribution. The standard approach for solving this problem is to use empirical risk minimization (ERM), in which one defines an admissible class of predictors consisting of functions f from the alphabet of X to a reconstruction alphabet (which can in general be different from the alphabet of Y) and a loss function ℓ. The risk associated with a predictor f when the underlying joint distribution of X and Y is P is

 L(f,P)≜EP[ℓ(f(X),Y)].

ERM simply chooses the predictor with minimal risk under the empirical distribution of the training samples.

Recently, an alternative approach has emerged which seeks distributionally robust predictors. This approach learns a predictor by minimizing its worst-case risk over an ambiguity distribution set Γ(Pn), i.e.,

 fn=argminfmaxP∈Γ(Pn)L(f,P), (1)

where f can be any function and Γ(Pn) can be constructed in various ways, e.g., by restricting the moments, or by forming f-divergence balls or Wasserstein balls. While in ERM it is important to restrict the set of admissible predictors to a low-complexity class to prevent overfitting, in the minimax approach overfitting is prevented by explicitly requiring that the chosen predictor is distributionally robust. The learned function fn can then be used for predicting Y when presented with fresh samples of X. The learning and inference phases are illustrated in Figure 1.
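The minimax rule (1) can be sketched on a toy problem in which the ambiguity set is an explicit finite list of joint pmfs (an illustrative assumption; the sets used in practice are moment or divergence balls) and the predictor class is all maps {0,1} → {0,1}:

```python
import itertools
import numpy as np

def risk(f, p):
    # L(f, P) = E_P[1{f(X) != Y}] for a joint pmf p[x][y]
    return sum(p[x][y] for x in (0, 1) for y in (0, 1) if f[x] != y)

# assumed toy ambiguity set: two candidate joint pmfs of (X, Y)
ambiguity = [
    np.array([[0.4, 0.1], [0.1, 0.4]]),  # Y mostly equals X
    np.array([[0.3, 0.2], [0.2, 0.3]]),  # noisier version
]
# predictor f represented as the tuple (f(0), f(1))
predictors = list(itertools.product((0, 1), repeat=2))
f_star = min(predictors, key=lambda f: max(risk(f, p) for p in ambiguity))
print(f_star)  # (0, 1): predict Y = X, worst-case risk 0.4
```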

### Ii-B Minimax Learning for Remote Prediction

In this paper, we extend the minimax learning approach to the setting in which the prediction needs to be performed based on a rate-constrained description of X. In particular, given a finite set of training samples independently drawn from an unknown joint distribution PXY, our goal is to learn a pair of functions (e, f), where the descriptor e is used to compress X into a prefix-free description M = e(X), and the estimator f takes the description M and generates an estimate Ŷ = f(M) of Y. See Figure 2.

Let R(e, P) = EP[|M|] be the rate of the descriptor and L(e, f, P) = EP[ℓ(f(e(X)), Y)] be the risk associated with the descriptor-estimator pair (e, f) when the underlying distribution of (X, Y) is P, and define the risk-rate Lagrangian cost (parametrized by λ > 0) as

 Lλ(e,f,P)=L(e,f,P)+λR(e,P). (2)

Note that this cost function takes into account both the resultant statistical prediction risk of (e, f), as well as the communication rate they require. The task of a minimax learner is to find an (e, f) pair that minimizes the worst-case Lλ over the ambiguity distribution set Γ(Pn), i.e.,

 (en,fn)=argmin(e,f)maxP∈Γ(Pn)Lλ(e,f,P), (3)

for an appropriately chosen Γ(Pn) centered at the empirical distribution of samples Pn. Note that we allow here all possible (e, f) pairs. We also assume that the descriptor and the estimator can use unlimited common randomness W which is independent of the data, i.e., M and Ŷ can be expressed as functions of (X, W) and (M, W), respectively, and the prefix-free codebook for M can depend on W. The availability of such common randomness can be justified by the fact that in practice, although the inference scheme is one-shot, it is used many times (by the same user and by different users), hence the descriptor and the estimator can share a common randomness seed before communication commences without impacting the communication rate.
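A minimal numeric sketch of the risk-rate Lagrangian cost (2), with an assumed toy source and a hand-picked descriptor-estimator pair (both are illustrative assumptions):

```python
import numpy as np

# toy source: X in {0,1,2,3}, Y = X mod 2
p_x = np.array([0.4, 0.3, 0.2, 0.1])
def y_of(x):
    return x % 2

# descriptor sends a one-bit prefix-free description of X mod 2;
# the estimator recovers Y exactly from it
code = {0: "0", 1: "1"}
def estimator(m):
    return int(m)

lam = 0.1
# L(e, f, P): expected 0-1 loss of the estimate
risk = sum(p_x[x] * (estimator(code[y_of(x)]) != y_of(x)) for x in range(4))
# R(e, P) = E|M|: expected description length in bits
rate = sum(p_x[x] * len(code[y_of(x)]) for x in range(4))
print(risk + lam * rate)  # L_lambda = 0 + 0.1 * 1 = 0.1
```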

## Iii Main Results

We first consider the case where Γ consists of a single distribution P, which may be the empirical distribution Pn as in ERM. Define the minimax risk-rate cost as

 L∗λ(Γ)=inf(e,f)supP∈ΓLλ(e,f,P). (4)

While it is difficult to minimize the risk-rate cost (2) directly, the minimax risk-rate cost can be bounded in terms of the mutual information between X and Ŷ.

###### Theorem 1.

Let Γ = {P}. Then

 L∗λ ≥ infP^Y|X(E[ℓ(^Y,Y)]+λI(X;^Y)),
 L∗λ ≤ infP^Y|X(E[ℓ(^Y,Y)]+λ(I(X;^Y)+log(I(X;^Y)+1)+5)).

As in other one-shot compression results (e.g., zero-error compression), there is a gap between the upper and lower bounds. While the logarithmic gap in Theorem 1 is not as small as the 1-bit gap in zero-error compression, it is dominated by the linear term λI(X;Ŷ) when the latter is large.

To prove Theorem 1, we use the strong functional representation lemma given in [4] (also see [5, 6]): for any random variables (X, Ŷ), there exists a random variable W independent of X, such that Ŷ is a function of (X, W), and

 H(^Y|W) ≤I(X;^Y)+log(I(X;^Y)+1)+4. (5)

Here, W can be intuitively viewed as the part of Ŷ which is not contained in X. Note that for any W such that Ŷ is a function of (X, W) and W is independent of X, we have H(Ŷ|W) ≥ I(X;Ŷ). The statement (5) ensures the existence of a W, independent of X, for which H(Ŷ|W) comes close to this lower bound, and in this sense W is most informative about Ŷ. This is critical for the proof of Theorem 1 as we will see next. Identifying the part of Ŷ which is not contained in X allows us to generate and share this part between the descriptor and the estimator ahead of time, eliminating the need to communicate it during the course of inference. To find W, we use the Poisson functional representation construction detailed in [4].

###### Proof:

Recall that R(e, P) = E[|M|]. The lower bound follows from the fact that E[|M|] ≥ H(M|W) ≥ I(X;Ŷ). To establish the upper bound, fix any P^Y|X. Let W be obtained from (5). Note that W is independent of X and can be generated from a random seed shared between the descriptor and the estimator ahead of time. For a given W, take M to be the Huffman codeword of Ŷ according to the distribution P^Y|W(·|W) (recall that Ŷ is a function of (X, W)), and take f to be the decoding function of the Huffman code. The expected codeword length satisfies

 E[|M|]≤H(^Y|W)+1≤I(X;^Y)+log(I(X;^Y)+1)+5.

Taking an infimum over all P^Y|X completes the proof. ∎
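The coding step in the proof above sends the Huffman codeword of Ŷ under the conditional pmf given the shared randomness, so the expected length is within one bit of H(Ŷ|W). A sketch of this Huffman length bound for a single fixed pmf (a stand-in for one realization of W; the pmf is an illustrative assumption):

```python
import heapq
import math

def huffman_lengths(pmf):
    # standard Huffman construction; returns the codeword length per symbol
    heap = [(p, [i]) for i, p in enumerate(pmf)]
    lengths = [0] * len(pmf)
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:       # merging deepens every symbol in both subtrees
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

pmf = [0.5, 0.25, 0.15, 0.1]
L = sum(p * l for p, l in zip(pmf, huffman_lengths(pmf)))
H = -sum(p * math.log2(p) for p in pmf)
print(H <= L <= H + 1)  # True: optimal prefix-free code is within 1 bit of entropy
```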

###### Remark 1.

If we consider the logarithmic loss ℓ(^Y, y) = log(1/^Y(y)), where ^Y is a probability distribution over the alphabet of Y, then the lower bound in Theorem 1 reduces to

 infPU|X(H(Y|U)+λI(X;U))=H(Y)+infPU|X(λI(X;U)−I(Y;U)),

which is the information bottleneck function [7]. Therefore the setting of remote prediction provides an approximate one-shot operational interpretation of the information bottleneck (up to a logarithmic gap). In [8, 9] it was shown that the asymptotic noisy source coding problem also provides an operational interpretation of the information bottleneck. Our operational interpretation, however, is more satisfying since the feature extraction problem originally considered in [7] is by nature one-shot.
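As a small numeric sketch of the objective in Remark 1 (the toy joint pmf and the candidate feature channel P_U|X are illustrative assumptions), we evaluate H(Y|U) + λ I(X;U) in bits:

```python
import numpy as np

def H(p):
    # Shannon entropy in bits of a pmf given as a numpy array
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])            # assumed joint pmf of (X, Y)
p_u_given_x = np.array([[0.9, 0.1],
                        [0.1, 0.9]])     # candidate P_{U|X}: U is a noisy copy of X
p_x = p_xy.sum(axis=1)
p_xu = p_u_given_x * p_x[:, None]        # joint pmf of (X, U)
p_u = p_xu.sum(axis=0)
I_xu = H(p_x) + H(p_u) - H(p_xu.ravel())
p_uy = p_u_given_x.T @ p_xy              # joint of (U, Y): sum_x P(u|x) P(x,y)
H_y_given_u = H(p_uy.ravel()) - H(p_u)
lam = 0.5
print(round(H_y_given_u + lam * I_xu, 4))
```

Sweeping λ (or optimizing over P_U|X) traces out the information bottleneck tradeoff.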

We now extend Theorem 1 to the minimax setting.

###### Theorem 2.

Suppose Γ is convex. Then

 L∗λ ≥ infP^Y|XsupP∈Γ(EP[ℓ(^Y,Y)]+λIP(X;^Y)),
 L∗λ ≤ infP^Y|XsupP∈Γ(EP[ℓ(^Y,Y)]+λ(IP(X;^Y)+2log(IP(X;^Y)+1)+6)).

This result is related to minimax noisy source coding [10]. The main difference is that we consider the one-shot expected length instead of the asymptotic rate.

To prove this theorem, we first invoke a minimax result for relative entropy in [11] (which generalizes the redundancy-capacity theorem [12]). Then we apply the following refined version of the strong functional representation lemma that is proved in the proof of Theorem 1 in [4] (also see [5]).

###### Lemma 1.

For any P^Y|X and ~P^Y, there exist a random variable W, and functions k(x, w) and ŷ(k, w), such that ŷ(k(x, W), W) follows the distribution P^Y|X(·|x) for any x, and

 E[logk(x,W)]≤D(P^Y|X(⋅|x)∥∥~P^Y)+1.6. (6)

We are now ready to prove Theorem 2.

###### Proof:

The lower bound follows by the same argument as in Theorem 1. To prove the upper bound, we fix any P^Y|X, and show that the following risk-rate cost is achievable:

 L′ = supP∈Γ(EP[ℓ(^Y,Y)]+λ(IP(X;^Y)+2log(IP(X;^Y)+1)+6)).

Let

 g(P,~P^Y) = EP[ℓ(^Y,Y)]+λ(∫D(P^Y|X=x∥~P^Y)dP(x)+2log(∫D(P^Y|X=x∥~P^Y)dP(x)+1)+6).

Note that g is concave in P for fixed ~P^Y since EP[ℓ(^Y,Y)] and ∫D(P^Y|X=x∥~P^Y)dP(x) are linear in P. Also g is quasiconvex in ~P^Y for fixed P since D(P^Y|X=x∥~P^Y) is convex in ~P^Y, and g is lower semicontinuous in ~P^Y since relative entropy is lower semicontinuous with respect to the topology of weak convergence [13], and hence the integral is lower semicontinuous by Fatou's lemma.

Write P^Y for the distribution of Ŷ when X ∼ PX and Ŷ | X ∼ P^Y|X. Let Γ^Y = {P^Y : P ∈ Γ} and ¯Γ^Y be the closure of Γ^Y in the topology of weak convergence. It can be shown using the same arguments as in [11] (on g instead of relative entropy, and using Sion's minimax theorem [14] instead of Lemma 2 in [11]) that if Γ^Y is uniformly tight, then there exists ~P∗^Y such that

 supP∈Γg(P,~P∗^Y)=supP∈Γinf~P^Yg(P,~P^Y)=L′.

If Γ^Y is not uniformly tight, then by Lemma 4 in [11], L′ = ∞, and hence the upper bound holds trivially.

Applying Lemma 1 to P^Y|X and ~P∗^Y, we obtain W independent of X, the random variable K = k(X, W), and Ŷ = ŷ(K, W) following the conditional distribution P^Y|X, with

 E[logK|X=x] ≤ D(P^Y|X(⋅|x)∥~P∗^Y)+1.6

for any x. Then we use the Elias delta code [15] for K to produce M. Note that the average length of the Elias delta code is upper bounded by E[logK]+2log(E[logK]+1)+1. Hence, we have

 EP[|M|] ≤ EP[logK]+2log(EP[logK]+1)+1
 ≤ ∫D(P^Y|X=x∥~P∗^Y)dP(x)+2log(∫D(P^Y|X=x∥~P∗^Y)dP(x)+1)+6.

Hence

 L∗λ ≤ supP∈Γ(EP[ℓ(^Y,Y)+λ|M|]) ≤ supP∈Γg(P,~P∗^Y) ≤ L′. ∎
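The Elias delta code used in the proof above can be sketched as follows (a standard construction: encode N = 1 + ⌊log K⌋ with Elias gamma, then append the low-order bits of K; the asserted length identity is ⌊log K⌋ + 2⌊log(⌊log K⌋ + 1)⌋ + 1):

```python
def elias_delta(k):
    # k: positive integer -> prefix-free codeword as a bit string
    n = k.bit_length()            # n = floor(log2 k) + 1
    ln = n.bit_length()           # number of bits in binary(n)
    # Elias gamma code of n, followed by the low n-1 bits of k
    return "0" * (ln - 1) + bin(n)[2:] + bin(k)[3:]

for k in (1, 2, 17):
    cw = elias_delta(k)
    # length = floor(log2 k) + 2*floor(log2(floor(log2 k) + 1)) + 1
    assert len(cw) == (k.bit_length() - 1) + 2 * (k.bit_length().bit_length() - 1) + 1

print(elias_delta(17))  # 17 = 10001b -> n = 5 -> 00101 then 0001
```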

Theorem 2 suggests that we can simplify the analysis of the risk-rate cost (2) by replacing the rate R(e, P) with the mutual information IP(X;Ŷ). Define the risk-information cost as

 ~Lλ(P^Y|X,P)=EP[ℓ(^Y,Y)]+λIP(X;^Y). (7)

Theorem 2 implies that the minimax risk-rate cost can be approximated by the minimax risk-information cost

 ~L∗λ(Γ)=infP^Y|XsupP∈Γ~Lλ(P^Y|X,P), (8)

within a logarithmic gap. Theorem 2 can also be stated in the following slightly weaker form

 ~L∗λ≤L∗λ≤~L∗λ+2λlog(λ−1~L∗λ+1)+7λ.

The risk-information cost has more desirable properties than the risk-rate cost. For example, it is convex in P^Y|X for fixed P, and concave in P for fixed P^Y|X. This allows us to exchange the infimum and supremum in Theorem 2 by Sion's minimax theorem [14], which gives the following proposition.

###### Proposition 1.

Suppose , and are finite, is convex and closed, and , then

 ~L∗λ(Γ)=infP^Y|XsupP∈Γ~Lλ(P^Y|X,P)=supP∈ΓinfP^Y|X~Lλ(P^Y|X,P).

Moreover, there exists P∗^Y|X attaining the infimum on the left hand side, which also attains the infimum on the right hand side when P is fixed to P∗, the distribution that attains the supremum on the right hand side.

Proposition 1 means that in order to design a robust descriptor-estimator pair that works for any P ∈ Γ, we only need to design it according to the worst-case distribution P∗ as follows.

Principle of maximum risk-information cost: Given a convex and closed Γ, we design the descriptor-estimator pair based on the worst-case distribution

 P∗=argmaxP∈ΓinfP^Y|X~Lλ(P^Y|X,P).

We then find the P∗^Y|X that minimizes ~Lλ(P^Y|X, P∗) and design the descriptor-estimator pair accordingly, e.g., using Lemma 1 on P∗^Y|X and the ~P^Y induced by P∗ and P∗^Y|X.
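For a fixed P on finite alphabets, the inner minimization of the risk-information cost (7) over P_Ŷ|X reduces, with d(x, ŷ) = E[ℓ(ŷ, Y) | X = x], to a rate-distortion Lagrangian, which can be attacked with Blahut-Arimoto-style alternating updates. This is a hedged sketch (the update rule follows the standard rate-distortion recipe, not a construction from the paper; the toy distortion matrix is an assumption):

```python
import numpy as np

def min_risk_information(p_x, d, lam, iters=200):
    # minimize E[d(X, Yhat)] + lam * I(X; Yhat) over the channel q = P_{Yhat|X}
    nx, ny = d.shape
    q = np.full((nx, ny), 1.0 / ny)          # initialize with the uniform channel
    for _ in range(iters):
        r = p_x @ q                           # current marginal of Yhat
        q = r[None, :] * 2.0 ** (-d / lam)    # q(yhat|x) ∝ r(yhat) 2^{-d(x,yhat)/lam}
        q /= q.sum(axis=1, keepdims=True)
    r = p_x @ q
    with np.errstate(divide="ignore", invalid="ignore"):
        kl = np.where(q > 0, q * np.log2(q / r[None, :]), 0.0)  # guard zero entries
    info = float((p_x[:, None] * kl).sum())   # I(X; Yhat) in bits
    dist = float((p_x[:, None] * q * d).sum())
    return dist + lam * info, q

# toy check: with lam large it pays to send almost nothing (I ≈ 0),
# with lam small the channel becomes nearly deterministic
p_x = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])        # d(x, yhat) = 1{x != yhat}
cost_hi, _ = min_risk_information(p_x, d, lam=10.0)
cost_lo, _ = min_risk_information(p_x, d, lam=0.01)
print(cost_hi, cost_lo)
```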

## Iv Applications

### Iv-a Rate-constrained Minimax Linear Regression

Suppose X ∈ ℝd, Y ∈ ℝ, ℓ(ŷ, y) = (ŷ − y)2 is the mean-squared loss, and we observe the data (x1, y1), …, (xn, yn). Take Γ to be the set of distributions with the same first and second moments as given by the empirical distribution, i.e.,

 Γ = {P : EP[X]=μX, EP[Y]=μY, Cov[X]=ΣX, Var[Y]=σ2Y, Cov[X,Y]=CXY}, (9)

where μX, μY, ΣX, σ2Y, CXY are the corresponding statistics of the empirical distribution. The following proposition shows that the worst-case distribution P∗ is Gaussian.

###### Proposition 2 (Linear regression with rate constraint).

Consider the mean-squared loss and define Γ as in (9). Then the minimax risk-information cost (8) is

 ~L∗λ = { σ2Y − CTXYΣ−1XCXY + (λ/2)log(2eCTXYΣ−1XCXY/(λloge))   if λloge/2 ≤ CTXYΣ−1XCXY
        { σ2Y                                                    otherwise, (10)

where the worst-case distribution P∗ is Gaussian with its mean and covariance matrix specified in (9), and the optimal estimate is

 ^Y = { aCTXYΣ−1XX + b + Z   if λloge/2 ≤ CTXYΣ−1XCXY
      { μY                   otherwise,

where

 a = 1 − λloge/(2CTXYΣ−1XCXY),   b = μY − aCTXYΣ−1XμX,

and Z is a zero-mean Gaussian noise independent of X with variance aλloge/2.

Note that this setting does not satisfy the finiteness conditions in Proposition 1. We instead directly analyze (8) to obtain the optimal P^Y|X. Given the optimal P^Y|X, Theorem 2 and Lemma 1 can be used to construct the scheme. Operationally, Ŷ is a random quantizer of the linear MMSE estimate CTXYΣ−1XX such that the quantization noise is Gaussian. With this natural choice of the ambiguity set, our formulation recovers a compressed version of the familiar MMSE estimator.
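A numeric sketch of Proposition 2 in the scalar case, under assumed toy second-order statistics (the moment values and λ below are illustrative assumptions, and "log e" denotes log2 e as in the paper's notation):

```python
import numpy as np

# assumed empirical statistics (scalar X, so C^T Sigma_X^{-1} C = c_xy^2 / sigma_x2)
mu_x, mu_y = 1.0, 2.0
sigma_x2, sigma_y2, c_xy = 2.0, 1.0, 1.0
rho = c_xy**2 / sigma_x2                  # C^T Sigma_X^{-1} C = 0.5
lam = 0.2
assert lam * np.log2(np.e) / 2 <= rho     # first case of (10) applies

# minimax risk-information cost, first case of (10)
L_star = sigma_y2 - rho + (lam / 2) * np.log2(2 * np.e * rho / (lam * np.log2(np.e)))
# coefficients of the optimal estimate Yhat = a * (c_xy / sigma_x2) * X + b + Z
a = 1 - lam * np.log2(np.e) / (2 * rho)
b = mu_y - a * (c_xy / sigma_x2) * mu_x
print(round(L_star, 4), round(a, 4), round(b, 4))
```

As λ grows toward 2ρ/log e, a shrinks to 0 and the scheme degenerates to sending nothing and estimating μY.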

Figure 3 plots the tradeoff between the rate and the risk for the scheme constructed using the Poisson functional representation in [4], together with the lower bound given by the minimax risk-information cost ~L∗λ and the upper bound given in Theorem 2.

###### Proof:

Without loss of generality, assume μX = 0 and μY = 0. We first prove "≤" in (10). For this, fix P^Y|X as given in the proposition and consider any P ∈ Γ. When λloge/2 ≤ CTXYΣ−1XCXY, we have

 EP[ℓ(^Y,Y)] = EP[(^Y−Y)2] ≤ σ2Y + λloge/2 − CTXYΣ−1XCXY,  and
 IP(X;^Y) = h(^Y) − h(^Y|X) ≤ (1/2)log(2CTXYΣ−1XCXY/(λloge)).

Therefore,

 infP^Y|XsupP∈Γ(EP[ℓ(^Y,Y)]+λIP(X;^Y)) ≤ R.H.S. of (10).

It can also be checked that the above relation holds when λloge/2 > CTXYΣ−1XCXY, and thus we have proved "≤" in (10).

To prove "≥" in (10), fix P to be the Gaussian distribution with mean and covariance matrix specified in (9) and consider an arbitrary P^Y|X. We have

 EP[ℓ(^Y,Y)] = EP[(Y−^Y)2] = σ2Y − CTXYΣ−1XCXY + EP[(^Y−CTXYΣ−1XX)2],  and
 IP(X;^Y) = IP(CTXYΣ−1XX;^Y) ≥ h(CTXYΣ−1XX) − h(CTXYΣ−1XX−^Y) ≥ (1/2)logCTXYΣ−1XCXY − (1/2)logEP[(^Y−CTXYΣ−1XX)2].

Letting γ = EP[(^Y−CTXYΣ−1XX)2], we have

 EP[ℓ(^Y,Y)]+λIP(X;^Y) ≥ σ2Y − CTXYΣ−1XCXY + (λ/2)logCTXYΣ−1XCXY + γ − (λ/2)logγ ≥ R.H.S. of (10),

where the second inequality follows by evaluating the minimum value of γ − (λ/2)logγ over γ > 0. Combining this with the above completes the proof of Proposition 2. ∎

The optimal scheme in the above example corresponds to compressing and communicating the minimax optimal rate-unconstrained predictor CTXYΣ−1XX, since the optimal Ŷ can be obtained from it by shifting, scaling and adding noise. This estimate-compress approach can be thought of as a separation scheme, since we first optimally estimate Y, then optimally communicate the estimate while satisfying the rate constraint. In the next application, we show that such separation is not optimal in general.

### Iv-B Rate-constrained Minimax Classification

We assume X and Y are finite, the reconstruction alphabet coincides with that of Y, ℓ(ŷ, y) = 1{ŷ ≠ y} is the 0-1 loss, and Γ is closed and convex. The following proposition gives the minimax risk-information cost and the optimal estimator.

###### Proposition 3.

Consider the setting described above. The minimax risk-information cost is given by

 ~L∗λ=supP∈Γ(1+λinf~P^YEP(−log∑y2λ−1PY|X(y|X)~P^Y(y))),

the worst-case distribution P∗ is the one attaining the supremum, and the optimal estimator is given by P^Y|X(y|x) ∝ 2^(λ−1P∗Y|X(y|x))~P∗^Y(y), where ~P∗^Y attains the infimum (when P = P∗), and P∗Y|X is obtained from P∗.

In particular, if Γ is symmetric for different values of Y (i.e., for any P ∈ Γ and any pair of values y, y′ of Y, there exist a permutation σ of the Y-alphabet and a permutation π of the X-alphabet such that σ(y) = y′ and the distribution of (π(X), σ(Y)) is in Γ), then with k the size of the Y-alphabet,

 ~L∗λ=supP∈Γ(1+λlogk−λEP(log∑y2λ−1PY|X(y|X))).

We can see that as λ → 0, the optimal estimator tends to the maximum a posteriori estimator (under P∗, the worst-case distribution when λ = 0).
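The form of the optimal estimator in Proposition 3 can be sketched numerically: P_Ŷ|X(y|x) ∝ 2^(λ⁻¹P_Y|X(y|x))·~P_Ŷ(y), which concentrates on the MAP label as λ → 0 (the posterior and the reference label distribution below are assumed for illustration):

```python
import numpy as np

def tilted_estimator(p_y_given_x, p_tilde, lam):
    # unnormalized weight per label: ~P(y) * 2^{P_{Y|X}(y|x) / lam}
    w = p_tilde * np.exp2(p_y_given_x / lam)
    return w / w.sum()

p_y_given_x = np.array([0.5, 0.3, 0.2])     # assumed posterior at the observed x
p_tilde = np.array([1 / 3, 1 / 3, 1 / 3])   # assumed reference label distribution
for lam in (1.0, 0.1, 0.01):
    print(lam, np.round(tilted_estimator(p_y_given_x, p_tilde, lam), 3))
# as lam shrinks, the mass concentrates on argmax_y P_{Y|X}(y|x), the MAP label
```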

###### Proof:

Assume Γ is closed and convex. By Proposition 1, the minimax risk-information cost is ~L∗λ = supP∈Γ infP^Y|X ~Lλ(P^Y|X, P), where

 infP^Y|X~Lλ(P^Y|X,P)
 = infP^Y|X(EP[ℓ(^Y,Y)]+λIP(X;^Y))
 = infP^Y|X(P{^Y≠Y}+λinf~P^Y∫D(P^Y|X=x∥~P^Y)dP(x))
 = 1+λinf~P^Y,P^Y|XEP(∑yP^Y|X(y|X)(log(P^Y|X(y|X)/~P^Y(y))−λ−1PY|X(y|X)))
 = 1+λinf~P^YinfP^Y|XEP(∑yP^Y|X(y|X)log(P^Y|X(y|X)/(2^(λ−1PY|X(y|X))~P^Y(y)/∑y′2^(λ−1PY|X(y′|X))~P^Y(y′)))−log∑y2^(λ−1PY|X(y|X))~P^Y(y))
 (a)= 1+λinf~P^YEP(−log∑y2^(λ−1PY|X(y|X))~P^Y(y)),

where (a) holds since relative entropy is nonnegative, with equality attained when P^Y|X(y|x) = 2^(λ−1PY|X(y|x))~P^Y(y)/∑y′2^(λ−1PY|X(y′|x))~P^Y(y′).

Next we consider the case in which Γ is symmetric. Consider the minimax risk-information cost

 ~L∗λ=infP^Y|XsupP∈Γ~Lλ(P^Y|X,P)=infP^Y|XsupP∈Γ(EP[ℓ(^Y,Y)]+λIP(X;^Y)).

For any pair of values y, y′ of Y, let σ be a permutation of the Y-alphabet such that σ(y) = y′ and let π be the corresponding permutation of the X-alphabet in the symmetry assumption. Since the function

 P^Y|X↦supP∈Γ~Lλ(P^Y|X,P)

is convex and symmetric with respect to σ and π (i.e., its value is unchanged if P^Y|X(y|x) is replaced by P^Y|X(σ(y)|π(x))), to find its infimum we only need to consider P^Y|X's satisfying P^Y|X(y|x) = P^Y|X(σ(y)|π(x)) for all x, y (if not, we can instead consider the average of the permuted versions of P^Y|X, for powers of (σ, π) from 1 up to the product of the periods of σ and π, which gives a value of the function not larger than that of P^Y|X). For brevity we say P^Y|X is symmetric if it satisfies this condition.

Fix any symmetric P^Y|X. Since the function

 P↦~Lλ(P^Y|X,P)

is concave and symmetric with respect to σ and π (i.e., its value is unchanged if P is replaced by the distribution of (π(X), σ(Y))), to find its supremum we only need to consider symmetric P's. Hence,

 ~L∗λ =infP^Y|Xsymm.supP∈Γsymm.~Lλ(P^Y|X,P) =infP^Y|Xsymm.supP∈Γsymm.(EP[ℓ(^Y,Y)]+λIP(X;^Y)) =infP^Y|Xsymm.supP∈Γsymm