Fundamental Limits of Distributed Encoding

04/02/2020 ∙ by Nastaran Abadi Khooshemehr, et al.

In classical coding theory, we often assume that errors occur while encoded symbols are transferred or stored, whereas the encoding process itself is error-free. Motivated by recent applications of coding theory, we introduce the problem of distributed encoding, which consists of a set of K ∈ ℕ isolated source nodes and N ∈ ℕ encoding nodes. Each source node has one symbol from a finite field, which it sends to each of the encoding nodes. Each encoding node stores an encoded symbol, from the same field, as a function of the received symbols. However, some of the source nodes are controlled by an adversary and may send different symbols to different encoding nodes. Depending on the number of adversarial nodes, denoted by β ∈ ℕ, and the cardinality of the set of symbols that each of them generates, denoted by v ∈ ℕ, this can make decoding from the encoded symbols impossible. Suppose a decoder connects to an arbitrary subset of t ∈ ℕ encoding nodes and wants to decode the symbols of the honest source nodes correctly, without necessarily identifying which nodes are honest and which are adversarial. In this paper, we characterize t^* ∈ ℕ, the minimum such t, as a function of K, N, β, and v. We show that for β ≥ 1 and v ≥ 2, t^* = K + β(v-1) + 1 if N ≥ K + β(v-1) + 1, and t^* = N otherwise. To achieve t^*, we introduce a nonlinear code. We then focus on linear coding and show that t^*_linear = K + 2β(v-1) if N ≥ K + 2β(v-1), and t^*_linear = N otherwise.
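As a quick illustration of the thresholds in the abstract, the minimum connectivity for general and for linear codes can be computed directly from K, N, β, and v; a minimal sketch (the function names are ours, not from the paper):

```python
def t_star(K, N, beta, v):
    """Minimum number of encoding nodes a decoder must contact
    (general, possibly nonlinear codes): min(N, K + beta*(v-1) + 1)."""
    thresh = K + beta * (v - 1) + 1
    return thresh if N >= thresh else N

def t_star_linear(K, N, beta, v):
    """Same quantity restricted to linear codes:
    min(N, K + 2*beta*(v-1))."""
    thresh = K + 2 * beta * (v - 1)
    return thresh if N >= thresh else N

# Example: K = 4 sources, N = 10 encoders, one adversary (beta = 1)
# sending v = 3 distinct versions of its symbol.
print(t_star(4, 10, 1, 3))         # general codes need fewer nodes
print(t_star_linear(4, 10, 1, 3))  # linear codes pay an extra beta*(v-1)
```

Comparing the two values for the same parameters makes the abstract's gap between nonlinear and linear codes concrete: with K = 4, β = 1, v = 3, a nonlinear code needs 7 encoding nodes while a linear code needs 8.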


