
Capabilities of Bounded Discrepancy Decoding

01 July 1965


To fix ideas, let us consider first the special case of coding for the binary symmetric channel. A code is defined as a set of $M$ binary $n$-vectors $x = (x_1, x_2, \ldots, x_n)$, where $x_k = 0$ or $1$ ($k = 1, 2, \ldots, n$). The individual vectors are called code words. The transmission rate $R$ is defined by $M = 2^{nR}$. The Hamming distance between two binary $n$-vectors is the number of positions in which they differ. The code words are transmitted through a noisy channel. The received vector $y$ is a binary $n$-vector whose $k$th coordinate is

$$y_k = x_k + z_k \pmod{2}, \qquad k = 1, 2, \ldots, n, \tag{1}$$

where $x_k$ is the $k$th coordinate of the transmitted code vector, and the $z_k$ ($k = 1, 2, \ldots, n$) are statistically independent random variables which assume the value $1$ with probability $p_0$ ($0 \le p_0 \le \tfrac{1}{2}$) and the value $0$ with probability $1 - p_0$. Thus $p_0$ is the probability that a given bit is received in error. This channel is the "binary symmetric channel." It is assumed that each of the $M$ code words is equally likely to be transmitted, and it is the task of the decoder to examine the received vector $y$ and decide which code word was actually transmitted. We are interested in two types of decoding.
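To make the channel model concrete, the following is a minimal Python sketch (not from the paper) that simulates transmission through a binary symmetric channel per equation (1) and decodes by minimum Hamming distance, which is maximum-likelihood for this channel when $p_0 < \tfrac{1}{2}$. The function names and the toy two-word repetition code are illustrative assumptions.

```python
import random

def bsc_transmit(x, p0, rng=random):
    """Send a binary n-vector through a binary symmetric channel:
    each coordinate is flipped independently with probability p0,
    i.e., y_k = x_k + z_k (mod 2) as in equation (1)."""
    return [xk ^ (1 if rng.random() < p0 else 0) for xk in x]

def hamming_distance(u, v):
    """Number of positions in which two binary n-vectors differ."""
    return sum(uk != vk for uk, vk in zip(u, v))

def minimum_distance_decode(y, codebook):
    """Pick the code word closest to y in Hamming distance."""
    return min(codebook, key=lambda x: hamming_distance(x, y))

# Hypothetical example: M = 2 code words of length n = 5,
# so R = 1/5 by the definition M = 2^{nR}.
codebook = [[0, 0, 0, 0, 0], [1, 1, 1, 1, 1]]
y = bsc_transmit(codebook[0], p0=0.1)
print(y, "->", minimum_distance_decode(y, codebook))
```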