Message-passing algorithms and improved LP decoding

With Sanjeev Arora and Constantinos Daskalakis. STOC 2009; IEEE Transactions on Information Theory 58(12).

PDF

abstract

Linear programming decoding for low-density parity-check codes (and related domains such as compressed sensing) has received increased attention in recent years because of its practical performance, which comes close to that of iterative decoding algorithms, and its amenability to finite-blocklength analysis. Several works, beginning with that of Feldman et al., showed how to analyze LP decoding using properties of expander graphs. This line of analysis works only for low error rates, about a couple of orders of magnitude lower than the empirically observed performance. It is possible to do better for the case of random noise, as shown by Daskalakis et al. and by Koetter and Vontobel.

Building on the work of Koetter and Vontobel, we obtain a novel understanding of LP decoding, which allows us to establish a 0.05-fraction of correctable errors for rate-1/2 codes; this comes very close to the performance of iterative decoders and is significantly higher than the best previously known correctable bit error rate for LP decoding. Unlike other techniques, our analysis works directly with the primal linear program and exploits an explicit connection between LP decoding and message-passing algorithms.
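For context, the linear program in question is the by-now standard relaxation introduced by Feldman et al.; the display below states it in generic notation, so the symbols (γ, N(j), n) are editorial choices rather than this paper's notation.

\[
\min_{x \in [0,1]^n} \ \sum_{i=1}^{n} \gamma_i x_i
\qquad \text{s.t.} \qquad
x_{N(j)} \in \operatorname{conv}\Bigl\{ z \in \{0,1\}^{N(j)} \;:\; \sum_{i \in N(j)} z_i \equiv 0 \ (\mathrm{mod}\ 2) \Bigr\}
\quad \text{for every check } j,
\]

where γ_i is the log-likelihood ratio of bit i and N(j) is the set of bits participating in check j. The feasible region is the "fundamental polytope", and decoding succeeds when the unique optimum of the LP is the transmitted codeword.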

An interesting byproduct of our method is the notion of a “locally optimal” solution, which we show is always globally optimal (i.e., it is the nearest codeword). Such a solution can in fact be found in near-linear time by a “re-weighted” version of the min-sum algorithm, obviating the need for linear programming. Our analysis implies, in particular, that this re-weighted version of the min-sum decoder corrects up to a 0.05-fraction of errors.
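To make the message-passing side concrete, here is a minimal sketch of the plain min-sum decoder on which the “re-weighted” variant builds. It is not the paper's algorithm: the re-weighting of check messages is omitted, and the names (min_sum_decode, H, llr, n_iters) are illustrative, not taken from the paper.

# Minimal sketch of standard min-sum decoding for a binary LDPC code.
# This is NOT the paper's re-weighted variant; it only illustrates the
# message-passing structure that the re-weighting modifies.

def min_sum_decode(H, llr, n_iters=50):
    """H: list of checks, each a list of variable indices.
       llr: channel log-likelihood ratios, llr[i] = log P(y_i | x_i=0) / P(y_i | x_i=1).
       Returns a hard-decision estimate (list of 0/1)."""
    n = len(llr)
    # variable-to-check messages, initialized to the channel LLRs
    v2c = {(i, j): llr[i] for j, chk in enumerate(H) for i in chk}
    c2v = {(i, j): 0.0 for j, chk in enumerate(H) for i in chk}

    for _ in range(n_iters):
        # check-to-variable update: product of signs times minimum magnitude
        for j, chk in enumerate(H):
            for i in chk:
                others = [v2c[(k, j)] for k in chk if k != i]
                sign = 1.0
                for m in others:
                    sign *= 1.0 if m >= 0 else -1.0
                mag = min(abs(m) for m in others) if others else 0.0
                c2v[(i, j)] = sign * mag
        # variable-to-check update: channel LLR plus extrinsic check messages
        for j, chk in enumerate(H):
            for i in chk:
                v2c[(i, j)] = llr[i] + sum(
                    c2v[(i, k)] for k, c in enumerate(H) if i in c and k != j
                )

    # hard decision from the full posterior LLR at each variable
    total = [llr[i] + sum(c2v[(i, j)] for j, c in enumerate(H) if i in c)
             for i in range(n)]
    return [0 if t >= 0 else 1 for t in total]

# toy usage: Hamming(7,4)-style checks on an arbitrary received LLR vector
H = [[0, 1, 2, 4], [0, 1, 3, 5], [0, 2, 3, 6]]
llr = [1.2, -0.4, 0.8, 0.9, 1.1, -0.2, 0.7]
print(min_sum_decode(H, llr))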

keywords

  • linear programming
  • low-density parity-check (LDPC) codes