Sparse Representations for Codes and the Hardness of Decoding Low Density Codes

Nandakishore Santhi

LANL



ABSTRACT

The Maximum Likelihood Decoding (MLD) problem is known to be NP-hard for binary linear codes, while belief propagation decoding is known to work well in practice for many LDPC codes. This has already made LDPC codes popular for data storage and for optical and wireless communication. In this talk we give a deterministic polynomial-time (Karp) reduction from the MLD problem for binary linear codes to the Weighted MLD problem for (3, 3)-LDPC codes. Surprisingly, this reduction shows that Weighted MLD is NP-hard even for (3, 3)-LDPC codes. Furthermore, it provides a method to transform the decoding problem for dense codes into a decoding problem for sparse codes, which is often more amenable to belief propagation algorithms. Our reduction proceeds through 7 intermediate reductions. Apart from better decoding algorithms for arbitrary linear codes, potential consequences include cryptanalysis of systems such as the McEliece public-key cryptosystem, whose security is based on the hardness of MLD.
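
For readers unfamiliar with the decision problems involved, the following is a standard decisional formulation; the notation below is ours and is not taken from the talk announcement.

MLD: given a parity-check matrix $H \in \mathbb{F}_2^{m \times n}$, a received word $y \in \mathbb{F}_2^{n}$, and an integer $t \ge 0$, decide whether there exists an error pattern $e \in \mathbb{F}_2^{n}$ with
$$ H e^{\top} = H y^{\top} \quad \text{and} \quad \mathrm{wt}(e) \le t, $$
where $\mathrm{wt}(\cdot)$ denotes Hamming weight.

Weighted MLD: additionally given nonnegative coordinate weights $w_1, \dots, w_n$, decide whether there exists $e$ with the same syndrome constraint and $\sum_{i=1}^{n} w_i e_i \le t$.

Here a (3, 3)-LDPC code is, in the usual convention, a code admitting a parity-check matrix with (at most) 3 ones in every row and every column; it is this sparsity that makes belief propagation decoding applicable in practice.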