LDPC Text
Source: https://github.com/kirlf/csp-modeling/blob/master/ldpc/ldpc.ipynb
Kazan, 2021
The encoding procedure is the multiplication of the message vector by the generator matrix:

$$\mathbf{c} = \mathbf{m}G \mod q \qquad (1)$$

where $\mathbf{m}$ is the input message, $\mathbf{c}$ is the code word, and $\mod q$ denotes multiplication by modulo $q$, where $q$ is the Galois field parameter $GF(q)$ (obviously, for the binary case $q$ is equal to 2).
Accordingly, the code rate $R = k/n$ is also specified via the generator matrix, which in systematic (standard) form reads:

$$G = [\,I_k \mid P\,] \qquad (2)$$

where $P$ is the parity part and $I_k$ is the $k \times k$ identity matrix. Note, the identity part is needed to keep the code systematic: the information message remains unchanged, and the check bits are appended to the end of the block. From a correctly restored code word the original message is recovered by simply removing the check bits. Convenient, isn't it?
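As a quick numeric sketch of systematic encoding (the parity part $P$ below is an arbitrary toy choice, not a matrix from this notebook):

```python
import numpy as np

# Toy systematic generator matrix G = [I_k | P] with k = 3, n = 6.
# P is an arbitrary illustrative parity part, not taken from the text.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(3, dtype=int), P])

m = np.array([1, 0, 1])   # information message
c = m @ G % 2             # encoding: multiplication modulo 2
print(c)                  # [1 0 1 0 1 1] - the first k bits are the message itself
print(c[:3])              # removing the check bits restores the message
```

The first $k$ positions of the code word repeat the message bit for bit, which is exactly what "systematic" means.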
Since we are talking about linear block codes, the generator matrix should provide this linearity (see Linear code (https://en.wikipedia.org/wiki/Linear_code#:~:text=In%20coding%20theory%2C%20a%20linear,hybrid%20of%20these%20two%20types.)). This means that the rows of the generator matrix must be linearly independent (yes, it sounds a little paradoxical).
The generator matrix is directly related to another important matrix used in the decoding procedure: the parity-check matrix. The parity-check matrix $H$ has $n-k$ rows and $n$ columns, where $n$ corresponds to the desired length of the code word and $k$ corresponds to the length of the message. In standard form it reads:

$$H = [\,P^T \mid I_{n-k}\,], \qquad GH^T = \mathbf{0} \qquad (3)$$
The main idea can be well explained via the Tanner graph (https://en.wikipedia.org/wiki/Tanner_graph), which has:

- variable nodes, the number of which corresponds to the number of columns $n$, and
- check nodes, corresponding to the number of rows $n-k$.

The nodes are interconnected, and the relationship is determined by the positions of the ones in the matrix $H$.
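To make the correspondence concrete, the sketch below lists the Tanner-graph edges encoded by the ones of a small illustrative matrix $H$ (not the matrix from the text):

```python
import numpy as np

# Illustrative parity-check matrix: 3 check nodes (rows), 6 variable nodes (columns).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

# Every 1 in H is an edge: check node i <-> variable node j.
edges = [(int(i), int(j)) for i, j in zip(*np.nonzero(H))]
print(edges)
print(H.sum(axis=1))   # degree of each check node (row weights)
print(H.sum(axis=0))   # degree of each variable node (column weights)
```

The row and column weights are exactly the node degrees in the graph, which is why sparsity of $H$ translates into a sparse Tanner graph.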
The picture on the right is a mnemonic of my own making. It seems to me that this is the easiest way to catch the essence of the structure:
In order to consider the decoding procedure successful, it is necessary that certain values are formed on all check nodes - as a rule, zeros (see decoding based on syndromes (https://en.wikipedia.org/wiki/Decoding_methods#Syndrome_decoding)):

$$\mathbf{s} = H\hat{\mathbf{c}}^T = \mathbf{0}$$
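For instance (a hypothetical $H$ and candidate word, only to illustrate the check):

```python
import numpy as np

# Hypothetical small parity-check matrix and a candidate word.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
c = np.array([1, 1, 0, 0, 1, 1])

s = H @ c % 2                  # syndrome: all zeros -> c satisfies every check
print(s)                       # [0 0 0]

c_err = c.copy()
c_err[0] ^= 1                  # flip one bit
print(H @ c_err % 2)           # non-zero syndrome -> the error is detected
```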
Actually, this matrix defines the last two letters in the abbreviation LDPC (Parity-Check). And here is what defines them as low-density: their parity-check matrices must be sparse:
"Low density parity check codes are codes specified by a parity check
matrix containing mostly zeros and only small number of ones." [1]
(https://dspace.mit.edu/bitstream/handle/1721.1/11804/32786367-
MIT.pdf?sequence=2)
- a code word encoded with a code based on this matrix will have a length of 12 bits;
- there are 3 ones in each column, and 4 ones in each row (hence (3,4));
- the numbers of ones in the rows and in the columns are constant (in our case, 3 and 4), which means the code is regular.
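The regularity is easy to verify programmatically. The sketch below builds a (3,4)-regular matrix of the same size by the classic Gallager-style layering (an illustration, not the matrix from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Gallager-style construction of a (3,4)-regular parity-check matrix:
# n = 12 columns, row weight 4, column weight 3 -> 12 * 3 / 4 = 9 rows.
base = np.kron(np.eye(3, dtype=int), np.ones((1, 4), dtype=int))  # 3 x 12, row weight 4
H = np.vstack([base[:, rng.permutation(12)] for _ in range(3)])   # three layers of 3 rows

print(H.shape)          # (9, 12)
print(H.sum(axis=1))    # every row has 4 ones
print(H.sum(axis=0))    # every column has 3 ones -> the code is regular
```

Each layer contributes exactly one 1 per column, so stacking three column-permuted layers gives constant column weight 3 while keeping the row weight 4.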
NOTE:
Eroz M., Sun F. W., Lee L. N. DVB-S2 low density parity check codes with near Shannon limit performance (http://www.iet.unipi.it/m.luise/DVB-S2_Low-Density.pdf) // International Journal of Satellite Communications and Networking. – 2004. – Vol. 22. – No. 3. – pp. 269-279.
Did you notice anything? That's right: these matrices do not fall under the standard form from formula (3), because for LDPC codes we strive to make the check matrices sparse. And if the parity-check matrices do not fall into the standard form, then it is not entirely clear how to derive generator matrices from them.
There is an answer, of course (and not just one). Suppose this: the original matrix is brought to the standard form using the Gaussian elimination method, the generator matrix is obtained from this standard form, and it is used for encoding.
From this, by rearranging and adding rows modulo 2, as well as by swapping columns, we move to the standard-form matrix $\hat{H}$:

Transformations of the rows, from the point of view of linear algebra, do not affect the code, but the column swaps need to be remembered (and applied to the code words as well):
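A minimal sketch of this route, assuming $H$ has full row rank (the function and the toy matrix below are illustrative, not the notebook's own code):

```python
import numpy as np

def to_standard_form(H):
    """Bring a binary parity-check matrix to the form [A | I] over GF(2).

    A sketch under simplifying assumptions (H has full row rank); returns
    the transformed matrix and the column permutation, which must also be
    applied to every code word before checking it against the sparse H.
    """
    H = H.copy() % 2
    r, n = H.shape
    perm = np.arange(n)
    for i in range(r):
        col = n - r + i                       # pivot column for this row
        below = np.nonzero(H[i:, col])[0]
        if len(below) > 0:
            j = i + below[0]
            H[[i, j]] = H[[j, i]]             # row swap brings a 1 up
        else:
            # no 1 in this column: swap in a column from the message part
            ones = np.nonzero(H[i, :n - r])[0]
            j = ones[0]
            H[:, [j, col]] = H[:, [col, j]]
            perm[[j, col]] = perm[[col, j]]
        for k in range(r):                    # eliminate the other 1s mod 2
            if k != i and H[k, col]:
                H[k] ^= H[i]
    return H, perm

# Toy example (not the matrix from the text):
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
H_std, perm = to_standard_form(H)
k = H.shape[1] - H.shape[0]
A = H_std[:, :k]                              # H_std = [A | I], so A = P^T
G = np.hstack([np.eye(k, dtype=int), A.T])    # G = [I | P]
print(H_std @ G.T % 2)                        # all zeros: G H^T = 0
```

Row operations leave the code unchanged, but the returned `perm` must be tracked: it maps the columns of the sparse $H$ to those of $\hat{H}$.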
Create a codeword:
[1. 0. 1. 0. 1. 1. 0. 1. 0. 0.]
And we check the syndrome (that is, we encoded the word with the generator matrix derived from $\hat{H}$, while in the decoding process we will use the sparse matrix $H$):
Concluding the section, it must be said that this coding method is the easiest to understand, but very expensive to compute in the case of large matrices - the generator matrix, as a rule, is no longer sparse. Of course, all of this has its own solutions; however, that is a completely different story...
A lot of decoding algorithms exist for the LDPC codes, but we will consider the well-known sum-product algorithm (SPA, also known as the belief propagation algorithm) [3, p. 31] (https://www.researchgate.net/publication/228977165_Introducing_Low-Density_Parity-Check_Codes), with some references to the matrix representation during this work.
Variable-to-Check message
The algorithm then requires processing the V2C message in the probability domain, using the relation between hyperbolic tangents (http://wwwf.imperial.ac.uk/metric/metric_public/functions_and_graphs/hyperbolic_functions/inverses.html) and the natural logarithm [3, p. 32] (https://www.researchgate.net/publication/228977165_Introducing_Low-Density_Parity-Check_Codes). The procedure of passing the V2C message is a multiplication (for probabilities) over the non-zero elements in each row:

$$L_{i \to j} = 2\tanh^{-1}\left( \prod_{j' \in V_i \setminus j} \tanh\left( \frac{L_{j'}}{2} \right) \right)$$

where $i$ is the number of the certain row, $j$ is the number of the certain column, $V_i$ is the set of the non-zero elements in the $i$-th row, and $\setminus j$ means that we exclude the $j$-th variable node from the consideration.
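As a sketch, the tanh-rule update along one row of $H$ can be written as follows (the function name and the numbers are hypothetical, not from the text):

```python
import numpy as np

def row_message_update(L, row):
    """Tanh-rule messages along one row of H (a sketch, not the text's code).

    L:   current LLRs of the variable nodes
    row: column indices of the non-zero elements of this row
    For each j in the row, multiply tanh(L/2) over the other positions
    and map the product back with the inverse hyperbolic tangent.
    """
    t = np.tanh(np.asarray(L)[row] / 2.0)
    msgs = {}
    for pos, j in enumerate(row):
        others = np.delete(t, pos)            # exclude the j-th variable node
        msgs[j] = 2.0 * np.arctanh(np.prod(others))
    return msgs

row = [0, 2, 3]                               # non-zero positions of one row
L = np.array([1.5, -0.3, 0.8, -2.0])          # hypothetical channel LLRs
msgs = row_message_update(L, row)
print(msgs)
```

The exclusion of the target node is what makes this "extrinsic" information: each node receives only what the others in the row believe.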
Check-to-Variable message
At the end of the first iteration the LLRs from the channel should be updated. For this purpose we sum up the information along the columns of the matrix $H$:

$$L_j^{total} = L_j^{ch} + \sum_{i \in C_j} L_{i \to j}$$

where $C_j$ is the set of the non-zero elements of the parity-check matrix in the $j$-th column.
NOTE #3:
The summation over all of the column elements can be applied with the same mathematical sense, since the zero elements do not contribute to the addition.
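Numerically, the update is just the channel LLRs plus the column sums of the message matrix (all numbers below are hypothetical):

```python
import numpy as np

L_ch = np.array([2.0, -1.0, 0.5])        # hypothetical LLRs from the channel
M = np.array([[0.7, -0.2,  0.0],         # hypothetical messages on the edges:
              [0.0,  0.4, -1.1]])        # row i -> column j, zeros where H is zero

L_total = L_ch + M.sum(axis=0)           # channel value plus the column sums
print(L_total)
hard = (L_total < 0).astype(int)         # hard decision for the syndrome check
print(hard)
```

As the note says, summing the whole column is fine: positions where $H$ is zero carry zero messages and do not change the result.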
After that the second iteration should follow. Ideally, we have to repeat the iterations while the syndrome $\mathbf{s} = H\hat{\mathbf{c}}^T$ is a non-zero vector (or until some maximum number of iterations is reached).
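The outer loop can be sketched like this; `update_messages` stands for one full message-passing pass and is a hypothetical callback, not a function from the text:

```python
import numpy as np

def decode_loop(H, L_ch, update_messages, max_iter=50):
    """Sketch of the outer SPA loop: iterate until the syndrome is zero.

    `update_messages` performs one full message-passing pass and returns
    the updated total LLRs (a hypothetical callback, not the text's code).
    """
    L = L_ch.copy()
    for _ in range(max_iter):
        hard = (L < 0).astype(int)       # hard decision from current LLRs
        if not (H @ hard % 2).any():     # zero syndrome -> stop early
            return hard
        L = update_messages(H, L)
    return (L < 0).astype(int)

# Toy check: LLRs of a valid code word stop the loop immediately.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
c = np.array([1, 1, 0, 0, 1, 1])         # valid code word for this toy H
L_ch = np.where(c == 1, -2.0, 2.0)       # negative LLR -> bit 1
decoded = decode_loop(H, L_ch, lambda H, L: L)
print(decoded)
```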
```python
import numpy as np

class SPA:
    """This class can apply the SPA algorithm to a received LLR vector."""

    def hard_decision(self, llr):
        # Reconstructed fragment: bits are 1 where the LLR is negative.
        l = np.zeros(len(llr), dtype=int)
        l[llr < 0] = 1
        return l
```