
Source: https://github.com/kirlf/csp-modeling/blob/master/ldpc/ldpc.ipynb

LDPC codes (tutorial)


M.Sc. Vladimir Fadeev

Kazan, 2021

Block codes encoding basics


LDPC codes are linear block codes, which means that the check bits are added at the end of
the information message (as a block).

The encoding procedure is the multiplication of the message vector by the generator matrix:

$$\mathbf{c} = \mathbf{m}\mathbf{G} \pmod q$$

where $\mathbf{m}$ is the input message, $\mathbf{c}$ is the code word, and the multiplication is performed modulo $q$,
where $q$ is the Galois field parameter $GF(q)$ (obviously, for the binary case $q$ is equal to 2).

Accordingly, the code rate is also specified via the generator matrix: if $\mathbf{G}$ has $k$ rows and $n$ columns, then

$$R = \frac{k}{n}$$

The generator matrix consists of two concatenated matrices:

$$\mathbf{G} = [\mathbf{I}_k \ \mathbf{P}]$$

where $\mathbf{P}$ is the parity part, and $\mathbf{I}_k$ is the identity matrix. Note that the identity part is needed to keep
the code systematic: the information message remains unchanged, and the check bits are
added to the end of the block. From a correctly recovered codeword, the original message can be restored by
simply removing the check bits. Convenient, isn't it?

Since we are talking about linear block codes, the generator matrix should provide this linearity
(see Linear code (https://en.wikipedia.org/wiki/Linear_code)). This means that the rows of the
generator matrix must be linearly independent (yes, it sounds a little paradoxical).

The generator matrix is directly related to another important matrix, which is used in the decoding
procedure: the parity-check matrix $\mathbf{H}$. The parity-check matrix has $n - k$ rows and $n$ columns, where
$n$ corresponds to the desired length of the codeword and $k$ corresponds to the length of the message.
Its standard form is:

$$\mathbf{H} = [\mathbf{P}^T \ \mathbf{I}_{n-k}]$$
The main idea can be well explained via the Tanner graph (https://en.wikipedia.org/wiki/Tanner_graph):

There are two types of nodes:

variable nodes, the number of which corresponds to the number of columns $n$, and
check nodes, corresponding to the number of rows $n - k$.

The nodes are interconnected, and the relationship is determined by the positions of the ones in the
matrix $\mathbf{H}$.

The picture on the right is a mnemonic of my own making. It seems to me that this is
the easiest way to grasp the essence of the structure:

if the matrix element $H_{j,i}$ is 1, then there is a connection (an edge) between check node $j$ and variable node $i$,

if it is 0, there is no connection (a tiny code illustration follows below).
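
In code, these connections are simply the positions of the ones in $\mathbf{H}$. A toy example of my own (the matrix below is made up for illustration, it is not the one from the figure):

import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])   # toy parity-check matrix

# every pair (check node j, variable node i) with H[j, i] == 1 is an edge of the Tanner graph
edges = [(int(j), int(i)) for j, i in zip(*np.nonzero(H))]
print(edges)   # [(0, 0), (0, 1), (0, 3), (1, 1), (1, 2), (1, 4), (2, 0), (2, 4), (2, 5)]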

In order to consider the decoding procedure successful, it is necessary that certain values are
formed at all check nodes - as a rule, zeros (see decoding based on syndromes
(https://en.wikipedia.org/wiki/Decoding_methods#Syndrome_decoding)):

$$\mathbf{s} = \mathbf{H}\hat{\mathbf{c}}^T = \mathbf{0}$$

Actually, this matrix defines the last two letters in the abbreviation LDPC (Parity-Check).

LDPC encoding basics


But all of the above are common points for most block codes. How then do LDPC codes differ
from, for example, Hamming codes?

In general, by what defines them as low-density: their parity-check matrices must be sparse:

"Low density parity check codes are codes specified by a parity check
matrix containing mostly zeros and only small number of ones." [1]
(https://dspace.mit.edu/bitstream/handle/1721.1/11804/32786367-
MIT.pdf?sequence=2)

Yes, that’s so simple.

For example, Gallager proposed this matrix:


a codeword encoded using a code based on this matrix will have a
length of 12 bits;
there are 3 ones in each column, and there are 4 ones in each row (hence (3,4));
the number of ones in the rows and in the columns is constant (in our case 3 and 4), which
means the code is regular (a small construction sketch is given below).
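
As an aside, here is a small sketch of my own (not from the notebook) that builds a (3,4)-regular matrix in the Gallager style - a band of rows with consecutive ones, stacked with random column permutations of itself - and verifies the constant row and column weights:

import numpy as np

rng = np.random.default_rng(42)

def gallager_regular_H(n=12, wc=3, wr=4):
    """Builds a (wc, wr)-regular parity-check matrix in Gallager's style:
    wc stacked bands, each band a column permutation of a base band whose
    rows contain wr consecutive ones."""
    m_band = n // wr                                    # rows per band
    base = np.zeros((m_band, n), dtype=int)
    for j in range(m_band):
        base[j, j * wr:(j + 1) * wr] = 1                # wr consecutive ones per row
    bands = [base] + [base[:, rng.permutation(n)] for _ in range(wc - 1)]
    return np.vstack(bands)

H = gallager_regular_H()
print(H.shape)        # (9, 12): 12-bit codewords, 9 parity checks
print(H.sum(axis=1))  # every row contains 4 ones
print(H.sum(axis=0))  # every column contains 3 ones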

MacKay and Neal described a parity-check matrix like this:

(3,4)-regular parity-check matrix with a length of 12.

NOTE:

In the DVB-S2 standard, irregular parity-check matrices are used:

Eroz M., Sun F.-W., Lee L.-N. DVB-S2 low density parity check codes
with near Shannon limit performance (http://www.iet.unipi.it/m.luise/DVB-
S2_Low-Density.pdf) // International Journal of Satellite Communications
and Networking. 2004. Vol. 22, No. 3. pp. 269-279.

This reflects the better noise immunity of irregular codes.

However, do you notice anything? That's right: these matrices do not fit the standard
form of $\mathbf{H}$ given above, because for LDPC codes we strive to make the check matrices sparse. And if
a parity-check matrix is not in standard form, it is not immediately clear how to
obtain a generator matrix for it.

There is an answer, of course (more than one, in fact). Suppose this one: the original matrix is brought to the
standard form using Gaussian elimination, the generator matrix is obtained from the
standard form, and it is used for encoding.
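
A minimal sketch of such a reduction over GF(2), assuming a full-rank binary matrix (the helper gf2_standard_form is hypothetical, not part of the original notebook):

import numpy as np

def gf2_standard_form(H):
    """Brings a binary parity-check matrix to the form [A | I] over GF(2)
    using row additions modulo 2, row swaps and, if necessary, column swaps.
    Returns the transformed matrix and the applied column permutation."""
    H = H.copy() % 2
    m, n = H.shape
    perm = np.arange(n)
    for r in range(m):
        pc = n - m + r                               # column that should become a pivot
        if not H[r:, pc].any():                      # no usable 1 here: swap in another column
            for c in range(n):
                if H[r:, c].any():
                    H[:, [pc, c]] = H[:, [c, pc]]
                    perm[[pc, c]] = perm[[c, pc]]
                    break
        pivot = r + np.nonzero(H[r:, pc])[0][0]      # a row holding a 1 in the pivot column
        H[[r, pivot]] = H[[pivot, r]]                # row swap
        for rr in range(m):                          # clear the other ones in the pivot column
            if rr != r and H[rr, pc]:
                H[rr] = (H[rr] + H[r]) % 2
    return H, perm

The column permutation returned here records how the columns were moved, so that the reordering can later be undone (the notebook keeps this bookkeeping in the idx list below).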


Starting from the original sparse matrix $\mathbf{H}$, by adding rows modulo 2 and permuting columns, we
arrive at the standard-form matrix $\mathbf{H}_{std}$:

In [2]: Hstd = np.array([[0, 1, 1, 1, 0, 1, 0, 0, 0, 0],
                         [1, 0, 1, 0, 0, 0, 1, 0, 0, 0],
                         [1, 0, 1, 0, 1, 0, 0, 1, 0, 0],
                         [0, 0, 1, 1, 1, 0, 0, 0, 1, 0],
                         [1, 1, 0, 0, 1, 0, 0, 0, 0, 1]])

From the point of view of linear algebra, the row operations do not affect the code, but the column
permutation needs to be remembered, so that the bits of a codeword built from $\mathbf{H}_{std}$ can later be put back into the column order of the original $\mathbf{H}$:

In [3]: idx = [5, 6, 7, 8, 9, 0, 1, 2, 3, 4]

Then we form the generator matrix:

In [4]: M = np.shape(H)[0] # N-K, number of check equations of the original sparse H
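
The rest of this cell is not visible in the page capture. Assuming $\mathbf{H}_{std} = [\mathbf{P}^T \ \mathbf{I}]$ as in the array above, and with H being the original sparse matrix defined earlier in the notebook, a minimal sketch of how the cell could continue is:

M = np.shape(H)[0]                      # N - K, number of check equations
N = np.shape(H)[1]                      # codeword length
K = N - M                               # message length
P = Hstd[:, :K].T                       # parity part: transpose of the left block of Hstd
G = np.concatenate([np.eye(K, dtype=int), P], axis=1)   # G = [I_K | P]

With the message [1, 0, 1, 0, 1] this reproduces the codeword shown in the next cell.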


Create a codeword:

In [5]: c = np.array([1, 0, 1, 0, 1]) @ G % 2
        print(str(c))

[1. 0. 1. 0. 1. 1. 0. 1. 0. 0.]

And we check the syndrome (that is, we encoded the word with a matrix derived from $\mathbf{H}_{std}$, and
in the decoding process we will use the sparse matrix $\mathbf{H}$):

In [6]: c[idx] @ H.T % 2

Out[6]: array([0., 0., 0., 0., 0.])

The magic of linear algebra works!

Concluding the section, it must be said that this encoding method is the easiest to understand,
but very expensive to compute in the case of large matrices - the generator matrix, as a rule,
ceases to be sparse. Of course, all this has its solutions; however, that is a completely
different story...

LDPC decoding: Sum-product algorithm

A lot of decoding algorithms exist for LDPC codes, but we will consider the well-known sum-
product algorithm (SPA, also known as the belief propagation algorithm) [3, p.31] (https://www.researchgate.net
/publication/228977165_Introducing_Low-Density_Parity-Check_Codes), with some references to
its matrix representation throughout this work.


Variable-to-Check message

The algorithm then requires processing the V2C messages in the probability domain, using the relation
between the hyperbolic tangent (http://wwwf.imperial.ac.uk/metric/metric_public/functions_and_graphs
/hyperbolic_functions/inverses.html) and the natural logarithm [3, p.32] (https://www.researchgate.net
/publication/228977165_Introducing_Low-Density_Parity-Check_Codes). The V2C processing is a
multiplication (for probabilities) over the non-zero elements in each row:

$$E_{j,i} = \log\frac{1 + \prod_{i' \in B_j,\ i' \neq i} \tanh\left(M_{j,i'}/2\right)}{1 - \prod_{i' \in B_j,\ i' \neq i} \tanh\left(M_{j,i'}/2\right)}$$

where $j$ is the row index, $i$ is the column index, $B_j$ is the set of
the non-zero elements in the $j$-th row, and $i' \neq i$ means that we exclude the $i$-th variable node from
consideration.
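
A minimal numpy sketch of this row-wise update (the helper name calc_E and the message matrices E and M follow the notation of [3]; this is my own illustration, not the notebook's code):

import numpy as np

def calc_E(H, M):
    """For every position (j, i) with H[j, i] == 1, combine the messages
    M[j, i'] of all *other* non-zero positions i' of row j through the
    tanh product, as in the formula above."""
    E = np.zeros_like(M, dtype=float)
    for j, i in zip(*np.nonzero(H)):
        cols = np.nonzero(H[j])[0]
        prod = np.prod(np.tanh(M[j, cols[cols != i]] / 2.0))
        prod = np.clip(prod, -0.999999, 0.999999)    # guard against log(0) and division by zero
        E[j, i] = np.log((1.0 + prod) / (1.0 - prod))
    return E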


Check-to-Variable message

At the end of the first iteration the LLRs from the channel should be updated. For this purpose we
sum up the information over the rows of the matrix $\mathbf{E}$, column by column:

$$L_i = r_i + \sum_{j \in A_i} E_{j,i}$$

where $r_i$ is the channel LLR of the $i$-th bit and $A_i$ is the set of non-zero elements (with respect to the
parity-check matrix) in the $i$-th column.

NOTE #3:

The summation over all of the column elements can be applied with the
same mathematical result, since the zero elements do not contribute to
the addition.
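
A small sketch of this column-wise update (again with my own names; r stands for the vector of channel LLRs, E for the matrix of messages computed in the previous step):

def total_llr(E, r):
    """Updated LLR of every variable node: the channel LLR plus the sum of the
    messages arriving along its column. Summing the whole column is enough,
    because the zero entries of E contribute nothing (cf. NOTE #3)."""
    return r + E.sum(axis=0)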


The same problems may occur here.

After that the second iteration should follow. Ideally, we have to repeat the iterations while the syndrome
$\mathbf{s} = \mathbf{H}\hat{\mathbf{c}}^T$ is a non-zero vector (in practice, up to some maximum number of iterations).

In [7]: import numpy as np

        class SPA:
            """ This class can apply SPA algorithm to received LLR vector """

            # ... (the remainder of the class is not visible in this capture) ...

            def __calc_E(self, E, M):
                """ Calculates V2C message """
                # ...

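Since most of the class body is missing here, the following is only a minimal sketch of a sum-product decoding loop assembled from the calc_E and total_llr helpers sketched above - a stand-in, not the author's SPA class:

def spa_decode(H, r, max_iter=50):
    """Sum-product decoding of one received LLR vector r."""
    H = np.asarray(H)
    r = np.asarray(r, dtype=float)
    # initialization: every variable node passes its channel LLR along its edges
    M = H * r                                   # M[j, i] = r[i] wherever H[j, i] == 1
    for _ in range(max_iter):
        E = calc_E(H, M)                        # row-wise tanh-product update
        L = total_llr(E, r)                     # updated LLRs
        c_hat = (L < 0).astype(int)             # hard decision: negative LLR -> bit 1
        if not (H @ c_hat % 2).any():           # zero syndrome -> a valid codeword
            break
        M = H * L - E                           # extrinsic message for the next iteration
    return c_hat, L

The loop stops either when the syndrome becomes the all-zero vector or after max_iter iterations, which matches the stopping rule described above.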
