


ISSN 0032-9460, Problems of Information Transmission, 2008, Vol. 44, No. 3, pp. 214–225. © Pleiades Publishing, Inc., 2008.
Original Russian Text © K.Sh. Zigangirov, A.E. Pusane, D.K. Zigangirov, D.J. Costello, Jr., 2008, published in Problemy Peredachi Informatsii, 2008, Vol. 44, No. 3, pp. 50–62.

CODING THEORY

On the Error-Correcting Capability of LDPC Codes¹

K. Sh. Zigangirovᵃ, A. E. Pusaneᵇ, D. K. Zigangirovᵃ,², and D. J. Costello, Jr.ᵇ
ᵃ Institute for Information Transmission Problems, Moscow
kamil@it.lth.se, zig@iitp.ru
ᵇ University of Notre Dame, Indiana, USA
apusane@nd.edu, costello.2@nd.edu
Received November 22, 2007; in final form, April 30, 2008
Abstract—We consider the ensemble of low-density parity-check (LDPC) codes introduced by Gallager [1]. The Zyablov–Pinsker majority-logic iterative algorithm [2] for decoding LDPC codes is analyzed on the binary symmetric channel. An analytical lower bound on the error-correcting capability τ_max that grows linearly in the code block length is obtained.
DOI: 10.1134/S0032946008030046

1. INTRODUCTION
Low-density parity-check (LDPC) codes were invented by Gallager [1]. The advantage of these
codes is their ability to use relatively low-complexity iterative decoding algorithms. An algorithm
proposed by Gallager [1] for symbolwise decoding of LDPC codes, called the sum-product algorithm,
has been extensively studied in the literature. Its bit error probability has been analyzed, resulting
in a relatively weak upper bound on its performance. As the block length N grows, the bound
decreases as exp[−O(N^γ)], where γ < 1 (see, for example, [3]). The same bound is also valid for
the block error probability.
A different approach was taken in the paper by Zyablov and Pinsker [2]. In that paper, a
binary symmetric channel (BSC) is considered and a majority-logic iterative decoding algorithm
for LDPC codes, different from Gallager’s algorithm [1], is analyzed. (This algorithm is described
in Section 3.) Zyablov and Pinsker proved that this algorithm, having complexity O(N log N ),
corrects O(N ) channel errors. It immediately follows that, for small enough channel error rates,
the block error probability goes to zero exponentially in N , i.e., as exp[−O(N )].
In this paper, we study the same algorithm presented in [2]. For this decoding algorithm, we
prove a new analytical bound on the number of correctable errors as a function of the parameters
of an LDPC code. To derive the bound, we use a combinatorial approach similar to that used
in [4–6] to obtain a lower bound on the minimum distance of LDPC block codes with parity-check
matrices constructed from permutation matrices, to obtain a lower bound on the free distance of
LDPC convolutional codes, and to analyze the iterative decoding performance of LDPC codes on
the binary erasure channel, respectively.

2. BASIC DEFINITIONS AND PROBLEM FORMULATION


By definition, a coder/decoder pair has an error-correcting capability τ_max if the decoder can correct all combinations of τ errors in the received sequence, where τ ≤ τ_max. Throughout this

¹ This work was partially supported by NSF Grants CCR02-05310 and CCF05-15012, NASA Grants NNG05GH73G and NNX07AK53G, and a Graduate Fellowship from the Center for Applied Mathematics, University of Notre Dame.
² Supported in part by the State Contract of the Russian Federation 02.514.11.4025 from 1.05.2007.


paper, we consider the Gallager ensemble [1] of binary LDPC (J, K)-regular codes with parity-
check matrices H of size L × N that have J ones in each column and K ones in each row. The
dimensions L and N are chosen so that N = KM and L = JM , where M > 0 is an integer. Then
the parity-check matrix H can be expressed as a composition of J submatrices H (j) , j = 1, 2, . . . , J:
$$ H = \begin{pmatrix} H^{(1)} \\ H^{(2)} \\ \vdots \\ H^{(J)} \end{pmatrix}. \qquad (1) $$

The M × N submatrices H (j) , j = 1, 2, . . . , J, have a single one in each column and K ones in
each row. The design rate³ of such a code is R = 1 − J/K. We consider the ensemble C(N, J, K)
of codes with parity-check matrices given by (1), where the submatrices H (j) , j = 1, 2, . . . , J, are
chosen randomly, independently of each other, and so that all values of the submatrix H (j) are
equiprobable. We start with a brief analysis of the ensemble.
Lemma 1. Each matrix H^(j), j = 1, 2, ..., J, can take one of $(KM)!/(K!)^M$ values.

Proof. There are $\binom{KM}{K}$ ways to form the first row of H^(j). Then the second row can be chosen in $\binom{K(M-1)}{K}$ different ways, and so on. We conclude that H^(j) can be chosen in
$$ \binom{KM}{K}\binom{K(M-1)}{K}\binom{K(M-2)}{K}\cdots\binom{K}{K} = \frac{(KM)!}{(K!)^M} \qquad (2) $$
different ways. □
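As a sanity check (not part of the paper), the count in Lemma 1 can be verified by brute force for small K and M: each valid H^(j) corresponds to an assignment of the N = KM columns to the M rows with exactly K columns per row.

```python
from itertools import product
from math import factorial

def count_submatrices(K, M):
    """Brute-force count of M x KM binary matrices with exactly one 1 in
    each column and K ones in each row: assign each of the N = K*M columns
    to one of the M rows and keep only the balanced assignments."""
    N = K * M
    return sum(1 for rows in product(range(M), repeat=N)
               if all(rows.count(m) == K for m in range(M)))

def lemma1_formula(K, M):
    """(KM)! / (K!)^M, the count claimed by Lemma 1."""
    return factorial(K * M) // factorial(K) ** M

for K, M in [(2, 2), (3, 2), (2, 3)]:
    assert count_submatrices(K, M) == lemma1_formula(K, M)
```

For instance, K = 2 and M = 2 gives 4!/(2!)² = 6 admissible submatrices.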
By definition, a binary sequence v = (v1 , v2 , . . . , vN ) is a codeword of a code with parity-check
matrix H if and only if it satisfies
vH T = 0. (3)
If a binary codeword v is transmitted over a BSC, at the output of the channel the received
sequence r = (r1 , r2 , . . . , rN ) can be decomposed into a (modulo-2) sum of the transmitted code-
word v and an error sequence e; i.e.,
r = v + e, (4)
where e = (e_1, e_2, ..., e_N). From (3) and (4), the syndrome can be computed as
$$ s = rH^T = eH^T, \qquad (5) $$
where s = (s^(1), s^(2), ..., s^(J)). The M-dimensional vector s^(j), j = 1, 2, ..., J, is a partial syndrome corresponding to the submatrix H^(j), and it satisfies the equation
$$ e\,(H^{(j)})^T = s^{(j)}, \quad j = 1, 2, \ldots, J. \qquad (6) $$
We denote the set of check equations in (6) by S^(j), j = 1, 2, ..., J. Let τ be the Hamming weight of the error vector e; let w^(j), j = 1, 2, ..., J, be the Hamming weight of the partial syndrome vector s^(j); and let w = w^(1) + w^(2) + ... + w^(J) (real addition) be the Hamming weight of the syndrome s. Clearly, w^(j) equals the number of unsatisfied check equations in S^(j), and each of the M equations in S^(j), j = 1, 2, ..., J, includes K symbols.
³ The actual code rate may be slightly greater than the design rate due to the existence of linearly dependent rows in H.

PROBLEMS OF INFORMATION TRANSMISSION Vol. 44 No. 3 2008



In parallel to each partial syndrome vector s^(j), we consider a partial pseudo-syndrome vector $\tilde{s}^{(j)} = (i_1^{(j)}, i_2^{(j)}, \ldots, i_M^{(j)})$, where $i_m^{(j)} \in \{0, 1, \ldots, K\}$ equals the number of nonzero symbols of e included in the mth equation of S^(j), m = 1, 2, ..., M. The vector $\tilde{s} = (\tilde{s}^{(1)}, \tilde{s}^{(2)}, \ldots, \tilde{s}^{(J)})$ is called a pseudo-syndrome.

Now let $\nu_i^{(j)}$ denote the number of parity-check equations in S^(j) that include i nonzero (error) symbols of e. Consider the vector $\nu^{(j)} = (\nu_0^{(j)}, \nu_1^{(j)}, \ldots, \nu_K^{(j)})$, called the constraint composition of S^(j) (or of the partial pseudo-syndrome $\tilde{s}^{(j)}$). In other words, $\nu_i^{(j)}$ equals the number of components of the partial pseudo-syndrome $\tilde{s}^{(j)}$ that have the value i. Below (see (18b)), it is proved that
$$ \tau = \sum_{i=0}^{K} i\nu_i^{(j)}. \qquad (7) $$
The Hamming weight of the partial syndrome s^(j) can be written as
$$ w^{(j)} = \sum_{i\ \mathrm{odd}} \nu_i^{(j)}, \qquad (8) $$
where the summation is over odd i. The weight of the jth partial pseudo-syndrome, defined as the number of nonzero symbols of e included in unsatisfied parity-check equations of S^(j), is given by
$$ \tilde{w}^{(j)} = \sum_{i\ \mathrm{odd}} i\nu_i^{(j)}. \qquad (9) $$
Finally, we define the weight of the pseudo-syndrome as
$$ \tilde{w} = \tilde{w}^{(1)} + \tilde{w}^{(2)} + \ldots + \tilde{w}^{(J)}. \qquad (10) $$
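To make the bookkeeping concrete, here is a small numeric sketch (the submatrix and error sequence are hypothetical, not from the paper) that computes a partial pseudo-syndrome and its constraint composition and checks identities (7)–(9):

```python
import numpy as np

# Hypothetical M = 2, K = 2, N = 4 submatrix: one 1 per column, K ones per row.
Hj = np.array([[1, 1, 0, 0],
               [0, 0, 1, 1]])
e = np.array([1, 1, 1, 0])   # error sequence of weight tau = 3
K = 2

counts = Hj @ e              # partial pseudo-syndrome: i_m = errors in equation m
nu = [int(np.sum(counts == i)) for i in range(K + 1)]   # constraint composition

assert sum(i * nu[i] for i in range(K + 1)) == 3        # eq. (7): recovers tau
s_j = counts % 2                                        # partial syndrome
assert int(s_j.sum()) == sum(nu[i] for i in range(1, K + 1, 2))   # eq. (8)
w_tilde_j = sum(i * nu[i] for i in range(1, K + 1, 2))  # eq. (9)
assert w_tilde_j == 1
```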

3. ANALYSIS OF THE DECODING ALGORITHM


It follows from the code description in the previous section that each symbol of the received sequence r is included in one parity-check equation in each set S^(j), j = 1, 2, ..., J. Thus, in total, each received symbol is included in J check equations. The Zyablov–Pinsker algorithm works as follows. In the first step of decoding, the decoder observes all symbols of the received sequence and marks the symbols for which the number of unsatisfied parity-check equations is greater than ⌊J/2⌋. Then it flips these symbols. In the second step, the decoder operates with this updated sequence as with a received sequence; it again marks all symbols for which the number of unsatisfied parity-check equations is greater than ⌊J/2⌋ and flips them. The decoding stops if there are no symbols for which the number of unsatisfied parity-check equations is greater than ⌊J/2⌋. In this case, the syndrome weight is either zero (complete decoding) or nonzero (failed decoding).

We call the sequence that the decoder currently operates with the current received sequence, and the current error sequence is the modulo-2 sum of a transmitted codeword and the current received sequence.
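The steps above can be sketched in code. This is an illustrative implementation only (the matrix H and the toy parameters are hypothetical; the simultaneous-flip schedule follows the description above):

```python
import numpy as np

def zyablov_pinsker_decode(r, H, J, max_iters=100):
    """Sketch of the Zyablov-Pinsker majority-logic decoder: repeatedly flip
    every symbol that appears in more than floor(J/2) unsatisfied checks."""
    r = r.copy()
    threshold = J // 2
    for _ in range(max_iters):
        unsat = (H @ r) % 2            # indicator of unsatisfied check equations
        votes = H.T @ unsat            # unsatisfied checks touching each symbol
        flip = votes > threshold
        if not flip.any():
            break                      # no flippable symbols: decoding stops
        r[flip] ^= 1
    return r, not ((H @ r) % 2).any()  # True if the syndrome is zero

# Toy (J, K) = (3, 2) code with N = 4: three stacked 2x4 submatrices.
H = np.array([[1, 1, 0, 0], [0, 0, 1, 1],     # H^(1)
              [1, 0, 1, 0], [0, 1, 0, 1],     # H^(2)
              [1, 0, 0, 1], [0, 1, 1, 0]])    # H^(3)
r = np.array([1, 0, 0, 0])                    # all-zero codeword + one error
decoded, ok = zyablov_pinsker_decode(r, H, J=3)
```

In this toy run the erroneous symbol participates in three unsatisfied checks (more than ⌊3/2⌋ = 1), so it is flipped and decoding completes with a zero syndrome.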
Note that the decoder can observe the received sequence r, the syndrome s, and its weight w. At the same time, the error sequence e, its weight τ, the pseudo-syndrome $\tilde{s} = (\tilde{s}^{(1)}, \tilde{s}^{(2)}, \ldots, \tilde{s}^{(J)})$, and the weight of the pseudo-syndrome $\tilde{w}$ are "unobservable" by the decoder. Although we can observe how the syndrome weight w changes in the course of decoding, we cannot estimate the behavior of τ and $\tilde{w}$. Received symbols that are included in ⌊J/2⌋ or fewer unsatisfied check equations are called unflippable; otherwise, they are called flippable. Received sequences for which all erroneously received symbols are unflippable are called trapping sets.


The parameter τ_trap is called the trapping weight of an LDPC code if there are no trapping sets of weight τ < τ_trap and there is at least one trapping set of weight τ = τ_trap. As a generalization of the definition of the trapping weight, we introduce the trapping weight of order γ. By definition, τ_trap(γ) is called the trapping weight of order γ of an LDPC code if, for any τ < τ_trap(γ), the number of unflippable erroneous symbols in every error sequence is strictly less than (1 − γ)τ, and, for τ = τ_trap(γ), there exists at least one error sequence with (1 − γ)τ unflippable erroneous symbols. Obviously, τ_trap = τ_trap(0).
Our analysis of the Zyablov–Pinsker decoding algorithm is based on lower bounding τ_trap(γ). We will prove the existence of a code for which the lower bound on τ_trap(γ), the trapping weight of order γ, grows linearly with the block length N if γ is sufficiently small. Using this bound, we can obtain a lower bound on the error-correcting capability τ_max.

It is convenient to consider the normalized error-correcting capability ρ_max = τ_max/N and the normalized trapping weight of order γ, ρ_trap(γ) = τ_trap(γ)/N. Below we formulate two theorems which establish lower bounds $\underline{\rho}_{\mathrm{trap}}(\gamma)$ and $\underline{\rho}_{\max}$ on ρ_trap(γ) and ρ_max, respectively.
First, for a given value of γ, we introduce the following function of the variables λ, μ, and ρ:
$$ G_\gamma(\lambda,\mu,\rho) = g(\lambda,\mu) - \mu\,\frac{j_0 + (J-j_0)\gamma}{J}\,\rho - \lambda\left(1 - \frac{j_0 + (J-j_0)\gamma}{J}\right)\rho - \frac{J-1}{J}\,H(\rho), \qquad (11) $$
where $j_0 = \lfloor J/2 \rfloor$, ρ = τ/N, and
$$ g(\lambda,\mu) = \frac{1}{K}\ln\left[\frac{(1+e^{\mu})^K - (1-e^{\mu})^K}{2} + \frac{(1+e^{\lambda})^K + (1-e^{\lambda})^K}{2}\right], \quad \lambda < 0,\ \mu < 0. \qquad (12) $$
Then we define
$$ F_\gamma(\rho) = \min_{\mu<\lambda<0} G_\gamma(\lambda,\mu,\rho). \qquad (13) $$

Theorem 1. Assume that there exists $\rho = \underline{\rho}_{\mathrm{trap}}(\gamma)$ such that $F_\gamma(\underline{\rho}_{\mathrm{trap}}(\gamma)) < 0$. Then, for N sufficiently large, we have $\rho_{\mathrm{trap}}(\gamma) > \underline{\rho}_{\mathrm{trap}}(\gamma)$ for almost all codes in the ensemble C(N, J, K).

It follows from Theorem 1 that if in a particular decoding step the current error sequence has weight $\tau < \underline{\rho}_{\mathrm{trap}}(\gamma)N$, then for almost all codes the Zyablov–Pinsker algorithm flips more than a fraction γ of erroneous symbols. At the same time, it may also flip some correctly received symbols; i.e., it may introduce errors. If for each decoding step the number of flippable correctly received symbols is strictly less than the number of flippable erroneously received symbols, then the weight of the error sequence decreases at each decoding step and the decoder corrects all errors in O(log N) iterations. We will establish the conditions under which, for each decoding step, the number of flippable correct symbols is strictly less than the number of flippable erroneous symbols and thereby obtain a lower bound $\underline{\rho}_{\max}$ on ρ_max. First, we introduce the following function:
$$ Q_\gamma(\rho) = \frac{1-\rho}{J}\,H\!\left(\frac{\gamma\rho}{1-\rho}\right) + \frac{\gamma\rho}{J}\ln\alpha(\rho), \qquad (14) $$
where
$$ \alpha(\rho) = \binom{J}{j_0+1}\left[(K-1)\rho/(1-\rho)\right]^{j_0+1}. \qquad (15) $$
Then we introduce
$$ U(\rho) = \max_{0<\gamma<1}\,\bigl(F_\gamma(\rho) + Q_\gamma(\rho)\bigr), \qquad (16) $$
where $F_\gamma(\rho)$ is defined in (13).
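The functions (11)–(16) are straightforward to evaluate numerically. The sketch below is not from the paper: it uses a coarse grid search in place of the exact minimization over μ < λ < 0, so it only approximates F_γ and U (the grid minimum upper-bounds the true minimum); entropies are in nats.

```python
from math import log, exp, comb

def H2(x):
    """Binary entropy function in nats."""
    return 0.0 if x <= 0.0 or x >= 1.0 else -x * log(x) - (1 - x) * log(1 - x)

def g(lam, mu, K):                                    # eq. (12)
    odd = ((1 + exp(mu)) ** K - (1 - exp(mu)) ** K) / 2
    even = ((1 + exp(lam)) ** K + (1 - exp(lam)) ** K) / 2
    return log(odd + even) / K

def G(lam, mu, rho, gamma, J, K):                     # eq. (11)
    j0 = J // 2
    frac = (j0 + (J - j0) * gamma) / J
    return (g(lam, mu, K) - mu * frac * rho
            - lam * (1 - frac) * rho - (J - 1) / J * H2(rho))

def F(rho, gamma, J, K, steps=40):                    # eq. (13), grid search
    return min(G(-5.0 * a / steps, -5.0 * a / steps - 5.0 * b / steps,
                 rho, gamma, J, K)
               for a in range(1, steps) for b in range(1, steps))

def Q(rho, gamma, J, K):                              # eqs. (14)-(15)
    alpha = comb(J, J // 2 + 1) * ((K - 1) * rho / (1 - rho)) ** (J // 2 + 1)
    return ((1 - rho) / J * H2(gamma * rho / (1 - rho))
            + gamma * rho / J * log(alpha))

def U(rho, J, K, steps=20):                           # eq. (16), grid over gamma
    return max(F(rho, g_, J, K) + Q(rho, g_, J, K)
               for g_ in (a / steps for a in range(1, steps)))

# By Theorem 2 below, any rho with U(rho) < 0 certifies a linear-in-N
# error-correcting capability for almost all codes in the ensemble.
```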


The following theorem establishes a lower bound $\underline{\rho}_{\max}$ on the normalized error-correcting capability ρ_max.


Theorem 2. Assume that there exists $\rho = \underline{\rho}_{\max}$ such that $U(\underline{\rho}_{\max}) < 0$. Then, for N sufficiently large, we have $\rho_{\max} > \underline{\rho}_{\max}$ for almost all codes in the ensemble C(N, J, K).
The following two sections contain lemmas which imply Theorems 1 and 2.

4. PROOF OF THEOREM 1

Lemma 2. If the weight $\tilde{w}$ of a pseudo-syndrome is greater than $(j_0 + (J-j_0)\gamma)\tau$, where $j_0 = \lfloor J/2 \rfloor$, then the Zyablov–Pinsker algorithm flips more than a fraction γ of erroneous symbols in the current error sequence of weight τ.

Proof. The Zyablov–Pinsker algorithm flips γτ symbols of the error sequence if the number of unsatisfied parity-check equations containing each of these symbols exceeds j_0 (but, of course, does not exceed J) and at most j_0 parity-check equations are unsatisfied for each of the remaining (1 − γ)τ erroneous symbols. In this case, the weight of the pseudo-syndrome is not greater than
$$ (1-\gamma)j_0\tau + \gamma J\tau = j_0\tau + (J-j_0)\gamma\tau, \qquad (17) $$
so if $\tilde{w}$ exceeds this value, the Zyablov–Pinsker algorithm must flip more than γτ erroneous symbols. □
Lemma 3. The components of a constraint composition vector $\nu^{(j)} = (\nu_0^{(j)}, \nu_1^{(j)}, \ldots, \nu_K^{(j)})$ satisfy the equalities
$$ \sum_{i=0}^{K} \nu_i^{(j)} = M, \qquad (18a) $$
$$ \sum_{i=0}^{K} i\nu_i^{(j)} = \tau. \qquad (18b) $$
Proof. The set of M parity-check equations S^(j) consists of K + 1 subsets such that the ith subset, i = 0, 1, ..., K, consists of $\nu_i^{(j)}$ equations, each of which includes i erroneous symbols. This proves (18a). The sum in (18b) counts the number of erroneous symbols in all check equations of S^(j). Since each symbol enters exactly one equation in S^(j), the sum equals the total number of erroneous symbols in the received sequence. □
Lemma 4. Given a constraint composition $\nu^{(j)} = (\nu_0^{(j)}, \nu_1^{(j)}, \ldots, \nu_K^{(j)})$, there are
$$ \frac{M!}{\nu_0^{(j)}!\,\nu_1^{(j)}!\,\ldots\,\nu_K^{(j)}!} \qquad (19) $$
ways to choose a partial pseudo-syndrome $\tilde{s}^{(j)}$.

Proof. The lemma immediately follows from the definitions of the constraint composition $\nu^{(j)} = (\nu_0^{(j)}, \nu_1^{(j)}, \ldots, \nu_K^{(j)})$ and the partial pseudo-syndrome $\tilde{s}^{(j)} = (i_1^{(j)}, i_2^{(j)}, \ldots, i_M^{(j)})$, where $i_m^{(j)} \in \{0, 1, \ldots, K\}$. □
Lemma 5. For a given error sequence e of weight τ and a given partial pseudo-syndrome $\tilde{s}^{(j)}$ with constraint composition $\nu^{(j)} = (\nu_0^{(j)}, \nu_1^{(j)}, \ldots, \nu_K^{(j)})$, the number of ways to choose the parity-check submatrix H^(j) is given by
$$ \frac{\tau!\,(N-\tau)!}{\prod_{i=0}^{K} \left[\,i!\,(K-i)!\,\right]^{\nu_i^{(j)}}}. \qquad (20) $$

Proof. The proof is similar to that of Lemma 1. Consider a partial pseudo-syndrome $\tilde{s}^{(j)} = (i_1^{(j)}, i_2^{(j)}, \ldots, i_M^{(j)})$ with constraint composition $\nu^{(j)}$. There are $\binom{\tau}{i_1^{(j)}}\binom{N-\tau}{K-i_1^{(j)}}$ ways to form the first row of H^(j). Then the second row can be chosen in $\binom{\tau-i_1^{(j)}}{i_2^{(j)}}\binom{N-\tau-K+i_1^{(j)}}{K-i_2^{(j)}}$ different ways, and so on. We conclude that H^(j) can be chosen in
$$ \binom{\tau}{i_1^{(j)}}\binom{N-\tau}{K-i_1^{(j)}}\binom{\tau-i_1^{(j)}}{i_2^{(j)}}\binom{N-\tau-K+i_1^{(j)}}{K-i_2^{(j)}}\cdots\binom{i_M^{(j)}}{i_M^{(j)}}\binom{K-i_M^{(j)}}{K-i_M^{(j)}} = \frac{\tau!\,(N-\tau)!}{\prod_{i=0}^{K}\left[\,i!\,(K-i)!\,\right]^{\nu_i^{(j)}}} \qquad (21) $$
different ways. □
Let $f_\tau(\nu_0^{(j)}, \nu_1^{(j)}, \ldots, \nu_K^{(j)})$ be the probability of choosing the matrix H^(j) for a given error vector e of weight τ and constraint composition $\nu^{(j)} = (\nu_0^{(j)}, \nu_1^{(j)}, \ldots, \nu_K^{(j)})$. Using Lemmas 1, 4, and 5, we obtain
$$ f_\tau(\nu_0^{(j)}, \nu_1^{(j)}, \ldots, \nu_K^{(j)}) = \frac{\dfrac{M!}{\nu_0^{(j)}!\,\nu_1^{(j)}!\,\ldots\,\nu_K^{(j)}!} \cdot \dfrac{\tau!\,(N-\tau)!}{\prod_{i=0}^{K}\left[\,i!\,(K-i)!\,\right]^{\nu_i^{(j)}}}}{\dfrac{N!}{(K!)^M}} = \frac{\dbinom{M}{\nu_0^{(j)},\,\nu_1^{(j)},\,\ldots,\,\nu_K^{(j)}} \displaystyle\prod_{i=0}^{K}\left[\frac{K!}{i!\,(K-i)!}\right]^{\nu_i^{(j)}}}{\dfrac{N!}{\tau!\,(N-\tau)!}}. \qquad (22) $$
The denominator of the first fraction on the right-hand side of (22) is the number of ways to choose the matrix H^(j) (see Lemma 1). The first factor in the numerator is the number of ways to choose the partial pseudo-syndrome $\tilde{s}^{(j)}$ with constraint composition $\nu^{(j)}$ (see Lemma 4). The second factor in the numerator is the number of ways to choose the submatrix H^(j) if the error sequence e of weight τ and the pseudo-syndrome $\tilde{s}^{(j)}$ are given (see Lemma 5).
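As a consistency check (not in the paper), the probabilities (22), summed over all constraint compositions with $\sum_i \nu_i^{(j)} = M$ and $\sum_i i\nu_i^{(j)} = \tau$, must equal one; this follows from the identity $\sum_{\nu} \binom{M}{\nu_0,\ldots,\nu_K}\prod_i \binom{K}{i}^{\nu_i} = \binom{N}{\tau}$. A small sketch:

```python
from math import comb, factorial, prod
from itertools import product

def f_tau(nu, M, K, N, tau):
    """Probability (22) of the constraint composition nu = (nu_0, ..., nu_K)."""
    multinom = factorial(M)
    for n in nu:
        multinom //= factorial(n)
    return (multinom * prod(comb(K, i) ** nu[i] for i in range(K + 1))
            / comb(N, tau))

K, M = 2, 3
N, tau = K * M, 2
total = sum(f_tau(nu, M, K, N, tau)
            for nu in product(range(M + 1), repeat=K + 1)
            if sum(nu) == M and sum(i * n for i, n in enumerate(nu)) == tau)
assert abs(total - 1.0) < 1e-12
```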
Now consider the code ensemble C(N, J, K) and let $P_\tau(\tilde{w})$ be the probability that an error sequence e of weight τ has pseudo-syndrome weight $\tilde{w}$. Correspondingly, let $P_\tau(\tilde{w}^{(j)})$ be the probability that this error sequence e has jth partial pseudo-syndrome weight $\tilde{w}^{(j)}$. From (10), it follows that
$$ P_\tau(\tilde{w}) = \sum_{\tilde{w}^{(1)}+\tilde{w}^{(2)}+\ldots+\tilde{w}^{(J)}=\tilde{w}} P_\tau(\tilde{w}^{(1)})\,P_\tau(\tilde{w}^{(2)})\cdots P_\tau(\tilde{w}^{(J)}). \qquad (23) $$
The mathematical expectation $M_\tau(\tilde{w})$ of the number of weight-τ error sequences e with pseudo-syndrome weight $\tilde{w}$ is
$$ M_\tau(\tilde{w}) = \binom{N}{\tau} P_\tau(\tilde{w}). \qquad (24) $$
The generating function of the mathematical expectation $M_\tau(\tilde{w})$ is defined as
$$ \Phi_\tau(s) = \sum_{\tilde{w}} e^{s\tilde{w}} M_\tau(\tilde{w}) = \binom{N}{\tau} \sum_{\tilde{w}} e^{s\tilde{w}} P_\tau(\tilde{w}) = \binom{N}{\tau}\,[\varphi_\tau(s)]^J, $$
where $\varphi_\tau(s)$, the generating function of the probability $P_\tau(\tilde{w}^{(j)})$, is given by
$$ \varphi_\tau(s) = \sum_{\tilde{w}^{(j)}=0}^{\tau} e^{s\tilde{w}^{(j)}} P_\tau(\tilde{w}^{(j)}) \qquad (25) $$
and does not depend on j.


Lemma 6. For the ensemble C(N, J, K), the mathematical expectation of the number of error sequences of weight τ having pseudo-syndrome weight $\tilde{w} \le \tilde{w}_0$ is upper bounded by the following inequality:
$$ \sum_{\tilde{w} \le \tilde{w}_0} M_\tau(\tilde{w}) \le \binom{N}{\tau}^{1-J} e^{-\lambda J\tau - (\mu-\lambda)\tilde{w}_0 + g(\lambda,\mu)JN}, \qquad (26) $$
where g(λ, μ) is defined in (12), μ < λ < 0.
Proof. From the definition of the generating function Φ_τ(s), it follows that
$$ \sum_{\tilde{w} \le \tilde{w}_0} M_\tau(\tilde{w}) \le \sum_{\forall\tilde{w}} e^{s(\tilde{w}-\tilde{w}_0)} M_\tau(\tilde{w}) = \Phi_\tau(s)\,e^{-s\tilde{w}_0} = \binom{N}{\tau}[\varphi_\tau(s)]^J e^{-s\tilde{w}_0}, \qquad (27) $$
where s < 0. Then from (22) and (25) we obtain an upper bound on the generating function φ_τ(s):
$$ \varphi_\tau(s) = \sum_{\tilde{w}^{(j)}=0}^{\tau} e^{s\tilde{w}^{(j)}} P_\tau(\tilde{w}^{(j)}) = \sum_{\nu^{(j)}:\ \sum_{i=0}^{K} i\nu_i^{(j)}=\tau} e^{s\sum_{i\ \mathrm{odd}} i\nu_i^{(j)}} f_\tau(\nu_0^{(j)}, \nu_1^{(j)}, \ldots, \nu_K^{(j)}) \qquad (28) $$
$$ \le \sum_{\forall\nu^{(j)}} e^{s\sum_{i\ \mathrm{odd}} i\nu_i^{(j)}}\, e^{\lambda\left(\sum_{i=0}^{K} i\nu_i^{(j)} - \tau\right)} f_\tau(\nu_0^{(j)}, \nu_1^{(j)}, \ldots, \nu_K^{(j)}) \qquad (29) $$
$$ = e^{-\lambda\tau} \sum_{\forall\nu^{(j)}} \frac{\dfrac{M!}{\nu_0^{(j)}!\,\nu_1^{(j)}!\,\ldots\,\nu_K^{(j)}!} \displaystyle\prod_{i\ \mathrm{odd}}\left[\frac{K!\,e^{(s+\lambda)i}}{i!\,(K-i)!}\right]^{\nu_i^{(j)}} \prod_{i\ \mathrm{even}}\left[\frac{K!\,e^{\lambda i}}{i!\,(K-i)!}\right]^{\nu_i^{(j)}}}{\dfrac{N!}{\tau!\,(N-\tau)!}} $$
$$ = e^{-\lambda\tau}\binom{N}{\tau}^{-1}\left[\frac{(1+e^{s+\lambda})^K - (1-e^{s+\lambda})^K}{2} + \frac{(1+e^{\lambda})^K + (1-e^{\lambda})^K}{2}\right]^M = e^{-\lambda\tau}\binom{N}{\tau}^{-1} e^{g(\lambda,\mu)N}, \qquad (30) $$
where g(λ, μ) is defined in (12), μ = λ + s, and μ < λ < 0. To pass from (28) to (29), we first multiply the summands by $e^{\lambda(\sum_{i=0}^{K} i\nu_i^{(j)} - \tau)}$, which does not change the sum since the summation is over constraint compositions ν^(j) satisfying the condition $\sum_{i=0}^{K} i\nu_i^{(j)} = \tau$. Then we replace the summation over ν^(j) satisfying this condition by summation over all ν^(j), which can only increase the sum. Finally, (26) follows from (27)–(30). □
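The passage from the multinomial sum to the closed form in (30) (and hence the form of g(λ, μ) in (12)) rests on the standard odd/even binomial splits; a quick numerical check (not in the paper):

```python
from math import comb

def odd_sum(K, x):
    """Sum over odd i of C(K, i) * x^i."""
    return sum(comb(K, i) * x ** i for i in range(1, K + 1, 2))

def even_sum(K, x):
    """Sum over even i of C(K, i) * x^i."""
    return sum(comb(K, i) * x ** i for i in range(0, K + 1, 2))

for K in (2, 5, 8):
    for x in (0.25, 0.5, 0.9):
        assert abs(odd_sum(K, x) - ((1 + x) ** K - (1 - x) ** K) / 2) < 1e-9
        assert abs(even_sum(K, x) - ((1 + x) ** K + (1 - x) ** K) / 2) < 1e-9
```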
The next lemma directly follows from Lemma 6.

Lemma 7. We have the inequality
$$ \sum_{\tilde{w} \le \tilde{w}_0} M_\tau(\tilde{w}) \le \exp\left[MJK\,G_\gamma(\lambda,\mu,\rho) + o(M)\right], \qquad (31) $$
where $G_\gamma(\lambda,\mu,\rho)$ is defined in (11), μ < λ < 0, ρ = τ/N, $\tilde{w}_0 = (j_0 + (J-j_0)\gamma)\tau$, and $j_0 = \lfloor J/2 \rfloor$.

Finally, if we choose λ and μ that minimize $G_\gamma(\lambda,\mu,\rho)$ and replace $G_\gamma(\lambda,\mu,\rho)$ by $F_\gamma(\rho)$ in the exponent on the right-hand side of (31), we obtain the following lemma.


Lemma 8. We have the inequality
$$ \sum_{\tilde{w} \le \tilde{w}_0} M_\tau(\tilde{w}) \le \exp\left[MJK\,F_\gamma(\rho) + o(M)\right], \qquad (32) $$
where $F_\gamma(\rho)$ is defined in (13), ρ = τ/N, $\tilde{w}_0 = (j_0 + (J-j_0)\gamma)\tau$, and $j_0 = \lfloor J/2 \rfloor$.

If $F_\gamma(\rho) < 0$, then by choosing N sufficiently large we can make the right-hand side of (32) arbitrarily small; i.e., in the ensemble C(N, J, K), the mathematical expectation of the number of error sequences of weight τ having pseudo-syndrome weight $\tilde{w} \le \tilde{w}_0$ goes to zero as N goes to infinity. This proves Theorem 1.

5. PROOF OF THEOREM 2

Lemma 9. Consider an arbitrary received sequence r and the corresponding error sequence e of weight τ = ρN. Let $p_\tau(k)$ denote the probability that the number of flippable erroneous symbols equals k, so that $\sum_{k \le \gamma\tau} p_\tau(k)$ denotes the probability that the number of flippable erroneous symbols does not exceed γτ. Then
$$ \sum_{k \le \gamma\tau} p_\tau(k) \le \exp\left[MJK(F_\gamma(\rho) - H(\rho)/J) + o(M)\right], \qquad (33) $$
where $F_\gamma(\rho)$ is defined in (13), μ < λ < 0, and ρ = τ/N.


Proof. The proof is analogous to the proof of Lemma 7 and is based on Lemmas 2–5. From Lemma 2 it follows that $\sum_{k \le \gamma\tau} p_\tau(k)$ is upper bounded by the probability $\sum_{\tilde{w} \le \tilde{w}_0} P_\tau(\tilde{w})$ that the pseudo-syndrome weight does not exceed $\tilde{w}_0 = (j_0 + (J-j_0)\gamma)\tau$. In turn, the probability that the pseudo-syndrome weight $\tilde{w}$ does not exceed $\tilde{w}_0$ is upper bounded by (cf. (26))
$$ \sum_{\tilde{w} \le \tilde{w}_0} P_\tau(\tilde{w}) \le [\varphi_\tau(s)]^J e^{-s\tilde{w}_0} \le \binom{N}{\tau}^{-J} e^{-J\lambda\tau}\, e^{g(\lambda,\mu)JN}\, e^{-s\tilde{w}_0} \qquad (34) $$
$$ = \exp\left[MJK(G_\gamma(\lambda,\mu,\rho) - H(\rho)/J) + o(M)\right]. \qquad (35) $$
Choosing λ and μ to minimize $G_\gamma(\lambda,\mu,\rho)$ and replacing $G_\gamma(\lambda,\mu,\rho)$ by $F_\gamma(\rho)$, we obtain (33). □
Now fix the received sequence r and consider the corresponding sequence of correctly received symbols c of length $N' = N - \tau = (1-\rho)N$. The number of flippable symbols in c is denoted by ℓ; we enumerate these symbols by indices i, i = 1, 2, ..., ℓ. There are $\binom{N'}{\ell}$ possible sequences of correctly received symbols with ℓ flippable symbols. For the ith flippable symbol in c, there are $\binom{J}{j_0+1}$ possible collections of $j_0+1$ sets of parity-check equations $\mathcal{S}_i = (S_{i,1}, S_{i,2}, \ldots, S_{i,j_0+1})$, $S_{i,j} \in \{S^{(1)}, S^{(2)}, \ldots, S^{(J)}\}$, j = 1, 2, ..., j_0+1, $S_{i,j} \ne S_{i,j'}$, $j \ne j'$, such that the equations contained in the sets $S_{i,1}, S_{i,2}, \ldots, S_{i,j_0+1}$ are not all satisfied. Also, for each possible sequence of correctly received symbols with ℓ flippable symbols, there are $\binom{J}{j_0+1}^{\ell}$ such collections of sets $\{\mathcal{S}_i\}$.

Now we denote by $q_\tau(\ell \mid k)$ the conditional probability that the number of flippable correct symbols in the sequence c equals ℓ, given that the number of flippable erroneous symbols in the sequence e is k. This conditional probability depends only on the weight of the sequence e and does not depend on the configuration of e and c.


Lemma 10. The conditional probability $q_\tau(\ell \mid k)$ is upper bounded by
$$ q_\tau(\ell \mid k) \le \binom{N'}{\ell}\,[\alpha(\rho)]^{\ell}, \qquad (36) $$
where the right-hand side does not depend on k, and α(ρ) is defined in (15).
Proof. First, we choose a set of ℓ positions corresponding to flippable correctly received symbols. This can be done in $\binom{N'}{\ell}$ ways. For each flippable correctly received symbol, say the ith, we then choose one of the $\binom{J}{j_0+1}$ collections of $j_0+1$ sets of parity-check equations $\mathcal{S}_i$. For the ensemble C(N, J, K), we then bound the fraction of codes for which the flippable correctly received symbols in the chosen positions are included in unsatisfied parity-check equations corresponding to the chosen collections of sets $\{\mathcal{S}_i,\ i = 1, 2, \ldots, \ell\}$. By $\ell^{(j)}$, j = 1, 2, ..., J, we denote the number of flippable correctly received symbols included in unsatisfied parity-check equations of the set S^(j). Clearly, we have
$$ \ell^{(1)} + \ell^{(2)} + \ldots + \ell^{(J)} = (j_0+1)\ell. \qquad (37) $$
Next, consider an arbitrary set of parity-check equations S^(j), j = 1, 2, ..., J. The number of unsatisfied parity-check equations in this set does not exceed τ, the number of correctly received symbols that are included in unsatisfied parity-check equations does not exceed L = (K−1)τ, and the total number of positions occupied by correctly received symbols equals N'. Then, independently of the number k of flippable erroneously received symbols, the probability that $\ell^{(j)}$ flippable correctly received symbols are included in unsatisfied parity-check equations of the set S^(j) is upper bounded by
$$ \frac{L(L-1)\cdots(L-\ell^{(j)}+1)}{N'(N'-1)\cdots(N'-\ell^{(j)}+1)} < \left[(K-1)\rho/(1-\rho)\right]^{\ell^{(j)}}. \qquad (38) $$
Now it follows from (37) and (38) that the probability that all ℓ flippable correctly received symbols are included in unsatisfied parity-check equations of the corresponding sets S^(j), j = 1, 2, ..., J, is upper bounded by
$$ \left[(K-1)\rho/(1-\rho)\right]^{(j_0+1)\ell}. \qquad (39) $$
Since a set of ℓ flippable correctly received symbols can be chosen among the N' correctly received symbols in $\binom{N'}{\ell}$ ways and a collection of $j_0+1$ sets of unsatisfied parity-check equations can be chosen among the J sets of parity-check equations in $\binom{J}{j_0+1}$ ways for each of them, we can upper bound the probability $q_\tau(\ell \mid k)$ by
$$ \binom{N'}{\ell}\binom{J}{j_0+1}^{\ell}\left[(K-1)\rho/(1-\rho)\right]^{(j_0+1)\ell}. \qquad (40) $$
Finally, (36) follows from (40). □
Inequality (36) implies
$$ q_\tau(\ell \mid k) \le \exp\left[MJK\,Q_\gamma(\rho) + o(M)\right], \qquad (41) $$
where γ = ℓ/τ.
Lemma 11. For an arbitrary received sequence r of length N with τ = ρN erroneous symbols, the probability $\sum_{0<\ell\le N'} \sum_{k\le\ell} q_\tau(\ell \mid k)\,p_\tau(k)$ that the number of flippable erroneously received symbols does not exceed the number of flippable correctly received symbols is upper bounded by
$$ \sum_{0<\ell\le N'} \sum_{k\le\ell} q_\tau(\ell \mid k)\,p_\tau(k) \le \exp\left[MJK(U(\rho) - H(\rho)/J) + o(M)\right], \qquad (42) $$
where U(ρ) is defined in (16).


Proof. From (33) and (41), it follows that
$$ \sum_{0<\ell\le N'} \sum_{k\le\ell} q_\tau(\ell \mid k)\,p_\tau(k) \le \sum_{0<\ell\le N'} \left(\max_k q_\tau(\ell \mid k)\right) \sum_{k\le\ell} p_\tau(k) $$
$$ \le \exp\left[MJK\left(\max_\gamma\,(F_\gamma(\rho) + Q_\gamma(\rho)) - H(\rho)/J\right) + o(M)\right]. \qquad (43) $$
Then we obtain (42) by using (16) and (43). □


Lemma 12. In the ensemble C(N, J, K), the mathematical expectation of the number of received sequences with τ erroneous symbols for which the number of flippable erroneously received symbols does not exceed the number of flippable correctly received symbols is upper bounded by
$$ \exp\left[MJK\,U(\rho) + o(M)\right], \qquad (44) $$
where U(ρ) is defined in (16).

Proof. The total number of error sequences of weight τ is $\binom{N}{\tau} = \exp[MJK\,H(\rho)/J + o(M)]$, since N = MK. For each of these error sequences, the probability that the number of flippable erroneously received symbols does not exceed the number of flippable correctly received symbols is upper bounded by (42). This proves Lemma 12. □
Finally, Theorem 2 follows from Lemma 12.

We have proved that under the conditions of Theorem 2, the Zyablov–Pinsker decoder corrects at least one error per decoding step. Hence, the total number of decoding steps does not exceed τ, i.e., grows no faster than linearly with N. In fact, as proved in [2], in each decoding step the algorithm corrects a number of errors that grows linearly with N. Thus, the number of decoding steps grows no faster than O(log N), and the total complexity of the algorithm does not exceed O(N log N). To prove a similar result, we have to modify Lemmas 11 and 12 and upper bound the mathematical expectation of the number of received sequences having at most k flippable erroneously received symbols and exactly ℓ flippable correctly received symbols, where ℓ − k = εN. Here ε is an arbitrarily small positive number. Then the formulation of Theorem 2 does not change, since the condition $U(\underline{\rho}_{\max}) < 0$ assumes the existence of a positive number ε such that $U(\underline{\rho}_{\max}) < -\varepsilon$, and therefore we can choose the parameters γ for the functions $F_\gamma(\rho)$ and $Q_\gamma(\rho)$ in equation (16), which defines U(ρ), so that they differ by a small quantity ε.
The table gives values for $\underline{\rho}_{\mathrm{trap}}$ and $\underline{\rho}_{\max}$ as functions of J and K for 6 ≤ J ≤ 11 and 7 ≤ K ≤ 20. For J < 6, both bounds are zero. It is interesting to compare the values of $\underline{\rho}_{\max}$ given in the table with those computed by the Zyablov–Pinsker formulas [2]. The values of $\underline{\rho}_{\max}$ given in the table are sometimes greater than those obtained using the more complicated Zyablov–Pinsker formula, and sometimes smaller. In particular, for J = 9 and K = 10, the table value is $\underline{\rho}_{\max} = 1.29 \times 10^{-3}$, whereas the Zyablov–Pinsker formula gives $\underline{\rho}_{\max} = 8.5 \times 10^{-3}$. On the other hand, for J = 10 and K = 20, the value of $\underline{\rho}_{\max}$ in the table ($5.7 \times 10^{-4}$) is greater than the value ($2 \times 10^{-4}$) computed by the Zyablov–Pinsker formula.

Note that, from Theorem 2, it follows that, if J ≥ 6 and the number of error symbols in the received sequence is not large, then there exists an LDPC code such that the Zyablov–Pinsker algorithm corrects a number of channel errors that grows linearly with N.

6. DISCUSSION AND CONCLUSIONS


In this paper, we derived a lower bound on the error-correcting capability of Gallager’s regular
LDPC codes used with majority-logic iterative decoding on a BSC. In the analysis, we used a
combinatorial technique analogous to Gallager’s probabilistic technique for lower bounding the

minimum distance of regular LDPC codes [1]. This technique is quite robust; it was previously used to analyze the minimum distance of LDPC block codes with parity-check matrices constructed from permutation matrices [4], the free distance of LDPC convolutional codes [5], and the erasure-correcting capability of LDPC block codes [6]. The main advantage of the new bound is its relatively low computational complexity. Finally, we expect that the methods developed in this paper can also be used to analyze more complex iterative decoding algorithms.

Table. Lower bounds on the normalized trapping weight ρ_trap (first value) and the normalized error-correcting capability ρ_max (second value, in parentheses) of (J, K)-regular LDPC codes:

K = 7:  J = 6: 2.40×10^−4 (2.40×10^−5)
K = 8:  J = 6: 1.50×10^−4 (1.50×10^−5);  J = 7: 1.40×10^−3 (5.311×10^−4)
K = 9:  J = 6: 1.00×10^−5 (9.50×10^−6);  J = 7: 1.10×10^−3 (3.77×10^−4);  J = 8: 6.68×10^−4 (6.41×10^−4)
K = 10: J = 6: 7.0×10^−6 (6.30×10^−6);  J = 7: 8.75×10^−4 (2.99×10^−4);  J = 8: 5.29×10^−4 (5.03×10^−4);  J = 9: 3.10×10^−3 (0.00129)
K = 11: J = 6: 5.50×10^−6 (5.25×10^−6);  J = 7: 7.03×10^−4 (2.41×10^−4);  J = 8: 4.26×10^−4 (4.11×10^−4);  J = 9: 2.624×10^−3 (0.00108);  J = 10: 1.842×10^−3 (0.001626)
K = 12: J = 6: 4.1×10^−6 (3.9460×10^−6);  J = 7: 5.85×10^−4 (2.00×10^−4);  J = 8: 3.53×10^−4 (3.38×10^−4);  J = 9: 2.204×10^−3 (9.15×10^−4);  J = 10: 1.531×10^−3 (0.001394);  J = 11: 0.0044 (0.001999)
K = 13: J = 6: 2.90×10^−6 (3.04×10^−6);  J = 7: 4.88×10^−4 (1.67×10^−4);  J = 8: 2.95×10^−4 (2.87×10^−4);  J = 9: 1.913×10^−3 (8.01×10^−4);  J = 10: 1.385×10^−3 (0.001208);  J = 11: 0.0039 (0.001751)
K = 14: J = 6: 2.42×10^−6 (2.41×10^−6);  J = 7: 4.14×10^−4 (1.43×10^−4);  J = 8: 2.54×10^−4 (2.45×10^−4);  J = 9: 1.679×10^−3 (7.00×10^−4);  J = 10: 0.001214 (0.001063);  J = 11: 0.0034 (0.001555)
K = 15: J = 6: 1.9×10^−6 (1.90×10^−6);  J = 7: 3.58×10^−4 (1.23×10^−4);  J = 8: 2.18×10^−4 (2.12×10^−4);  J = 9: 1.482×10^−3 (6.18×10^−4);  J = 10: 0.001072 (9.40×10^−4);  J = 11: 0.030 (0.001393)
K = 16: J = 6: 1.565×10^−6 (1.56×10^−6);  J = 7: 3.12×10^−4 (1.07×10^−4);  J = 8: 1.90×10^−4 (1.84×10^−4);  J = 9: 1.317×10^−3 (5.50×10^−4);  J = 10: 9.51×10^−4 (8.39×10^−4);  J = 11: 0.0027 (0.001256)
K = 17: J = 6: 1.295×10^−6 (1.28×10^−6);  J = 7: 2.73×10^−4 (9.40×10^−5);  J = 8: 1.67×10^−4 (1.63×10^−4);  J = 9: 1.177×10^−3 (4.94×10^−4);  J = 10: 8.53×10^−4 (7.55×10^−4);  J = 11: 0.0025 (0.001137)
K = 18: J = 6: 1.065×10^−6 (1.0630×10^−6);  J = 7: 2.42×10^−4 (8.30×10^−5);  J = 8: 1.48×10^−4 (1.44×10^−4);  J = 9: 1.069×10^−3 (4.46×10^−4);  J = 10: 7.67×10^−4 (6.83×10^−4);  J = 11: 0.0022 (0.001037)
K = 19: J = 6: 9.35×10^−7 (8.97×10^−7);  J = 7: 2.16×10^−4 (7.40×10^−5);  J = 8: 1.31×10^−4 (1.28×10^−4);  J = 9: 9.74×10^−4 (4.05×10^−4);  J = 10: 7.00×10^−4 (6.20×10^−4);  J = 11: 0.0021 (9.51×10^−4)
K = 20: J = 6: 7.75×10^−6 (7.60×10^−7);  J = 7: 1.95×10^−4 (6.60×10^−5);  J = 8: 1.17×10^−4 (1.14×10^−4);  J = 9: 8.88×10^−4 (0.00032);  J = 10: 6.41×10^−4 (5.7000×10^−4);  J = 11: 0.0019 (8.77×10^−4)

REFERENCES
1. Gallager, R.G., Low-Density Parity-Check Codes, Cambridge: MIT Press, 1963. Translated under the
title Kody s maloi plotnost’yu proverok na chetnost’, Moscow: Mir, 1966.

2. Zyablov, V.V. and Pinsker, M.S., Estimation of the Error-Correction Complexity for Gallager Low-
Density Codes, Probl. Peredachi Inf., 1975, vol. 11, no. 1, pp. 23–36 [Probl. Inf. Trans. (Engl. Transl.),
1975, vol. 11, no. 1, pp. 18–28].

3. Lentmaier, M., Truhachev, D.V., Zigangirov, K.Sh., and Costello, D.J., Jr., An Analysis of the Block
Error Probability Performance of Iterative Decoding, IEEE Trans. Inform. Theory, 2005, vol. 51, no. 11,
pp. 3834–3855.


4. Sridharan, A., Lentmaier, M., Truhachev, D.V., Costello, D.J., Jr., and Zigangirov, K.Sh., On the Mini-
mum Distance of Low-Density Parity-Check Codes with Parity-Check Matrices Constructed from Permu-
tation Matrices, Probl. Peredachi Inf., 2005, vol. 41, no. 1, pp. 39–52 [Probl. Inf. Trans. (Engl. Transl.),
2005, vol. 41, no. 1, pp. 33–44].
5. Sridharan, A., Truhachev, D., Lentmaier, M., Costello, D.J., Jr., and Zigangirov, K.Sh., Distance Bounds
for an Ensemble of LDPC Convolutional Codes, IEEE Trans. Inform. Theory, 2007, vol. 53, no. 12,
pp. 4537–4555.
6. Zigangirov, D.K. and Zigangirov, K.Sh., Decoding of Low-Density Codes with Parity-Check Matrices Composed of Permutation Matrices in an Erasure Channel, Probl. Peredachi Inf., 2006, vol. 42, no. 2, pp. 44–52 [Probl. Inf. Trans. (Engl. Transl.), 2006, vol. 42, no. 2, pp. 106–113].
