
IEEE COMMUNICATIONS LETTERS, VOL. 18, NO. 7, JULY 2014

Sequential Decoding of Polar Codes


V. Miloslavskaya, Student Member, IEEE, and P. Trifonov, Member, IEEE

Abstract—The problem of efficient decoding of polar codes is considered. A low-complexity sequential soft-decision decoding algorithm is proposed. It is based on the successive cancellation approach, and it employs most likely codeword probability estimates for selection of a path within the code tree to be extended.

Index Terms—Polar codes, sequential decoding, successive cancellation.

Manuscript received December 19, 2013; accepted April 24, 2014. Date of publication May 13, 2014; date of current version July 8, 2014. This work was supported by the Russian Foundation for Basic Research under Grant 12-01-00365-a. The associate editor coordinating the review of this paper and approving it for publication was M. F. Flanagan. The authors are with the Department of Distributed Computing and Networking, Saint-Petersburg State Polytechnical University, Saint-Petersburg 194021, Russia (e-mail: veram@dcn.icc.spbstu.ru; petert@dcn.icc.spbstu.ru). Digital Object Identifier 10.1109/LCOMM.2014.2323237

I. INTRODUCTION

POLAR codes were recently shown to be able to achieve the capacity of a wide class of communication channels [1]. However, their performance at moderate length appears to be quite poor under successive cancellation (SC) decoding. This problem was addressed in [2], where a list decoding algorithm for polar codes was introduced. It was shown in [3], [4] that the same performance can be achieved with much lower complexity by employing a stack-based decoding algorithm.

In this paper a novel low-complexity stack-based decoding algorithm for polar codes is introduced. It employs information about the quality of the not-yet-processed frozen bit subchannels to reduce the number of times the decoder switches between different paths in the code tree while processing a received vector. The proposed approach avoids the probability-domain calculations required by the algorithms given in [2]–[4], which are very difficult to implement in hardware.

II. BACKGROUND

An $(n = 2^m, k, d)$ polar code is a binary linear block code generated by $k$ rows of the matrix $G_n = B_n A^{\otimes m}$, where $A = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$, $\otimes m$ denotes the $m$-fold Kronecker product of a matrix with itself, $B_n$ is an $n \times n$ bit reversal permutation matrix, and $d$ is the minimum distance of the code. By $a_i^j$ we will denote the sequence $(a_i, a_{i+1}, \ldots, a_j)$. Any codeword of a polar code can be represented as $c_0^{n-1} = u_0^{n-1} G_n$, where $u_0^{n-1}$ is the input sequence, such that $u_i = 0$, $i \in F$, where $F \subset \{0, \ldots, n-1\}$ is the set of $n - k$ indices of frozen bit subchannels. The remaining elements of $u_0^{n-1}$ are set to the payload data. It was suggested in [4] to set $u_i$, $i \in F$, to some linear function of $u_0^{i-1}$ (dynamic frozen symbols). This enables one to obtain codes with higher minimum distance, while still allowing one to employ the SC decoding algorithm and its variations. For the sake of simplicity, the proposed decoding algorithm will be derived for the case of classical polar codes. Its extension to the case of codes with dynamic frozen symbols is straightforward.

The decoding problem consists in identifying $u_0^{n-1}$ which maximizes $P_{U_0^{n-1}|Y_0^{n-1}}(u_0^{n-1}|y_0^{n-1})$, such that $u_i = 0$ for all $i \in F$, where $U_j$ is a random variable corresponding to the $j$-th input symbol of the polarizing transformation, and $Y_j$ is a random variable corresponding to the $j$-th received symbol. In the $i$-th phase of the classic SC decoding algorithm the probabilities

$$P_{U_0^i|Y_0^{n-1}}(u_0^i|y_0^{n-1}) = \frac{P_{Y_0^{n-1}, U_0^{i-1}|U_i}(y_0^{n-1}, u_0^{i-1}|u_i)}{2\, P_{Y_0^{n-1}}(y_0^{n-1})}, \qquad u_i \in \{0, 1\},$$

are calculated. The decoder makes the decisions $\hat u_i = \arg\max_{u_i \in \{0,1\}} P_{Y_0^{n-1}, U_0^{i-1}|U_i}(y_0^{n-1}, u_0^{i-1}|u_i)$ for $i \notin F$, and $\hat u_i = 0$ otherwise. The values $\hat u_0, \ldots, \hat u_i$ are used instead of the true values of $u_0, \ldots, u_i$ while computing the probabilities at subsequent steps. It was shown in [1] that the SC algorithm can be implemented with complexity $O(n \log_2 n)$.
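As a minimal illustration of this decision rule (not the implementation of [1]), the phase-by-phase structure of SC decoding can be sketched as follows; sc_phase_probability is an assumed callback that returns the phase-$i$ probability defined above:

    def sc_decode(n, frozen, sc_phase_probability):
        # sc_phase_probability(i, u_hat, b) is assumed to return
        # P_{Y_0^{n-1}, U_0^{i-1} | U_i}(y_0^{n-1}, u_hat | b) for b in {0, 1}.
        u_hat = []
        for i in range(n):
            if i in frozen:
                u_hat.append(0)                     # frozen symbol: known to be 0
            else:
                p0 = sc_phase_probability(i, u_hat, 0)
                p1 = sc_phase_probability(i, u_hat, 1)
                u_hat.append(0 if p0 >= p1 else 1)  # hard decision at phase i
            # decisions made so far are fed back into all subsequent phases
        return u_hat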
A major drawback of the SC algorithm is that it cannot correct errors which may occur at early phases of the decoding process. This problem is solved in stack/list algorithms by keeping a list of the most probable paths of different lengths. The stack decoding algorithm [3], [4] explores paths within the code tree. A path of length $i$ is identified by the values $u_0^{i-1}$. Each path has an associated score. All paths are stored in a stack (priority queue). At each iteration the decoder selects for extension the path $u_0^{i-1}$ with the largest score, and performs the $i$-th phase of SC decoding. That is, if $i \in F$, the path is extended to obtain $(u_0^{i-1}, 0)$, and the extended path is stored in the stack together with its score. Otherwise, the path is cloned to obtain new paths $(u_0^{i-1}, 0)$ and $(u_0^{i-1}, 1)$, which are stored in the stack together with their scores. In order to keep the size of the stack limited, low-score paths are killed. Furthermore, if the decoder returns to phase $i$ more than $L$ times, all paths shorter than $i + 1$ are also killed. Decoding terminates as soon as a path of length $n$ appears at the top of the stack, or the stack becomes empty. Hence, the worst-case complexity of stack decoding is given by $O(Ln \log_2 n)$. The average decoding complexity depends on how path scores are defined.
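The generic stack decoder of [3], [4] can be summarized by the following sketch (a minimal sketch only; extend_score is a placeholder for whatever path metric is used and is not part of those algorithms; heapq serves as the priority queue, with negated scores because it is a min-heap):

    import heapq

    def stack_decode(n, frozen, extend_score, L, stack_capacity):
        # Entries are (-score, tiebreak, path). extend_score(path, bit) is assumed
        # to return the score of the extended path given the received vector.
        heap = [(0.0, 0, [])]
        counter = 0
        visits = [0] * (n + 1)
        while heap:
            _, _, path = heapq.heappop(heap)
            i = len(path)
            if i == n:
                return path                        # full-length path on top: done
            visits[i] += 1
            for b in ([0] if i in frozen else [0, 1]):
                counter += 1
                heapq.heappush(heap, (-extend_score(path, b), counter, path + [b]))
            if visits[i] > L:                      # phase i visited more than L times:
                heap = [e for e in heap if len(e[2]) > i]
                heapq.heapify(heap)                # kill all paths shorter than i + 1
            if len(heap) > stack_capacity:         # keep the stack size limited
                heap = heapq.nsmallest(stack_capacity, heap)
                heapq.heapify(heap)
        return None                                # the stack became empty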


III. PROPOSED ALGORITHM

A. Estimating Path Probability

The objective of the decoder is to find the path $u_0^{n-1}$ maximizing $P_{U_0^{n-1}|Y_0^{n-1}}(u_0^{n-1}|y_0^{n-1})$. Let us estimate the probability

$$T(u_0^i, y_0^{n-1}) = \max_{u_{i+1}^{n-1}:\; u_{i+1,F}^{n-1} = 0} P_{U_0^{n-1}|Y_0^{n-1}}(u_0^{n-1}|y_0^{n-1})$$

of transmission of the most probable codeword of the polar code in the code subtree given by the prefix $u_0^i$, assuming that the values $u_0^i$ are correct. Here $a_{j,D}^h$ is the subvector of vector $a_j^h$ consisting of the elements $a_s$, $s \in D \cap \{j, \ldots, h\}$. This estimate will be used to predict which path $u_0^i$ may correspond to the solution of the decoding problem.

It is difficult to compute $T(u_0^i, y_0^{n-1})$ at phase $i$ of the SC decoder. Let $v[j]_0^{n-1}$, $0 \le j < 2^{n-i-1}$, be the different paths in the subtree, such that $v[j]_0^i = u_0^i$ and $v[j]_{i+1}^{n-1} \in \{0, 1\}^{n-i-1}$. Let $J$ be a random variable which is equal to $j$ if the most probable codeword of the polar code corresponds to the path $v[j]_0^{n-1}$. Observe that $J = j$ implies that $v[j]_h = 0$, $h \in F$. We propose to estimate $T(u_0^i, y_0^{n-1})$ as $E_J[P_{U_0^{n-1}|Y_0^{n-1}}(v[J]_0^{n-1}|y_0^{n-1})]$. Note that increasing $i$ reduces the number of possible values of $J$, thus improving the accuracy of this estimate.

Let $\alpha = \arg\max_{0 \le j < 2^{n-i-1}} P_{U_0^{n-1}|Y_0^{n-1}}(v[j]_0^{n-1}|y_0^{n-1})$. The event $J = \alpha$ is equivalent to $v[\alpha]_h = 0$, $h \in F$. Hence,

$$T(u_0^i, y_0^{n-1}) \approx E_J\!\left[P_{U_0^{n-1}|Y_0^{n-1}}(v[J]_0^{n-1}|y_0^{n-1})\right] = \sum_{j=0}^{2^{n-i-1}-1} P_{U_0^{n-1}|Y_0^{n-1}}(v[j]_0^{n-1}|y_0^{n-1})\, P\{J = j\} \ge \underbrace{P_{U_0^{n-1}|Y_0^{n-1}}(v[\alpha]_0^{n-1}|y_0^{n-1})}_{R(u_0^i, y_0^{n-1})}\, P\{J = \alpha\}. \qquad (1)$$

The average value of $P\{J = \alpha\}$ over all possible received sequences is equal to the probability of the most probable path $u_0^{n-1}$ with prefix $u_0^i$ having zeroes in positions $j \in F$. It is lower bounded by the probability $\hat\Omega(i)$ that the SC decoder, which starts from $u_0^i$ and does not take into account any freezing constraints, makes the decisions $u_j = 0$, $j > i$, $j \in F$, i.e., makes no errors in these positions.

Hence, one obtains the following estimate for $T(u_0^i, y_0^{n-1})$:

$$\hat T(u_0^i, y_0^{n-1}) = R(u_0^i, y_0^{n-1})\, \hat\Omega(i), \qquad (2)$$

where $\hat\Omega(i) = \prod_{j \in F,\, j > i} (1 - P_j)$, and $P_j$ is the $j$-th subchannel error probability, provided that the exact values of all previous bits $u_{j'}$, $j' < j$, are available. For any given channel $P_j$ can be pre-computed offline using density evolution [5], [6]. $\hat\Omega(i)$ depends only on $n$, $F$ (i.e., the code being considered), the channel properties and the phase $i$.

At each iteration of the stack decoding algorithm one can select for extension the path with the largest $\hat T(u_0^i, y_0^{n-1})$. Observe that $\hat\Omega(i)$ increases with $i$, while $R(u_0^i, y_0^{n-1})$ decreases with $i$ for any given $u_0^i$. Hence, given two paths with the same value of $R(u_0^i, y_0^{n-1})$, the decoder would prefer the longer one. This approach enables one to compare paths $u_0^i$ of different lengths, and prevent the decoder from switching frequently between different paths.

Let us show how the probabilities $R(u_0^i, y_0^{n-1})$ can be computed. According to [1], the encoding operation $c_0^{n-1} = u_0^{n-1} G_n$ can be represented via the recursive expressions $c_0^{n/2-1} = (u_{0,\mathrm{even}}^{n-1} \oplus u_{0,\mathrm{odd}}^{n-1}) G_{n/2}$ and $c_{n/2}^{n-1} = u_{0,\mathrm{odd}}^{n-1} G_{n/2}$, where $u_{0,\mathrm{even}}^i$ and $u_{0,\mathrm{odd}}^i$ are the subsequences of $u_0^i$ consisting of elements with even and odd indices, respectively. Thus, encoding of an input sequence of length $n$ reduces to encoding of two sequences of length $n/2$. This implies that

$$R(u_0^{2i}, y_0^{n-1}) = \max_{u_{2i+1} \in \{0,1\}} R(u_{0,\mathrm{even}}^{2i+1} \oplus u_{0,\mathrm{odd}}^{2i+1}, y_0^{n/2-1}) \cdot R(u_{0,\mathrm{odd}}^{2i+1}, y_{n/2}^{n-1}), \qquad (3)$$

$$R(u_0^{2i+1}, y_0^{n-1}) = R(u_{0,\mathrm{even}}^{2i+1} \oplus u_{0,\mathrm{odd}}^{2i+1}, y_0^{n/2-1}) \cdot R(u_{0,\mathrm{odd}}^{2i+1}, y_{n/2}^{n-1}). \qquad (4)$$

The initial value for these recursive expressions is given by $R(b, y_j) = P_{X_j|Y_j}(b|y_j)$, $b \in \{0, 1\}$, where $X_j$ is a random variable corresponding to the $j$-th codeword symbol.

B. Efficient Implementation

To obtain an efficient implementation of the proposed method we follow [2]. Namely, we present memory-efficient techniques for computing $R(u_0^i, y_0^{n-1})$. Furthermore, we derive a log-domain implementation of the algorithm, which employs only summation and comparison operations, without any performance loss.

Let $\lambda$, $\phi$ and $\beta$ denote the layer, phase and branch number, respectively, where $0 \le \beta < 2^{m-\lambda}$. At each decoding iteration the only output needed is $R((u_0^i, b), y_0^{n-1})$, $b \in \{0, 1\}$, so it is associated with branch number $\beta = 0$. It can be computed recursively as follows. For $\lambda > 0$ one should compute $R(\hat u_0^{\phi-1}, y_0^{\Lambda-1})$, where $\Lambda = 2^\lambda$. Denote $\psi = \lfloor \phi/2 \rfloor$. Let $R(\hat u_{0,\mathrm{even}}^{2\psi-1} \oplus \hat u_{0,\mathrm{odd}}^{2\psi-1}, y_0^{\Lambda/2-1})$ and $R(\hat u_{0,\mathrm{odd}}^{2\psi-1}, y_{\Lambda/2}^{\Lambda-1})$ be associated with branch numbers $2\beta$ and $2\beta + 1$, respectively. Let $\langle\phi, \beta\rangle_\lambda = \phi + 2^\lambda \beta$ and

$$R_\lambda[\langle\phi, \beta\rangle_\lambda][b] = R\big((\hat u_0^{\phi-1}, b), y_0^{\Lambda-1}\big).$$

Similarly to [2], it will be possible to drop the index $\phi$. Therefore, for brevity this quantity will be denoted by $R_\lambda[\beta][b]$.

With the array notation, expressions (3) and (4) become

$$R_\lambda[\beta][b] = \begin{cases} \max_{d \in \{0,1\}} R_{\lambda-1}[2\beta][b \oplus d]\, R_{\lambda-1}[2\beta+1][d], & \phi\ \text{even}, \\ R_{\lambda-1}[2\beta][C_\lambda[\beta][0] \oplus b]\, R_{\lambda-1}[2\beta+1][b], & \phi\ \text{odd}, \end{cases}$$

where $C_\lambda[\beta][b]$ is defined in the same way as in [2].

Let us make the change of variables $S_\lambda[\beta] = \ln(R_\lambda[\beta][0]/R_\lambda[\beta][1])$. Then one obtains

$$S_\lambda[\beta] = \begin{cases} Q(S_{\lambda-1}[2\beta], S_{\lambda-1}[2\beta+1]), & \phi\ \text{even}, \\ (-1)^{C_\lambda[\beta][0]} S_{\lambda-1}[2\beta] + S_{\lambda-1}[2\beta+1], & \phi\ \text{odd}, \end{cases}$$

$$R_\lambda[\beta][1] = R_{\lambda-1}[2\beta][1]\, R_{\lambda-1}[2\beta+1][1]\, \exp(Z_\lambda[\beta]), \qquad (5)$$

where $Q(a, b) = \mathrm{sign}(a)\,\mathrm{sign}(b)\,\min(|a|, |b|)$, and

$$Z_\lambda[\beta] = \begin{cases} \max(S_{\lambda-1}[2\beta], S_{\lambda-1}[2\beta+1]), & \phi\ \text{even}, \\ C_\lambda[\beta][0]\, S_{\lambda-1}[2\beta], & \phi\ \text{odd}. \end{cases} \qquad (6)$$

It can be seen that

$$\ln R_m[0][1] = \left(\sum_{j=0}^{2^m-1} \ln P_{X_j|Y_j}(1|y_j)\right) + \left(\sum_{\lambda=1}^{m} \sum_{\beta=0}^{2^{m-\lambda}-1} Z_\lambda[\beta]\right).$$

So $D_m = \ln R_m[0][1]$ can be computed as follows. Let $D_0 = \sum_{j=0}^{2^m-1} \ln P_{X_j|Y_j}(1|y_j)$. For $\lambda = 1, \ldots, m$ one obtains

$$D_\lambda = D_{\lambda-1} + \sum_{j=0}^{2^{m-\lambda}-1} Z_\lambda[j]. \qquad (7)$$

However, since $D_0$ does not depend on $u_0^i$, and $\ln R_m[0][1] = D_m$, $\ln R_m[0][0] = D_m + S_m[0]$, one can set $D_0 = 0$ in this recursion, so that $\ln R_m[0][b]$, $b \in \{0, 1\}$, are changed by the same value for all $u_0^i$.

The above described calculations can be implemented as follows. Let $l$ be the index of a path corresponding to some $u_0^{\phi_l - 1}$. For each $l$ one should keep the values $S_{l,\lambda}[\beta]$ and $D_{l,\lambda}$, as well as $C_{l,\lambda}[\beta][b]$, $0 \le \lambda \le m$, $0 \le \beta < 2^{m-\lambda}$, $b \in \{0, 1\}$.
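Since the term $\hat\Omega(i)$ in (2) depends only on $n$, $F$ and the channel, it can be tabulated once before decoding starts. A minimal sketch, assuming the subchannel error probabilities $P_j$ have already been obtained offline (e.g., by density evolution [5], [6]); the function name is ours:

    import math

    def precompute_omega_ln(n, frozen, P):
        # Returns omega_ln with omega_ln[i] = ln OmegaHat(i), i.e. the sum of
        # ln(1 - P[j]) over frozen positions j > i; P[j] is the j-th subchannel
        # error probability.
        tail = [0.0] * (n + 1)                 # tail[j] = sum over frozen j' >= j
        for j in range(n - 1, -1, -1):
            tail[j] = tail[j + 1] + (math.log(1.0 - P[j]) if j in frozen else 0.0)
        return [tail[i + 1] for i in range(n)]  # OmegaHat(n - 1) is the empty product

The decoder then simply adds omega_ln[i] to the log-domain value of $R(u_0^i, y_0^{n-1})$ to obtain the path score.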
MILOSLAVKAYA AND TRIFONOV: SEQUENTIAL DECODING OF POLAR CODES 1129
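The per-layer quantities in (5)–(7) require only additions, comparisons and sign flips. The sketch below spells them out for a single layer; it illustrates the formulas only and is not the RecursivelyCalcS procedure of Fig. 1, which additionally schedules these updates across layers and reuses previously computed values:

    import math

    def Q(a, b):
        # Q(a, b) = sign(a) * sign(b) * min(|a|, |b|), the usual min-sum update.
        return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

    def layer_update(S_prev, C, phi_is_even):
        # S_prev holds S_{lambda-1}[0 .. 2B - 1]; C holds the bits C_lambda[beta][0].
        # Returns S_lambda[0 .. B - 1] and Z_lambda[0 .. B - 1] according to (5), (6).
        S, Z = [], []
        for beta in range(len(S_prev) // 2):
            a, b = S_prev[2 * beta], S_prev[2 * beta + 1]
            if phi_is_even:
                S.append(Q(a, b))
                Z.append(max(a, b))
            else:
                S.append((-1) ** C[beta] * a + b)
                Z.append(C[beta] * a)
        return S, Z

    def next_D(D_prev, Z):
        # Recursion (7); D_0 may be set to 0, since it is common to all paths.
        return D_prev + sum(Z)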

The calculations given by (5)–(7) are implemented in the algorithm RecursivelyCalcS shown in Fig. 1. The call RecursivelyCalcS(l, m, $\phi_l$) reuses the intermediate values obtained at previous calls for the same $l$.

[Fig. 1. Calculation of path score.]

Fig. 2 presents the proposed sequential decoding algorithm. It performs the same initialization as the Tal-Vardy list decoding algorithm, where the probability array P is replaced with the array S, and additionally the array D is created. The same lazy-copying shared-memory data structures as in [2] are used for these arrays. The elements $S_{l,0}[j]$, $j = 0, \ldots, n-1$, are initialized with the LLRs $\ln\big(P_{X_j|Y_j}(0|y_j)/P_{X_j|Y_j}(1|y_j)\big)$. The decoder keeps paths in a stack (priority queue) ordered according to their scores. At each iteration a path with the largest score $M$ is extracted from the stack. The LLRs $S_{l,m}[0]$, as well as $D_{l,m}$, are computed for this path. If the path phase $\phi_l$ corresponds to a frozen symbol, the path is extended with 0.(1) Otherwise, the path is cloned and extended with 0 and 1. The scores of the extended paths include the term $\hat\Omega_{\ln}(\phi) = \ln \hat\Omega(\phi)$, which is given by (2). The algorithm makes use of the procedure RecursivelyUpdateC, which is the same as the one in [2] (Algorithm 15), except that it is employed for a path with number $l$.

(1) If the algorithm is applied to a polar code with dynamic frozen symbols, one should extend the path with the value given by the dynamic freezing constraint, and adjust line 13 of Fig. 2 appropriately.

If the stack capacity $\Theta$ is about to be exceeded, paths with low scores are eliminated in line 20 of Fig. 2. If the number of different paths in the code tree extended till phase $\phi_l$ exceeds $L$, then short paths are removed in line 33 of Fig. 2.

[Fig. 2. Sequential decoding of polar codes.]

Observe that the proposed algorithm makes use of only addition, subtraction and comparison operations. Furthermore, the array $P_{l,\lambda}[\beta][b]$, used in the Tal-Vardy algorithm, is replaced with the array $S_{l,\lambda}[\beta]$, which has half of its size. Each path considered by the algorithm requires at most $n$ memory cells to store the real values $S_{l,\lambda}[\beta]$, $m + 1$ real values $D_{l,\lambda}$, and $2n$ bits $C_{l,\lambda}[\beta][b]$. Therefore, the total space requirements are $O(\min(Ln, \Theta)(3n + m + 1))$, which is substantially more than that of, e.g., existing LDPC decoders. Reduction of the memory consumption is a subject of further research.

C. Further Improvements

The decoding complexity can be further reduced as follows:
• Call RecursivelyCalcS only for $\phi_l \notin F$. If a path is extended with a frozen symbol, its old score value may be preserved. Observe that this requires some modifications in RecursivelyCalcS in order to ensure that all of its source data are available.
• As soon as all frozen symbols are processed, one can switch to hard-decision decoding.
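Putting the pieces of Section III together, the main loop of the decoder can be sketched as follows. This is a simplified sketch, not the algorithm of Fig. 2: the lazy-copying memory management, RecursivelyCalcS/RecursivelyUpdateC and the exact kill rules are hidden behind compute_log_R, our placeholder for the $D_m$-based score computation, and the Path class carries only the bit prefix:

    import heapq

    class Path:
        # Minimal path state; the actual decoder also carries the lazily copied
        # arrays S_{l,lambda}, D_{l,lambda} and C_{l,lambda}[beta][b] per path.
        def __init__(self, bits=()):
            self.bits = tuple(bits)
            self.phase = len(self.bits)

    def sequential_decode(n, frozen, omega_ln, compute_log_R, L, theta):
        # compute_log_R(path) is assumed to return ln R(u_0^i, y_0^{n-1}) for the
        # path's prefix, obtained from the S/D recursions (5)-(7).
        heap = [(0.0, 0, Path())]              # entries are (-score, tiebreak, path)
        counter = 0
        visits = [0] * (n + 1)
        while heap:
            _, _, path = heapq.heappop(heap)
            if path.phase == n:
                return path.bits               # full-length path on top of the stack
            visits[path.phase] += 1
            bits = (0,) if path.phase in frozen else (0, 1)
            for b in bits:                     # extend (frozen) or clone and extend
                child = Path(path.bits + (b,))
                score = compute_log_R(child) + omega_ln[child.phase - 1]  # cf. (2)
                counter += 1
                heapq.heappush(heap, (-score, counter, child))
            if visits[path.phase] > L:         # phase visited more than L times:
                heap = [e for e in heap if e[2].phase > path.phase]
                heapq.heapify(heap)            # drop all shorter paths
            if len(heap) > theta:              # enforce the stack capacity Theta
                heap = heapq.nsmallest(theta, heap)
                heapq.heapify(heap)
        return None

With omega_ln produced as sketched earlier, paths of different lengths become directly comparable, which is what keeps the number of path switches small.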

IV. NUMERICAL RESULTS

[Fig. 3. Performance of (1024, 512) polar codes.]

Fig. 3 presents simulation results illustrating the performance of the proposed decoding algorithm, as well as that of the Tal-Vardy list decoding algorithm [2] and the directed search algorithm [4]. The results are reported both for the case of a (1024, 512, 16) pure polar code and a (1024, 512, 28) polar code with dynamic frozen symbols obtained as a subcode of a (1024, 893, 28) extended BCH code, as well as a (1032, 516) LDPC code. It can be seen that the performance of the proposed algorithm in the case of the pure polar code is identical to that of the Tal-Vardy list decoding algorithm. It must be recognized that the performance of this code is limited by its poor minimum distance, and increasing the list size $L$ does not provide any gain. Much better performance can be obtained with the (1024, 512, 28) code. It can be seen that the proposed decoding algorithm provides slightly worse performance than the directed search algorithm with the same $L$.

[Table I. Average complexity of decoding algorithms, ×10^3 real operations.]

Table I illustrates the complexity of the proposed approach, of the log-domain implementation of the directed search algorithm for polar codes, as well as of the belief propagation algorithm for LDPC codes with at most 200 iterations. It can be seen that the proposed algorithm requires a much smaller number of arithmetic operations with real numbers. Furthermore, it avoids evaluation of the non-linear functions which are used in the log-domain implementation of the belief propagation algorithm, and in some other decoding algorithms for polar codes [2]–[4].

[Fig. 4. Correct path scores for the (1024, 512, 28) polar code.]

Fig. 4 presents the correct path scores $\ln \hat T(u_0^i, y_0^{n-1})$ (see expression (2)) for a few instances of the decoding problem. It can be seen that one obtains score values close to the final ones at very early decoding phases, after the decoder processes the initial blocks of frozen symbols (the phases corresponding to frozen symbols are designated by +). This prevents the decoder from switching to incorrect paths.

V. CONCLUSION

In this paper a novel decoding algorithm for polar codes was proposed. Its complexity was shown to be substantially lower compared to the existing list and stack decoding algorithms. The complexity reduction is achieved at the cost of negligible performance degradation. However, it enables one to use a much larger list size, thus allowing one to obtain a significant performance gain at the same complexity level as state-of-the-art algorithms.

A min-sum list decoding algorithm, similar to the proposed one, was considered in [7]. However, it operates only with fixed-length paths, and therefore has a higher average computational complexity than the proposed method, but a smaller memory consumption.

REFERENCES

[1] E. Arikan, "Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3051–3073, Jul. 2009.
[2] I. Tal and A. Vardy, "List decoding of polar codes," in Proc. IEEE Int. Symp. Inf. Theory, 2011, pp. 1–5.
[3] K. Niu and K. Chen, "CRC-aided decoding of polar codes," IEEE Commun. Lett., vol. 16, no. 10, Oct. 2012.
[4] P. Trifonov and V. Miloslavskaya, "Polar codes with dynamic frozen symbols and their decoding by directed search," in Proc. IEEE Int. Workshop Inf. Theory, Sep. 2013, pp. 1–5.
[5] I. Tal and A. Vardy, "How to construct polar codes," IEEE Trans. Inf. Theory, vol. 59, no. 10, pp. 6562–6582, Oct. 2013.
[6] R. Pedarsani, S. H. Hassani, I. Tal, and E. Telatar, "On the construction of polar codes," in Proc. IEEE Int. Symp. Inf. Theory, 2011, pp. 11–15.
[7] A. Balatsoukas-Stimming, M. B. Parizi, and A. Burg, "LLR-based successive cancellation list decoding of polar codes," in Proc. 39th Int. Conf. Acoust., Speech, Signal Process., 2014, pp. 1–5.
