Low-Density Parity-Check (LDPC)
The low-density parity-check (LDPC) code module supports 5G compliant LDPC codes and allows iterative belief propagation (BP) decoding. Further, the module supports rate-matching for 5G and provides a generic linear encoder.
The following code snippets show how to set up and run a rate-matched 5G compliant LDPC encoder and a corresponding belief propagation (BP) decoder.
First, we need to create instances of LDPC5GEncoder and LDPC5GDecoder:
encoder = LDPC5GEncoder(k = 100, # number of information bits (input)
n = 200) # number of codeword bits (output)
decoder = LDPC5GDecoder(encoder = encoder,
num_iter = 20, # number of BP iterations
return_infobits = True)
Now, the encoder and decoder can be used as follows:
# --- encoder ---
# u contains the information bits to be encoded and has shape [...,k].
# c contains the encoded codewords and has shape [...,n].
c = encoder(u)
# --- decoder ---
# llr contains the log-likelihood ratios from the demapper and has shape [...,n].
# u_hat contains the estimated information bits and has shape [...,k].
u_hat = decoder(llr)
- class sionna.phy.fec.ldpc.encoding.LDPC5GEncoder(k, n, num_bits_per_symbol=None, bg=None, precision=None, **kwargs)[source]
5G NR LDPC Encoder following 3GPP TS 38.212, including rate-matching.
The implementation follows the 3GPP NR specification [3GPPTS38212_LDPC].
- Parameters:
k (int) – Number of information bits per codeword.
n (int) – Desired codeword length.
num_bits_per_symbol (None (default) | int) – Defining the number of bits per QAM symbol. If this parameter is explicitly provided, the codeword will be interleaved after rate-matching as specified in Sec. 5.4.2.2 in [3GPPTS38212_LDPC].
bg (None (default) | “bg1” | “bg2”) – Basegraph to be used for the code construction. If None is provided, the encoder will automatically select the basegraph according to [3GPPTS38212_LDPC].
precision (None (default) | ‘single’ | ‘double’) – Precision used for internal calculations and outputs. If set to None, the default precision from Sionna's global configuration is used.
- Input:
bits ([…,k], tf.float) – Binary tensor containing the information bits to be encoded.
- Output:
[…,n], tf.float – Binary tensor of the same shape as the input, except that the last dimension is changed to n, containing the encoded codeword bits.
Note
As specified in [3GPPTS38212_LDPC], the encoder also performs rate-matching (puncturing and shortening). Thus, the corresponding decoder needs to invert these operations, i.e., must be compatible with the 5G encoding scheme.
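The inverse relationship between encoder-side rate-matching and decoder-side rate-recovery can be sketched as follows. This is a conceptual illustration only, not the Sionna implementation: the mother-code length and transmitted-position set are made up, and in 5G the non-transmitted positions follow the basegraph-specific puncturing and shortening patterns (shortened positions, being known zeros, would receive a large positive LLR instead of 0).

```python
def rate_match(codeword, tx_pos):
    # transmit only the selected positions of the mother codeword
    return [codeword[i] for i in tx_pos]

def rate_recover(llrs, tx_pos, n_ldpc):
    # rebuild a full-length LLR vector; non-transmitted (punctured)
    # positions carry no channel information and get LLR 0
    full = [0.0] * n_ldpc
    for llr, i in zip(llrs, tx_pos):
        full[i] = llr
    return full

n_ldpc = 10                            # hypothetical mother-code length
tx_pos = [2, 3, 4, 5, 6, 7]            # hypothetical transmitted positions
cw = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # mother codeword
sent = rate_match(cw, tx_pos)
# map bits to perfect-channel LLRs (llr = log p(0)/p(1): bit 1 -> negative)
llr_full = rate_recover([-4.0 if b else 4.0 for b in sent], tx_pos, n_ldpc)
```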
- property coderate
Coderate of the LDPC code after rate-matching
- generate_out_int(n, num_bits_per_symbol)[source]
Generates LDPC output interleaver sequence as defined in Sec 5.4.2.2 in [3GPPTS38212_LDPC].
- Parameters:
n (int) – Desired output sequence length.
num_bits_per_symbol (int) – Number of bits per QAM symbol, i.e., the modulation order.
- Output:
perm_seq (ndarray of length n) – Containing the permuted indices.
perm_seq_inv (ndarray of length n) – Containing the inverse permuted indices.
Note
The interleaver pattern depends on the modulation order and helps to reduce dependencies in bit-interleaved coded modulation (BICM) schemes combined with higher order modulation.
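The relationship between perm_seq and perm_seq_inv can be sketched with a toy permutation (the pattern below is illustrative only, not the actual 38.212 interleaver): for any permutation p, the inverse satisfies p_inv[p[i]] = i, so interleaving followed by de-interleaving is the identity.

```python
def inverse_permutation(perm):
    # build the inverse index sequence: inv[perm[i]] = i
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return inv

perm_seq = [2, 0, 3, 1]                 # toy interleaver pattern (illustrative)
perm_seq_inv = inverse_permutation(perm_seq)

bits = ["b0", "b1", "b2", "b3"]
interleaved = [bits[i] for i in perm_seq]
deinterleaved = [interleaved[i] for i in perm_seq_inv]
```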
- property k
Number of input information bits
- property k_ldpc
Number of LDPC information bits after rate-matching
- property n
Number of output codeword bits
- property n_ldpc
Number of LDPC codeword bits before rate-matching
- property num_bits_per_symbol
Modulation order used for the rate-matching output interleaver
- property out_int
Output interleaver sequence as defined in 5.4.2.2
- property out_int_inv
Inverse output interleaver sequence as defined in 5.4.2.2
- property pcm
Parity-check matrix for given code parameters
- property z
Lifting factor of the basegraph
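What the pcm property represents can be sketched with a toy example: every valid codeword c of a linear code satisfies H c^T = 0 (mod 2). The (7,4) Hamming parity-check matrix below is for illustration only and is unrelated to the 5G basegraphs.

```python
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]  # toy (7,4) Hamming parity-check matrix

def syndrome(H, c):
    # binary syndrome H @ c^T mod 2; all-zero iff c is a valid codeword
    return [sum(h * x for h, x in zip(row, c)) % 2 for row in H]

c = [1, 1, 1, 0, 0, 0, 0]       # a valid codeword of this toy code
c_err = [0] + c[1:]             # flip the first bit
```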
- class sionna.phy.fec.ldpc.decoding.LDPCBPDecoder(pcm, cn_update='boxplus-phi', vn_update='sum', cn_schedule='flooding', hard_out=True, num_iter=20, llr_max=20.0, v2c_callbacks=None, c2v_callbacks=None, return_state=False, precision=None, **kwargs)[source]
Iterative belief propagation decoder for low-density parity-check (LDPC) codes and other codes on graphs.
This class defines a generic belief propagation decoder for decoding with arbitrary parity-check matrices. It can be used to iteratively estimate/recover the transmitted codeword (or information bits) based on the LLR-values of the received noisy codeword observation.
Per default, the decoder implements the flooding message passing algorithm [Ryan], i.e., all nodes are updated in a parallel fashion. Different check node update functions are available:

boxplus

$$ x_{j \to i} = 2 \operatorname{atanh} \left( \prod_{i' \in \mathcal{N}(j) \setminus i} \tanh \left( \frac{x_{i' \to j}}{2} \right) \right) $$

boxplus-phi

$$ x_{j \to i} = \alpha_{j \to i} \cdot \phi \left( \sum_{i' \in \mathcal{N}(j) \setminus i} \phi \left( |x_{i' \to j}| \right) \right) $$

with $\phi(x) = -\log\left(\tanh\left(\frac{x}{2}\right)\right)$

minsum

$$ x_{j \to i} = \alpha_{j \to i} \cdot \min_{i' \in \mathcal{N}(j) \setminus i} |x_{i' \to j}| $$

offset-minsum

$$ x_{j \to i} = \alpha_{j \to i} \cdot \max \left( \min_{i' \in \mathcal{N}(j) \setminus i} |x_{i' \to j}| - \beta,\; 0 \right) $$

where $\beta$ is the offset and $x_{j \to i}$ and $x_{i \to j}$ denote the messages from check node (CN) $j$ to variable node (VN) $i$ and from VN $i$ to CN $j$, respectively. Further, $\mathcal{N}(j)$ denotes all indices of VNs connected to CN $j$ and $\alpha_{j \to i} = \prod_{i' \in \mathcal{N}(j) \setminus i} \operatorname{sign}(x_{i' \to j})$ is the sign of the outgoing message. For further details we refer to [Ryan] and, for the offset-corrected minsum, to [Chen].
Note that for full 5G 3GPP NR compatibility, the correct puncturing and shortening patterns must be applied (cf. [Richardson] for details); this can be done by LDPC5GEncoder and LDPC5GDecoder, respectively.
If required, the decoder can be made trainable and is fully differentiable by following the concept of weighted BP [Nachmani]. For this, custom callbacks can be registered that scale the messages during decoding. Please see the corresponding tutorial notebook for details.
For numerical stability, the decoder applies LLR clipping of +/- llr_max to the input LLRs.
- Parameters:
pcm (ndarray) – An ndarray of shape [n-k, n] defining the parity-check matrix consisting only of 0 or 1 entries. Can also be of type scipy.sparse.csr_matrix or scipy.sparse.csc_matrix.
cn_update (str, "boxplus-phi" (default) | "boxplus" | "minsum" | "offset-minsum" | "identity" | callable) – Check node update rule to be used as described above. If a callable is provided, it will be used instead as CN update. The input of the function is a ragged tensor of v2c messages of shape [num_cns, None, batch_size] where the second dimension is ragged (i.e., depends on the individual CN degree).
vn_update (str, "sum" (default) | "identity" | callable) – Variable node update rule to be used. If a callable is provided, it will be used instead as VN update. The input of the function is a ragged tensor of c2v messages of shape [num_vns, None, batch_size] where the second dimension is ragged (i.e., depends on the individual VN degree).
cn_schedule ("flooding" | [num_update_steps, num_active_nodes], tf.int) – Defines the CN update scheduling per BP iteration. Can be either "flooding" to update all nodes in parallel (recommended) or a 2D tensor where each row defines the num_active_nodes node indices to be updated per subiteration. In this case, each BP iteration runs num_update_steps subiterations; thus, the decoder's level of parallelization is lower and the decoding throughput usually decreases.
hard_out (bool, (default True)) – If True, the decoder provides hard-decided codeword bits instead of soft-values.
num_iter (int) – Defining the number of decoder iterations (due to batching, no early stopping is used at the moment!).
llr_max (float (default 20) | None) – Internal clipping value for all internal messages. If None, no clipping is applied.
v2c_callbacks (None (default) | list of callables) – Each callable will be executed after each VN update with the following arguments msg_vn_rag_, it, x_hat, where msg_vn_rag_ are the v2c messages as ragged tensor of shape [num_vns, None, batch_size], x_hat is the current estimate of each VN of shape [num_vns, batch_size] and it is the current iteration counter. It must return an updated version of msg_vn_rag_ of the same shape.
c2v_callbacks (None (default) | list of callables) – Each callable will be executed after each CN update with the following arguments msg_cn_rag_ and it, where msg_cn_rag_ are the c2v messages as ragged tensor of shape [num_cns, None, batch_size] and it is the current iteration counter. It must return an updated version of msg_cn_rag_ of the same shape.
return_state (bool, (default False)) – If True, the internal VN messages msg_vn from the last decoding iteration are returned, and msg_vn or None must be given as a second input when calling the decoder. This can be used for iterative demapping and decoding.
precision (None (default) | ‘single’ | ‘double’) – Precision used for internal calculations and outputs. If set to None, the default precision from Sionna's global configuration is used.
- Input:
llr_ch ([…,n], tf.float) – Tensor containing the channel logits/LLR values.
msg_v2c (None | [num_edges, batch_size], tf.float) – Tensor of VN messages representing the internal decoder state. Required only if the decoder shall use its previous internal state, e.g. for iterative detection and decoding (IDD) schemes.
- Output:
[…,n], tf.float – Tensor of the same shape as llr_ch containing bit-wise soft-estimates (or hard-decided bit-values) of all codeword bits.
[num_edges, batch_size], tf.float – Tensor of VN messages representing the internal decoder state. Returned only if return_state is set to True.
Note
As decoding input, logits $\ell = \log \frac{p(x=1)}{p(x=0)}$ are assumed for compatibility with the learning framework, but internally log-likelihood ratios (LLRs) with definition $\ell = \log \frac{p(x=0)}{p(x=1)}$ are used.
The decoder is not (particularly) optimized for quasi-cyclic (QC) LDPC codes and, thus, supports arbitrary parity-check matrices.
The decoder is implemented using ragged tensors [TF_ragged] to account for arbitrary node degrees. To avoid a performance degradation caused by a severe indexing overhead, the batch dimension is shifted to the last dimension during decoding.
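The flooding schedule described above can be sketched in a few lines of pure Python for a toy parity-check matrix. This is a minimal, unvectorized illustration of sum-product BP (boxplus via the tanh rule), not the Sionna implementation, which operates on ragged TensorFlow tensors; the matrix H below is made up for the example.

```python
import math

H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]  # toy parity-check matrix

# one (check node j, variable node i) pair per edge of the Tanner graph
edges = [(j, i) for j, row in enumerate(H) for i, h in enumerate(row) if h]

def bp_decode(llr_ch, num_iter=10):
    # LLR convention: llr = log p(x=0)/p(x=1), so llr < 0 decides bit 1
    msg_v2c = {e: llr_ch[e[1]] for e in edges}  # init v2c with channel LLRs
    msg_c2v = {}
    for _ in range(num_iter):
        # CN update: boxplus over all incoming v2c except the outgoing edge
        for (j, i) in edges:
            prod = 1.0
            for (j2, i2) in edges:
                if j2 == j and i2 != i:
                    prod *= math.tanh(msg_v2c[(j2, i2)] / 2.0)
            prod = max(min(prod, 0.999999), -0.999999)  # numerical safety
            msg_c2v[(j, i)] = 2.0 * math.atanh(prod)
        # VN update: channel LLR + all incoming c2v except the outgoing edge
        for (j, i) in edges:
            msg_v2c[(j, i)] = llr_ch[i] + sum(
                msg_c2v[(j2, i2)] for (j2, i2) in edges if i2 == i and j2 != j)
    # marginalization and hard decision
    x_tot = [llr_ch[i] + sum(msg_c2v[e] for e in edges if e[1] == i)
             for i in range(len(llr_ch))]
    return [0 if x >= 0 else 1 for x in x_tot]
```

For the codeword [1, 1, 0, 0, 1, 1] of this toy code, the decoder recovers the flipped first bit from the noisy LLRs [1.0, -2.0, 2.0, 2.0, -2.0, -2.0].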
- property coderate
Coderate assuming independent parity checks
- property llr_max
Max LLR value used for internal calculations and rate-matching
- property n
Codeword length
- property num_cns
Number of check nodes
- property num_edges
Number of edges in decoding graph
- property num_iter
Number of decoding iterations
- property num_vns
Number of variable nodes
- property pcm
Parity-check matrix of LDPC code
- property return_state
Return internal decoder state for IDD schemes
- class sionna.phy.fec.ldpc.decoding.LDPC5GDecoder(encoder, cn_update='boxplus-phi', vn_update='sum', cn_schedule='flooding', hard_out=True, return_infobits=True, num_iter=20, llr_max=20.0, v2c_callbacks=None, c2v_callbacks=None, prune_pcm=True, return_state=False, precision=None, **kwargs)[source]
Iterative belief propagation decoder for 5G NR LDPC codes.
Inherits from LDPCBPDecoder and provides a wrapper for 5G compatibility, i.e., it automatically handles rate-matching according to [3GPPTS38212_LDPC].
Note that for full 5G 3GPP NR compatibility, the correct puncturing and shortening patterns must be applied and, thus, the encoder object is required as input.
If required, the decoder can be made trainable and is differentiable (the training of some check node types may not be supported) following the concept of “weighted BP” [Nachmani].
- Parameters:
encoder (LDPC5GEncoder) – An instance of LDPC5GEncoder containing the correct code parameters.
cn_update (str, “boxplus-phi” (default) | “boxplus” | “minsum” | “offset-minsum” | “identity” | callable) – Check node update rule to be used as described above. If a callable is provided, it will be used instead as CN update. The input of the function is a ragged tensor of v2c messages of shape [num_cns, None, batch_size] where the second dimension is ragged (i.e., depends on the individual CN degree).
vn_update (str, “sum” (default) | “identity” | callable) – Variable node update rule to be used. If a callable is provided, it will be used instead as VN update. The input of the function is a ragged tensor of c2v messages of shape [num_vns, None, batch_size] where the second dimension is ragged (i.e., depends on the individual VN degree).
cn_schedule ("flooding" | "layered" | [num_update_steps, num_active_nodes], tf.int) – Defines the CN update scheduling per BP iteration. Can be either "flooding" to update all nodes in parallel (recommended), or "layered" to sequentially update all CNs in the same lifting group together, or a 2D tensor where each row defines the num_active_nodes node indices to be updated per subiteration. In this case, each BP iteration runs num_update_steps subiterations; thus, the decoder's level of parallelization is lower and the decoding throughput usually decreases.
hard_out (bool, (default True)) – If True, the decoder provides hard-decided codeword bits instead of soft-values.
return_infobits (bool, (default True)) – If True, only the k info bits (soft or hard-decided) are returned. Otherwise all n positions are returned.
prune_pcm (bool, (default True)) – If True, all punctured degree-1 VNs and connected check nodes are removed from the decoding graph (see [Cammerer] for details). Besides numerical differences, this should yield the same decoding result but improves the decoding throughput and reduces the memory footprint.
num_iter (int (default: 20)) – Defining the number of decoder iterations (due to batching, no early stopping used at the moment!).
llr_max (float (default: 20) | None) – Internal clipping value for all internal messages. If None, no clipping is applied.
v2c_callbacks (None (default) | list of callables) – Each callable will be executed after each VN update with the following arguments msg_vn_rag_, it, x_hat, where msg_vn_rag_ are the v2c messages as ragged tensor of shape [num_vns, None, batch_size], x_hat is the current estimate of each VN of shape [num_vns, batch_size] and it is the current iteration counter. It must return an updated version of msg_vn_rag_ of the same shape.
c2v_callbacks (None (default) | list of callables) – Each callable will be executed after each CN update with the following arguments msg_cn_rag_ and it, where msg_cn_rag_ are the c2v messages as ragged tensor of shape [num_cns, None, batch_size] and it is the current iteration counter. It must return an updated version of msg_cn_rag_ of the same shape.
return_state (bool, (default False)) – If True, the internal VN messages msg_vn from the last decoding iteration are returned, and msg_vn or None must be given as a second input when calling the decoder. This can be used for iterative demapping and decoding.
precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the default precision from Sionna's global configuration is used.
- Input:
llr_ch ([…,n], tf.float) – Tensor containing the channel logits/LLR values.
msg_v2c (None | [num_edges, batch_size], tf.float) – Tensor of VN messages representing the internal decoder state. Required only if the decoder shall use its previous internal state, e.g. for iterative detection and decoding (IDD) schemes.
- Output:
[…,n] or […,k], tf.float – Tensor of the same shape as llr_ch containing bit-wise soft-estimates (or hard-decided bit-values) of all n codeword bits, or only the k information bits if return_infobits is True.
[num_edges, batch_size], tf.float – Tensor of VN messages representing the internal decoder state. Returned only if return_state is set to True. Remark: the entire decoder state is always returned, even if return_infobits is True.
Note
As decoding input, logits $\ell = \log \frac{p(x=1)}{p(x=0)}$ are assumed for compatibility with the learning framework, but internally LLRs with definition $\ell = \log \frac{p(x=0)}{p(x=1)}$ are used.
The decoder is not (particularly) optimized for quasi-cyclic (QC) LDPC codes and, thus, supports arbitrary parity-check matrices.
The decoder is implemented using ragged tensors [TF_ragged] to account for arbitrary node degrees. To avoid a performance degradation caused by a severe indexing overhead, the batch dimension is shifted to the last dimension during decoding.
- property encoder
LDPC Encoder used for rate-matching/recovery
Node Update Functions
- sionna.phy.fec.ldpc.decoding.vn_update_sum(msg_c2v_rag, llr_ch, llr_clipping=None)[source]
Variable node update function implementing the sum update.
This function implements the (extrinsic) variable node update. It takes the sum over all incoming messages msg excluding the intrinsic (= outgoing) message itself. Additionally, the channel LLR llr_ch is considered in each variable node.
- Parameters:
msg_c2v_rag ([num_nodes, None, batch_size], tf.ragged) – Ragged tensor of shape [num_nodes, None, batch_size] where the second axis is ragged (represents individual node degrees). Represents c2v messages.
llr_ch ([num_nodes, batch_size], tf.float) – Tensor containing the channel LLRs.
llr_clipping (None (default) | float) – Clipping value used for internal processing. If None, no internal clipping is applied.
- Returns:
msg_v2c_rag (tf.ragged) – Updated v2c messages. Ragged tensor of the same shape as msg_c2v_rag.
x_tot (tf.float) – Marginalized LLRs per variable node of shape [num_nodes, batch_size].
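For a single variable node, the extrinsic sum update above can be sketched as follows: the outgoing message on each edge is the channel LLR plus the sum of all incoming c2v messages except the one on that edge, which is equivalent to subtracting each incoming message from the total. This is a scalar sketch, not the ragged-tensor implementation.

```python
def vn_update_sum_single(msg_c2v, llr_ch):
    # total = channel LLR + all incoming c2v messages of this VN
    total = llr_ch + sum(msg_c2v)
    # extrinsic outgoing message: exclude the intrinsic (= incoming) message
    msg_v2c = [total - m for m in msg_c2v]
    x_tot = total  # marginalized LLR of this VN
    return msg_v2c, x_tot

msg_v2c, x_tot = vn_update_sum_single([1.0, -0.5, 2.0], llr_ch=0.5)
```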
- sionna.phy.fec.ldpc.decoding.cn_update_minsum(msg_v2c_rag, llr_clipping=None)[source]
Check node update function implementing the minsum update.
The function implements

$$ x_{j \to i} = \alpha_{j \to i} \cdot \min_{i' \in \mathcal{N}(j) \setminus i} |x_{i' \to j}| $$

where $x_{j \to i}$ and $x_{i \to j}$ denote the messages from check node (CN) $j$ to variable node (VN) $i$ and from VN $i$ to CN $j$, respectively. Further, $\mathcal{N}(j)$ denotes all indices of VNs connected to CN $j$ and $\alpha_{j \to i} = \prod_{i' \in \mathcal{N}(j) \setminus i} \operatorname{sign}(x_{i' \to j})$ is the sign of the outgoing message. For further details we refer to [Ryan] and [Chen].
- Parameters:
msg_v2c_rag ([num_nodes, None, batch_size], tf.ragged) – Ragged tensor of shape [num_nodes, None, batch_size] where the second axis is ragged (represents individual node degrees). Represents v2c messages
llr_clipping (None (default) | float) – Clipping value used for internal processing. If None, no internal clipping is applied.
- Returns:
msg_c2v – Ragged tensor of the same shape as the input msg_v2c_rag containing the updated c2v messages.
- Return type:
[num_nodes, None, batch_size], tf.ragged
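For a single check node, the minsum rule above can be sketched with the standard two-minima trick: the outgoing magnitude on each edge is the minimum over all other incoming magnitudes, and the outgoing sign is the product of all other incoming signs. This is a scalar sketch, not the ragged-tensor implementation.

```python
def cn_update_minsum_single(msg_v2c):
    # product of all incoming signs
    sign_prod = 1
    for m in msg_v2c:
        sign_prod *= 1 if m >= 0 else -1
    mags = [abs(m) for m in msg_v2c]
    m1 = min(mags)                         # smallest magnitude
    idx = mags.index(m1)
    m2 = min(mags[:idx] + mags[idx + 1:])  # second smallest magnitude
    out = []
    for i, m in enumerate(msg_v2c):
        mag = m2 if i == idx else m1       # extrinsic minimum
        sign = sign_prod * (1 if m >= 0 else -1)  # exclude own sign
        out.append(sign * mag)
    return out

msgs = cn_update_minsum_single([2.0, -0.5, 1.0])
```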
- sionna.phy.fec.ldpc.decoding.cn_update_offset_minsum(msg_v2c_rag, llr_clipping=None, offset=0.5)[source]
Check node update function implementing the offset corrected minsum.
The function implements

$$ x_{j \to i} = \alpha_{j \to i} \cdot \max \left( \min_{i' \in \mathcal{N}(j) \setminus i} |x_{i' \to j}| - \beta,\; 0 \right) $$

where $\beta$ denotes the offset and $x_{j \to i}$ and $x_{i \to j}$ denote the messages from check node (CN) $j$ to variable node (VN) $i$ and from VN $i$ to CN $j$, respectively. Further, $\mathcal{N}(j)$ denotes all indices of VNs connected to CN $j$ and $\alpha_{j \to i} = \prod_{i' \in \mathcal{N}(j) \setminus i} \operatorname{sign}(x_{i' \to j})$ is the sign of the outgoing message. For further details we refer to [Chen].
- Parameters:
msg_v2c_rag ([num_nodes, None, batch_size], tf.ragged) – Ragged tensor of shape [num_nodes, None, batch_size] where the second axis is ragged (represents individual node degrees). Represents v2c messages.
llr_clipping (None (default) | float) – Clipping value used for internal processing. If None, no internal clipping is applied.
offset (float (default 0.5)) – Offset value to be subtracted from each outgoing message.
- Returns:
msg_c2v – Ragged tensor of the same shape as the input msg_v2c_rag containing the updated c2v messages.
- Return type:
[num_nodes, None, batch_size], tf.ragged
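The offset correction above can be sketched for a single check node: after computing the extrinsic minimum magnitude, the offset is subtracted and the result is floored at zero before the sign is re-applied. This is a scalar sketch, not the ragged-tensor implementation.

```python
def cn_update_offset_minsum_single(msg_v2c, offset=0.5):
    # product of all incoming signs
    sign_prod = 1
    for m in msg_v2c:
        sign_prod *= 1 if m >= 0 else -1
    mags = [abs(m) for m in msg_v2c]
    out = []
    for i, m in enumerate(msg_v2c):
        others = mags[:i] + mags[i + 1:]
        mag = max(min(others) - offset, 0.0)      # offset correction, floored at 0
        sign = sign_prod * (1 if m >= 0 else -1)  # extrinsic sign
        out.append(sign * mag)
    return out

msgs = cn_update_offset_minsum_single([2.0, -0.5, 1.0], offset=0.5)
```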
- sionna.phy.fec.ldpc.decoding.cn_update_phi(msg, llr_clipping=None)[source]
Check node update function implementing the boxplus operation.
This function implements the (extrinsic) check node update based on the numerically more stable “phi” function (cf. [Ryan]). It calculates the boxplus function over all incoming messages msg excluding the intrinsic (= outgoing) message itself.
The function implements

$$ x_{j \to i} = \alpha_{j \to i} \cdot \phi \left( \sum_{i' \in \mathcal{N}(j) \setminus i} \phi \left( |x_{i' \to j}| \right) \right), \qquad \phi(x) = -\log\left(\tanh\left(\frac{x}{2}\right)\right) $$

where $x_{j \to i}$ and $x_{i \to j}$ denote the messages from check node (CN) $j$ to variable node (VN) $i$ and from VN $i$ to CN $j$, respectively. Further, $\mathcal{N}(j)$ denotes all indices of VNs connected to CN $j$ and $\alpha_{j \to i} = \prod_{i' \in \mathcal{N}(j) \setminus i} \operatorname{sign}(x_{i' \to j})$ is the sign of the outgoing message. For further details we refer to [Ryan].
Note that for numerical stability clipping can be applied.
- Parameters:
msg_v2c_rag ([num_nodes, None, batch_size], tf.ragged) – Ragged tensor of shape [num_nodes, None, batch_size] where the second axis is ragged (represents individual node degrees). Represents v2c messages
llr_clipping (None (default) | float) – Clipping value used for internal processing. If None, no internal clipping is applied.
- Returns:
msg_c2v – Ragged tensor of the same shape as the input msg_v2c_rag containing the updated c2v messages.
- Return type:
[num_nodes, None, batch_size], tf.ragged
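The phi function underlying this update, phi(x) = -log(tanh(x/2)), is its own inverse for x > 0, which is what allows the boxplus to be computed as a sum of phi values followed by one more application of phi. A minimal sketch:

```python
import math

def phi(x):
    # building block of the boxplus-phi CN update; self-inverse for x > 0
    return -math.log(math.tanh(x / 2.0))

x = 1.7
roundtrip = phi(phi(x))  # should recover x up to floating-point error
```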
- sionna.phy.fec.ldpc.decoding.cn_update_tanh(msg, llr_clipping=None)[source]
Check node update function implementing the boxplus operation.
This function implements the (extrinsic) check node update function. It calculates the boxplus function over all incoming messages “msg” excluding the intrinsic (=outgoing) message itself. The exact boxplus function is implemented by using the tanh function.
The function implements

$$ x_{j \to i} = 2 \operatorname{atanh} \left( \prod_{i' \in \mathcal{N}(j) \setminus i} \tanh \left( \frac{x_{i' \to j}}{2} \right) \right) $$

where $x_{j \to i}$ and $x_{i \to j}$ denote the messages from check node (CN) $j$ to variable node (VN) $i$ and from VN $i$ to CN $j$, respectively. Further, $\mathcal{N}(j)$ denotes all indices of VNs connected to CN $j$. For further details we refer to [Ryan].
Note that for numerical stability clipping can be applied.
- Parameters:
msg_v2c_rag ([num_nodes, None, batch_size], tf.ragged) – Ragged tensor of shape [num_nodes, None, batch_size] where the second axis is ragged (represents individual node degrees). Represents v2c messages
llr_clipping (None (default) | float) – Clipping value used for internal processing. If None, no internal clipping is applied.
- Returns:
msg_c2v – Ragged tensor of the same shape as the input msg_v2c_rag containing the updated c2v messages.
- Return type:
[num_nodes, None, batch_size], tf.ragged
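The tanh-based boxplus of two LLRs can be checked against the equivalent log-domain expression $\log \frac{1 + e^{a+b}}{e^a + e^b}$, which is an algebraic identity. A minimal sketch:

```python
import math

def boxplus(a, b):
    # boxplus of two LLRs via the tanh rule
    return 2.0 * math.atanh(math.tanh(a / 2.0) * math.tanh(b / 2.0))

def boxplus_exact(a, b):
    # equivalent closed form: log((1 + e^(a+b)) / (e^a + e^b))
    return math.log((1.0 + math.exp(a + b)) / (math.exp(a) + math.exp(b)))

v_tanh = boxplus(1.2, -0.7)
v_exact = boxplus_exact(1.2, -0.7)
```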
Decoder Callbacks
The LDPCBPDecoder and LDPC5GDecoder have the possibility to register callbacks that are executed after each iteration. This allows customizing the behavior of the decoder (for example, to implement weighted BP [Nachmani]) or tracking the decoding process.
A simple example that tracks the decoder statistics is given in the following:
num_iter = 10
# init decoder stats module
dec_stats = DecoderStatisticsCallback(num_iter)
encoder = LDPC5GEncoder(k = 100, # number of information bits (input)
n = 200) # number of codeword bits (output)
decoder = LDPC5GDecoder(encoder = encoder,
num_iter = num_iter, # number of BP iterations
return_infobits = True,
c2v_callbacks = [dec_stats,]) # register stats callback
source = GaussianPriorSource()
# generate LLRs
noise_var = 0.1
batch_size = 1000
llr_ch = source([batch_size, encoder.n], noise_var)
# and run decoder (this can be also a loop)
decoder(llr_ch)
# and print statistics
print("Avg. iterations:", dec_stats.avg_number_iterations.numpy())
print("Success rate after n iterations:", dec_stats.success_rate.numpy())
>> Avg. iterations: 5.404
>> Success rate after n iterations: [0.258 0.235 0.637 0.638 0.638 0.638 0.638 0.638 0.638 0.638]
- sionna.phy.fec.ldpc.utils.DecoderStatisticsCallback(num_iter)[source]
Callback for the LDPCBPDecoder to track decoder statistics.
Can be registered as c2v_callbacks in the LDPCBPDecoder and the LDPC5GDecoder.
Remark: the decoding statistics are based on CN convergence, i.e., successful decoding is assumed if all check nodes are fulfilled. This overestimates the success rate as it includes cases where the decoder converges to a wrong codeword.
- Parameters:
num_iter (int) – Maximum number of decoding iterations.
- Input:
msg ([num_vns, None, batch_size], tf.ragged, tf.float) – v2c messages as ragged tensor
it (int) – Current number of decoding iterations
- Output:
tf.ragged, tf.float – Same as msg
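The CN-convergence criterion behind these statistics can be sketched as follows: decoding is declared successful once the hard decision of the current estimate fulfills all parity checks, which can also happen for a wrong codeword. The toy matrix below is for illustration only.

```python
H = [[1, 1, 0, 1],
     [0, 1, 1, 1]]  # toy parity-check matrix

def all_checks_fulfilled(H, llrs):
    # LLR convention: llr = log p(0)/p(1), so llr < 0 decides bit 1
    x_hat = [0 if l >= 0 else 1 for l in llrs]
    return all(sum(h * x for h, x in zip(row, x_hat)) % 2 == 0 for row in H)

ok = all_checks_fulfilled(H, [3.0, -2.0, 1.5, -0.4])       # valid codeword
not_ok = all_checks_fulfilled(H, [3.0, -2.0, 1.5, 0.4])    # parity violated
```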
- sionna.phy.fec.ldpc.utils.EXITCallback(num_iter)[source]
Callback for the LDPCBPDecoder to track EXIT statistics.
Can be registered as c2v_callbacks or v2c_callbacks in the LDPCBPDecoder and the LDPC5GDecoder.
This callback requires all-zero codeword simulations.
- Parameters:
num_iter (int) – Maximum number of decoding iterations.
- Input:
msg ([num_vns, None, batch_size], tf.ragged, tf.float) – The v2c or c2v messages as ragged tensor
it (int) – Current number of decoding iterations
- Output:
tf.ragged, tf.float – Same as msg
- sionna.phy.fec.ldpc.utils.WeightedBPCallback(num_edges, precision=None, **kwargs)[source]
Callback for the LDPCBPDecoder to enable weighted BP [Nachmani].
The BP decoder is fully differentiable and can be made trainable by following the concept of weighted BP [Nachmani] as shown in Fig. 1, leading to scaled messages

$$ \hat{x}_{i \to j} = w_{i \to j} \cdot x_{i \to j} $$

where $w_{i \to j}$ denotes the trainable weight of message $x_{i \to j}$. Please note that the training of some check node types may not be supported.
Fig. 1: Weighted BP as proposed in [Nachmani].
Can be registered as c2v_callbacks and v2c_callbacks in the LDPCBPDecoder and the LDPC5GDecoder.
- Parameters:
num_edges (int) – Number of edges in the decoding graph
precision (None (default) | ‘single’ | ‘double’) – Precision used for internal calculations and outputs. If set to None, the default precision from Sionna's global configuration is used.
- Input:
msg ([num_vns, None, batch_size], tf.ragged, tf.float) – v2c messages as ragged tensor
- Output:
tf.ragged, tf.float – Same as msg
References:
[3GPPTS38212_LDPC] ETSI 3GPP TS 38.212, “5G NR; Multiplexing and channel coding”, v.16.5.0, 2021-03.
[Ryan] W. Ryan, “An Introduction to LDPC Codes”, CRC Handbook for Coding and Signal Processing for Recording Systems, 2004.
[Chen] J. Chen, A. Dholakia, E. Eleftheriou, M. P. C. Fossorier, and X.-Y. Hu, “Reduced-Complexity Decoding of LDPC Codes”, IEEE Transactions on Communications, 2005.
[Richardson] T. Richardson and S. Kudekar, “Design of low-density parity-check codes for 5G new radio”, IEEE Communications Magazine 56.3, 2018.
[Nachmani] E. Nachmani, Y. Be’ery, and D. Burshtein, “Learning to Decode Linear Codes Using Deep Learning”, IEEE Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2016.
[Cammerer] S. Cammerer, M. Ebada, A. Elkelesh, and S. ten Brink, “Sparse Graphs for Belief Propagation Decoding of Polar Codes”, IEEE International Symposium on Information Theory (ISIT), 2018.
[TF_ragged] TensorFlow ragged tensors guide, https://www.tensorflow.org/guide/ragged_tensor