LDPCBPDecoder#

class sionna.phy.fec.ldpc.LDPCBPDecoder(pcm: numpy.ndarray | scipy.sparse._csr.csr_matrix | scipy.sparse._csc.csc_matrix, cn_update: str | Callable = 'boxplus-phi', vn_update: str | Callable = 'sum', cn_schedule: str | numpy.ndarray | torch.Tensor = 'flooding', hard_out: bool = True, num_iter: int = 20, llr_max: float | None = 20.0, v2c_callbacks: List[Callable] | None = None, c2v_callbacks: List[Callable] | None = None, return_state: bool = False, precision: str | None = None, device: str | None = None, **kwargs)[source]#

Bases: sionna.phy.block.Block

Iterative belief propagation decoder for low-density parity-check (LDPC) codes and other codes on graphs.

This class defines a generic belief propagation decoder for decoding with arbitrary parity-check matrices. It can be used to iteratively estimate/recover the transmitted codeword (or information bits) based on the LLR values of the noisy channel observations.

By default, the decoder implements the flooding message-passing algorithm [Ryan], i.e., all nodes are updated in parallel. The following check node update functions are available:

  1. boxplus

    \[y_{j \to i} = 2 \operatorname{tanh}^{-1} \left( \prod_{i' \in \mathcal{N}(j) \setminus i} \operatorname{tanh} \left( \frac{x_{i' \to j}}{2} \right) \right)\]
  2. boxplus-phi

    \[y_{j \to i} = \alpha_{j \to i} \cdot \phi \left( \sum_{i' \in \mathcal{N}(j) \setminus i} \phi \left( |x_{i' \to j}|\right) \right)\]

    with \(\phi(x)=-\operatorname{log}\left(\operatorname{tanh}\left(\frac{x}{2}\right)\right)\)

  3. minsum

    \[y_{j \to i} = \alpha_{j \to i} \cdot \min_{i' \in \mathcal{N}(j) \setminus i} \left(|x_{i' \to j}|\right)\]
  4. offset-minsum

    \[y_{j \to i} = \alpha_{j \to i} \cdot \max \left( \min_{i' \in \mathcal{N}(j) \setminus i} \left(|x_{i' \to j}| \right)-\beta , 0\right)\]

where \(\beta=0.5\), \(y_{j \to i}\) denotes the message from check node (CN) j to variable node (VN) i, and \(x_{i \to j}\) denotes the message from VN i to CN j, respectively. Further, \(\mathcal{N}(j)\) denotes the set of indices of all VNs connected to CN j, and

\[\alpha_{j \to i} = \prod_{i' \in \mathcal{N}(j) \setminus i} \operatorname{sign}(x_{i' \to j})\]

is the sign of the outgoing message. For further details, we refer to [Ryan]; the offset-corrected minsum rule is described in [Chen].
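As a didactic sketch of rules 1 and 2 above (not the Sionna implementation; the function names are illustrative), the following NumPy code applies the boxplus-phi update to the incoming messages of a single check node and verifies that the result agrees with the exact boxplus rule:

```python
import numpy as np

def phi(x):
    # phi(x) = -log(tanh(x/2)); note that phi is its own inverse
    return -np.log(np.tanh(x / 2.0))

def cn_update_boxplus_phi(x):
    """Boxplus-phi update for a single check node.
    x: incoming v2c messages of shape [degree];
    returns outgoing c2v messages of the same shape."""
    degree = len(x)
    y = np.empty(degree)
    for i in range(degree):
        others = np.delete(x, i)              # messages from N(j) \ i
        alpha = np.prod(np.sign(others))      # sign of the outgoing message
        y[i] = alpha * phi(np.sum(phi(np.abs(others))))
    return y

x = np.array([2.0, -1.0, 3.0])               # illustrative v2c messages
y = cn_update_boxplus_phi(x)
# Exact boxplus for the first outgoing message (excludes x[0])
y0_exact = 2.0 * np.arctanh(np.tanh(-1.0 / 2) * np.tanh(3.0 / 2))
```

Both rules are mathematically equivalent; boxplus-phi trades the product of tanh terms for a sum in the \(\phi\)-domain.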

Note that for full 5G 3GPP NR compatibility, the correct puncturing and shortening patterns must be applied (cf. [Richardson] for details); this is handled by LDPC5GEncoder and LDPC5GDecoder, respectively.

If required, the decoder can be made trainable and is fully differentiable by following the concept of weighted BP [Nachmani]. For this, custom callbacks can be registered that scale the messages during decoding. Please see the corresponding tutorial notebook for details.
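As a minimal sketch of this idea (the callback signature follows the v2c_callbacks parameter description below; the function name and the weight value are purely illustrative and, in actual weighted BP, the weights would be trainable per edge), a callback that damps all v2c messages by a single scalar could look as follows:

```python
import numpy as np

DAMPING = 0.75  # illustrative scalar weight; trainable in weighted BP

def damped_v2c_callback(msg_vn, it, x_hat):
    """Hypothetical v2c callback: scales all VN-to-CN messages.
    msg_vn: [batch_size, num_vns, max_degree]; must return the same shape."""
    return DAMPING * msg_vn

# It would be registered via, e.g.,
# LDPCBPDecoder(pcm, v2c_callbacks=[damped_v2c_callback])
msg_vn = np.ones((2, 5, 3))                      # dummy messages
scaled = damped_v2c_callback(msg_vn, it=0, x_hat=None)
```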

For numerical stability, the decoder applies LLR clipping of +/- llr_max to the input LLRs.

Parameters:
  • pcm (numpy.ndarray | scipy.sparse._csr.csr_matrix | scipy.sparse._csc.csc_matrix) – An ndarray of shape [n-k, n] defining the parity-check matrix consisting only of 0 or 1 entries. Can also be of type scipy.sparse.csr_matrix or scipy.sparse.csc_matrix.

  • cn_update (str | Callable) – Check node update rule to be used as described above. One of “boxplus-phi” (default), “boxplus”, “minsum”, “offset-minsum”, “identity”, or a callable. If a callable is provided, it is used as the CN update function. The input of the function is a tensor of v2c messages of shape [batch_size, num_cns, max_degree] with a mask of shape [num_cns, max_degree].

  • vn_update (str | Callable) – Variable node update rule to be used. One of “sum” (default), “identity”, or a callable. If a callable is provided, it is used as the VN update function. The input of the function is a tensor of c2v messages of shape [batch_size, num_vns, max_degree] with a mask of shape [num_vns, max_degree].

  • cn_schedule (str | numpy.ndarray | torch.Tensor) – Defines the CN update scheduling per BP iteration. Either “flooding” to update all nodes in parallel (recommended), or a 2D tensor of shape [num_update_steps, num_active_nodes] where each row defines the node indices to be updated per subiteration. In the latter case, each BP iteration runs num_update_steps subiterations; this lowers the decoder’s level of parallelization and usually decreases the decoding throughput.

  • hard_out (bool) – If True, the decoder provides hard-decided codeword bits instead of soft-values.

  • num_iter (int) – Number of decoder iterations (due to batching, no early stopping is currently applied).

  • llr_max (float | None) – Internal clipping value for all internal messages. If None, no clipping is applied.

  • v2c_callbacks (List[Callable] | None) – Each callable will be executed after each VN update with the following arguments msg_vn, it, x_hat, where msg_vn are the v2c messages as tensor of shape [batch_size, num_vns, max_degree], x_hat is the current estimate of each VN of shape [batch_size, num_vns], and it is the current iteration counter. It must return an updated version of msg_vn of same shape.

  • c2v_callbacks (List[Callable] | None) – Each callable will be executed after each CN update with the following arguments msg_cn and it where msg_cn are the c2v messages as tensor of shape [batch_size, num_cns, max_degree] and it is the current iteration counter. It must return an updated version of msg_cn of same shape.

  • return_state (bool) – If True, the internal VN messages msg_vn from the last decoding iteration are returned, and msg_vn or None needs to be given as a second input when calling the decoder. This can be used for iterative demapping and decoding.

  • precision (str | None) – Precision used for internal calculations and outputs. If None, the default precision is used.

  • device (str | None) – Device for computation (e.g., ‘cpu’, ‘cuda:0’).
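To illustrate the cn_schedule parameter described above, the following sketch builds a simple layered schedule that updates the check nodes of a hypothetical code in groups of two per subiteration; the number of check nodes and the grouping are arbitrary and chosen for illustration only:

```python
import numpy as np

num_cns = 6      # assumed number of check nodes (rows of the pcm)
group_size = 2   # CNs updated jointly per subiteration

# Each row lists the CN indices updated in one subiteration;
# shape [num_update_steps, num_active_nodes] = [3, 2]
cn_schedule = np.arange(num_cns).reshape(-1, group_size)

# Passing this array as cn_schedule would make each BP iteration run
# 3 sequential subiterations instead of one fully parallel (flooding) update.
```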

Inputs:
  • llr_ch – […, n], torch.float. Tensor containing the channel logits/LLR values.

  • msg_v2c – None | [batch_size, num_edges], torch.float. Tensor of VN messages representing the internal decoder state. Required only if the decoder shall use its previous internal state, e.g., for iterative detection and decoding (IDD) schemes.

Outputs:
  • x_hat – […, n], torch.float. Tensor of same shape as llr_ch containing bit-wise soft-estimates (or hard-decided bit-values) of all codeword bits.

  • msg_v2c – [batch_size, num_edges], torch.float. Tensor of VN messages representing the internal decoder state. Returned only if return_state is set to True.

Notes

As decoder input, logits \(\operatorname{log} \frac{p(x=1)}{p(x=0)}\) are assumed for compatibility with the learning framework; internally, log-likelihood ratios (LLRs) with the definition \(\operatorname{log} \frac{p(x=0)}{p(x=1)}\) are used.
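A small numerical sketch of this sign convention (the probability value is purely illustrative): a bit that is likely 1 has a positive logit but a negative textbook LLR, since the two definitions differ only by a sign flip.

```python
import numpy as np

p1 = 0.8                          # assumed p(x=1)
logit = np.log(p1 / (1.0 - p1))   # decoder input convention: log p(x=1)/p(x=0)
llr = -logit                      # textbook LLR: log p(x=0)/p(x=1)
```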

The decoder is not (particularly) optimized for quasi-cyclic (QC) LDPC codes and, thus, supports arbitrary parity-check matrices.

Examples

import torch
from sionna.phy.fec.utils import load_parity_check_examples
from sionna.phy.fec.ldpc import LDPCBPDecoder

# Load (7,4) Hamming code
pcm, k, n, _ = load_parity_check_examples(0)
decoder = LDPCBPDecoder(pcm, num_iter=10)

# Decode random LLRs
llr_ch = torch.randn(100, n) * 2.0
c_hat = decoder(llr_ch)
print(c_hat.shape)
# torch.Size([100, 7])
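To see the flooding schedule end to end, the following self-contained NumPy sketch implements a plain min-sum BP decoder on the same (7,4) Hamming code and corrects a single unreliable bit. It uses the textbook LLR convention (positive LLR means bit 0) and is a didactic reimplementation, not the Sionna API:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bp_decode_minsum(H, llr_ch, num_iter=10):
    """Flooding min-sum BP (textbook LLRs log p(x=0)/p(x=1)).
    Returns hard-decided bits of shape [n]."""
    m, n = H.shape
    msg_v2c = H * llr_ch                    # init v2c with channel LLRs
    for _ in range(num_iter):
        # CN update: leave-one-out sign product and minimum magnitude
        msg_c2v = np.zeros((m, n))
        for j in range(m):
            idx = np.flatnonzero(H[j])
            x = msg_v2c[j, idx]
            for t, i in enumerate(idx):
                others = np.delete(x, t)
                msg_c2v[j, i] = np.prod(np.sign(others)) * np.min(np.abs(others))
        # VN update: channel LLR plus all incoming c2v except the target CN
        total = llr_ch + msg_c2v.sum(axis=0)
        msg_v2c = H * (total - msg_c2v)
    return (total < 0).astype(int)          # LLR < 0 -> bit 1

# All-zero codeword with one unreliable position (negative LLR at bit 2)
llr_ch = np.full(7, 2.0)
llr_ch[2] = -1.0
x_hat = bp_decode_minsum(H, llr_ch)
```

After a few iterations, the extrinsic messages from the two check nodes connected to bit 2 outweigh its negative channel LLR and the all-zero codeword is recovered.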

Attributes

property pcm: numpy.ndarray | scipy.sparse._csr.csr_matrix#

Parity-check matrix of LDPC code.

property num_cns: int#

Number of check nodes.

property num_vns: int#

Number of variable nodes.

property n: int#

Codeword length.

property coderate: float#

Coderate assuming independent parity checks.

property num_edges: int#

Number of edges in decoding graph.

property num_iter: int#

Number of decoding iterations.

property llr_max: float | None#

Max LLR value used for internal calculations.

property return_state: bool#

Return internal decoder state for IDD schemes.

Methods

build(input_shape: tuple, **kwargs) → None[source]#

Build block and validate input shape.

Parameters:

input_shape (tuple)