LDPC5GDecoder#

class sionna.phy.fec.ldpc.LDPC5GDecoder(encoder: sionna.phy.fec.ldpc.encoding.LDPC5GEncoder, cn_update: str | Callable = 'boxplus-phi', vn_update: str | Callable = 'sum', cn_schedule: str | numpy.ndarray | torch.Tensor = 'flooding', hard_out: bool = True, return_infobits: bool = True, num_iter: int = 20, llr_max: float | None = 20.0, v2c_callbacks: List[Callable] | None = None, c2v_callbacks: List[Callable] | None = None, prune_pcm: bool = True, return_state: bool = False, precision: str | None = None, device: str | None = None, **kwargs)[source]#

Bases: sionna.phy.fec.ldpc.decoding.LDPCBPDecoder

Iterative belief propagation decoder for 5G NR LDPC codes.

Inherits from LDPCBPDecoder and provides a wrapper for 5G compatibility, i.e., automatically handles rate-matching according to [3GPPTS38212].

Note that for full 5G 3GPP NR compatibility, the correct puncturing and shortening patterns must be applied and, thus, the encoder object is required as input.

If required, the decoder can be made trainable and is differentiable following the concept of “weighted BP” [Nachmani] (note that training may not be supported for some check node types).

Parameters:
  • encoder (sionna.phy.fec.ldpc.encoding.LDPC5GEncoder) – An instance of LDPC5GEncoder containing the correct code parameters.

  • cn_update (str | Callable) – Check node update rule to be used. One of “boxplus-phi” (default), “boxplus”, “minsum”, “offset-minsum”, “identity”, or a callable. If a callable is provided, it is used as the CN update function. Its input is a tensor of v2c messages of shape [batch_size, num_cns, max_degree] together with a mask of shape [num_cns, max_degree].
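As an illustration of the callable variant, the following is a minimal NumPy sketch of a leave-one-out min-sum CN update with the documented signature. It is not the library's implementation; the mask semantics (True marks a connected edge) and the convention that masked positions are returned as zero are assumptions.

```python
import numpy as np

def minsum_cn_update(msg_v2c, mask):
    """Illustrative leave-one-out min-sum check node update.

    msg_v2c: [batch_size, num_cns, max_degree] incoming v2c messages
    mask:    [num_cns, max_degree] boolean, True for connected edges
    Returns c2v messages of the same shape as msg_v2c.
    """
    m = np.where(mask[None, :, :], msg_v2c, 0.0)
    sign = np.where(mask[None, :, :], np.sign(m), 1.0)
    sign = np.where(sign == 0.0, 1.0, sign)  # treat zeros as positive
    total_sign = np.prod(sign, axis=-1, keepdims=True)
    mag = np.where(mask[None, :, :], np.abs(m), np.inf)
    # Leave-one-out minimum: the overall minimum, except on the arg-min
    # edge itself, where the second-smallest magnitude is used instead.
    idx = np.argmin(mag, axis=-1, keepdims=True)
    min1 = np.take_along_axis(mag, idx, axis=-1)
    mag2 = mag.copy()
    np.put_along_axis(mag2, idx, np.inf, axis=-1)
    min2 = np.min(mag2, axis=-1, keepdims=True)
    loo_min = np.where(np.arange(mag.shape[-1]) == idx, min2, min1)
    # total_sign * sign divides out the edge's own sign (sign is +-1)
    msg_c2v = total_sign * sign * loo_min
    return np.where(mask[None, :, :], msg_c2v, 0.0)
```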

  • vn_update (str | Callable) – Variable node update rule to be used. One of “sum” (default), “identity”, or a callable. If a callable is provided, it is used as the VN update function. Its input is a tensor of c2v messages of shape [batch_size, num_vns, max_degree] together with a mask of shape [num_vns, max_degree].
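Analogously, a hypothetical “sum” VN update with this signature can be sketched as follows. The channel LLR term, which the actual decoder adds internally, is omitted here, and the mask semantics (True marks a connected edge) are an assumption:

```python
import numpy as np

def sum_vn_update(msg_c2v, mask):
    """Illustrative extrinsic sum VN update.

    msg_c2v: [batch_size, num_vns, max_degree] incoming c2v messages
    mask:    [num_vns, max_degree] boolean, True for connected edges
    Returns v2c messages of the same shape: for each edge, the sum of
    all other incoming messages at that VN.
    """
    m = np.where(mask[None, :, :], msg_c2v, 0.0)
    total = np.sum(m, axis=-1, keepdims=True)
    # Subtracting the edge's own message yields the extrinsic sum.
    return np.where(mask[None, :, :], total - m, 0.0)
```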

  • cn_schedule (str | numpy.ndarray | torch.Tensor) – Defines the CN update scheduling per BP iteration. Either “flooding” to update all nodes in parallel (recommended), “layered” to sequentially update all CNs of the same lifting group together, or a 2D tensor of shape [num_update_steps, num_active_nodes] where each row defines the node indices to be updated in one subiteration. In the latter case, each BP iteration runs num_update_steps subiterations; the decoder’s level of parallelization is thus lower, and the decoding throughput usually decreases.
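For the tensor variant, a hypothetical custom schedule that updates twelve CNs in three subiterations of four nodes each could be constructed as:

```python
import numpy as np

# Each row lists the CN indices that are updated together in one
# subiteration; one BP iteration then consists of
# num_update_steps = 3 sequential subiterations.
cn_schedule = np.arange(12).reshape(3, 4)
```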

  • hard_out (bool) – If True, the decoder provides hard-decided codeword bits instead of soft-values.

  • return_infobits (bool) – If True, only the k info bits (soft or hard-decided) are returned. Otherwise all n positions are returned.

  • prune_pcm (bool) – If True, all punctured degree-1 VNs and connected check nodes are removed from the decoding graph (see [Cammerer] for details). Besides numerical differences, this should yield the same decoding result but improves the decoding throughput and reduces the memory footprint.

  • num_iter (int) – Number of decoder iterations (due to batching, no early stopping is currently used).

  • llr_max (float | None) – Clipping value applied to all internal messages. If None, no clipping is applied.

  • v2c_callbacks (List[Callable] | None) – Each callable is executed after each VN update with the arguments msg_vn, it, x_hat, where msg_vn are the v2c messages as a tensor of shape [batch_size, num_vns, max_degree], x_hat is the current estimate of each VN of shape [batch_size, num_vns], and it is the current iteration counter. It must return an updated version of msg_vn of the same shape.

  • c2v_callbacks (List[Callable] | None) – Each callable is executed after each CN update with the arguments msg_cn and it, where msg_cn are the c2v messages as a tensor of shape [batch_size, num_cns, max_degree] and it is the current iteration counter. It must return an updated version of msg_cn of the same shape.

  • return_state (bool) – If True, the internal VN messages msg_vn from the last decoding iteration are returned, and msg_v2c or None must be provided as a second input when calling the decoder. This can be used for iterative demapping and decoding (IDD).

  • precision (str | None) – Precision used for internal calculations and outputs. If None, the default precision is used.

  • device (str | None) – Device for computation (e.g., ‘cpu’, ‘cuda:0’).

Inputs:
  • llr_ch – […, n], torch.float. Tensor containing the channel logits/llr values.

  • msg_v2c – None | [batch_size, num_edges], torch.float. Tensor of VN messages representing the internal decoder state. Required only if the decoder shall use its previous internal state, e.g., for iterative detection and decoding (IDD) schemes.

Outputs:
  • x_hat – […, n] or […, k], torch.float. Tensor of same shape as llr_ch containing bit-wise soft-estimates (or hard-decided bit-values) of all n codeword bits or only the k information bits if return_infobits is True.

  • msg_v2c – [batch_size, num_edges], torch.float. Tensor of VN messages representing the internal decoder state. Returned only if return_state is set to True. Remark: always returns entire decoder state, even if return_infobits is True.

Notes

As decoding input logits \(\operatorname{log} \frac{p(x=1)}{p(x=0)}\) are assumed for compatibility with the learning framework, but internally LLRs with definition \(\operatorname{log} \frac{p(x=0)}{p(x=1)}\) are used.
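The relation between the two conventions is a simple sign flip, which can be checked numerically:

```python
import math

# Logit convention (decoder input): log p(x=1)/p(x=0)
# LLR convention (internal):        log p(x=0)/p(x=1)
p1 = 0.9
logit = math.log(p1 / (1.0 - p1))
llr = math.log((1.0 - p1) / p1)
# The two quantities differ only in sign.
assert abs(llr + logit) < 1e-12
```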

The decoder is not (particularly) optimized for Quasi-cyclic (QC) LDPC codes and, thus, supports arbitrary parity-check matrices.

The batch-dimension is shifted to the last dimension during decoding to avoid a performance degradation caused by a severe indexing overhead.

Examples

import torch
from sionna.phy.fec.ldpc import LDPC5GEncoder, LDPC5GDecoder

# Create encoder and decoder
encoder = LDPC5GEncoder(k=100, n=200)
decoder = LDPC5GDecoder(encoder, num_iter=20)

# Encode and decode
u = torch.randint(0, 2, (10, 100), dtype=torch.float32)
c = encoder(u)
llr_ch = 2.0 * (2.0 * c - 1.0)  # Noise-free channel logits
u_hat = decoder(llr_ch)
print(torch.equal(u, u_hat))
# True

Attributes

property encoder: sionna.phy.fec.ldpc.encoding.LDPC5GEncoder#

LDPC Encoder used for rate-matching/recovery.

Methods

build(input_shape: tuple, **kwargs) → None[source]#

Build block and check input dimensions.

Parameters:

input_shape (tuple)