PolarBPDecoder#

class sionna.phy.fec.polar.PolarBPDecoder(frozen_pos: numpy.ndarray, n: int, num_iter: int = 20, hard_out: bool = True, *, precision: str | None = None, device: str | None = None, **kwargs)[source]#

Bases: sionna.phy.block.Block

Belief propagation (BP) decoder for Polar codes [Arikan_Polar] and Polar-like codes based on [Arikan_BP] and [Forney_Graphs].

Parameters:
  • frozen_pos (numpy.ndarray) – Array of n-k integers defining the indices of the frozen positions.

  • n (int) – Codeword length.

  • num_iter (int) – Number of decoder iterations (no early stopping is currently used).

  • hard_out (bool) – If True, the decoder provides hard-decided information bits instead of soft values.

  • precision (str | None) – Precision used for internal calculations and outputs. If None, the default precision is used.

  • device (str | None) – Device for computation (e.g., ‘cpu’, ‘cuda:0’). If None, the default device is used.

Inputs:

llr_ch – […, n], torch.float. Tensor containing the channel logits/llr values.

Outputs:

u_hat – […, k], torch.float. Tensor containing bit-wise soft estimates (or hard-decided bit values if hard_out is True) of all k information bits.

Notes

This decoder is fully differentiable and, thus, well-suited for gradient descent-based learning tasks such as learned code design [Ebada_Design].
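To illustrate why BP decoding lends itself to gradient-based learning, consider the soft check-node ("boxplus") update common to BP-style decoders. The sketch below is plain PyTorch and not Sionna's internal implementation; it only shows that gradients propagate through such a soft update when no hard decision is taken:

```python
import torch

# Soft check-node ("boxplus") update f(a, b) = 2*atanh(tanh(a/2)*tanh(b/2)).
# It is smooth in its inputs, which is what keeps a BP decoder differentiable
# as long as hard_out=False. (Illustrative sketch, not Sionna's internal API.)
def boxplus(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    p = torch.tanh(a / 2.0) * torch.tanh(b / 2.0)
    # Clamp to keep atanh finite for saturated LLRs
    return 2.0 * torch.atanh(torch.clamp(p, -1.0 + 1e-7, 1.0 - 1e-7))

a = torch.tensor([1.5], requires_grad=True)
b = torch.tensor([-0.5], requires_grad=True)
out = boxplus(a, b)
out.backward()
print(a.grad is not None and b.grad is not None)  # True: gradients flow
```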

As commonly done, we assume that frozen bits are set to 0. Please note that, although of little practical relevance, setting frozen bits to 1 may result in an affine code instead of a linear code, as the all-zero codeword is then no longer necessarily part of the code.
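The effect of the frozen-bit value can be seen directly from the polar transform. The sketch below (a standalone NumPy helper, not Sionna API) encodes an all-zero information word twice, once with frozen bits set to 0 and once to 1:

```python
import numpy as np

# Minimal polar transform x = u * F^{⊗m} over GF(2) as an in-place butterfly
# (illustrative helper, not part of Sionna).
def polar_transform(u):
    x = np.array(u, dtype=int)
    n = len(x)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            x[i:i + step] = (x[i:i + step] + x[i + step:i + 2 * step]) % 2
        step *= 2
    return x

n = 8
frozen_pos = np.array([0, 1, 2, 4])  # example frozen set for k=4

# All-zero information bits with frozen bits = 0: the all-zero codeword
# results, i.e., the all-zero word is in the code (linear code).
u0 = np.zeros(n, dtype=int)
print(polar_transform(u0))  # [0 0 0 0 0 0 0 0]

# Same information bits, but frozen bits = 1: a nonzero codeword results,
# so the all-zero word is no longer a codeword (affine code).
u1 = np.zeros(n, dtype=int)
u1[frozen_pos] = 1
print(polar_transform(u1))  # nonzero
```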

Examples

import torch
from sionna.phy.fec.polar import PolarBPDecoder, PolarEncoder
from sionna.phy.fec.polar.utils import generate_5g_ranking

k, n = 100, 256
frozen_pos, _ = generate_5g_ranking(k, n)
encoder = PolarEncoder(frozen_pos, n)
decoder = PolarBPDecoder(frozen_pos, n, num_iter=20)

bits = torch.randint(0, 2, (10, k), dtype=torch.float32)
codewords = encoder(bits)
llr_ch = 20.0 * (2.0 * codewords - 1)  # BPSK without noise
decoded = decoder(llr_ch)
print(torch.equal(bits, decoded))
# True

Attributes

property n: int#

Codeword length.

property k: int#

Number of information bits.

property frozen_pos: numpy.ndarray#

Frozen positions for Polar decoding.

property info_pos: numpy.ndarray#

Information bit positions for Polar encoding.

property llr_max: float#

Maximum LLR value for internal calculations.

property num_iter: int#

Number of decoding iterations.

property hard_out: bool#

Indicates if decoder hard-decides outputs.

Methods

build(input_shape: Tuple[int, ...]) None[source]#

Build the block and check that the input shape is valid.

Parameters:

input_shape (Tuple[int, ...])