PolarSCDecoder#
- class sionna.phy.fec.polar.PolarSCDecoder(frozen_pos: numpy.ndarray, n: int, *, precision: str | None = None, device: str | None = None, **kwargs)[source]#
Bases: sionna.phy.block.Block

Successive cancellation (SC) decoder [Arikan_Polar] for Polar codes and Polar-like codes.
- Parameters:
frozen_pos (numpy.ndarray) – Array of int defining the n-k indices of the frozen positions.
n (int) – Defining the codeword length.
precision (str | None) – Precision used for internal calculations and outputs. If None, the default precision is used.
device (str | None) – Device for computation (e.g., ‘cpu’, ‘cuda:0’). If None, the default device is used.
- Inputs:
llr_ch – […, n], torch.float. Tensor containing the channel LLR values (as logits).
- Outputs:
u_hat – […, k], torch.float. Tensor containing hard-decided estimations of all k information bits.
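The channel LLRs are expected "as logits", i.e., llr = log(p(bit=1)/p(bit=0)), so positive values favor bit 1. The following NumPy sketch (an illustration of the convention, not Sionna code) shows the noiseless BPSK-to-logit mapping used in the example below and the corresponding hard decision:

```python
import numpy as np

# Logit convention assumed here: llr = log(p(bit=1) / p(bit=0)),
# so a positive LLR favors bit 1.
codeword = np.array([0.0, 1.0, 1.0, 0.0])
llr_ch = 20.0 * (2.0 * codeword - 1.0)  # noiseless BPSK logits: -20 for bit 0, +20 for bit 1
hard = (llr_ch > 0).astype(float)       # hard decision recovers the codeword
print(np.array_equal(hard, codeword))   # True
```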
Notes
This block implements the SC decoder described in [Arikan_Polar]. However, the implementation follows the recursive-tree terminology of [Gross_Fast_SCL] and combines nodes for increased throughput without changing the outcome of the algorithm.
As commonly done, we assume that frozen bits are set to 0. Please note that, although of little practical relevance, setting frozen bits to 1 may result in an affine code instead of a linear code, as the all-zero codeword is then no longer necessarily part of the code.
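To illustrate the recursion, here is a minimal NumPy sketch of textbook SC decoding with a min-sum f-function. Note the sign convention: this sketch uses llr = log(p(0)/p(1)), the opposite of the logit inputs expected by PolarSCDecoder, and the helper names polar_transform and sc_decode are illustrative, not part of the Sionna API:

```python
import numpy as np

def polar_transform(u):
    """Polar encoding x = u @ F^{(x)m} over GF(2), recursively."""
    n = len(u)
    if n == 1:
        return u
    left = polar_transform(u[:n // 2] ^ u[n // 2:])
    right = polar_transform(u[n // 2:])
    return np.concatenate([left, right])

def sc_decode(llr, frozen, offset=0):
    """SC decoding; llr uses log(p(0)/p(1)), so positive LLRs favor bit 0.

    Returns (u_hat, x_hat), where x_hat is the partial sum (re-encoded
    codeword estimate) of this subtree.
    """
    n = len(llr)
    if n == 1:  # leaf: frozen bit is forced to 0, info bit is hard-decided
        u = 0 if offset in frozen else int(llr[0] < 0)
        return np.array([u]), np.array([u])
    l1, l2 = llr[:n // 2], llr[n // 2:]
    # f-function (min-sum): LLR of the XOR of the two codeword halves
    llr_f = np.sign(l1) * np.sign(l2) * np.minimum(np.abs(l1), np.abs(l2))
    u_left, p_left = sc_decode(llr_f, frozen, offset)
    # g-function: combine both observations using the known partial sum
    llr_g = l2 + (1 - 2 * p_left) * l1
    u_right, p_right = sc_decode(llr_g, frozen, offset + n // 2)
    return (np.concatenate([u_left, u_right]),
            np.concatenate([p_left ^ p_right, p_right]))

# Noiseless toy example with n=8 and an illustrative frozen set {0, 1, 2, 4}
u = np.zeros(8, dtype=int)
u[[3, 5, 6, 7]] = [1, 0, 1, 1]          # information bits
x = polar_transform(u)
llr_ch = 6.0 * (1 - 2 * x)              # BPSK without noise, log(p0/p1) convention
u_hat, _ = sc_decode(llr_ch, frozen={0, 1, 2, 4})
print(np.array_equal(u_hat, u))         # True
```

Without noise, SC decoding recovers the transmitted bits exactly; node-combining optimizations such as those in [Gross_Fast_SCL] accelerate exactly this recursion without changing its output.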
Examples
import torch
from sionna.phy.fec.polar import PolarSCDecoder, PolarEncoder
from sionna.phy.fec.polar.utils import generate_5g_ranking

k, n = 100, 256
frozen_pos, _ = generate_5g_ranking(k, n)
encoder = PolarEncoder(frozen_pos, n)
decoder = PolarSCDecoder(frozen_pos, n)

bits = torch.randint(0, 2, (10, k), dtype=torch.float32)
codewords = encoder(bits)
llr_ch = 20.0 * (2.0 * codewords - 1)  # BPSK without noise
decoded = decoder(llr_ch)
print(torch.equal(bits, decoded))  # True
Attributes
- property frozen_pos: numpy.ndarray#
Frozen positions for Polar decoding.
- property info_pos: numpy.ndarray#
Information bit positions for Polar encoding.
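The two properties are complementary: assuming frozen_pos holds the n-k frozen indices in [0, n), info_pos is simply the remaining k indices, which can be reproduced with NumPy (the frozen set below is a hypothetical example, not a 5G ranking):

```python
import numpy as np

n = 8
frozen_pos = np.array([0, 1, 2, 4])  # example frozen set for n=8
# Information positions are the sorted complement of the frozen set in [0, n)
info_pos = np.setdiff1d(np.arange(n), frozen_pos)
print(info_pos)  # [3 5 6 7]
```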
Methods