OSDecoder#
- class sionna.phy.fec.linear.OSDecoder(enc_mat: numpy.ndarray | None = None, t: int = 0, is_pcm: bool = False, encoder: sionna.phy.block.Block | None = None, precision: str | None = None, device: str | None = None, **kwargs)[source]#
Bases: sionna.phy.block.Block

Ordered statistics decoding (OSD) for binary, linear block codes.
This block implements the OSD algorithm as proposed in [Fossorier] and thereby approximates maximum-likelihood decoding for a sufficiently large order \(t\). The algorithm works for arbitrary linear block codes, but has a high computational complexity for long codes.
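To gauge that complexity, the number of candidate error patterns an order-\(t\) decoder evaluates (all patterns of weight up to \(t\) among the \(k\) most reliable positions, as described in the steps below) can be computed directly. A small illustrative sketch; the code parameters chosen here are arbitrary:

```python
from math import comb

# Number of error patterns an order-t OSD evaluates: all patterns of
# weight 0..t among the k most reliable positions.
def num_patterns(k, t):
    return sum(comb(k, w) for w in range(t + 1))

# Example: k = 57 (e.g., a (127, 57) code)
for t in range(5):
    print(t, num_patterns(57, t))
# 0 1
# 1 58
# 2 1654
# 3 30914
# 4 425924
```

The count grows combinatorially in \(t\), which is why only small orders are practical.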
The algorithm consists of the following steps:
1. Sort LLRs according to their reliability and apply the same column permutation to the generator matrix.
2. Bring the permuted generator matrix into its systematic form (so-called most-reliable basis).
3. Hard-decide and re-encode the \(k\) most reliable bits and discard the remaining \(n-k\) received positions.
4. Generate all possible error patterns with up to \(t\) errors in the \(k\) most reliable positions and find the most likely codeword among these candidates.
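The four steps above can be sketched in plain NumPy. This is a hedged, illustrative implementation, not the library's optimized code; the function name `osd_decode` and the Gaussian-elimination details are assumptions made for illustration:

```python
import numpy as np
from itertools import combinations

def osd_decode(G, llr, t):
    """Order-t OSD sketch over GF(2); llr > 0 means bit 1 is more likely."""
    G = G.astype(int)
    k, n = G.shape
    # Step 1: sort positions by reliability |llr|, most reliable first,
    # and apply the same column permutation to the generator matrix
    perm = np.argsort(-np.abs(llr))
    Gp = G[:, perm]
    llr_p = llr[perm].copy()
    # Step 2: Gaussian elimination to a systematic form on the k most
    # reliable independent positions (swap in later columns if dependent)
    for r in range(k):
        if not Gp[r:, r].any():
            for col in range(r + 1, n):
                if Gp[r:, col].any():
                    Gp[:, [r, col]] = Gp[:, [col, r]]
                    llr_p[[r, col]] = llr_p[[col, r]]
                    perm[[r, col]] = perm[[col, r]]
                    break
        pr = r + np.nonzero(Gp[r:, r])[0][0]
        Gp[[r, pr]] = Gp[[pr, r]]
        for rr in range(k):
            if rr != r and Gp[rr, r]:
                Gp[rr] ^= Gp[r]
    # Step 3: hard-decide the k most reliable bits
    u_mrb = (llr_p[:k] > 0).astype(int)
    # Step 4: flip up to t of the k most reliable bits, re-encode,
    # and keep the candidate with the best metric
    hard = (llr_p > 0).astype(int)
    best_c, best_metric = None, np.inf
    patterns = [()] + [p for w in range(1, t + 1)
                       for p in combinations(range(k), w)]
    for pat in patterns:
        u = u_mrb.copy()
        u[list(pat)] ^= 1
        cand = u @ Gp % 2
        # LLR-based metric: penalize disagreement with the channel LLRs
        metric = np.sum(np.abs(llr_p) * (cand != hard))
        if metric < best_metric:
            best_metric, best_c = metric, cand
    c_hat = np.empty(n, dtype=int)
    c_hat[perm] = best_c  # undo the reliability permutation
    return c_hat

# Demo: (7,4) Hamming code, one flipped low-reliability position
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
c = np.array([1, 0, 1, 1]) @ G % 2
llr = 4.0 * (2 * c - 1)
llr[5] *= -0.25  # inject an unreliable, flipped observation
print(np.array_equal(osd_decode(G, llr, 1), c))  # True
```

Because the flipped position has low reliability, it ends up among the discarded \(n-k\) positions and re-encoding from the most-reliable basis recovers the transmitted codeword.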
This implementation of the OSD algorithm uses the LLR-based distance metric from [Stimming_LLR] which simplifies the handling of higher-order modulation schemes.
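The hard-decision form of such an LLR-based metric can be illustrated in a few lines (a sketch under the assumption that a candidate codeword is penalized by \(|\mathrm{llr}_i|\) wherever it disagrees with the hard decision on the channel LLR; the most likely candidate minimizes this sum):

```python
import numpy as np

def llr_metric(c_hat, llr):
    # Penalty |llr_i| at every position where the candidate bit
    # disagrees with the hard decision on the channel LLR
    hard = (llr > 0).astype(int)
    return np.sum(np.abs(llr) * (c_hat != hard))

llr = np.array([3.2, -0.4, 1.1, -2.5])
print(llr_metric(np.array([1, 0, 1, 0]), llr))  # agrees everywhere -> 0.0
print(llr_metric(np.array([1, 1, 1, 0]), llr))  # one flip at |llr| = 0.4
```

Since the metric depends only on the LLR magnitudes and signs, it applies unchanged to LLRs produced by demappers for higher-order modulation.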
- Parameters:
  - enc_mat (numpy.ndarray | None) – Binary generator matrix of shape [k, n]. If is_pcm is True, enc_mat is interpreted as a parity-check matrix of shape [n-k, n].
  - t (int) – Order of the OSD algorithm.
  - is_pcm (bool) – If True, enc_mat is interpreted as a parity-check matrix.
  - encoder (sionna.phy.block.Block | None) – Sionna block that implements a FEC encoder. If not None, enc_mat is ignored and the code specified by the encoder is used to initialize the OSD.
  - precision (str | None) – Precision used for internal calculations and outputs. If None, the default precision is used.
  - device (str | None) – Device for computation (e.g., 'cpu', 'cuda:0'). If None, the default device is used.
- Inputs:
llr_ch – […, n], torch.float. Tensor containing the channel logits/LLR values.
- Outputs:
c_hat – […, n], torch.float. Tensor of same shape as llr_ch containing binary hard decisions of all codeword bits.
Notes
OS decoding is of high complexity and is only feasible for small values of \(t\), as \({n \choose t}\) patterns must be evaluated. The advantage of OSD is that it works for arbitrary linear block codes and provides an estimate of the expected ML performance for sufficiently large \(t\). However, for some code families, more efficient decoding algorithms with close-to-ML performance exist which can exploit certain code-specific properties. Examples of such decoders are the ViterbiDecoder for convolutional codes or the PolarSCLDecoder for Polar codes (for a sufficiently large list size).

It is recommended to run the decoder with torch.compile() as it significantly reduces the memory complexity (typically a 4-5x reduction) and improves execution speed (typically 7x or more). Without compilation, the decoder materializes large intermediate tensors of shape [batch_size, num_patterns, n], where num_patterns can be very large for higher values of t.

Examples
```python
import torch
from sionna.phy.fec.utils import load_parity_check_examples
from sionna.phy.fec.linear import LinearEncoder, OSDecoder

# Load (7,4) Hamming code
pcm, k, n, _ = load_parity_check_examples(0)
encoder = LinearEncoder(pcm, is_pcm=True)
decoder = OSDecoder(encoder=encoder, t=2)

# Generate random codewords and add noise
u = torch.randint(0, 2, (10, k), dtype=torch.float32)
c = encoder(u)
llr_ch = 2.0 * (2.0 * c - 1.0)  # Perfect LLRs
c_hat = decoder(llr_ch)
print(torch.equal(c, c_hat))  # True
```
Attributes
- property gm: torch.Tensor#
Generator matrix of the code.
Methods