TurboDecoder#

class sionna.phy.fec.turbo.TurboDecoder(encoder: TurboEncoder | None = None, gen_poly: tuple | None = None, rate: float = 0.3333333333333333, constraint_length: int | None = None, interleaver: str = '3GPP', terminate: bool = False, num_iter: int = 6, hard_out: bool = True, algorithm: str = 'map', precision: str | None = None, device: str | None = None, **kwargs)[source]#

Bases: sionna.phy.block.Block

Turbo code decoder based on BCJR component decoders [Berrou].

Takes as input LLRs and returns LLRs or hard decided bits, i.e., an estimate of the information tensor.

This decoder is based on the BCJRDecoder and, thus, internally instantiates two BCJRDecoder blocks.

Parameters:
  • encoder (TurboEncoder | None) – If encoder is provided as input, the following input parameters are not required and will be ignored: gen_poly, rate, constraint_length, terminate, interleaver. They will be inferred from the encoder object itself. If encoder is None, the above parameters must be provided explicitly.

  • gen_poly (tuple | None) – Tuple of strings, each a binary (0/1) sequence representing a generator polynomial. If None, rate and constraint_length must be provided.

  • rate (float) – Rate of the Turbo code. Valid values are 1/3 and 1/2. Note that gen_poly, if provided, is used to encode the underlying convolutional code, which traditionally has rate 1/2.

  • constraint_length (int | None) – Valid values are between 3 and 6 inclusive. Only required if encoder and gen_poly are None.

  • interleaver (str) – “3GPP” or “random”. If “3GPP”, the internal interleaver for Turbo codes as specified in [3GPPTS36212] will be used. Only required if encoder is None.

  • terminate (bool) – If True, the two underlying convolutional encoders are assumed to be terminated to the all-zero state.

  • num_iter (int) – Number of Turbo decoding iterations. Each iteration runs one BCJR decoding pass for each of the two underlying convolutional component codes.

  • hard_out (bool) – Whether to output hard or soft decisions on the decoded information bits. If True, a hard-decided vector of 0/1 values is output; if False, the decoded LLRs of the information bits are output.

  • algorithm (str) – Indicates the BCJR variant to use. “map” denotes the exact MAP algorithm, “log” denotes the exact MAP algorithm implemented in the log-domain, and “maxlog” denotes the approximate MAP implementation in the log-domain, where \(\log(e^{a}+e^{b}) \approx \max(a,b)\).

  • precision (str | None) – Precision used for internal calculations and outputs. If None, the default precision is used.

  • device (str | None) – Device for computation (e.g., ‘cpu’, ‘cuda:0’). If None, the default device is used.

Inputs:

llr_ch (torch.float) – Tensor of shape […, n] containing the channel LLRs (logits) of the received codeword, where n is the codeword length.

Outputs:

output (torch.float) – Tensor of shape […, coderate * n] containing the estimates of the information bits.
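The output length coderate * n equals the number of information bits k per codeword. A quick sanity check of this relationship (assuming rate 1/3 and no trellis termination, so n = 3k; the numbers are hypothetical):

```python
# Relationship between codeword length n and decoder output length,
# assuming rate 1/3 and no termination bits (hypothetical numbers).
coderate = 1 / 3
k = 40                         # information bits per codeword
n = int(round(k / coderate))   # 120 coded bits
out_len = int(round(coderate * n))
print(out_len)  # 40
```

With termination enabled, a few extra tail bits are appended to the codeword, so n is slightly larger than k / coderate.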

Notes

For decoding, input logits defined as \(\operatorname{log} \frac{p(x=1)}{p(x=0)}\) are assumed for compatibility with the rest of Sionna. Internally, log-likelihood ratios (LLRs) with definition \(\operatorname{log} \frac{p(x=0)}{p(x=1)}\) are used.
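The two conventions differ only in sign; a minimal numeric illustration (the probability values are hypothetical):

```python
import math

p0, p1 = 0.2, 0.8                 # hypothetical bit probabilities
logit = math.log(p1 / p0)         # Sionna-wide convention: log p(x=1)/p(x=0)
llr = math.log(p0 / p1)           # internal decoder convention: log p(x=0)/p(x=1)
print(math.isclose(llr, -logit))  # True: one convention is the negation of the other
```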

Examples

import torch
from sionna.phy.fec.turbo import TurboEncoder, TurboDecoder

encoder = TurboEncoder(rate=1/3, constraint_length=4, terminate=True)
decoder = TurboDecoder(encoder, num_iter=6)

u = torch.randint(0, 2, (10, 40), dtype=torch.float32)
c = encoder(u)

# Simulate BPSK over an AWGN channel with noise std 0.5
x = 2.0 * c - 1.0
y = x + 0.5 * torch.randn_like(x)
llr = 2.0 * y / 0.25  # logits: 2y / sigma^2 with sigma^2 = 0.25

u_hat = decoder(llr)
print(u_hat.shape)
# torch.Size([10, 40])

Attributes

property gen_poly: tuple#

Generator polynomial used by the encoder.

property constraint_length: int#

Constraint length of the encoder.

property coderate: float#

Rate of the code used in the encoder.

property trellis: sionna.phy.fec.conv.utils.Trellis#

Trellis object used during encoding.

property k: int | None#

Number of information bits per codeword.

property n: int | None#

Number of codeword bits.

Methods

depuncture(y: torch.Tensor) → torch.Tensor[source]#

Depuncture by scattering elements into a larger tensor with zeros.

Given a tensor y of shape [batch, n], scatters its elements into a tensor of shape [batch, 3*rate*n], where the punctured positions are filled with 0.

For example, if the input is y, the rate is 1/2, and punct_pattern is [1, 1, 0, 1, 0, 1], then the output is [y[0], y[1], 0., y[2], 0., y[3], y[4], y[5], 0., …].

Parameters:

y (torch.Tensor) – Tensor of shape [batch, n] containing received LLRs.

Outputs:

y_depunct – Depunctured tensor of shape [batch, 3*rate*n].
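The scattering step can be sketched as follows. This is an illustrative reimplementation, not the block's actual internals; the names depuncture_sketch and punct_pattern are assumptions:

```python
import torch

def depuncture_sketch(y: torch.Tensor, punct_pattern: list) -> torch.Tensor:
    """Scatter received LLRs into the positions marked 1 in the
    (periodically repeated) puncturing pattern; punctured positions get 0."""
    batch, n = y.shape
    period = len(punct_pattern)
    n_kept = sum(punct_pattern)          # received values per pattern period
    assert n % n_kept == 0, "n must be a multiple of the kept positions"
    n_out = n // n_kept * period         # depunctured length
    mask = torch.tensor(punct_pattern, dtype=torch.bool).repeat(n_out // period)
    out = torch.zeros(batch, n_out, dtype=y.dtype, device=y.device)
    out[:, mask] = y                     # scatter into the unpunctured slots
    return out

y = torch.arange(1.0, 9.0).reshape(1, 8)          # [[1, 2, ..., 8]]
print(depuncture_sketch(y, [1, 1, 0, 1, 0, 1]))
# tensor([[1., 2., 0., 3., 0., 4., 5., 6., 0., 7., 0., 8.]])
```

The printed result matches the documented example: zeros appear exactly at the punctured positions of the repeated pattern.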

build(input_shape: tuple) → None[source]#

Build block and check dimensions.

Parameters:

input_shape (tuple) – Shape of input tensor […, n].