TurboDecoder#
- class sionna.phy.fec.turbo.TurboDecoder(encoder: TurboEncoder | None = None, gen_poly: tuple | None = None, rate: float = 0.3333333333333333, constraint_length: int | None = None, interleaver: str = '3GPP', terminate: bool = False, num_iter: int = 6, hard_out: bool = True, algorithm: str = 'map', precision: str | None = None, device: str | None = None, **kwargs)[source]#
Bases: sionna.phy.block.Block

Turbo code decoder based on BCJR component decoders [Berrou].

Takes as input LLRs and returns LLRs or hard-decided bits, i.e., an estimate of the information tensor.

This decoder is based on the BCJRDecoder and, thus, internally instantiates two BCJRDecoder blocks.

- Parameters:
encoder (TurboEncoder | None) – If encoder is provided as input, the following input parameters are not required and will be ignored: gen_poly, rate, constraint_length, terminate, interleaver. They will be inferred from the encoder object itself. If encoder is None, the above parameters must be provided explicitly.

gen_poly (tuple | None) – Tuple of strings with each string being a 0, 1 sequence. If None, rate and constraint_length must be provided.

rate (float) – Rate of the Turbo code. Valid values are 1/3 and 1/2. Note that gen_poly, if provided, is used to encode the underlying convolutional code, which traditionally has rate 1/2.

constraint_length (int | None) – Valid values are between 3 and 6 inclusive. Only required if encoder and gen_poly are None.

interleaver (str) – “3GPP” or “random”. If “3GPP”, the internal interleaver for Turbo codes as specified in [3GPPTS36212] will be used. Only required if encoder is None.

terminate (bool) – If True, the two underlying convolutional encoders are assumed to have terminated to the all-zero state.

num_iter (int) – Number of Turbo decoding iterations. Each iteration runs one BCJR decoder for each of the underlying convolutional code components.

hard_out (bool) – Whether to output hard or soft decisions on the decoded information bits. If True, a hard-decided information vector of 0/1’s is output; if False, the decoded LLRs of the information bits are output.

algorithm (str) – Indicates the implemented BCJR algorithm. “map” denotes the exact MAP algorithm, “log” denotes the exact MAP implementation in the log-domain, and “maxlog” denotes the approximate MAP implementation in the log-domain, where \(\log(e^{a}+e^{b}) \approx \max(a,b)\).

precision (str | None) – Precision used for internal calculations and outputs. If None, the default precision is used.

device (str | None) – Device for computation (e.g., ‘cpu’, ‘cuda:0’). If None, the default device is used.
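The difference between the "log" and "maxlog" options comes down to the Jacobian logarithm versus its max approximation. A minimal sketch in plain Python (not part of the Sionna API) illustrating the approximation and its bounded error:

```python
import math

def jacobian_log(a: float, b: float) -> float:
    """Exact log(e^a + e^b), computed stably via log-sum-exp."""
    m = max(a, b)
    return m + math.log1p(math.exp(-abs(a - b)))

def maxlog(a: float, b: float) -> float:
    """Max-log approximation: log(e^a + e^b) ~ max(a, b)."""
    return max(a, b)

# The approximation error is the correction term log(1 + e^{-|a-b|}),
# bounded by log(2) and vanishing as |a - b| grows.
for a, b in [(0.0, 0.0), (2.0, 1.0), (10.0, 1.0)]:
    print(a, b, jacobian_log(a, b), maxlog(a, b))
```

The worst case is a == b, where the correction term equals log(2); for well-separated metrics the two are nearly identical, which is why "maxlog" is a common speed/accuracy trade-off.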
- Inputs:
llr_ch – torch.float. Tensor of shape […, n] containing the (noisy) channel logits/LLR values, where n is the codeword length.
- Outputs:
output – torch.float. Tensor of shape […, coderate * n] containing the estimates of the information bit tensor.
Notes
For decoding, input logits defined as \(\operatorname{log} \frac{p(x=1)}{p(x=0)}\) are assumed for compatibility with the rest of Sionna. Internally, log-likelihood ratios (LLRs) with definition \(\operatorname{log} \frac{p(x=0)}{p(x=1)}\) are used.
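Because the logit convention \(\log \frac{p(x=1)}{p(x=0)}\) is exactly the negative of the classical LLR \(\log \frac{p(x=0)}{p(x=1)}\), converting between the two is a sign flip. A small illustrative sketch (helper names are hypothetical, not Sionna API):

```python
import math

def logit_from_prob(p1: float) -> float:
    """Sionna convention: log p(x=1) / p(x=0)."""
    return math.log(p1 / (1.0 - p1))

def llr_from_prob(p1: float) -> float:
    """Classical LLR convention: log p(x=0) / p(x=1)."""
    return math.log((1.0 - p1) / p1)

p1 = 0.9
# The two conventions differ only by sign.
print(logit_from_prob(p1), llr_from_prob(p1))
```

So if a demapper produces classical LLRs, negating them yields the logits this decoder expects.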
Examples
```python
import torch
from sionna.phy.fec.turbo import TurboEncoder, TurboDecoder

encoder = TurboEncoder(rate=1/3, constraint_length=4, terminate=True)
decoder = TurboDecoder(encoder, num_iter=6)

u = torch.randint(0, 2, (10, 40), dtype=torch.float32)
c = encoder(u)

# Simulate BPSK with AWGN
x = 2.0 * c - 1.0
y = x + 0.5 * torch.randn_like(x)
llr = 2.0 * y / 0.25

u_hat = decoder(llr)
print(u_hat.shape)  # torch.Size([10, 40])
```
Attributes
- property trellis: sionna.phy.fec.conv.utils.Trellis#
Trellis object used during encoding.
Methods
- depuncture(y: torch.Tensor) → torch.Tensor[source]#

Depuncture by scattering elements into a larger tensor with zeros.

Given a tensor y of shape [batch, n], scatters the elements of y into shape [batch, 3*rate*n], where the extra elements are filled with 0.

For example, if the input is y, the rate is 1/2, and punct_pattern is [1, 1, 0, 1, 0, 1], then the output is [y[0], y[1], 0., y[2], 0., y[3], y[4], y[5], 0., ...].

- Parameters:
y (torch.Tensor) – Tensor of shape [batch, n] containing received LLRs.
- Outputs:
y_depunct – Depunctured tensor of shape [batch, 3*rate*n].
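The scatter performed by depuncture can be illustrated with plain Python lists. This is a sketch, not the internal implementation; it assumes the puncturing pattern tiles over the output and that the output length is len(y) * len(pattern) / sum(pattern), which equals 3*rate*n for the patterns above:

```python
def depuncture_sketch(y, punct_pattern):
    """Scatter y into a longer list, inserting 0.0 wherever the
    (tiled) puncturing pattern has a 0."""
    n_out = len(y) * len(punct_pattern) // sum(punct_pattern)
    it = iter(y)
    out = []
    for i in range(n_out):
        if punct_pattern[i % len(punct_pattern)]:
            out.append(next(it))  # position was transmitted
        else:
            out.append(0.0)       # punctured position: zero LLR (erasure)
    return out

y = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
print(depuncture_sketch(y, [1, 1, 0, 1, 0, 1]))
```

Filling punctured positions with a zero LLR marks them as erasures: the BCJR decoders treat them as carrying no channel information.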