TurboTermination#

class sionna.phy.fec.turbo.utils.TurboTermination(constraint_length: int, conv_n: int = 2, num_conv_encs: int = 2, num_bitstreams: int = 3, precision: str | None = None, device: str | None = None, **kwargs)[source]#

Bases: sionna.phy.object.Object

Termination object that handles the transformation of termination bits from the convolutional encoders into a Turbo codeword.

Conversely, it also handles the transformation of channel symbols corresponding to the termination of a Turbo codeword back into the underlying convolutional codewords.

Parameters:
  • constraint_length (int) – Constraint length of the convolutional encoder used in the Turbo code. Note that the memory of the encoder is constraint_length - 1.

  • conv_n (int) – Number of output bits for one state transition in the underlying convolutional encoder.

  • num_conv_encs (int) – Number of parallel convolutional encoders used in the Turbo code.

  • num_bitstreams (int) – Number of output bit streams from Turbo code.

  • precision (str | None) – Precision used for internal calculations and outputs. If None, the global default precision is used.

  • device (str | None) – Device on which computations run (e.g., ‘cpu’, ‘cuda:0’). If None, the global default device is used.

Examples

from sionna.phy.fec.turbo import TurboTermination

term = TurboTermination(constraint_length=4, conv_n=2)
num_term_syms = term.get_num_term_syms()
print(num_term_syms)
# 4

Attributes

property mu: int#

Memory of the underlying convolutional encoder.

property conv_n: int#

Number of output bits per state transition.

property num_conv_encs: int#

Number of parallel convolutional encoders.

property num_bitstreams: int#

Number of output bit streams.

Methods

get_num_term_syms() → int[source]#

Computes the number of termination symbols for the Turbo code based on the underlying convolutional code parameters, primarily the memory \(\mu\).

Note that one Turbo symbol is assumed to comprise num_bitstreams bits.

Outputs:

turbo_term_syms – Total number of termination symbols for the Turbo code. One symbol equals num_bitstreams bits.

Examples

from sionna.phy.fec.turbo import TurboTermination

term = TurboTermination(constraint_length=4, conv_n=2)
num_term_syms = term.get_num_term_syms()
print(num_term_syms)
# 4
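The count returned above can be reproduced in plain Python. The sketch below assumes the formula \(\lceil \frac{num\_conv\_encs \cdot conv\_n \cdot \mu}{num\_bitstreams} \rceil\) stated in the termbits_conv2turbo description; the helper name num_term_syms_ref is hypothetical and not part of the library.

```python
import math

def num_term_syms_ref(constraint_length, conv_n=2,
                      num_conv_encs=2, num_bitstreams=3):
    """Hypothetical reference count: ceil(num_conv_encs * conv_n * mu / num_bitstreams)."""
    mu = constraint_length - 1  # encoder memory
    return math.ceil(num_conv_encs * conv_n * mu / num_bitstreams)

print(num_term_syms_ref(4))  # 4 -- matches get_num_term_syms() above
print(num_term_syms_ref(5))  # 6, since ceil(2 * 2 * 4 / 3) = 6
```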

termbits_conv2turbo(term_bits1: torch.Tensor, term_bits2: torch.Tensor) → torch.Tensor[source]#

Merges termination bit streams from the two convolutional encoders to a bit stream corresponding to the Turbo codeword.

Let term_bits1 and term_bits2 be:

\([x_1(K), z_1(K), x_1(K+1), z_1(K+1),..., x_1(K+\mu-1),z_1(K+\mu-1)]\)

\([x_2(K), z_2(K), x_2(K+1), z_2(K+1),..., x_2(K+\mu-1), z_2(K+\mu-1)]\)

where \(x_i, z_i\) are the systematic and parity bit streams respectively for a rate-1/2 convolutional encoder i, for i = 1, 2.

In the example output below, we assume \(\mu=4\) to demonstrate the zero padding at the end. Zeros are padded such that the total length is divisible by num_bitstreams (default 3), the number of Turbo bit streams.

Assume num_bitstreams = 3. Then the number of termination symbols for the TurboEncoder is \(\lceil \frac{2 \cdot conv\_n \cdot \mu}{3} \rceil = \lceil \frac{16}{3} \rceil = 6\):

\([x_1(K), z_1(K), x_1(K+1)]\)

\([z_1(K+1), x_1(K+2), z_1(K+2)]\)

\([x_1(K+3), z_1(K+3), x_2(K)]\)

\([z_2(K), x_2(K+1), z_2(K+1)]\)

\([x_2(K+2), z_2(K+2), x_2(K+3)]\)

\([z_2(K+3), 0, 0]\)

Therefore, the output of this method is a single vector (along the last dimension) in which all Turbo symbols are concatenated:

\([x_1(K), z_1(K), x_1(K+1), z_1(K+1), x_1(K+2), z_1(K+2), x_1(K+3),\)

\(z_1(K+3), x_2(K), z_2(K), x_2(K+1), z_2(K+1), x_2(K+2), z_2(K+2),\)

\(x_2(K+3), z_2(K+3), 0, 0]\)
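The merge and padding described above can be sketched in plain Python, with string labels standing in for actual bits. This is a didactic reimplementation under the stated assumptions (\(\mu=4\), rate-1/2 encoders, num_bitstreams = 3), not the library's tensor code; labels are written x1(K+0) for the document's x1(K).

```python
# Symbolic termination streams from two rate-1/2 encoders, mu = 4
mu, num_bitstreams = 4, 3
term_bits1 = [b for k in range(mu) for b in (f"x1(K+{k})", f"z1(K+{k})")]
term_bits2 = [b for k in range(mu) for b in (f"x2(K+{k})", f"z2(K+{k})")]

merged = term_bits1 + term_bits2       # concatenate the two encoder streams
pad = (-len(merged)) % num_bitstreams  # right zero-padding to a multiple of 3
merged += ["0"] * pad

# group into Turbo symbols of num_bitstreams bits each
symbols = [merged[i:i + num_bitstreams]
           for i in range(0, len(merged), num_bitstreams)]
print(len(symbols))  # 6 termination symbols
print(symbols[-1])   # ['z2(K+3)', '0', '0']
```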

Parameters:
  • term_bits1 (torch.Tensor) – 2+D tensor containing termination bits from convolutional encoder 1.

  • term_bits2 (torch.Tensor) – 2+D tensor containing termination bits from convolutional encoder 2.

Outputs:

term_bits – Tensor of termination bits. The output is obtained by concatenating the inputs and then adding right zero-padding if needed.

Examples

import torch
from sionna.phy.fec.turbo import TurboTermination

term = TurboTermination(constraint_length=4, conv_n=2)
term_bits1 = torch.randint(0, 2, (10, 6), dtype=torch.int32)
term_bits2 = torch.randint(0, 2, (10, 6), dtype=torch.int32)
result = term.termbits_conv2turbo(term_bits1, term_bits2)
print(result.shape)
# torch.Size([10, 12])

term_bits_turbo2conv(term_bits: torch.Tensor) → tuple[torch.Tensor, torch.Tensor][source]#

Splits the termination symbols from a Turbo codeword to the termination symbols corresponding to the two convolutional encoders, respectively.

Let’s assume \(\mu=4\) and the underlying convolutional encoders are systematic and rate-1/2, for demonstration purposes.

Let the term_bits tensor, corresponding to the termination symbols of the Turbo codeword, be as follows:

\(y = [x_1(K), z_1(K), x_1(K+1), z_1(K+1), x_1(K+2), z_1(K+2),\) \(x_1(K+3), z_1(K+3), x_2(K), z_2(K), x_2(K+1), z_2(K+1),\) \(x_2(K+2), z_2(K+2), x_2(K+3), z_2(K+3), 0, 0]\)

The two termination segments corresponding to the convolutional encoders are \(y[0,..., 2\mu]\) and \(y[2\mu,..., 4\mu]\); the trailing zero padding is discarded. The output of this method is a tuple of two tensors, each containing \(2\mu\) elements reshaped to \([\mu, 2]\):

\([[x_1(K), z_1(K)],\)

\([x_1(K+1), z_1(K+1)],\)

\([x_1(K+2), z_1(K+2)],\)

\([x_1(K+3), z_1(K+3)]]\)

and

\([[x_2(K), z_2(K)],\)

\([x_2(K+1), z_2(K+1)],\)

\([x_2(K+2), z_2(K+2)],\)

\([x_2(K+3), z_2(K+3)]]\)
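The split can likewise be sketched in plain Python on the symbolic stream \(y\) from above. This is didactic only; the library operates on batched tensors rather than lists.

```python
# Symbolic padded Turbo termination stream y for mu = 4 (18 entries)
mu = 4
y = ([b for k in range(mu) for b in (f"x1(K+{k})", f"z1(K+{k})")]
     + [b for k in range(mu) for b in (f"x2(K+{k})", f"z2(K+{k})")]
     + ["0", "0"])  # trailing zero padding

y1, y2 = y[:2 * mu], y[2 * mu:4 * mu]  # drop padding, split per encoder
term1 = [y1[i:i + 2] for i in range(0, 2 * mu, 2)]  # shape [mu, 2]
term2 = [y2[i:i + 2] for i in range(0, 2 * mu, 2)]
print(term1[0])   # ['x1(K+0)', 'z1(K+0)']
print(term2[-1])  # ['x2(K+3)', 'z2(K+3)']
```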

Parameters:

term_bits (torch.Tensor) – Channel output of the Turbo codeword, corresponding to the termination part.

Outputs:
  • term_bits1 – Channel output corresponding to encoder 1.

  • term_bits2 – Channel output corresponding to encoder 2.

Examples

import torch
from sionna.phy.fec.turbo import TurboTermination

term = TurboTermination(constraint_length=4, conv_n=2)
term_bits = torch.randn(10, 12)
term_bits1, term_bits2 = term.term_bits_turbo2conv(term_bits)
print(term_bits1.shape, term_bits2.shape)
# torch.Size([10, 6]) torch.Size([10, 6])
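Taken together, the two methods invert each other on the termination payload. A minimal pure-Python sketch of that round trip, with integer stand-ins for bits (didactic only, not the library code):

```python
mu, n = 3, 3  # memory of a constraint_length=4 encoder; 3 Turbo bit streams
b1 = list(range(2 * mu))             # stand-in termination bits, encoder 1
b2 = list(range(100, 100 + 2 * mu))  # stand-in termination bits, encoder 2

merged = b1 + b2
merged += [0] * ((-len(merged)) % n)  # right zero-padding, as in termbits_conv2turbo

# inverse step, as in term_bits_turbo2conv: drop padding, split per encoder
r1, r2 = merged[:2 * mu], merged[2 * mu:4 * mu]
print(r1 == b1 and r2 == b2)  # True
```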