TBEncoder#

class sionna.phy.nr.TBEncoder(target_tb_size: int, num_coded_bits: int, target_coderate: float, num_bits_per_symbol: int, num_layers: int = 1, n_rnti: int | List[int] = 1, n_id: int | List[int] = 1, channel_type: str = 'PUSCH', codeword_index: int = 0, use_scrambler: bool = True, verbose: bool = False, precision: str | None = None, device: str | None = None, **kwargs)[source]#

Bases: sionna.phy.block.Block

5G NR transport block (TB) encoder as defined in TS 38.214 [3GPPTS38214] and TS 38.211 [3GPPTS38211]

The transport block (TB) encoder takes a transport block of information bits as input and generates a sequence of codewords for transmission. To this end, the information bit sequence is segmented into multiple code blocks, each protected by an additional CRC check and FEC encoded. Further, interleaving and scrambling are applied before codeword concatenation generates the final bit sequence. Fig. 1 provides an overview of the TB encoding procedure; we refer the interested reader to [3GPPTS38214] and [3GPPTS38211] for further details.

[Figure: phy/api/figures/tb_encoding.png]

Fig. 1: Overview of TB encoding (CB CRC does not always apply).#

If n_rnti and n_id are given as lists, the TBEncoder encodes num_tx = len(n_rnti) parallel input streams, each scrambled with a different scrambling sequence.
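For intuition, the per-stream scrambling sequence is the standard 5G pseudo-random (Gold) sequence of TS 38.211 Sec. 5.2.1, seeded for PUSCH data scrambling per Sec. 6.3.1.1. A minimal pure-Python sketch (not the torch implementation used internally):

```python
def gold_sequence(c_init, length, nc=1600):
    # Length-31 Gold sequence c(n) per TS 38.211 Sec. 5.2.1.
    x1 = [1] + [0] * 30                           # fixed x1 init
    x2 = [(c_init >> i) & 1 for i in range(31)]   # x2 init from seed
    for n in range(nc + length - 31):
        x1.append((x1[n + 3] + x1[n]) % 2)
        x2.append((x2[n + 3] + x2[n + 2] + x2[n + 1] + x2[n]) % 2)
    # Discard the first nc samples (fast-forward), then combine.
    return [(x1[n + nc] + x2[n + nc]) % 2 for n in range(length)]

# PUSCH data scrambling seed (TS 38.211 Sec. 6.3.1.1):
# c_init = n_rnti * 2**15 + n_id, so each (n_rnti, n_id) pair
# selects a different scrambling sequence per input stream.
seq = gold_sequence(c_init=1 * 2**15 + 1, length=16)
```

The sequence is deterministic in `c_init`, which is why repeated calls with the same `n_rnti`/`n_id` reproduce the same scrambling.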

Parameters:
  • target_tb_size (int) – Target transport block size, i.e., how many information bits are encoded into the TB. Note that the effective TB size can be slightly different due to quantization. If required, zero padding is internally applied.

  • num_coded_bits (int) – Number of coded bits after TB encoding.

  • target_coderate (float) – Target coderate.

  • num_bits_per_symbol (int) – Modulation order, i.e., number of bits per QAM symbol.

  • num_layers (int) – Number of transmission layers. Must be in [1, …, 8]. Defaults to 1.

  • n_rnti (int | List[int]) – RNTI identifier provided by higher layer. Defaults to 1 and must be in range [0, 65535]. Defines a part of the random seed of the scrambler. If provided as list, every list entry defines the RNTI of an independent input stream.

  • n_id (int | List[int]) – Data scrambling ID \(n_\text{ID}\) related to cell id and provided by higher layer. Defaults to 1 and must be in range [0, 1023]. If provided as list, every list entry defines the scrambling id of an independent input stream.

  • channel_type (str) – Can be either “PUSCH” or “PDSCH”. Defaults to “PUSCH”.

  • codeword_index (int) – Scrambler can be configured for two codeword transmission. codeword_index can be either 0 or 1. Must be 0 for channel_type = “PUSCH”. Defaults to 0.

  • use_scrambler (bool) – If False, no data scrambling is applied (non standard-compliant). Defaults to True.

  • verbose (bool) – If True, additional parameters are printed during initialization. Defaults to False.

  • precision (str | None) – Precision used for internal calculations and outputs. If set to None, the default precision is used.

  • device (str | None) – Device for computation (‘cpu’ or ‘cuda’).

Inputs:

inputs – […, target_tb_size] or […, num_tx, target_tb_size], torch.float. 2+D tensor containing the information bits to be encoded. If n_rnti and n_id are a list of size num_tx, the input must be of shape [..., num_tx, target_tb_size].

Outputs:

codeword – […, num_coded_bits], torch.float. 2+D tensor containing the sequence of the encoded codeword bits of the transport block.

Notes

The parameters tb_size and num_coded_bits can be derived by the calculate_tb_size() function or by accessing the corresponding PUSCHConfig attributes.

Examples

import torch
from sionna.phy.nr import TBEncoder

encoder = TBEncoder(
    target_tb_size=1000,
    num_coded_bits=2000,
    target_coderate=0.5,
    num_bits_per_symbol=4,
    n_rnti=1,
    n_id=1
)

bits = torch.randint(0, 2, (10, 1000), dtype=torch.float32)
coded_bits = encoder(bits)
print(coded_bits.shape)
# torch.Size([10, 2000])

Attributes

property tb_size: int#

Effective number of information bits per TB. Note that, if required, internal zero padding is applied to match the requested target_tb_size exactly.

property k: int#

Number of input information bits. Equals tb_size, except that the last positions are zero-padded when target_tb_size is quantized.

property k_padding: int#

Number of zero padded bits at the end of the TB.

property n: int#

Total number of output bits.

property num_cbs: int#

Number of code blocks.
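The code-block count follows the TB segmentation rule of TS 38.212 Sec. 5.2.2. A hedged sketch, assuming LDPC base graph 1 (maximum code-block size 8448; base graph 2 uses 3840 instead):

```python
import math

def num_code_blocks(tb_size, max_cb_size=8448):
    # TB segmentation sketch (TS 38.212 Sec. 5.2.2): a 24-bit TB CRC
    # is appended first; if the result exceeds the maximum code-block
    # size, the TB is split and each code block additionally carries
    # its own 24-bit CB CRC.
    b = tb_size + 24              # TB bits + TB CRC
    if b <= max_cb_size:
        return 1
    return math.ceil(b / (max_cb_size - 24))

num_code_blocks(1000)    # single code block
num_code_blocks(20000)   # segmented into multiple code blocks
```

This also explains the figure caption's "CB CRC does not always apply": for a single code block, only the TB CRC is attached.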

property coderate: float#

Effective coderate of the TB after rate-matching including overhead for the CRC.

property ldpc_encoder: sionna.phy.fec.ldpc.encoding.LDPC5GEncoder#

LDPC encoder used for TB encoding.

property scrambler: sionna.phy.fec.scrambling.TB5GScrambler | None#

Scrambler used for TB scrambling. None if no scrambler is used.

property tb_crc_encoder: sionna.phy.fec.crc.CRCEncoder#

TB CRC encoder.

property cb_crc_encoder: sionna.phy.fec.crc.CRCEncoder | None#

CB CRC encoder. None if no CB CRC is applied.

property num_tx: int#

Number of independent streams.

property cw_lengths: numpy.ndarray#

Each entry defines the length of the corresponding codeword after LDPC encoding and rate-matching. The total number of coded bits is \(\sum\) cw_lengths.
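The per-codeword split follows the rate-matching bit allocation of TS 38.212 Sec. 5.4.2.1. A simplified sketch (ignoring disabled code blocks) of how num_coded_bits is divided in multiples of num_layers * num_bits_per_symbol:

```python
import math

def cw_length_split(num_coded_bits, num_cbs, num_layers, num_bits_per_symbol):
    # Simplified TS 38.212 Sec. 5.4.2.1: split the G coded bits over
    # the C code blocks in multiples of num_layers * modulation order;
    # the first blocks get the floor share, the remaining the ceil share.
    nl_qm = num_layers * num_bits_per_symbol
    g_prime = num_coded_bits // nl_qm     # assumes G divisible by nl_qm
    lengths = []
    for j in range(num_cbs):
        if j <= num_cbs - (g_prime % num_cbs) - 1:
            lengths.append(nl_qm * (g_prime // num_cbs))
        else:
            lengths.append(nl_qm * math.ceil(g_prime / num_cbs))
    return lengths
```

For example, G = 2000 coded bits, 3 code blocks, one layer, and 16-QAM yield [664, 668, 668], which sums back to G.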

property cw_lengths_sum: int#

Sum of codeword lengths (cached for torch.compile compatibility).

property cw_lengths_max: int#

Maximum codeword length (cached for torch.compile compatibility).

property output_perm_inv: torch.Tensor#

Inverse interleaver pattern for output bit interleaver.

Methods

build(input_shape: tuple) None[source]#

Test input shapes for consistency.

Parameters:

input_shape (tuple)