LLRs2SymbolLogits#

class sionna.phy.mapping.LLRs2SymbolLogits(num_bits_per_symbol: int, hard_out: bool = False, precision: Literal['single', 'double'] | None = None, device: str | None = None, **kwargs: Any)[source]#

Bases: sionna.phy.block.Block

Computes logits (i.e., unnormalized log-probabilities) or hard decisions on constellation points from a tensor of log-likelihood ratios (LLRs) on bits.

Parameters:
  • num_bits_per_symbol (int) – Number of bits per constellation symbol, e.g., 4 for QAM16.

  • hard_out (bool) – If True, the layer provides hard-decided constellation point indices instead of soft values. Defaults to False.

  • precision (Literal['single', 'double'] | None) – Precision used for internal calculations and outputs. If set to None, the default precision from the global configuration is used.

  • device (str | None) – Device on which tensor operations are performed. If None, the default device from the global configuration is used.

  • kwargs (Any)

Inputs:

llrs – […, n, num_bits_per_symbol], torch.float. LLRs for every bit.

Outputs:

logits – […, n, num_points], torch.float or […, n], torch.int32. Logits or hard-decisions on constellation points.

Notes

The logit for the constellation point \(c\) is computed according to

\[\begin{split}\begin{aligned} \log{\left(\Pr\left(c\lvert LLRs \right)\right)} &= \log{\left(\prod_{k=0}^{K-1} \Pr\left(b_k = \ell(c)_k \lvert LLRs \right)\right)}\\ &= \log{\left(\prod_{k=0}^{K-1} \text{sigmoid}\left(LLR(k) \ell(c)_k\right)\right)}\\ &= \sum_{k=0}^{K-1} \log{\left(\text{sigmoid}\left(LLR(k) \ell(c)_k\right)\right)} \end{aligned}\end{split}\]

where \(\ell(c)_k\) is the \(k^{th}\) bit label of \(c\), where 0 is replaced by -1. The definition of the LLR has been chosen such that it is equivalent to that of logits. This is different from many textbooks in communications, where the LLR is defined as \(LLR(i) = \ln\left(\frac{\Pr\left(b_i=0\lvert y\right)}{\Pr\left(b_i=1\lvert y\right)}\right)\).
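The derivation above can be sketched directly in plain PyTorch. The helper below is an illustrative re-implementation of the formula, not the Sionna class itself: it enumerates the bit labels of all \(2^K\) constellation points, maps bit 0 to -1, and sums \(\log \text{sigmoid}\left(LLR(k)\,\ell(c)_k\right)\) over the bits.

```python
import torch
import torch.nn.functional as F

def llrs_to_symbol_logits(llrs: torch.Tensor, num_bits_per_symbol: int) -> torch.Tensor:
    """Compute logits over the 2**K constellation points from per-bit LLRs."""
    num_points = 2 ** num_bits_per_symbol
    # Bit labels of every constellation point, MSB first: shape [num_points, K]
    labels = torch.tensor(
        [[(p >> (num_bits_per_symbol - 1 - k)) & 1 for k in range(num_bits_per_symbol)]
         for p in range(num_points)],
        dtype=llrs.dtype,
    )
    # Replace bit 0 by -1, as in the formula above: ell(c)_k in {-1, +1}
    signs = 2.0 * labels - 1.0  # [num_points, K]
    # log Pr(c | LLRs) = sum_k log sigmoid(LLR(k) * ell(c)_k)
    # Broadcast [..., n, 1, K] against [num_points, K] -> [..., n, num_points, K]
    return F.logsigmoid(llrs.unsqueeze(-2) * signs).sum(dim=-1)

llrs = torch.randn(10, 25, 4)
logits = llrs_to_symbol_logits(llrs, 4)
print(logits.shape)  # torch.Size([10, 25, 16])
```

Because \(\text{sigmoid}(x) + \text{sigmoid}(-x) = 1\) and the product factorizes over bits, the exponentiated logits sum to one over the constellation, i.e., they are already normalized log-probabilities.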

Examples

import torch
from sionna.phy.mapping import LLRs2SymbolLogits

converter = LLRs2SymbolLogits(4)  # 16-QAM
llr = torch.randn(10, 25, 4)  # 10 batches, 25 symbols, 4 bits per symbol
logits = converter(llr)
print(logits.shape)
# torch.Size([10, 25, 16])

Attributes

property num_bits_per_symbol: int#

Number of bits per symbol