SymbolLogits2LLRs#

class sionna.phy.mapping.SymbolLogits2LLRs(method: str, num_bits_per_symbol: int, *, hard_out: bool = False, precision: Literal['single', 'double'] | None = None, device: str | None = None, **kwargs: Any)[source]#

Bases: sionna.phy.block.Block

Computes log-likelihood ratios (LLRs) or hard decisions on bits from a tensor of logits (i.e., unnormalized log-probabilities) on constellation points.

Prior knowledge on the bits can be optionally provided.

Parameters:
  • method (str) – Method used for computing the LLRs. One of “app” or “maxlog”.

  • num_bits_per_symbol (int) – Number of bits per constellation symbol, e.g., 4 for QAM16.

  • hard_out (bool) – If True, the layer provides hard-decided bits instead of soft-values. Defaults to False.

  • precision (Literal['single', 'double'] | None) – Precision used for internal calculations and outputs. If set to None, the default precision is used.

  • device (str | None) – Device for tensor operations. If None, the default device is used.

  • kwargs (Any)

Inputs:
  • logits – […, n, num_points], torch.float. Logits on constellation points.

  • prior – None (default) | [num_bits_per_symbol] or […, n, num_bits_per_symbol], torch.float. Prior for every bit as LLRs. It can be provided either as a tensor of shape [num_bits_per_symbol] for the entire input batch, or as a tensor that is broadcastable to […, n, num_bits_per_symbol].

Outputs:

llr – […, n, num_bits_per_symbol], torch.float. LLRs or hard decisions for every bit.
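When hard_out=True, the output contains hard decisions rather than soft values; a hard decision corresponds to thresholding the LLR at zero. As a plain-Python illustration (independent of the block itself, with made-up LLR values):

```python
llrs = [0.96, -0.68, 1.4, -0.2]           # example soft LLRs (illustrative values)
hard = [1 if l > 0 else 0 for l in llrs]  # LLR > 0 decides bit 1, per the LLR sign convention in the Notes
print(hard)  # [1, 0, 1, 0]
```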

Notes

With the “app” method, the LLR for the \(i\text{th}\) bit is computed according to

\[LLR(i) = \ln\left(\frac{\Pr\left(b_i=1\lvert \mathbf{z},\mathbf{p}\right)}{\Pr\left(b_i=0\lvert \mathbf{z},\mathbf{p}\right)}\right) =\ln\left(\frac{ \sum_{c\in\mathcal{C}_{i,1}} \Pr\left(c\lvert\mathbf{p}\right) e^{z_c} }{ \sum_{c\in\mathcal{C}_{i,0}} \Pr\left(c\lvert\mathbf{p}\right) e^{z_c} }\right)\]

where \(\mathcal{C}_{i,1}\) and \(\mathcal{C}_{i,0}\) are the sets of \(2^{K-1}\) constellation points for which the \(i\text{th}\) bit is equal to 1 and 0, respectively. \(\mathbf{z} = \left[z_{c_0},\dots,z_{c_{2^K-1}}\right]\) is the vector of logits on the constellation points, and \(\mathbf{p} = \left[p_0,\dots,p_{K-1}\right]\) is the vector of LLRs that serves as prior knowledge on the \(K\) bits mapped to a constellation point; it is set to \(\mathbf{0}\) if no prior knowledge is assumed to be available. \(\Pr(c\lvert\mathbf{p})\) is the prior probability of the constellation symbol \(c\):

\[\Pr\left(c\lvert\mathbf{p}\right) = \prod_{k=0}^{K-1} \Pr\left(b_k = \ell(c)_k \lvert\mathbf{p} \right) = \prod_{k=0}^{K-1} \text{sigmoid}\left(p_k \ell(c)_k\right)\]

where \(\ell(c)_k\) is the \(k\text{th}\) bit label of \(c\), with 0 replaced by -1. This definition of the LLR has been chosen such that it is equivalent to that of logits. It differs from many textbooks in communications, where the LLR is defined as \(LLR(i) = \ln\left(\frac{\Pr\left(b_i=0\lvert y\right)}{\Pr\left(b_i=1\lvert y\right)}\right)\).
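The "app" computation above can be sketched in plain Python for a hypothetical 2-bit constellation. The labeling and logit values below are illustrative only, not the labeling used by the block:

```python
import math

# Hypothetical 2-bit constellation: point c carries bit labels (b_0, b_1)
labels = [(0, 0), (0, 1), (1, 0), (1, 1)]
z = [0.1, -0.3, 1.2, 0.4]   # logits on the four constellation points
p = [0.0, 0.0]              # bit priors as LLRs (0 = no prior knowledge)

def prior_prob(c):
    # Pr(c | p) = prod_k sigmoid(p_k * l(c)_k), with bit label 0 mapped to -1
    return math.prod(
        1.0 / (1.0 + math.exp(-p[k] * (1 if labels[c][k] else -1)))
        for k in range(len(p)))

def app_llr(i):
    # LLR(i) = ln( sum_{c: b_i=1} Pr(c|p) e^{z_c} / sum_{c: b_i=0} Pr(c|p) e^{z_c} )
    num = sum(prior_prob(c) * math.exp(z[c]) for c in range(4) if labels[c][i] == 1)
    den = sum(prior_prob(c) * math.exp(z[c]) for c in range(4) if labels[c][i] == 0)
    return math.log(num / den)

print([round(app_llr(i), 3) for i in range(2)])  # [0.958, -0.684]
```

With the all-zero prior, every \(\Pr(c\lvert\mathbf{p})\) equals \(1/4\) and the LLR reduces to a difference of log-sum-exp terms over the two bit partitions.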

With the “maxlog” method, LLRs for the \(i\text{th}\) bit are approximated as

\[\begin{aligned} LLR(i) &\approx\ln\left(\frac{ \max_{c\in\mathcal{C}_{i,1}} \Pr\left(c\lvert\mathbf{p}\right) e^{z_c} }{ \max_{c\in\mathcal{C}_{i,0}} \Pr\left(c\lvert\mathbf{p}\right) e^{z_c} }\right) . \end{aligned}\]
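Under the same illustrative 2-bit labeling, and with a uniform prior, the "maxlog" approximation replaces the sums with maxima, so the LLR becomes a difference of the largest logits in each set:

```python
labels = [(0, 0), (0, 1), (1, 0), (1, 1)]  # hypothetical 2-bit labeling
z = [0.1, -0.3, 1.2, 0.4]                  # logits on the four points

def maxlog_llr(i):
    # With a uniform prior: LLR(i) ~ max_{c: b_i=1} z_c - max_{c: b_i=0} z_c
    num = max(z[c] for c in range(4) if labels[c][i] == 1)
    den = max(z[c] for c in range(4) if labels[c][i] == 0)
    return num - den

print([round(maxlog_llr(i), 6) for i in range(2)])  # [1.1, -0.8]
```

Compared with the "app" values, maxlog trades a small accuracy loss for a much cheaper computation, since no exponentials are evaluated.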

Examples

import torch
from sionna.phy.mapping import SymbolLogits2LLRs

converter = SymbolLogits2LLRs("app", 4)  # 16-QAM
logits = torch.randn(10, 25, 16)  # batch of 10, 25 symbols each, 16 constellation points
llr = converter(logits)
print(llr.shape)
# torch.Size([10, 25, 4])

Attributes

property num_bits_per_symbol: int#

Number of bits per symbol