llr2mi#

sionna.phy.fec.utils.llr2mi(llr: torch.Tensor, s: torch.Tensor | None = None, reduce_dims: bool = True) torch.Tensor[source]#

Approximates the mutual information based on Log-Likelihood Ratios (LLRs).

This function approximates the mutual information for a given set of LLR values, assuming an all-zero codeword transmission, as derived in [Hagenauer]:

\[I \approx 1 - \frac{1}{N}\sum_{i=1}^{N} \operatorname{log_2} \left( 1 + \operatorname{e}^{-\text{llr}_i} \right)\]

where \(N\) denotes the number of LLR values.

The approximation relies on the symmetry condition:

\[p(\text{llr}|x=0) = p(\text{llr}|x=1) \cdot \operatorname{exp}(\text{llr})\]

For cases where the transmitted codeword is not all-zero, this method requires the original bit sequence s to adjust the LLR signs accordingly, effectively simulating an all-zero codeword transmission.

Note that the LLRs are defined as \(\operatorname{log} \frac{p(x=1)}{p(x=0)}\), which reverses the sign compared to the definition in [Hagenauer].

Parameters:
  • llr (torch.Tensor) – Tensor of arbitrary shape containing LLR values.

  • s (torch.Tensor | None) – Tensor of the same shape as llr representing the signs of the transmitted sequence (assuming BPSK), with values of +/-1. If None, an all-zero codeword transmission is assumed.

  • reduce_dims (bool) – If True, reduces all dimensions and returns a scalar. If False, averages only over the last dimension.

Outputs:

mi – Approximated mutual information. Scalar if reduce_dims is True, otherwise tensor with the same shape as llr except for the last dimension, which is removed.
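The role of s can be seen in a self-contained pure-torch sketch (values, seed, and the bit-to-sign mapping bit 0 -> +1, bit 1 -> -1 are illustrative assumptions): multiplying the received LLRs by s restores the all-zero-codeword statistics, after which Hagenauer's approximation applies directly.

```python
import math
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, sigma2 = 10_000, 4.0  # hypothetical sample count and LLR variance

# Random payload bits and their BPSK signs (assumed mapping 0 -> +1, 1 -> -1)
bits = torch.randint(0, 2, (n,))
s = 1.0 - 2.0 * bits.float()

# Consistent Gaussian LLRs for an all-zero codeword have mean -sigma^2/2
# under the p(x=1)/p(x=0) sign convention; transmitting `bits` flips signs by s
llr_zero = torch.randn(n) * math.sqrt(sigma2) - sigma2 / 2.0
llr_rx = llr_zero * s

# Since s * s = 1, multiplying by s recovers the all-zero LLRs exactly,
# and the approximation yields an estimate in (0, 1)
mi = 1.0 - (F.softplus(llr_rx * s) / math.log(2.0)).mean()
print(float(mi))
```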

Examples

import torch
from sionna.phy.fec.utils import llr2mi

# Consistent Gaussian LLRs for an (assumed) all-zero transmission: under the
# p(x=1)/p(x=0) convention their mean is -sigma^2/2, here -2 for sigma^2 = 4
llr = torch.randn(1000) * 2.0 - 2.0
mi = llr2mi(llr)
print(mi.item())
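The shape contract of reduce_dims=False can be illustrated without the library, again using the softplus form of the approximation (a sketch under the stated sign convention, not the library call):

```python
import math
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Four independent LLR streams; consistent Gaussians with variance 4, mean -2
llr = torch.randn(4, 1000) * 2.0 - 2.0

# With reduce_dims=False the average runs over the last dimension only,
# yielding one mutual-information estimate per stream
per_stream = 1.0 - (F.softplus(llr) / math.log(2.0)).mean(dim=-1)
print(per_stream.shape)
```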