MaximumLikelihoodDetector#
- class sionna.phy.mimo.MaximumLikelihoodDetector(output: str, demapping_method: str, num_streams: int, constellation_type: str | None = None, num_bits_per_symbol: int | None = None, constellation: sionna.phy.mapping.Constellation | None = None, hard_out: bool = False, precision: Literal['single', 'double'] | None = None, device: str | None = None, **kwargs)[source]#
Bases: sionna.phy.block.Block

MIMO maximum-likelihood (ML) detector.
This block implements MIMO maximum-likelihood (ML) detection assuming the following channel model:
\[\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}\]where \(\mathbf{y}\in\mathbb{C}^M\) is the received signal vector, \(\mathbf{x}\in\mathcal{C}^K\) is the vector of transmitted symbols which are uniformly and independently drawn from the constellation \(\mathcal{C}\), \(\mathbf{H}\in\mathbb{C}^{M\times K}\) is the known channel matrix, and \(\mathbf{n}\in\mathbb{C}^M\) is a complex Gaussian noise vector. It is assumed that \(\mathbb{E}\left[\mathbf{n}\right]=\mathbf{0}\) and \(\mathbb{E}\left[\mathbf{n}\mathbf{n}^{\mathsf{H}}\right]=\mathbf{S}\), where \(\mathbf{S}\) has full rank. Optionally, prior information of the transmitted signal \(\mathbf{x}\) can be provided, either as LLRs on the bits mapped onto \(\mathbf{x}\) or as logits on the individual constellation points forming \(\mathbf{x}\).
Prior to demapping, the received signal is whitened:
\[\begin{split}\tilde{\mathbf{y}} &= \mathbf{S}^{-\frac{1}{2}}\mathbf{y}\\ &= \mathbf{S}^{-\frac{1}{2}}\mathbf{H}\mathbf{x} + \mathbf{S}^{-\frac{1}{2}}\mathbf{n}\\ &= \tilde{\mathbf{H}}\mathbf{x} + \tilde{\mathbf{n}}\end{split}\]The block can compute ML detection of symbols or bits with either soft- or hard-decisions. Note that decisions are computed symbol-/bit-wise and not jointly for the entire vector \(\textbf{x}\) (or the underlying vector of bits).
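The whitening step can be sketched outside the library. The following NumPy snippet is an illustration, not the block's internal implementation; it computes \(\mathbf{S}^{-\frac{1}{2}}\) via an eigendecomposition of the Hermitian covariance (the function names are hypothetical):

```python
import numpy as np

def whitening_matrix(s):
    """S^{-1/2} for a Hermitian positive-definite covariance S (illustrative sketch)."""
    lam, u = np.linalg.eigh(s)                    # S = U diag(lam) U^H, lam > 0
    return u @ np.diag(lam ** -0.5) @ u.conj().T  # valid since S has full rank

def whiten(y, h, s):
    """Whiten signal and channel: y_t = S^{-1/2} y, H_t = S^{-1/2} H."""
    w = whitening_matrix(s)
    return w @ y, w @ h
```

Since \(\mathbf{S}^{-\frac{1}{2}}\mathbf{S}\mathbf{S}^{-\frac{1}{2}} = \mathbf{I}\), the whitened noise \(\tilde{\mathbf{n}}\) has identity covariance, which is why the squared-distance metrics in the LLR expressions carry no covariance weighting.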
ML detection of bits:
Soft-decisions on bits are called log-likelihood ratios (LLR). With the “app” demapping method, the LLR for the \(i\text{th}\) bit of the \(k\text{th}\) user is then computed according to
\[\begin{split}\begin{aligned} LLR(k,i)&= \ln\left(\frac{\Pr\left(b_{k,i}=1\lvert \mathbf{y},\mathbf{H}\right)}{\Pr\left(b_{k,i}=0\lvert \mathbf{y},\mathbf{H}\right)}\right)\\ &=\ln\left(\frac{ \sum_{\mathbf{x}\in\mathcal{C}_{k,i,1}} \exp\left( -\left\lVert\tilde{\mathbf{y}}-\tilde{\mathbf{H}}\mathbf{x}\right\rVert^2 \right) \Pr\left( \mathbf{x} \right) }{ \sum_{\mathbf{x}\in\mathcal{C}_{k,i,0}} \exp\left( -\left\lVert\tilde{\mathbf{y}}-\tilde{\mathbf{H}}\mathbf{x}\right\rVert^2 \right) \Pr\left( \mathbf{x} \right) }\right) \end{aligned}\end{split}\]where \(\mathcal{C}_{k,i,1}\) and \(\mathcal{C}_{k,i,0}\) are the sets of vectors of constellation points for which the \(i\text{th}\) bit of the \(k\text{th}\) user is equal to 1 and 0, respectively. \(\Pr\left( \mathbf{x} \right)\) is the prior distribution of the vector of constellation points \(\mathbf{x}\). Assuming that the constellation points and bit levels are independent, it is computed from the prior of the bits according to
\[\Pr\left( \mathbf{x} \right) = \prod_{k=1}^K \prod_{i=1}^{I} \sigma \left( LLR_p(k,i) \right)\]where \(LLR_p(k,i)\) is the prior knowledge of the \(i\text{th}\) bit of the \(k\text{th}\) user given as an LLR, set to \(0\) if no prior knowledge is available, and \(\sigma\left(\cdot\right)\) is the sigmoid function. The definition of the LLR has been chosen such that it is equivalent to that of the logit. This differs from many textbooks on communications, which define the LLR as \(LLR(k,i) = \ln\left(\frac{\Pr\left(b_{k,i}=0\lvert \mathbf{y},\mathbf{H}\right)}{\Pr\left(b_{k,i}=1\lvert \mathbf{y},\mathbf{H}\right)}\right)\).
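As a small illustration of how the bit priors enter \(\Pr(\mathbf{x})\), the prior probability of a particular bit pattern under independent bits is a product of sigmoids, where the sign of the sigmoid argument encodes the bit value: \(\Pr(b=1) = \sigma(LLR_p)\) and \(\Pr(b=0) = \sigma(-LLR_p)\). The helper below is a hypothetical sketch, not part of the library API:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def prior_prob(bits, llr_p):
    """Prior probability of a bit pattern `bits` ([K, I]) from prior LLRs `llr_p`.

    Hypothetical helper: with the document's convention
    LLR_p = ln Pr(b=1)/Pr(b=0), we have Pr(b=1) = sigmoid(LLR_p) and
    Pr(b=0) = sigmoid(-LLR_p); bit independence makes Pr(x) the product.
    """
    p = sigmoid(np.where(np.asarray(bits) == 1, llr_p, -llr_p))
    return float(p.prod())
```

With all prior LLRs equal to zero, every bit is equally likely and \(\Pr(\mathbf{x}) = 2^{-KI}\), matching the "no prior knowledge" default.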
With the “maxlog” demapping method, the LLR for the \(i\text{th}\) bit of the \(k\text{th}\) user is approximated like
\[\begin{split}\begin{aligned} LLR(k,i) \approx&\ln\left(\frac{ \max_{\mathbf{x}\in\mathcal{C}_{k,i,1}} \left( \exp\left( -\left\lVert\tilde{\mathbf{y}}-\tilde{\mathbf{H}}\mathbf{x}\right\rVert^2 \right) \Pr\left( \mathbf{x} \right) \right) }{ \max_{\mathbf{x}\in\mathcal{C}_{k,i,0}} \left( \exp\left( -\left\lVert\tilde{\mathbf{y}}-\tilde{\mathbf{H}}\mathbf{x}\right\rVert^2 \right) \Pr\left( \mathbf{x} \right) \right) }\right)\\ = &\min_{\mathbf{x}\in\mathcal{C}_{k,i,0}} \left( \left\lVert\tilde{\mathbf{y}}-\tilde{\mathbf{H}}\mathbf{x}\right\rVert^2 - \ln \left(\Pr\left( \mathbf{x} \right) \right) \right) - \min_{\mathbf{x}\in\mathcal{C}_{k,i,1}} \left( \left\lVert\tilde{\mathbf{y}}-\tilde{\mathbf{H}}\mathbf{x}\right\rVert^2 - \ln \left( \Pr\left( \mathbf{x} \right) \right) \right). \end{aligned}\end{split}\]ML detection of symbols:
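For intuition, the "app" LLR can be reproduced by exhaustive enumeration in a toy case. The sketch below is a hypothetical helper, not part of the API; it assumes \(K\) BPSK streams with the mapping \(b \mapsto 1-2b\) (one bit per symbol), a uniform prior, and already-whitened inputs:

```python
import itertools

import numpy as np

def app_llrs_bpsk(y_t, h_t):
    """Exhaustive "app" bit LLRs for K BPSK streams (hypothetical sketch).

    Assumes the mapping b -> 1 - 2b (bit 0 -> +1, bit 1 -> -1), a uniform
    prior, and already-whitened inputs y_t (shape [M]) and h_t ([M, K]).
    """
    k = h_t.shape[1]
    log_num = np.full(k, -np.inf)  # log-sum over x with b_k = 1
    log_den = np.full(k, -np.inf)  # log-sum over x with b_k = 0
    for bits in itertools.product((0, 1), repeat=k):
        x = 1.0 - 2.0 * np.asarray(bits)              # BPSK mapping
        metric = -np.linalg.norm(y_t - h_t @ x) ** 2  # log of exp(-||.||^2)
        for i, b in enumerate(bits):
            if b:
                log_num[i] = np.logaddexp(log_num[i], metric)
            else:
                log_den[i] = np.logaddexp(log_den[i], metric)
    return log_num - log_den  # ln Pr(b=1|y)/Pr(b=0|y), the document's convention
```

Replacing the `np.logaddexp` accumulations with `max` turns this into the "maxlog" approximation above.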
Soft-decisions on symbols are called logits (i.e., unnormalized log-probabilities).
With the “app” demapping method, the logit for the constellation point \(c \in \mathcal{C}\) of the \(k\text{th}\) user is computed according to
\[\begin{aligned} \text{logit}(k,c) &= \ln\left(\sum_{\mathbf{x} : x_k = c} \exp\left( -\left\lVert\tilde{\mathbf{y}}-\tilde{\mathbf{H}}\mathbf{x}\right\rVert^2 \right)\Pr\left( \mathbf{x} \right)\right). \end{aligned}\]With the “maxlog” demapping method, the logit for the constellation point \(c \in \mathcal{C}\) of the \(k\text{th}\) user is approximated like
\[\text{logit}(k,c) \approx \max_{\mathbf{x} : x_k = c} \left( -\left\lVert\tilde{\mathbf{y}}-\tilde{\mathbf{H}}\mathbf{x}\right\rVert^2 + \ln \left( \Pr\left( \mathbf{x} \right) \right) \right).\]When hard decisions are requested, this block returns for the \(k\text{th}\) stream
\[\hat{c}_k = \underset{c \in \mathcal{C}}{\text{argmax}} \left( \sum_{\mathbf{x} : x_k = c} \exp\left( -\left\lVert\tilde{\mathbf{y}}-\tilde{\mathbf{H}}\mathbf{x}\right\rVert^2 \right)\Pr\left( \mathbf{x} \right) \right)\]where \(\mathcal{C}\) is the set of constellation points.
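The symbol-wise "maxlog" logits and the resulting hard decisions can likewise be sketched by exhaustive search. This is a hypothetical illustration assuming a uniform prior and whitened inputs; the function is not part of the API:

```python
import itertools

import numpy as np

def maxlog_symbol_logits(y_t, h_t, points):
    """Exhaustive "maxlog" symbol logits and hard decisions (hypothetical sketch).

    Assumes a uniform prior and already-whitened y_t ([M]) and h_t ([M, K]);
    `points` lists the constellation points of C.
    """
    k = h_t.shape[1]
    logits = np.full((k, len(points)), -np.inf)
    for idx in itertools.product(range(len(points)), repeat=k):
        x = np.asarray([points[i] for i in idx])
        metric = -np.linalg.norm(y_t - h_t @ x) ** 2
        for stream, c in enumerate(idx):
            logits[stream, c] = max(logits[stream, c], metric)
    # Hard decisions: symbol index with the largest logit per stream
    return logits, logits.argmax(axis=-1)
```

Note that the documented hard decision maximizes the "app" sum; taking the argmax of the maxlog logits, as done here, is the corresponding approximation.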
- Parameters:
  - output (str) – Type of output, either "bit" for LLRs on bits or "symbol" for logits on constellation symbols
  - demapping_method (str) – Demapping method, either "app" or "maxlog"
  - num_streams (int) – Number of transmitted streams
  - constellation_type (str | None) – Constellation type, one of "qam", "pam", or "custom". For "custom", an instance of Constellation must be provided.
  - num_bits_per_symbol (int | None) – Number of bits per constellation symbol, e.g., 4 for QAM16. Only required for constellation_type in ["qam", "pam"].
  - constellation (sionna.phy.mapping.Constellation | None) – An instance of Constellation or None. If None, constellation_type and num_bits_per_symbol must be provided.
  - hard_out (bool) – If True, the detector computes hard-decided bit values or constellation point indices instead of soft values. Defaults to False.
  - precision (Literal['single', 'double'] | None) – Precision used for internal calculations and outputs. If set to None, the default precision is used.
  - device (str | None) – Device for computations
- Inputs:
  - y – […, M], torch.complex. Received signals.
  - h – […, M, num_streams], torch.complex. Channel matrices.
  - s – […, M, M], torch.complex. Noise covariance matrices.
  - prior – None (default) | […, num_streams, num_bits_per_symbol] or […, num_streams, num_points], torch.float. Prior of the transmitted signals. If output equals "bit", LLRs of the transmitted bits are expected. If output equals "symbol", logits of the transmitted constellation points are expected.
- Outputs:
  One of:
  - llr – […, num_streams, num_bits_per_symbol], torch.float. LLRs or hard decisions for every bit of every stream, if output equals "bit".
  - logits – […, num_streams, num_points], torch.float, or […, num_streams], torch.int32. Logits or hard decisions for constellation symbols for every stream, if output equals "symbol". Hard decisions correspond to the symbol indices.
Examples

detector = MaximumLikelihoodDetector(
    output="bit",
    demapping_method="maxlog",
    num_streams=2,
    constellation_type="qam",
    num_bits_per_symbol=4,
)
llr = detector(y, h, s)
Attributes
- property constellation: sionna.phy.mapping.Constellation#
The constellation used by the detector.