ApplyFlatFadingChannel#

class sionna.phy.channel.ApplyFlatFadingChannel(precision: Literal['single', 'double'] | None = None, device: str | None = None, **kwargs)[source]#

Bases: sionna.phy.block.Block

Applies given channel matrices to a vector input and optionally adds AWGN

This class applies a given tensor of flat-fading channel matrices to an input tensor. AWGN can optionally be added. Mathematically, for channel matrices \(\mathbf{H}\in\mathbb{C}^{M\times K}\) and input \(\mathbf{x}\in\mathbb{C}^{K}\), the output is

\[\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}\]

where \(\mathbf{n}\in\mathbb{C}^{M}\sim\mathcal{CN}(0, N_o\mathbf{I})\) is an AWGN vector that is optionally added.

Parameters:
  • precision (Literal['single', 'double'] | None) – Precision used for internal calculations and outputs. If None, the default precision is used.

  • device (str | None) – Device for computation (e.g., ‘cpu’, ‘cuda:0’). If None, the default device is used.

Inputs:
  • x – [batch_size, num_tx_ant], torch.complex. Transmit vectors.

  • h – [batch_size, num_rx_ant, num_tx_ant], torch.complex. Channel realizations. Will be broadcast to the dimensions of x if needed.

  • no – None (default) | torch.Tensor, torch.float. (Optional) Noise power \(N_o\) per complex dimension. Will be broadcast to the shape of y. For more details, see AWGN.

Outputs:

y – [batch_size, num_rx_ant], torch.complex. Channel output.

Examples

import torch
from sionna.phy.channel import ApplyFlatFadingChannel

app_chn = ApplyFlatFadingChannel()

# 32 transmit vectors, each with 4 transmit antennas
x = torch.randn(32, 4, dtype=torch.complex64)

# 32 channel realizations, each with 16 receive and 4 transmit antennas
h = torch.randn(32, 16, 4, dtype=torch.complex64)

y = app_chn(x, h)
print(y.shape)
# torch.Size([32, 16])
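For reference, the operation performed by the block amounts to a batched matrix-vector product plus circularly-symmetric complex Gaussian noise of power \(N_o\) per complex dimension. A minimal pure-PyTorch sketch of this computation (not the library's internal implementation):

```python
import torch

batch_size, num_rx_ant, num_tx_ant = 32, 16, 4
x = torch.randn(batch_size, num_tx_ant, dtype=torch.complex64)
h = torch.randn(batch_size, num_rx_ant, num_tx_ant, dtype=torch.complex64)

# Batched matrix-vector product: [B, M, K] @ [B, K, 1] -> [B, M, 1] -> [B, M]
y_noiseless = torch.matmul(h, x.unsqueeze(-1)).squeeze(-1)

# AWGN with power no per complex dimension: real and imaginary
# parts each carry variance no / 2.
no = 0.1
noise = torch.sqrt(torch.tensor(no / 2)) * (
    torch.randn(batch_size, num_rx_ant)
    + 1j * torch.randn(batch_size, num_rx_ant)
)
y = y_noiseless + noise

print(y.shape)
# torch.Size([32, 16])
```

Setting no to zero (or omitting the noise term) recovers the noiseless channel output.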