ApplyFlatFadingChannel#
- class sionna.phy.channel.ApplyFlatFadingChannel(precision: Literal['single', 'double'] | None = None, device: str | None = None, **kwargs)[source]#
Bases: sionna.phy.block.Block

Applies given channel matrices to a vector input and adds AWGN.
This class applies a given tensor of flat-fading channel matrices to an input tensor. AWGN can optionally be added. Mathematically, for channel matrices \(\mathbf{H}\in\mathbb{C}^{M\times K}\) and input \(\mathbf{x}\in\mathbb{C}^{K}\), the output is
\[\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}\]

where \(\mathbf{n}\in\mathbb{C}^{M}\sim\mathcal{CN}(0, N_o\mathbf{I})\) is an AWGN vector that is optionally added.
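Concretely, the operation amounts to a batched matrix-vector product plus a scaled complex Gaussian vector. A minimal plain-PyTorch sketch of this math (not the Sionna block itself; the shapes and noise power below are illustrative assumptions):

```python
import torch

batch_size, num_rx_ant, num_tx_ant = 32, 16, 4
no = 0.1  # assumed noise power per complex dimension

x = torch.randn(batch_size, num_tx_ant, dtype=torch.complex64)
h = torch.randn(batch_size, num_rx_ant, num_tx_ant, dtype=torch.complex64)

# y = H x: batched matrix-vector product over the batch dimension
y_noiseless = torch.einsum('bmk,bk->bm', h, x)

# n ~ CN(0, no * I): real and imaginary parts each carry variance no/2
n = torch.sqrt(torch.tensor(no / 2)) * (
    torch.randn(batch_size, num_rx_ant)
    + 1j * torch.randn(batch_size, num_rx_ant)
)
y = y_noiseless + n
```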
- Parameters:
precision – None (default) | "single" | "double". Precision used for internal calculations and outputs. If None, the global default precision is used.
device – None (default) | str. Device on which tensors are placed.
- Inputs:
x – [batch_size, num_tx_ant], torch.complex. Transmit vectors.
h – [batch_size, num_rx_ant, num_tx_ant], torch.complex. Channel realizations. Will be broadcast to the dimensions of x if needed.
no – None (default) | torch.Tensor, torch.float. (Optional) noise power per complex dimension. Will be broadcast to the shape of y. For more details, see AWGN.
- Outputs:
y – [batch_size, num_rx_ant], torch.complex. Channel output.
Examples
```python
import torch
from sionna.phy.channel import ApplyFlatFadingChannel

app_chn = ApplyFlatFadingChannel()
x = torch.randn(32, 4, dtype=torch.complex64)
h = torch.randn(32, 16, 4, dtype=torch.complex64)
y = app_chn(x, h)
print(y.shape)  # torch.Size([32, 16])
```
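The broadcasting of h mentioned in the inputs can be illustrated without the block itself. A hedged plain-PyTorch sketch in which a single channel realization (leading batch dimension of 1) is shared across the whole batch:

```python
import torch

batch_size, num_rx_ant, num_tx_ant = 32, 16, 4
x = torch.randn(batch_size, num_tx_ant, dtype=torch.complex64)

# One channel realization with a leading batch dimension of 1
h = torch.randn(1, num_rx_ant, num_tx_ant, dtype=torch.complex64)

# matmul broadcasts h along the batch dimension during the product
y = torch.matmul(h, x.unsqueeze(-1)).squeeze(-1)
print(y.shape)  # torch.Size([32, 16])
```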