spread_across_subcarriers

sionna.sys.spread_across_subcarriers(tx_power_per_ut: torch.Tensor, is_scheduled: torch.Tensor, num_tx: int | None = None, precision: Literal['single', 'double'] | None = None) → torch.Tensor

Distributes the power uniformly across all allocated subcarriers and streams for each user.

Parameters:
  • tx_power_per_ut (torch.Tensor) – Transmit power [W] for each user.

  • is_scheduled (torch.Tensor) – Whether a user is scheduled on a given subcarrier and stream.

  • num_tx (int | None) – Number of transmitters. If None, it is set to the number of users (num_ut), as in the uplink.

  • precision (Literal['single', 'double'] | None) – Precision used for internal calculations and outputs. If None, the global default precision is used.

Outputs:

tx_power – […, num_tx, num_streams_per_tx, num_ofdm_sym, num_subcarriers], torch.float. Transmit power [W] for each user, across subcarriers, streams, and OFDM symbols.

Examples

import torch
from sionna.sys.utils import spread_across_subcarriers

batch_size = 2
num_ofdm_sym = 14
num_ut = 4
num_subcarriers = 52
num_streams = 2

tx_power_per_ut = torch.ones(batch_size, num_ofdm_sym, num_ut)
is_scheduled = torch.ones(batch_size, num_ofdm_sym, num_subcarriers,
                          num_ut, num_streams, dtype=torch.bool)

tx_power = spread_across_subcarriers(tx_power_per_ut, is_scheduled)
print(tx_power.shape)
# torch.Size([2, 4, 2, 14, 52])
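Since every user is scheduled on all 52 subcarriers and both streams, each user's 1 W per OFDM symbol is split evenly over 52 × 2 = 104 resource elements. For intuition, the uniform spreading can be sketched in plain PyTorch. This is a simplified illustration, not Sionna's actual implementation; the helper name `spread_uniformly` and the clamp guarding against unscheduled users are assumptions here:

```python
import torch

def spread_uniformly(tx_power_per_ut: torch.Tensor,
                     is_scheduled: torch.Tensor) -> torch.Tensor:
    # tx_power_per_ut: [batch, num_ofdm_sym, num_ut]
    # is_scheduled:    [batch, num_ofdm_sym, num_subcarriers, num_ut, num_streams]
    mask = is_scheduled.to(tx_power_per_ut.dtype)
    # Allocated resource elements (subcarrier/stream pairs) per user and symbol
    num_alloc = mask.sum(dim=(2, 4))            # [batch, num_ofdm_sym, num_ut]
    # Divide each user's power evenly; clamp avoids division by zero
    # for users with no allocation (an assumption of this sketch)
    per_re = tx_power_per_ut / num_alloc.clamp(min=1)
    # Broadcast the per-RE power onto the scheduling mask
    power = mask * per_re[:, :, None, :, None]  # [batch, sym, sc, ut, stream]
    # Reorder to [batch, num_ut, num_streams, num_ofdm_sym, num_subcarriers]
    return power.permute(0, 3, 4, 1, 2)

tx_power_per_ut = torch.ones(2, 14, 4)
is_scheduled = torch.ones(2, 14, 52, 4, 2, dtype=torch.bool)
tx_power = spread_uniformly(tx_power_per_ut, is_scheduled)
print(tx_power.shape)
# torch.Size([2, 4, 2, 14, 52])
```

Summing the output over streams and subcarriers recovers each user's per-symbol power budget, which is the invariant the function preserves.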