Utility Functions

Linear Algebra

sionna.phy.utils.inv_cholesky(tensor)[source]

Inverse of the Cholesky decomposition of a matrix

Given a batch of M×M Hermitian positive definite matrices A, this function computes L^-1, where L is the Cholesky decomposition, such that A = L L^H.

Input:

tensor ([…, M, M], tf.float | tf.complex) – Input tensor of rank greater than one

Output:

[…, M, M], tf.float | tf.complex – A tensor of the same shape and type as tensor containing the inverse of the Cholesky decomposition of its last two dimensions
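
Example

A minimal sanity check (illustrative sketch; the verification logic below is ours, not part of the API):

import tensorflow as tf
from sionna.phy.utils import inv_cholesky

# Batch of Hermitian positive definite matrices A = B B^H + I
b = tf.complex(tf.random.normal([4, 3, 3]), tf.random.normal([4, 3, 3]))
a = tf.matmul(b, b, adjoint_b=True) + tf.eye(3, dtype=tf.complex64)

l_inv = inv_cholesky(a)
# Since A = L L^H, it holds that L^-1 A L^-H = I
i_approx = tf.matmul(tf.matmul(l_inv, a), l_inv, adjoint_b=True)
print(tf.reduce_max(tf.abs(i_approx - tf.eye(3, dtype=tf.complex64))).numpy())
# ~1e-6 (identity up to numerical precision)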

sionna.phy.utils.matrix_pinv(tensor)[source]

Computes the Moore–Penrose (or pseudo) inverse of a matrix

Given a batch of M×K matrices A with rank K (i.e., linearly independent columns), the function returns A^+, such that A^+ A = I_K.

The two inner dimensions are assumed to correspond to the matrix rows and columns, respectively.

Input:

tensor ([…, M, K], tf.Tensor) – Input tensor of rank greater than or equal to two

Output:

[…, K, M], tf.Tensor – A tensor of the same type as tensor containing the matrix pseudo inverse of its last two dimensions
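
Example

A short sketch (illustrative; the check simply verifies the defining property A^+ A = I_K):

import tensorflow as tf
from sionna.phy.utils import matrix_pinv

a = tf.random.normal([5, 2])   # tall matrix with independent columns
a_pinv = matrix_pinv(a)        # shape [2, 5]
print(tf.matmul(a_pinv, a).numpy())
# [[1. 0.]
#  [0. 1.]] (up to numerical precision)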

Metrics

sionna.phy.utils.compute_ber(b, b_hat, precision='double')[source]

Computes the bit error rate (BER) between two binary tensors

Input:
  • b (tf.float or tf.int) – A tensor of arbitrary shape filled with ones and zeros

  • b_hat (tf.float or tf.int) – A tensor like b

  • precision (str, “single” | “double” (default)) – Precision used for internal calculations and outputs

Output:

tf.float – BER
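
Example

A minimal usage sketch with toy inputs (ours, for illustration):

import tensorflow as tf
from sionna.phy.utils import compute_ber

b     = tf.constant([[0, 1, 1, 0], [1, 0, 0, 1]])
b_hat = tf.constant([[0, 1, 0, 0], [1, 0, 0, 0]])
print(compute_ber(b, b_hat).numpy())
# 0.25 (2 bit errors out of 8 bits)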

sionna.phy.utils.compute_bler(b, b_hat, precision='double')[source]

Computes the block error rate (BLER) between two binary tensors

A block error occurs if at least one element of b and b_hat differs within a block. The BLER is evaluated over the last dimension of the input, i.e., all elements of the last dimension form one block.

This is also sometimes referred to as word error rate or frame error rate.

Input:
  • b (tf.float or tf.int) – A tensor of arbitrary shape filled with ones and zeros

  • b_hat (tf.float or tf.int) – A tensor like b

  • precision (str, “single” | “double” (default)) – Precision used for internal calculations and outputs

Output:

tf.float – BLER
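
Example

A toy illustration (ours): each row is one block, and only the second row contains an error.

import tensorflow as tf
from sionna.phy.utils import compute_bler

b     = tf.constant([[0, 1, 1, 0], [1, 0, 0, 1]])
b_hat = tf.constant([[0, 1, 1, 0], [1, 0, 0, 0]])
print(compute_bler(b, b_hat).numpy())
# 0.5 (1 block error out of 2 blocks)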

sionna.phy.utils.compute_ser(s, s_hat, precision='double')[source]

Computes the symbol error rate (SER) between two integer tensors

Input:
  • s (tf.float or tf.int) – A tensor of arbitrary shape filled with integers

  • s_hat (tf.float or tf.int) – A tensor like s

  • precision (str, “single” | “double” (default)) – Precision used for internal calculations and outputs

Output:

tf.float – SER

sionna.phy.utils.count_block_errors(b, b_hat)[source]

Counts the number of block errors between two binary tensors

A block error occurs if at least one element of b and b_hat differs within a block. Block errors are counted over the last dimension of the input, i.e., all elements of the last dimension form one block.

This is also sometimes referred to as word error rate or frame error rate.

Input:
  • b (tf.float or tf.int) – A tensor of arbitrary shape filled with ones and zeros

  • b_hat (tf.float or tf.int) – A tensor like b

Output:

tf.int64 – Number of block errors

sionna.phy.utils.count_errors(b, b_hat)[source]

Counts the number of bit errors between two binary tensors

Input:
  • b (tf.float or tf.int) – A tensor of arbitrary shape filled with ones and zeros

  • b_hat (tf.float or tf.int) – A tensor like b

Output:

tf.int64 – Number of bit errors
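
Example

A toy comparison of the two counters (illustrative sketch, ours):

import tensorflow as tf
from sionna.phy.utils import count_errors, count_block_errors

b     = tf.constant([[0, 1, 1, 0], [1, 0, 0, 1]])
b_hat = tf.constant([[0, 1, 0, 0], [1, 0, 0, 0]])
print(count_errors(b, b_hat).numpy())        # 2 bit errors
print(count_block_errors(b, b_hat).numpy())  # 2 block errors (both rows differ)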

Miscellaneous

sionna.phy.utils.dbm_to_watt(x_dbm, precision=None)[source]

Converts the input [dBm] to Watt

Input:
  • x_dbm (tf.float) – Input value [dBm]

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the precision set in the global sionna.phy.config is used.

Output:

tf.float – Input value converted to Watt
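
Example

A one-line sanity check (illustrative; assumes Python floats are accepted and cast internally):

from sionna.phy.utils import dbm_to_watt

print(dbm_to_watt(30.).numpy())
# 1.0 (30 dBm = 1 W)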

sionna.phy.utils.db_to_lin(x, precision=None)[source]

Converts the input [dB] to linear scale

Input:
  • x (tf.float) – Input value [dB]

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the precision set in the global sionna.phy.config is used.

Output:

tf.float – Input value converted to linear scale
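
Example

A quick sketch (illustrative), together with its inverse lin_to_db documented further below:

from sionna.phy.utils import db_to_lin, lin_to_db

print(db_to_lin(3.).numpy())    # ~1.9953 (3 dB is roughly a factor of 2)
print(lin_to_db(100.).numpy())  # 20.0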

class sionna.phy.utils.DeepUpdateDict[source]

Class inheriting from dict enabling nested merging of the dictionary with a new one

deep_update(delta, stop_at_keys=())[source]

Merges self with the input delta in a nested fashion. In case of conflict, the values of the new dictionary prevail. If stop_at_keys is provided, merging stops at those intermediate keys: the corresponding subtrees of delta replace the subtrees of self.

Input:
  • delta (dict) – Dictionary to be merged with self

  • stop_at_keys (tuple) – Tuple of keys at which the subtree of delta replaces the corresponding subtree of self

Example

from sionna.phy.utils import DeepUpdateDict

# Merge without conflicts
dict1 = DeepUpdateDict(
    {'a': 1,
     'b':
      {'b1': 10,
       'b2': 20}})
dict_delta1 = {'c': -2,
            'b':
            {'b3': 30}}
dict1.deep_update(dict_delta1)
print(dict1)
# {'a': 1, 'b': {'b1': 10, 'b2': 20, 'b3': 30}, 'c': -2}

# Compare against the classic "update" method, which is not nested
dict1 = DeepUpdateDict(
    {'a': 1,
     'b':
      {'b1': 10,
       'b2': 20}})
dict1.update(dict_delta1)
print(dict1)
# {'a': 1, 'b': {'b3': 30}, 'c': -2}

# Handle key conflicts
dict2 = DeepUpdateDict(
    {'a': 1,
     'b':
      {'b1': 10,
       'b2': 20}})
dict_delta2 = {'a': -2,
            'b':
            {'b1': {'f': 3, 'g': 4}}}
dict2.deep_update(dict_delta2)
print(dict2)
# {'a': -2, 'b': {'b1': {'f': 3, 'g': 4}, 'b2': 20}}

# Merge at intermediate keys
dict2 = DeepUpdateDict(
    {'a': 1,
     'b':
      {'b1': 10,
       'b2': 20}})
dict2.deep_update(dict_delta2, stop_at_keys='b')
print(dict2)
# {'a': -2, 'b': {'b1': {'f': 3, 'g': 4}}}

sionna.phy.utils.dict_keys_to_int(x)[source]

Converts the string keys of an input dictionary to integers whenever possible

Input:

x (dict) – Input dictionary

Output:

dict – Dictionary with integer keys

Example

from sionna.phy.utils import dict_keys_to_int

dict_in = {'1': {'2': [45, '3']}, '4.3': 6, 'd': [5, '87']}
print(dict_keys_to_int(dict_in))
# {1: {'2': [45, '3']}, '4.3': 6, 'd': [5, '87']}

sionna.phy.utils.ebnodb2no(ebno_db, num_bits_per_symbol, coderate, resource_grid=None, precision=None)[source]

Computes the noise variance No for a given Eb/No in dB

The function takes into account the number of coded bits per constellation symbol, the coderate, as well as possible additional overheads related to OFDM transmissions, such as the cyclic prefix and pilots.

The value of No is computed according to the following expression

No = ( Eb/No · r · M / Es )^(-1)

where 2^M is the constellation size, i.e., M is the average number of coded bits per constellation symbol, Es = 1 is the average energy per constellation symbol, r ∈ (0,1] is the coderate, Eb is the energy per information bit, and No is the noise power spectral density. For OFDM transmissions, Es is scaled according to the ratio between the total number of resource elements in a resource grid with non-zero energy and the number of resource elements used for data transmission. The additional energy transmitted during the cyclic prefix is also taken into account, as well as the number of transmitted streams per transmitter.

Input:
  • ebno_db (float) – Eb/No value in dB

  • num_bits_per_symbol (int) – Number of bits per symbol

  • coderate (float) – Coderate

  • resource_grid (None (default) | ResourceGrid) – An (optional) resource grid for OFDM transmissions

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the precision set in the global sionna.phy.config is used.

Output:

tf.float – Value of No in linear scale
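
Example

A minimal sketch for the single-carrier case (no resource grid; the values are ours for illustration):

from sionna.phy.utils import ebnodb2no

# QPSK (2 bits per symbol), rate-1/2 code, Eb/No = 4 dB
no = ebnodb2no(ebno_db=4.0, num_bits_per_symbol=2, coderate=0.5)
print(no.numpy())
# ~0.3981, since No = (10^0.4 * 0.5 * 2)^-1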

sionna.phy.utils.complex_normal(shape, var=1.0, precision=None)[source]

Generates a tensor of complex normal random variables

Input:
  • shape (tf.shape, or list) – Desired shape

  • var (float) – Total variance, i.e., the real and imaginary parts each have variance var/2

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the precision set in the global sionna.phy.config is used.

Output:

shape, tf.complex – Tensor of complex normal random variables
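
Example

An illustrative sketch (ours) verifying the variance convention, assuming the library default of single precision:

import tensorflow as tf
from sionna.phy.utils import complex_normal

x = complex_normal([100000], var=2.0)
print(x.dtype)  # complex64 (with the default "single" precision)
print(tf.math.reduce_variance(tf.math.real(x)).numpy())
# ~1.0, i.e., var/2 per real dimension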

sionna.phy.utils.hard_decisions(llr)[source]

Transforms LLRs into hard decisions

Positive values are mapped to 1. Nonpositive values are mapped to 0.

Input:

llr (any non-complex tf.DType) – Tensor of LLRs

Output:

Same shape and dtype as llr – Hard decisions
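
Example

A small sketch (ours) illustrating the mapping:

import tensorflow as tf
from sionna.phy.utils import hard_decisions

llr = tf.constant([-1.3, 0.0, 2.7, -0.1])
print(hard_decisions(llr).numpy())
# [0. 0. 1. 0.] (same dtype as the input LLRs)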

class sionna.phy.utils.Interpolate[source]

Class template for interpolating data defined on unstructured or rectangular grids. Used in PHYAbstraction for BLER and SNR interpolation.

abstract struct(z, x, y, x_interp, y_interp, **kwargs)[source]

Interpolates data structured in rectangular grids

Input:
  • z ([N, M], array) – Co-domain sample values. Informally, z = f(x, y)

  • x ([N], array) – First coordinate of the domain sample values

  • y ([M], array) – Second coordinate of the domain sample values

  • x_interp ([L], array) – Interpolation grid for the first (x) coordinate. Typically, L ≫ N

  • y_interp ([J], array) – Interpolation grid for the second (y) coordinate. Typically, J ≫ M

  • kwargs – Additional interpolation parameters

Output:

z_interp ([L, J], np.array) – Interpolated data

abstract unstruct(z, x, y, x_interp, y_interp, **kwargs)[source]

Interpolates unstructured data

Input:
  • z ([N], array) – Co-domain sample values. Informally, z = f(x, y)

  • x ([N], array) – First coordinate of the domain sample values

  • y ([N], array) – Second coordinate of the domain sample values

  • x_interp ([L], array) – Interpolation grid for the first (x) coordinate. Typically, L ≫ N

  • y_interp ([J], array) – Interpolation grid for the second (y) coordinate. Typically, J ≫ N

  • griddata_method (“linear” | “nearest” | “cubic”) – Interpolation method. See Scipy’s interpolate.griddata for more details

Output:

z_interp ([L, J], np.array) – Interpolated data

sionna.phy.utils.lin_to_db(x, precision=None)[source]

Converts the input in linear scale to dB scale

Input:
  • x (tf.float) – Input value in linear scale

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the precision set in the global sionna.phy.config is used.

Output:

tf.float – Input value converted to [dB]

sionna.phy.utils.log2(x)[source]

TensorFlow implementation of NumPy’s log2 function

Simple extension to tf.experimental.numpy.log2 which casts the result to the dtype of the input. For more details see the TensorFlow and NumPy documentation.

sionna.phy.utils.log10(x)[source]

TensorFlow implementation of NumPy’s log10 function

Simple extension to tf.experimental.numpy.log10 which casts the result to the dtype of the input. For more details see the TensorFlow and NumPy documentation.
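
Example

A one-liner (illustrative) showing the dtype-preserving behavior:

import tensorflow as tf
from sionna.phy.utils import log2, log10

x = tf.constant(8., dtype=tf.float64)
print(log2(x).numpy(), log2(x).dtype)     # 3.0 <dtype: 'float64'>
print(log10(tf.constant(1000.)).numpy())  # 3.0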

class sionna.phy.utils.MCSDecoder(*args, precision=None, **kwargs)[source]

Class template for mapping a Modulation and Coding Scheme (MCS) index to the corresponding modulation order, i.e., number of bits per symbol, and coderate.

Input:
  • mcs_index ([…], tf.int32) – MCS index

  • mcs_table_index ([…], tf.int32) – MCS table index. Different tables contain different mappings.

  • mcs_category ([…], tf.int32) – Table category which may correspond, e.g., to uplink or downlink transmission

  • check_index_validity (bool (default: True)) – If True, a ValueError is raised if the input MCS indices are not valid for the given configuration

Output:
  • modulation_order ([…], tf.int32) – Modulation order corresponding to the input MCS index

  • coderate ([…], tf.float) – Coderate corresponding to the input MCS index

sionna.phy.utils.scalar_to_shaped_tensor(inp, dtype, shape)[source]

Converts a scalar input to a tensor of specified shape, or validates and casts an existing input tensor. If the input is a scalar, creates a tensor of the specified shape filled with that value. Otherwise, verifies the input tensor matches the required shape and casts it to the specified dtype.

Input:
  • inp (int | float | bool | tf.Tensor) – Input value. If scalar (int, float, bool, or shapeless tensor), it will be used to fill a new tensor. If a shaped tensor, its shape must match the specified shape.

  • dtype (tf.dtype) – Desired data type of the output tensor

  • shape (list) – Required shape of the output tensor

Output:

tf.Tensor – A tensor of shape shape and type dtype. Either filled with the scalar input value or the input tensor cast to the specified dtype.
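
Example

Two illustrative calls (ours), one with a scalar and one with an already-shaped tensor:

import tensorflow as tf
from sionna.phy.utils import scalar_to_shaped_tensor

print(scalar_to_shaped_tensor(3., tf.float32, [2, 2]).numpy())
# [[3. 3.]
#  [3. 3.]]

t = tf.ones([2, 2], dtype=tf.int32)
print(scalar_to_shaped_tensor(t, tf.float32, [2, 2]).dtype)
# <dtype: 'float32'>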

sionna.phy.utils.sim_ber(mc_fun, ebno_dbs, batch_size, max_mc_iter, soft_estimates=False, num_target_bit_errors=None, num_target_block_errors=None, target_ber=None, target_bler=None, early_stop=True, graph_mode=None, distribute=None, verbose=True, forward_keyboard_interrupt=True, callback=None, precision=None)[source]

Simulates until target number of errors is reached and returns BER/BLER

The simulation proceeds to the next SNR point once either num_target_bit_errors bit errors or num_target_block_errors block errors have been accumulated, or after max_mc_iter batches of size batch_size have been simulated. Early stopping allows the simulation to terminate after the first error-free SNR point or once a given target_ber or target_bler is reached.

Input:
  • mc_fun (callable) – Callable that yields the transmitted bits b and the receiver’s estimate b_hat for a given batch_size and ebno_db. If soft_estimates is True, b_hat is interpreted as logit.

  • ebno_dbs ([n], tf.float) – A tensor containing SNR points to be evaluated.

  • batch_size (tf.int) – Batch-size for evaluation

  • max_mc_iter (tf.int) – Maximum number of Monte-Carlo iterations per SNR point

  • soft_estimates (bool, (default False)) – If True, b_hat is interpreted as logit and an additional hard-decision is applied internally.

  • num_target_bit_errors (None (default) | tf.int32) – Target number of bit errors per SNR point until the simulation continues to next SNR point

  • num_target_block_errors (None (default) | tf.int32) – Target number of block errors per SNR point until the simulation continues

  • target_ber (None (default) | tf.float32) – The simulation stops after the first SNR point that achieves a bit error rate lower than target_ber. This requires early_stop to be True.

  • target_bler (None (default) | tf.float32) – The simulation stops after the first SNR point that achieves a block error rate lower than target_bler. This requires early_stop to be True.

  • early_stop (bool, (default True)) – If True, the simulation stops after the first error-free SNR point (i.e., no error occurred after max_mc_iter Monte-Carlo iterations).

  • graph_mode (None (default) | “graph” | “xla”) – A string describing the execution mode of mc_fun. If None, mc_fun is executed as is.

  • distribute (None (default) | “all” | list of indices | tf.distribute.Strategy) – Distributes the simulation over multiple parallel devices. If None, multi-device simulations are deactivated. If “all”, the workload is automatically distributed across all available GPUs via tf.distribute.MirroredStrategy. If an explicit list of indices is provided, only the GPUs with the given indices are used. Alternatively, a custom tf.distribute.Strategy can be provided. Note that the same batch_size is used for all GPUs in parallel, but the number of Monte-Carlo iterations max_mc_iter is scaled by the number of devices such that the same total number of samples is simulated. However, all stopping conditions remain in place, which can cause slight differences in the total number of simulated samples.

  • verbose (bool, (default True)) – If True, the current progress will be printed.

  • forward_keyboard_interrupt (bool, (default True)) – If False, KeyboardInterrupts will be caught internally and not forwarded (e.g., will not stop outer loops). If True, the simulation ends and returns the intermediate simulation results.

  • callback (None (default) | callable) – If specified, callback is called after each Monte-Carlo step. Can be used for logging or advanced early stopping. The input signature of callback must match callback(mc_iter, snr_idx, ebno_dbs, bit_errors, block_errors, nb_bits, nb_blocks), where mc_iter denotes the number of processed batches for the current SNR point, snr_idx the index of the current SNR point, ebno_dbs the vector of all SNR points to be evaluated, bit_errors the vector of numbers of bit errors per SNR point, block_errors the vector of numbers of block errors, nb_bits the vector of numbers of simulated bits, and nb_blocks the vector of numbers of simulated blocks. If the callback returns sim_ber.CALLBACK_NEXT_SNR, early stopping is triggered and the simulation continues with the next SNR point. If it returns sim_ber.CALLBACK_STOP, the simulation stops immediately. If it returns sim_ber.CALLBACK_CONTINUE, the simulation continues.

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the precision set in the global sionna.phy.config is used.

Output:
  • ber ([n], tf.float) – Bit-error rate.

  • bler ([n], tf.float) – Block-error rate

Note

This function is implemented based on tensors to allow full compatibility with tf.function(). However, to run simulations in graph mode, the provided mc_fun must use the @tf.function() decorator.
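
Example

A minimal end-to-end sketch with a toy Monte-Carlo function (uncoded BPSK over AWGN; the link model is ours and only meant to show the expected mc_fun signature):

import tensorflow as tf
from sionna.phy.utils import sim_ber

def mc_fun(batch_size, ebno_db):
    # Uncoded BPSK, so Eb/No equals Es/No
    no = tf.pow(10., -ebno_db / 10.)
    b = tf.cast(tf.random.uniform([batch_size, 200], 0, 2, tf.int32), tf.float32)
    x = 2. * b - 1.                                    # BPSK mapping
    y = x + tf.sqrt(no / 2.) * tf.random.normal(tf.shape(x))
    b_hat = tf.cast(y > 0., tf.float32)                # hard decision
    return b, b_hat

ber, bler = sim_ber(mc_fun, ebno_dbs=tf.range(0., 6.), batch_size=1000,
                    max_mc_iter=10)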

class sionna.phy.utils.SingleLinkChannel(num_bits_per_symbol, num_info_bits, target_coderate, precision=None)[source]

Class template for simulating a single-link channel, i.e., a single-carrier, single-stream channel. Used for generating BLER tables in new_bler_table().

Parameters:
  • num_bits_per_symbol (int) – Number of bits per symbol, i.e., modulation order

  • num_info_bits (int) – Number of information bits per code block

  • target_coderate (float) – Target code rate, i.e., the target ratio between the information and the coded bits within a block

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the precision set in the global sionna.phy.config is used.

Input:
  • batch_size (int) – Size of the simulation batches

  • ebno_db (float) – Eb/No value in dB

Output:
  • bits ([batch_size, num_info_bits], int) – Transmitted bits

  • bits_hat ([batch_size, num_info_bits], int) – Decoded bits

property num_bits_per_symbol

Get/set the modulation order

Type:

int

property num_coded_bits

Number of coded bits in a code block

Type:

int (read-only)

property num_info_bits

Get/set the number of information bits per code block

Type:

int

set_num_coded_bits()[source]

Compute the number of coded bits per code block

property target_coderate

Get/set the target coderate

Type:

float

class sionna.phy.utils.SplineGriddataInterpolation[source]

Interpolates data defined on rectangular or unstructured grids via Scipy’s interpolate.RectBivariateSpline and interpolate.griddata, respectively. It inherits from Interpolate

struct(z, x, y, x_interp, y_interp, spline_degree=1, **kwargs)[source]

Perform spline interpolation via Scipy’s interpolate.RectBivariateSpline

Input:
  • z ([N, M], array) – Co-domain sample values. Informally, z = f(x, y).

  • x ([N], array) – First coordinate of the domain sample values

  • y ([M], array) – Second coordinate of the domain sample values

  • x_interp ([L], array) – Interpolation grid for the first (x) coordinate. Typically, L ≫ N.

  • y_interp ([J], array) – Interpolation grid for the second (y) coordinate. Typically, J ≫ M.

  • spline_degree (int (default: 1)) – Spline interpolation degree

Output:

z_interp ([L, J], np.array) – Interpolated data

unstruct(z, x, y, x_interp, y_interp, griddata_method='linear', **kwargs)[source]

Interpolates unstructured data via Scipy’s interpolate.griddata

Input:
  • z ([N], array) – Co-domain sample values. Informally, z = f(x, y).

  • x ([N], array) – First coordinate of the domain sample values

  • y ([N], array) – Second coordinate of the domain sample values

  • x_interp ([L], array) – Interpolation grid for the first (x) coordinate. Typically, L ≫ N.

  • y_interp ([J], array) – Interpolation grid for the second (y) coordinate. Typically, J ≫ N.

  • griddata_method (“linear” | “nearest” | “cubic”) – Interpolation method. See Scipy’s interpolate.griddata for more details.

Output:

z_interp ([L, J], np.array) – Interpolated data
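
Example

An illustrative structured-grid interpolation with toy data (ours; assumes struct can be called on an instance as shown):

import numpy as np
from sionna.phy.utils import SplineGriddataInterpolation

x = np.linspace(0., 1., 5)
y = np.linspace(0., 1., 4)
z = np.add.outer(x, y)          # z[i, j] = x[i] + y[j]

interp = SplineGriddataInterpolation()
z_fine = interp.struct(z, x, y,
                       np.linspace(0., 1., 50),
                       np.linspace(0., 1., 40))
print(z_fine.shape)
# (50, 40)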

sionna.phy.utils.to_list(x)[source]

Converts the input to a list

Input:

x (list | float | int | str | None) – Input, to be converted to a list

Output:

list – Input converted to a list

class sionna.phy.utils.TransportBlock(*args, precision=None, **kwargs)[source]

Class template for computing the number and size (in bits) of code blocks within a transport block, given the modulation order, coderate, and the total number of coded bits of a transport block. Used in PHYAbstraction.

Input:
  • modulation_order ([…], tf.int32) – Modulation order, i.e., number of bits per symbol

  • target_rate ([…], tf.float32) – Target coderate

  • num_coded_bits ([…], tf.float32) – Total number of coded bits across all codewords

Output:
  • cb_size ([…], tf.int32) – Code block (CB) size, i.e., number of information bits per code block

  • num_cb ([…], tf.int32) – Number of code blocks that the transport block is segmented into

sionna.phy.utils.watt_to_dbm(x_w, precision=None)[source]

Converts the input [Watt] to dBm

Input:
  • x_w (tf.float) – Input value [Watt]

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the precision set in the global sionna.phy.config is used.

Output:

tf.float – Input value converted to dBm
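
Example

A one-line sanity check (illustrative):

from sionna.phy.utils import watt_to_dbm

print(watt_to_dbm(2.).numpy())
# ~33.01 (2 W = 10*log10(2000) dBm)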

Numerics

sionna.phy.utils.bisection_method(f, left, right, regula_falsi=False, expand_to_left=True, expand_to_right=True, step_expand=2.0, eps_x=1e-05, eps_y=0.0001, max_n_iter=100, return_brackets=False, precision=None, **kwargs)[source]

Implements the classic bisection method for estimating the root of batches of decreasing univariate functions

Input:
  • f (callable) – Generic function handle that takes batched inputs and returns batched outputs. Applies a different decreasing univariate function to each of its inputs. Must accept input batches of the same shape as left and right.

  • left ([…], tf.float) – Left end point of the initial search interval, for each batch. The root is guessed to be contained within [left, right].

  • right ([…], tf.float) – Right end point of the initial search interval, for each batch

  • regula_falsi (bool (default: False)) – If True, then the regula falsi method is employed to determine the next root guess. This guess is computed as the x-intercept of the line passing through the two points formed by the function evaluated at the current search interval endpoints. Else, the next root guess is computed as the middle point of the current search interval.

  • expand_to_left (bool (default: True)) – If True and f(left) is negative, then left is decreased by a geometric progression of step_expand until f becomes positive, for each batch. If False, then left is not decreased.

  • expand_to_right (bool (default: True)) – If True and f(right) is positive, then right is increased by a geometric progression of step_expand until f becomes negative, for each batch. If False, then right is not increased.

  • step_expand (float (default: 2.)) – See expand_to_left and expand_to_right

  • eps_x (float (default: 1e-5)) – Convergence criterion. Search terminates after max_n_iter iterations or if, for each batch, either the search interval length is smaller than eps_x or the function absolute value is smaller than eps_y.

  • eps_y (float (default: 1e-4)) – Convergence criterion. See eps_x.

  • max_n_iter (int (default: 100)) – Maximum number of iterations

  • return_brackets (bool (default: False)) – If True, the final values of search interval left and right end point are returned

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the precision set in the global sionna.phy.config is used.

  • kwargs (dict) – Additional arguments for function f

Output:
  • x_opt ([…], tf.float) – Estimated roots of the input batch of functions f

  • f_opt ([…], tf.float) – Value of function f evaluated at x_opt

  • left ([…], tf.float) – Final value of left end points of the search intervals. Only returned if return_brackets is True.

  • right ([…], tf.float) – Final value of right end points of the search intervals. Only returned if return_brackets is True.

Example

import tensorflow as tf
from sionna.phy.utils import bisection_method

# Define a decreasing univariate function of x
def f(x, a):
    return - tf.math.pow(x - a, 3)

# Initial search interval
left, right = 0., 2.
# Input parameter for function a
a = 3

# Perform bisection method
x_opt, _ = bisection_method(f, left, right, eps_x=1e-4, eps_y=0, a=a)
print(x_opt.numpy())
# 2.9999084

Plotting

sionna.phy.utils.plotting.plot_ber(snr_db, ber, legend='', ylabel='BER', title='Bit Error Rate', ebno=True, is_bler=None, xlim=None, ylim=None, save_fig=False, path='')[source]

Plot error-rates

Input:
  • snr_db (numpy.ndarray or list of numpy.ndarray) – Array defining the simulated SNR points

  • ber (numpy.ndarray or list of numpy.ndarray) – Array defining the BER/BLER per SNR point

  • legend (str, (default “”), or list of str) – Legend entries

  • ylabel (str, (default “BER”)) – y-label

  • title (str, (default “Bit Error Rate”)) – Figure title

  • ebno (bool, (default True)) – If True, the x-label is set to “EbNo [dB]” instead of “EsNo [dB]”.

  • is_bler (None (default) | bool) – If True, the corresponding curve is dashed.

  • xlim (None (default) | (float, float)) – x-axis limits

  • ylim (None (default) | (float, float)) – y-axis limits

  • save_fig (bool, (default False)) – If True, the figure is saved as .png.

  • path (str, (default “”)) – Path to save the figure (if save_fig is True)

Output:
  • fig (matplotlib.figure.Figure) – Figure handle

  • ax (matplotlib.axes.Axes) – Axes object
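
Example

A short plotting sketch with synthetic data (the BER curve below is fabricated purely for illustration):

import numpy as np
from sionna.phy.utils.plotting import plot_ber

snr_db = np.arange(0., 10.)
ber = 0.5 * np.exp(-0.7 * snr_db)   # toy, monotonically decreasing curve
fig, ax = plot_ber(snr_db, ber, legend="toy link")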

class sionna.phy.utils.plotting.PlotBER(title='Bit/Block Error Rate')[source]

Provides a plotting object to simulate and store BER/BLER curves

Parameters:

title (str, (default “Bit/Block Error Rate”)) – Figure title

Input:
  • snr_db (numpy.ndarray or list of numpy.ndarray, float) – SNR values

  • ber (numpy.ndarray or list of numpy.ndarray, float) – BER values corresponding to snr_db

  • legend (str or list of str) – Legend entries

  • is_bler (bool or list of bool, (default [])) – If True, ber will be interpreted as BLER.

  • show_ber (bool, (default True)) – If True, BER curves will be plotted.

  • show_bler (bool, (default True)) – If True, BLER curves will be plotted.

  • xlim (None (default) | (float, float)) – x-axis limits

  • ylim (None (default) | (float, float)) – y-axis limits

  • save_fig (bool, (default False)) – If True, the figure is saved as .png.

  • path (str, (default “”)) – Path to save the figure (if save_fig is True)

Tensors

sionna.phy.utils.expand_to_rank(tensor, target_rank, axis=-1)[source]

Inserts as many axes to a tensor as needed to achieve a desired rank

This operation inserts additional dimensions to a tensor starting at axis, so that the resulting tensor has rank target_rank. The dimension index follows Python indexing rules, i.e., zero-based, where a negative index is counted backward from the end.

Input:
  • tensor (tf.Tensor) – Input tensor

  • target_rank (int) – Rank of the output tensor. If target_rank is smaller than the rank of tensor, the function does nothing.

  • axis (int) – Dimension index at which to expand the shape of tensor. Given a tensor of D dimensions, axis must be within the range [-(D+1), D] (inclusive).

Output:

tf.Tensor – A tensor with the same data as tensor, with target_rank- rank(tensor) additional dimensions inserted at the index specified by axis. If target_rank <= rank(tensor), tensor is returned.
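
Example

A shape-only illustration (ours):

import tensorflow as tf
from sionna.phy.utils import expand_to_rank

x = tf.zeros([5, 3])
print(expand_to_rank(x, 4, axis=0).shape)   # (1, 1, 5, 3)
print(expand_to_rank(x, 4, axis=-1).shape)  # (5, 3, 1, 1)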

sionna.phy.utils.flatten_dims(tensor, num_dims, axis)[source]

Flattens a specified set of dimensions of a tensor

This operation flattens num_dims dimensions of a tensor starting at a given axis.

Input:
  • tensor (tf.Tensor) – Input tensor

  • num_dims (int) – Number of dimensions to combine. Must be greater than or equal to two and less than or equal to the rank of tensor.

  • axis (int) – Index of the dimension from which to start

Output:

tf.Tensor – A tensor of the same type as tensor with num_dims - 1 fewer dimensions, but the same number of elements
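
Example

A shape-only illustration (ours):

import tensorflow as tf
from sionna.phy.utils import flatten_dims

x = tf.zeros([2, 3, 4, 5])
print(flatten_dims(x, num_dims=2, axis=1).shape)
# (2, 12, 5)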

sionna.phy.utils.flatten_last_dims(tensor, num_dims=2)[source]

Flattens the last n dimensions of a tensor

This operation flattens the last num_dims dimensions of a tensor. It is a simplified version of the function flatten_dims.

Input:
  • tensor (tf.Tensor) – Input tensor

  • num_dims (int) – Number of dimensions to combine. Must be greater than or equal to two and less than or equal to the rank of tensor.

Output:

tf.Tensor – A tensor of the same type as tensor with num_dims - 1 fewer dimensions, but the same number of elements
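
Example

A shape-only illustration (ours):

import tensorflow as tf
from sionna.phy.utils import flatten_last_dims

x = tf.zeros([2, 3, 4])
print(flatten_last_dims(x).shape)
# (2, 12)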

sionna.phy.utils.insert_dims(tensor, num_dims, axis=-1)[source]

Adds multiple length-one dimensions to a tensor

This operation is an extension to TensorFlow's expand_dims function. It inserts num_dims dimensions of length one starting from the dimension axis of a tensor. The dimension index follows Python indexing rules, i.e., zero-based, where a negative index is counted backward from the end.

Input:
  • tensor (tf.Tensor) – Input tensor

  • num_dims (int) – Number of dimensions to add

  • axis (int) – Dimension index at which to expand the shape of tensor. Given a tensor of D dimensions, axis must be within the range [-(D+1), D] (inclusive).

Output:

tf.Tensor – A tensor with the same data as tensor, with num_dims additional dimensions inserted at the index specified by axis
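
Example

A shape-only illustration (ours):

import tensorflow as tf
from sionna.phy.utils import insert_dims

x = tf.zeros([3, 4])
print(insert_dims(x, num_dims=2, axis=1).shape)
# (3, 1, 1, 4)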

sionna.phy.utils.split_dim(tensor, shape, axis)[source]

Reshapes a dimension of a tensor into multiple dimensions

This operation splits the dimension axis of a tensor into multiple dimensions according to shape.

Input:
  • tensor (tf.Tensor) – Input tensor

  • shape (list | TensorShape) – Shape to which the dimension should be reshaped

  • axis (int) – Index of the axis to be reshaped

Output:

tf.Tensor – A tensor of the same type as tensor with len(shape)-1 additional dimensions, but the same number of elements
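
Example

A shape-only illustration (ours), inverting the flatten_dims example above:

import tensorflow as tf
from sionna.phy.utils import split_dim

x = tf.zeros([2, 12, 5])
print(split_dim(x, [3, 4], axis=1).shape)
# (2, 3, 4, 5)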

sionna.phy.utils.diag_part_axis(tensor, axis, **kwargs)[source]

Extracts the batched diagonal part of a batched tensor over the specified axis

This is an extension of TensorFlow's tf.linalg.diag_part function, which extracts the diagonal over the last two dimensions. This behavior can be reproduced by setting axis=-2.

Input:
  • tensor ([s(1), …, s(N)], any) – A tensor of rank greater than or equal to two (N ≥ 2)

  • axis (int) – Axis index starting from which the diagonal part is extracted

  • kwargs (dict) – Optional inputs for TensorFlow’s linalg.diag_part, such as the diagonal offset k or the padding value padding_value. See TensorFlow’s linalg.diag_part for more details.

Output:

[s(1), …, min(s(axis), s(axis+1)), s(axis+2), …, s(N)], any – Tensor containing the diagonal part of the input tensor over the axes (axis, axis+1)

Example

import tensorflow as tf
from sionna.phy.utils import diag_part_axis

a = tf.reshape(tf.range(27), [3,3,3])
print(a.numpy())
#  [[[ 0  1  2]
#    [ 3  4  5]
#    [ 6  7  8]]
#
#    [[ 9 10 11]
#    [12 13 14]
#    [15 16 17]]
#
#    [[18 19 20]
#    [21 22 23]
#    [24 25 26]]]

dp_0 = diag_part_axis(a, axis=0)
print(dp_0.numpy())
# [[ 0  1  2]
#  [12 13 14]
#  [24 25 26]]

dp_1 = diag_part_axis(a, axis=1)
print(dp_1.numpy())
# [[ 0  4  8]
#  [ 9 13 17]
#  [18 22 26]]

sionna.phy.utils.flatten_multi_index(indices, shape)[source]

Converts a tensor of index arrays into a tensor of flat indices

Input:
  • indices ([…, N], tf.int32) – Indices to flatten

  • shape ([N], tf.int32) – Shape of each index dimension. Note that indices[..., n] < shape[n] must hold for all n and every batch element

Output:

flat_indices ([…], tf.int32) – Flattened indices

Example

import tensorflow as tf
from sionna.phy.utils import flatten_multi_index

indices = tf.constant([2, 3])
shape = [5, 6]
print(flatten_multi_index(indices, shape).numpy())
# 15 = 2*6 + 3

sionna.phy.utils.gather_from_batched_indices(params, indices)[source]

Gathers the values of a tensor params according to batch-specific indices

Input:
  • params ([s(1), …, s(N)], any) – Tensor containing the values to gather

  • indices ([…, N], tf.int32) – Tensor containing, for each batch […], the indices at which params is gathered. Note that 0 ≤ indices[..., n] < s(n) must hold for all n = 1,…,N

Output:

[…], any – Tensor containing the gathered values

Example

import tensorflow as tf
from sionna.phy.utils import gather_from_batched_indices

params = tf.constant([[10, 20, 30], [40, 50, 60], [70, 80, 90]])
print(params.shape)
# TensorShape([3, 3])

indices = tf.constant([[[0, 1], [1, 2], [2, 0], [0, 0]],
                       [[0, 0], [2, 2], [2, 1], [0, 1]]])
print(indices.shape)
# TensorShape([2, 4, 2])
# Note that the batch shape is [2, 4]. Each batch contains a list of 2 indices

print(gather_from_batched_indices(params, indices).numpy())
# [[20, 60, 70, 10],
#  [10, 90, 80, 20]]
# Note that the output shape coincides with the batch shape.
# Element [i,j] coincides with params[indices[i,j,:]]

sionna.phy.utils.tensor_values_are_in_set(tensor, admissible_set)[source]

Checks if the input tensor values are contained in the specified admissible_set

Input:
  • tensor (tf.Tensor | list) – Tensor to validate

  • admissible_set (tf.Tensor | list) – Set of valid values that the input tensor must be composed of

Output:

bool – Returns True if and only if tensor values are contained in admissible_set

Example

import tensorflow as tf
from sionna.phy.utils import tensor_values_are_in_set

tensor = tf.Variable([[1, 0], [0, 1]])

print(tensor_values_are_in_set(tensor, [0, 1, 2]).numpy())
# True

print(tensor_values_are_in_set(tensor, [0, 2]).numpy())
# False

sionna.phy.utils.enumerate_indices(bounds)[source]

Enumerates all indices between 0 (included) and bounds (excluded) in lexicographic order

Input:

bounds (list | tf.Tensor | np.array, int) – Collection of index bounds

Output:

[prod(bounds), len(bounds)] – Collection of all indices, in lexicographic order

Example

from sionna.phy.utils import enumerate_indices

print(enumerate_indices([2, 3]).numpy())
# [[0 0]
#  [0 1]
#  [0 2]
#  [1 0]
#  [1 1]
#  [1 2]]