Discrete Channel Models
This module provides layers and functions that implement channel models with discrete input/output alphabets.
All channel models support binary inputs x ∈ {0, 1} and bipolar inputs x ∈ {-1, 1}.

The channels can either return discrete values or log-likelihood ratios (LLRs). These LLRs describe the channel transition probabilities and are defined as

    ℓ = ln( p(y|x=1) / p(y|x=0) )

Further, the channel reliability parameter p_b can be provided either as a scalar or as a tensor that is broadcastable to the shape of the channel input, which allows different reliability values per bit position.

The channel models are based on the Gumbel-softmax trick [GumbelSoftmax] to ensure differentiability of the channel w.r.t. the channel reliability parameter. Please see [LearningShaping] for further details.
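To illustrate the idea behind the Gumbel-softmax relaxation (this is a minimal NumPy sketch, not Sionna's implementation; the function name and defaults are hypothetical), a Bernoulli(p_b) bit-flip decision can be replaced by a soft, differentiable indicator:

```python
import numpy as np

def gumbel_softmax_flip(pb, temperature=0.1, rng=None):
    """Differentiable surrogate for a Bernoulli(pb) bit-flip decision.

    Draws Gumbel noise for the two categories ("no flip", "flip") and
    returns a soft flip indicator in (0, 1) that approaches a hard
    {0, 1} decision as the temperature goes to zero.  pb must lie
    strictly between 0 and 1.
    """
    rng = rng or np.random.default_rng()
    logits = np.log(np.array([1.0 - pb, pb]))       # class log-probabilities
    gumbel = -np.log(-np.log(rng.uniform(size=2)))  # standard Gumbel noise
    z = (logits + gumbel) / temperature
    soft = np.exp(z - z.max())                      # numerically stable softmax
    soft = soft / soft.sum()
    return soft[1]                                  # soft "flip" indicator
```

As the temperature shrinks, thresholding the soft indicator at 0.5 reproduces exact Bernoulli(p_b) flips (the Gumbel-max trick), while gradients w.r.t. p_b remain available through the softmax.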
Setting up:

>>> import tensorflow as tf
>>> from sionna.phy.channel import BinarySymmetricChannel
>>> bsc = BinarySymmetricChannel(return_llrs=False, bipolar_input=False)

Running:

>>> x = tf.zeros((128,)) # x is the channel input
>>> pb = 0.1 # pb is the bit flipping probability
>>> y = bsc((x, pb))
- class sionna.phy.channel.BinaryErasureChannel(return_llrs=False, bipolar_input=False, llr_max=100.0, precision=None, **kwargs)[source]
Binary erasure channel (BEC) where a bit is either correctly received or erased.
In the binary erasure channel, bits are either correctly received or erased with erasure probability p_b.

This block supports binary inputs (x ∈ {0, 1}) and bipolar inputs (x ∈ {-1, 1}).

If activated, the channel directly returns log-likelihood ratios (LLRs) defined as

    ℓ = { -llr_max,  if y = 0
           0,        if y = ? (erasure)
           llr_max,  if y = 1 }

The erasure probability p_b can be either a scalar or a tensor (broadcastable to the shape of the input). This allows different erasure probabilities per bit position.

Please note that the output of the BEC is ternary. Hereby, -1 indicates an erasure for the binary configuration and 0 for the bipolar mode, respectively.
- Parameters:
return_llrs (bool, (default False)) – If True, the layer returns log-likelihood ratios instead of binary values based on pb.
bipolar_input (bool, (default False)) – If True, the expected input is given as {-1, 1} instead of {0, 1}.
llr_max (tf.float, (default 100)) – Clipping value of the LLRs.
precision (None (default) | "single" | "double") – Precision used for internal calculations and outputs. If set to None, the global config.precision is used.
- Input:
x ([…,n], tf.float) – Input sequence to the channel
pb (tf.float) – Erasure probability. Can be a scalar or of any shape that can be broadcast to the shape of x.
- Output:
[…,n], tf.float – Output sequence of same length as the input x. If return_llrs is False, the output is ternary, where -1 and 0 indicate an erasure for the binary and bipolar input, respectively.
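For intuition, the BEC's hard-output and LLR behavior can be sketched in NumPy (an illustrative stand-in, not the Sionna block; binary input only, and the function name is hypothetical):

```python
import numpy as np

def binary_erasure_channel(x, pb, return_llrs=False, llr_max=100.0, rng=None):
    """Sketch of a BEC for binary inputs {0, 1}.

    Each bit is independently erased with probability pb.  In hard-output
    mode an erasure is marked with -1; in LLR mode (logit convention
    ln p(x=1|y)/p(x=0|y)) an erasure carries no information and maps to 0,
    while correctly received bits saturate at +/- llr_max.
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    erased = rng.uniform(size=x.shape) < pb           # Bernoulli(pb) erasures
    if return_llrs:
        llrs = np.where(x == 1.0, llr_max, -llr_max)  # certain bits -> +/- llr_max
        return np.where(erased, 0.0, llrs)
    return np.where(erased, -1.0, x)                  # -1 marks an erasure
```

Passing a per-position tensor for pb instead of a scalar works unchanged thanks to NumPy broadcasting, mirroring the per-bit erasure probabilities described above.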
- class sionna.phy.channel.BinaryMemorylessChannel(return_llrs=False, bipolar_input=False, llr_max=100.0, precision=None, **kwargs)[source]
Discrete binary memoryless channel with (possibly) asymmetric bit flipping probabilities.

Input bits are flipped with probability p_{b,0} (if a 0 is transmitted) and p_{b,1} (if a 1 is transmitted), respectively.

This block supports binary inputs (x ∈ {0, 1}) and bipolar inputs (x ∈ {-1, 1}).

If activated, the channel directly returns log-likelihood ratios (LLRs) defined as

    ℓ = ln( p(y|x=1) / p(y|x=0) )

The error probability p_b can be either a scalar or a tensor (broadcastable to the shape of the input). This allows different error probabilities per bit position. In any case, its last dimension must be of length 2 and is interpreted as p_{b,0} and p_{b,1}.

- Parameters:
return_llrs (bool, (default False)) – If True, the layer returns log-likelihood ratios instead of binary values based on pb.
bipolar_input (bool, (default False)) – If True, the expected input is given as {-1, 1} instead of {0, 1}.
llr_max (tf.float, (default 100)) – Clipping value of the LLRs.
precision (None (default) | "single" | "double") – Precision used for internal calculations and outputs. If set to None, the global config.precision is used.
- Input:
x ([…,n], tf.float32) – Input sequence to the channel consisting of binary values {0, 1} or bipolar values {-1, 1}, respectively
pb ([…,2], tf.float32) – Error probability. Can be a tuple of two scalars or of any shape that can be broadcast to the shape of x. Its additional last dimension is interpreted as p_{b,0} and p_{b,1}.
- Output:
[…,n], tf.float32 – Output sequence of same length as the input x. If return_llrs is False, the output is binary; otherwise, soft-values (LLRs) are returned.
- property llr_max
Get/set maximum value used for LLR calculations
- Type:
tf.float
- property temperature
Get/set temperature for Gumbel-softmax trick
- Type:
tf.float32
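The asymmetric flipping behavior can be sketched in NumPy (illustrative only, not the Sionna block; binary input, hard outputs, hypothetical function name):

```python
import numpy as np

def binary_memoryless_channel(x, pb0, pb1, rng=None):
    """Sketch of an asymmetric binary memoryless channel, binary input {0, 1}.

    A transmitted 0 is flipped with probability pb0 and a transmitted 1
    with probability pb1; hard outputs only.
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    p_flip = np.where(x == 1.0, pb1, pb0)       # per-position flip probability
    flips = rng.uniform(size=x.shape) < p_flip
    return np.where(flips, 1.0 - x, x)          # flip 0 <-> 1 where drawn
```

Setting pb0 = pb1 recovers the binary symmetric channel, and pb0 = 0 recovers the Z-channel described below, which is why both are special cases of this block.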
- class sionna.phy.channel.BinarySymmetricChannel(return_llrs=False, bipolar_input=False, llr_max=100.0, precision=None, **kwargs)[source]
Discrete binary symmetric channel which randomly flips bits with probability p_b.

This layer supports binary inputs (x ∈ {0, 1}) and bipolar inputs (x ∈ {-1, 1}).

If activated, the channel directly returns log-likelihood ratios (LLRs) defined as

    ℓ = ln( p(y|x=1) / p(y|x=0) ) = { ln( p_b / (1 - p_b) ),  if y = 0
                                      ln( (1 - p_b) / p_b ),  if y = 1 }

where y denotes the binary output of the channel.

The bit flipping probability p_b can be either a scalar or a tensor (broadcastable to the shape of the input). This allows different bit flipping probabilities per bit position.

- Parameters:
return_llrs (bool, (default False)) – If True, the layer returns log-likelihood ratios instead of binary values based on pb.
bipolar_input (bool, (default False)) – If True, the expected input is given as {-1, 1} instead of {0, 1}.
llr_max (tf.float, (default 100)) – Clipping value of the LLRs.
precision (None (default) | "single" | "double") – Precision used for internal calculations and outputs. If set to None, the global config.precision is used.
- Input:
x ([…,n], tf.float32) – Input sequence to the channel
pb (tf.float32) – Bit flipping probability. Can be a scalar or of any shape that can be broadcast to the shape of x.
- Output:
[…,n], tf.float32 – Output sequence of same length as the input x. If return_llrs is False, the output is binary; otherwise, soft-values (LLRs) are returned.
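The BSC and its piecewise LLR mapping above can be sketched in NumPy (an illustrative stand-in, not the Sionna layer; binary input only, hypothetical function name):

```python
import numpy as np

def binary_symmetric_channel(x, pb, return_llrs=False, rng=None):
    """Sketch of a BSC for binary inputs {0, 1}.

    Every bit is flipped independently with probability pb.  In LLR mode
    the hard output y is mapped to (2*y - 1) * ln((1 - pb) / pb), which
    equals ln p(y|x=1)/p(y|x=0) for the BSC.
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    flips = rng.uniform(size=x.shape) < pb      # Bernoulli(pb) flip pattern
    y = np.where(flips, 1.0 - x, x)
    if return_llrs:
        return (2.0 * y - 1.0) * np.log((1.0 - pb) / pb)
    return y
```

Note that for pb < 0.5 the LLR magnitude ln((1 - pb)/pb) grows as the channel gets more reliable, which is why the Sionna layers clip LLRs at llr_max.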
- class sionna.phy.channel.BinaryZChannel(return_llrs=False, bipolar_input=False, llr_max=100.0, precision=None, **kwargs)[source]
Block that implements the binary Z-channel.

In the Z-channel, transmission errors only occur for the transmission of the second input element (i.e., if a 1 is transmitted) with error probability p_b, while the first element (a 0) is always correctly received.

This block supports binary inputs (x ∈ {0, 1}) and bipolar inputs (x ∈ {-1, 1}).

If activated, the channel directly returns log-likelihood ratios (LLRs) defined as

    ℓ = ln( p(y|x=1) / p(y|x=0) ) = { ln(p_b),  if y = 0
                                      llr_max,  if y = 1 }

assuming equiprobable inputs P(x=0) = P(x=1) = 0.5, with ln(p_b) clipped at -llr_max.

The error probability p_b can be either a scalar or a tensor (broadcastable to the shape of the input). This allows different error probabilities per bit position.

- Parameters:
return_llrs (bool, (default False)) – If True, the layer returns log-likelihood ratios instead of binary values based on pb.
bipolar_input (bool, (default False)) – If True, the expected input is given as {-1, 1} instead of {0, 1}.
llr_max (tf.float, (default 100)) – Clipping value of the LLRs.
precision (None (default) | "single" | "double") – Precision used for internal calculations and outputs. If set to None, the global config.precision is used.
- Input:
x ([…,n], tf.float32) – Input sequence to the channel
pb (tf.float32) – Error probability. Can be a scalar or of any shape that can be broadcast to the shape of x.
- Output:
[…,n], tf.float32 – Output sequence of same length as the input x. If return_llrs is False, the output is binary; otherwise, soft-values are returned.
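The one-sided error behavior of the Z-channel can be sketched in NumPy (illustrative only, not the Sionna block; binary input, hard outputs, hypothetical function name):

```python
import numpy as np

def binary_z_channel(x, pb, rng=None):
    """Sketch of a Z-channel for binary inputs {0, 1}.

    A transmitted 0 is always received correctly; a transmitted 1 is
    flipped to 0 with probability pb (hard outputs only).
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    flips = (x == 1.0) & (rng.uniform(size=x.shape) < pb)  # errors only on 1s
    return np.where(flips, 0.0, x)
```

Because a received 1 can only originate from a transmitted 1, its LLR is unbounded and must be clipped, matching the llr_max entry in the piecewise LLR definition above.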
- References:
- [GumbelSoftmax]
E. Jang, S. Gu, and B. Poole, "Categorical Reparameterization with Gumbel-Softmax," arXiv preprint arXiv:1611.01144, 2016.
- [LearningShaping]
M. Stark, F. Ait Aoudia, and J. Hoydis, "Joint Learning of Geometric and Probabilistic Constellation Shaping," 2019 IEEE Globecom Workshops (GC Wkshps), IEEE, 2019.