# Matrix inversion
Many signal processing algorithms are described using inverses and square roots of matrices. However, explicitly computing them is rarely necessary, and practical implementations should use alternatives instead. Two representative examples are described below.
## Solving linear systems
One frequently needs to compute equations of the form

\[
\mathbf{x} = \mathbf{H}^{-1}\mathbf{y}
\]

and would be tempted to implement this equation in the following way:
```python
import torch

# Create a random example
x_ = torch.randn(10, 1)
h = torch.randn(10, 10)
y = h @ x_

# Solve via explicit matrix inversion (not recommended)
h_inv = torch.linalg.inv(h)
x = h_inv @ y
```
A much more stable and efficient implementation avoids computing the inverse and solves the linear system

\[
\mathbf{H}\mathbf{x} = \mathbf{y}
\]

instead, which looks in code like this:
```python
# Solve as a linear system
x = torch.linalg.solve(h, y)
```
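The stability advantage becomes visible on poorly conditioned systems. As a quick sanity check (a minimal sketch; the SVD-style construction below is illustrative and not from the text), one can compare the residual \(\|\mathbf{H}\mathbf{x}-\mathbf{y}\|\) of both approaches:

```python
import torch

torch.manual_seed(0)

# Build an ill-conditioned matrix from random orthogonal factors
# and singular values spanning twelve orders of magnitude
n = 100
q1, _ = torch.linalg.qr(torch.randn(n, n, dtype=torch.float64))
q2, _ = torch.linalg.qr(torch.randn(n, n, dtype=torch.float64))
s = torch.logspace(0, -12, n, dtype=torch.float64)
h = q1 @ torch.diag(s) @ q2.mT

x_true = torch.randn(n, 1, dtype=torch.float64)
y = h @ x_true

# Inversion-based solution vs. direct linear solve
x_inv = torch.linalg.inv(h) @ y
x_solve = torch.linalg.solve(h, y)

res_inv = torch.linalg.norm(h @ x_inv - y).item()
res_solve = torch.linalg.norm(h @ x_solve - y).item()
print(res_inv, res_solve)  # the solve residual is typically orders of magnitude smaller
```

The direct solve relies on a backward-stable LU factorization, whereas forming the inverse first amplifies rounding errors by the condition number.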
When \(\mathbf{H}\) is a Hermitian positive-definite matrix, we can leverage the Cholesky decomposition \(\mathbf{H}=\mathbf{L}\mathbf{L}^{\mathsf{H}}\), where \(\mathbf{L}\) is a lower-triangular matrix, for an even faster implementation. Throughout Sionna we use `torch.linalg.cholesky_ex` with `check_errors=False` and two `torch.linalg.solve_triangular` calls (instead of `torch.cholesky_solve`), which avoids synchronization points and is preferred for CUDA graph compatibility:
```python
# Make the matrix symmetric positive-definite for this example
# (the random h above is not, so its Cholesky factor would be invalid)
h_pd = h @ h.mT + 1e-3 * torch.eye(10)
y = h_pd @ x_

# Solve via Cholesky decomposition (pattern used in Sionna)
l, _ = torch.linalg.cholesky_ex(h_pd, check_errors=False)
y_temp = torch.linalg.solve_triangular(l, y, upper=False)
x = torch.linalg.solve_triangular(l.mH, y_temp, upper=True)
```
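Since the decomposition is stated for Hermitian matrices, the same pattern works unchanged with complex dtypes, where `.mH` takes the conjugate transpose. A minimal self-contained sketch (the matrix below is constructed to be Hermitian positive-definite purely for illustration):

```python
import torch

torch.manual_seed(0)
n = 8

# Construct a Hermitian positive-definite matrix: A A^H is PSD,
# and adding n*I shifts all eigenvalues safely above zero
a = torch.randn(n, n, dtype=torch.complex128)
h = a @ a.mH + n * torch.eye(n, dtype=torch.complex128)

x_true = torch.randn(n, 1, dtype=torch.complex128)
y = h @ x_true

# Forward substitution with L, then back substitution with L^H
l, _ = torch.linalg.cholesky_ex(h, check_errors=False)
z = torch.linalg.solve_triangular(l, y, upper=False)
x = torch.linalg.solve_triangular(l.mH, z, upper=True)

print(torch.allclose(x, x_true))  # True
```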
For one-off scripts, the simpler `torch.linalg.cholesky` and `torch.cholesky_solve` are also valid. Ready-to-use utilities built on these patterns include `inv_cholesky()` and `matrix_pinv()`.
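The simpler one-off route can be sketched as follows (a minimal example; the positive-definite matrix is again constructed for illustration). Note that `torch.linalg.cholesky` raises on non-positive-definite input, which implies a device synchronization on CUDA:

```python
import torch

torch.manual_seed(0)
n = 10

# Symmetric positive-definite matrix for the example
a = torch.randn(n, n)
h = a @ a.mT + n * torch.eye(n)
y = torch.randn(n, 1)

# One-off path: error-checking factorization plus a single solve call
l = torch.linalg.cholesky(h)
x = torch.cholesky_solve(y, l, upper=False)

print(torch.allclose(h @ x, y, atol=1e-5))  # True
```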