Wrote initial version of admm implementation details

This commit is contained in:
Andreas Tsouchlos 2023-04-14 01:47:11 +02:00
parent 774019000c
commit c2e41ecf6a


@ -724,6 +724,70 @@ The method chosen here is the one presented in \cite{lautern}.
\section{Implementation Details}%
\label{sec:lp:Implementation Details}
The development process used to implement this decoding algorithm is the same
as the one outlined in section
\ref{sec:prox:Implementation Details} for proximal decoding.
An initial version was first implemented in Python; the process was then
repeated in C++ to achieve higher performance.
Again, the performance can be increased by reframing the operations such that
the computation consists primarily of element-wise operations and
matrix-vector multiplications, since these operations
are highly optimized in the software libraries used for the implementation.
In the summation operation in line 8 of algorithm \ref{alg:admm}, the
components of each $\boldsymbol{z}_j$ and $\boldsymbol{u}_j$ relating to a
given \ac{VN} $i$ have to be found.
This operation can be streamlined by observing that the transfer matrices
$\boldsymbol{T}_j,\hspace{1mm}j\in\mathcal{J}$ can perform the mapping
they were devised for in both directions:
with $\boldsymbol{T}_j \tilde{\boldsymbol{c}}$, the $d_j$ components of
$\tilde{\boldsymbol{c}}$ involved in parity check $j$ are selected;
with $\boldsymbol{T}_j^\text{T} \boldsymbol{z}_j$, the $d_j$ components of
$\boldsymbol{z}_j$ are mapped onto a vector of length $n$, each component
placed at the position corresponding to the \ac{VN} it relates to.
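To illustrate both directions, a minimal NumPy sketch (with illustrative
names, not taken from the actual implementation) could construct the transfer
matrices directly from the parity-check matrix $\boldsymbol{H}$ as one-hot
selection matrices:
\begin{verbatim}
import numpy as np

def transfer_matrices(H):
    """Build one selection matrix T_j per parity check (row of H).

    T_j @ c picks the d_j components of c involved in check j;
    T_j.T @ z_j scatters a length-d_j vector back onto a length-n
    vector, with zeros at all unrelated VN positions.
    """
    m, n = H.shape
    Ts = []
    for j in range(m):
        idx = np.flatnonzero(H[j])            # VNs participating in check j
        T_j = np.zeros((idx.size, n))
        T_j[np.arange(idx.size), idx] = 1.0   # one one-hot row per selected VN
        Ts.append(T_j)
    return Ts
\end{verbatim}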
Using this observation, the sum can be written as%
%
\begin{align*}
\sum_{j\in N_v\left( i \right) }\left( \boldsymbol{T}_j^\text{T} \left( \boldsymbol{z}_j
- \boldsymbol{u}_j \right) \right)_i
.\end{align*}
Noticing further that, for checks $j$ not adjacent to \ac{VN} $i$, the vectors
$\boldsymbol{T}_j^\text{T}\left( \boldsymbol{z}_j - \boldsymbol{u}_j \right)$
have $0$ as their $i$th component, the set of indices
the summation takes place over can be extended to $\mathcal{J}$, allowing the
expression to be rewritten as%
%
\begin{align*}
\sum_{j\in \mathcal{J}}\left( \boldsymbol{T}_j^\text{T} \left( \boldsymbol{z}_j
- \boldsymbol{u}_j \right) \right)_i
= \left( \sum_{j\in\mathcal{J}} \boldsymbol{T}_j^\text{T}
\left( \boldsymbol{z}_j - \boldsymbol{u}_j \right) \right)_i
.\end{align*}
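In the implementation, this identity is convenient because the entire inner
sum can then be evaluated with a single matrix-vector product. A possible
NumPy sketch (continuing the illustrative names from the sketch above) stacks
the transfer matrices once and reuses the stacked form in every iteration:
\begin{verbatim}
# Stack all T_j vertically into one ((sum_j d_j) x n) matrix, once per code.
T_stack = np.vstack(Ts)

# z_stack and u_stack hold the z_j and u_j concatenated in the same order,
# so the sum over all j collapses into a single matrix-vector product.
M = T_stack.T @ (z_stack - u_stack)
\end{verbatim}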
%
Defining%
%
\begin{align*}
\boldsymbol{D} := \begin{bmatrix}
\left| N_v\left( 1 \right) \right| \\
\vdots \\
\left| N_v\left( n \right) \right|
\end{bmatrix}%
\hspace{5mm}%
\text{and}%
\hspace{5mm}%
\boldsymbol{M} := \sum_{j\in\mathcal{J}} \boldsymbol{T}_j^\text{T}
\left( \boldsymbol{z}_j - \boldsymbol{u}_j \right)
\end{align*}%
%
the $\tilde{\boldsymbol{c}}$ update can then be rewritten as%
%
\begin{align*}
\tilde{\boldsymbol{c}} \leftarrow \boldsymbol{D}^{\circ -1} \circ
\left( \boldsymbol{M} - \frac{1}{\mu}\boldsymbol{\gamma} \right)
.\end{align*}
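In code, this update reduces to element-wise operations on length-$n$ vectors.
A minimal NumPy sketch (illustrative names again; any projection step
performed elsewhere in the algorithm is omitted) could read:
\begin{verbatim}
# Per-VN degrees |N_v(i)|: the number of checks each VN participates in.
D = H.sum(axis=0).astype(float)

# Element-wise form of the c-tilde update derived above.
c_tilde = (M - gamma / mu) / D
\end{verbatim}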
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Results}%