Fixed mistake in implementation details; |N_v(i)| -> d_i

This commit is contained in:
Andreas Tsouchlos 2023-04-15 18:56:55 +02:00
parent ff0e2beea0
commit bcf05d26af


@@ -134,8 +134,8 @@ exemplary code, which is described by the generator and parity-check matrices%
 and has only two possible codewords:
 %
 \begin{align*}
-\mathcal{C} = \left\{ \begin{bmatrix} 0 & 0 & 0 \end{bmatrix},
-\begin{bmatrix} 0 & 1 & 1 \end{bmatrix} \right\}
+\mathcal{C} = \left\{ \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},
+\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \right\}
 .\end{align*}
 %
 Figure \ref{fig:lp:poly:exact_ilp} shows the domain of exact \ac{ML} decoding.
@@ -631,7 +631,7 @@ The same is true for $\left( \boldsymbol{\lambda}_j \right)_i$.}
 \cite[Sec. III. B.]{original_admm}:%
 %
 \begin{alignat*}{3}
-\tilde{c}_i &\leftarrow \frac{1}{\left| N_v\left( i \right) \right|} \left(
+\tilde{c}_i &\leftarrow \frac{1}{d_i} \left(
 \sum_{j\in N_v\left( i \right) } \Big( \left( \boldsymbol{z}_j \right)_i
 - \frac{1}{\mu} \left( \boldsymbol{\lambda}_j \right)_i \Big)
 - \frac{\gamma_i}{\mu} \right)
@@ -652,7 +652,7 @@ This representation can be slightly simplified by substituting
 $\boldsymbol{\lambda}_j = \mu \cdot \boldsymbol{u}_j \,\forall\,j\in\mathcal{J}$:%
 %
 \begin{alignat*}{3}
-\tilde{c}_i &\leftarrow \frac{1}{\left| N_v\left( i \right) \right|} \left(
+\tilde{c}_i &\leftarrow \frac{1}{d_i} \left(
 \sum_{j\in N_v\left( i \right) } \Big( \left( \boldsymbol{z}_j \right)_i
 - \left( \boldsymbol{u}_j \right)_i \Big)
 - \frac{\gamma_i}{\mu} \right)
@@ -692,7 +692,7 @@ while $\sum_{j\in\mathcal{J}} \lVert \boldsymbol{T}_j\tilde{\boldsymbol{c}} - \b
 - \boldsymbol{z}_j$
 end for
 for $i$ in $\mathcal{I}$ do
-$\tilde{c}_i \leftarrow \frac{1}{\left| N_v\left( i \right) \right|} \left(
+$\tilde{c}_i \leftarrow \frac{1}{d_i} \left(
 \sum_{j\in N_v\left( i \right) } \Big(
 \left( \boldsymbol{z}_j \right)_i - \left( \boldsymbol{u}_j
 \right)_i
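As a sanity check of the corrected component-wise update $\tilde{c}_i \leftarrow \frac{1}{d_i} ( \sum_{j\in N_v(i)} ( (\boldsymbol{z}_j)_i - (\boldsymbol{u}_j)_i ) - \frac{\gamma_i}{\mu} )$ in this hunk, a minimal NumPy sketch follows. The check membership lists, $\mu$, $\boldsymbol{\gamma}$, and the $\boldsymbol{z}_j$, $\boldsymbol{u}_j$ values are illustrative assumptions, not taken from the thesis code.

```python
import numpy as np

# Illustrative Tanner-graph structure (assumption, not from the thesis):
# checks[j] lists the variable nodes participating in check j.
checks = [[0, 1], [1, 2]]
n = 3
mu = 3.0                             # ADMM penalty parameter (illustrative)
gamma = np.array([1.2, -0.7, 0.4])   # LLR vector (illustrative)

# Auxiliary variables z_j and scaled dual variables u_j, one per check
z = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]
u = [np.zeros(2), np.zeros(2)]

# Variable-node degrees d_i = |N_v(i)|
d = np.zeros(n)
for members in checks:
    for i in members:
        d[i] += 1

# c_i <- (1/d_i) * ( sum_{j in N_v(i)} ((z_j)_i - (u_j)_i) - gamma_i / mu )
c = np.zeros(n)
for i in range(n):
    acc = sum(z[j][m.index(i)] - u[j][m.index(i)]
              for j, m in enumerate(checks) if i in m)
    c[i] = (acc - gamma[i] / mu) / d[i]
```

The inner `m.index(i)` maps the global variable index $i$ to its position within check $j$, which is what the selection matrices $\boldsymbol{T}_j$ do implicitly in the derivation.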
@@ -724,7 +724,7 @@ The method chosen here is the one presented in \cite{lautern}.
 \section{Implementation Details}%
 \label{sec:lp:Implementation Details}
-The development process used to implement this decoding algorithm is the same
+The development process used to implement this decoding algorithm was the same
 as outlined in section
 \ref{sec:prox:Implementation Details} for proximal decoding.
 At first, an initial version was implemented in Python, before repeating the
@@ -752,11 +752,10 @@ Using this observation, the sum can be written as%
 - \boldsymbol{u}_j \right) \right)_i
 .\end{align*}
 Further noticing that the vectors
-$\boldsymbol{T}_j^\text{T}\left( \boldsymbol{z}_j - \boldsymbol{u}_j \right),
-\hspace{1mm} j\in\mathcal{J} $
+$\boldsymbol{T}_j^\text{T}\left( \boldsymbol{z}_j - \boldsymbol{u}_j \right)$
 unrelated to \ac{VN} $i$ have $0$ as the $i$th component, the set of indices
 the summation takes place over can be extended to $\mathcal{J}$, allowing the
-expression to be rewritten to%
+expression to be rewritten as%
 %
 \begin{align*}
 \sum_{j\in \mathcal{J}}\left( \boldsymbol{T}_j^\text{T} \left( \boldsymbol{z}_j
@@ -771,12 +770,12 @@ Defining%
 \boldsymbol{D} := \begin{bmatrix}
 d_1 \\
 \vdots \\
-d_m
+d_n
 \end{bmatrix}%
 \hspace{5mm}%
 \text{and}%
 \hspace{5mm}%
-\boldsymbol{M} := \sum_{j\in\mathcal{J}} \boldsymbol{T}_j^\text{T}
+\boldsymbol{s} := \sum_{j\in\mathcal{J}} \boldsymbol{T}_j^\text{T}
 \left( \boldsymbol{z}_j - \boldsymbol{u}_j \right)
 \end{align*}%
 %
@@ -784,7 +783,7 @@ the $\tilde{\boldsymbol{c}}$ update can then be rewritten as%
 %
 \begin{align*}
 \tilde{\boldsymbol{c}} \leftarrow \boldsymbol{D}^{\circ -1} \circ
-\left( \boldsymbol{M} - \frac{1}{\mu}\boldsymbol{\gamma} \right)
+\left( \boldsymbol{s} - \frac{1}{\mu}\boldsymbol{\gamma} \right)
 .\end{align*}
 %
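The final vectorized update $\tilde{\boldsymbol{c}} \leftarrow \boldsymbol{D}^{\circ -1} \circ ( \boldsymbol{s} - \frac{1}{\mu}\boldsymbol{\gamma} )$ can likewise be sketched with NumPy, where the Hadamard inverse and product become element-wise division and multiplication. The $\boldsymbol{T}_j$ are built as explicit selection matrices; all concrete values are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

# Same illustrative structure as the component-wise sketch (assumptions)
checks = [[0, 1], [1, 2]]
n = 3
mu = 3.0
gamma = np.array([1.2, -0.7, 0.4])
z = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]
u = [np.zeros(2), np.zeros(2)]

# Selection matrices T_j: row k of T_j picks the k-th member of check j,
# so T_j^T zero-pads (z_j - u_j) back to length n.
T = []
for members in checks:
    Tj = np.zeros((len(members), n))
    for k, i in enumerate(members):
        Tj[k, i] = 1.0
    T.append(Tj)

# D holds the variable-node degrees d_1, ..., d_n on its diagonal
d = sum(Tj.T @ Tj for Tj in T).diagonal()

# s = sum over all j in J of T_j^T (z_j - u_j); extending the sum to all
# of J is valid because the zero-padded components contribute nothing
s = sum(Tj.T @ (zj - uj) for Tj, zj, uj in zip(T, z, u))

# c <- D^{o-1} o (s - gamma / mu): element-wise division by the degrees
c = (s - gamma / mu) / d
```

With these values the result matches the component-wise loop update term by term, which is the point of the derivation in this commit.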