Minor changes to text. Moved codes into captions for ADMM

This commit is contained in:
Andreas Tsouchlos 2023-04-23 12:21:52 +02:00
parent 7fa0ee80d3
commit 3ba87d5558
2 changed files with 339 additions and 334 deletions


@ -690,7 +690,6 @@ This can also be understood by interpreting the decoding process as a message-pa
algorithm \cite[Sec. III. D.]{original_admm}, \cite[Sec. II. B.]{efficient_lp_dec_admm},
depicted in algorithm \ref{alg:admm}.
\todo{How are the variables being initialized?}
\todo{Overrelaxation}
\begin{genericAlgorithm}[caption={\ac{LP} decoding using \ac{ADMM} interpreted
as a message passing algorithm\protect\footnotemark{}}, label={alg:admm},
@ -727,6 +726,15 @@ a check-node update step (lines $3$-$6$) and the $\tilde{c}_i$-updates can be un
a variable-node update step (lines $7$-$9$ of algorithm \ref{alg:admm}).
The updates for each variable- and check-node can be performed in parallel.
A technique called \textit{over-relaxation} can be employed to further improve
convergence, introducing the over-relaxation parameter $\rho$.
This consists of computing the term
$\rho \boldsymbol{T}_j \tilde{\boldsymbol{c}} + \left( 1 - \rho \right)\boldsymbol{z}_j$
before the $\boldsymbol{z}_j$ and $\boldsymbol{u}_j$ update steps (lines 4 and
5 of algorithm \ref{alg:admm}) and
subsequently replacing $\boldsymbol{T}_j \tilde{\boldsymbol{c}}$ with the
computed value in the two updates \cite[Sec. 3.4.3]{distr_opt_book}.
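As an illustration, a single over-relaxed check-node update might be sketched
as follows (a Python sketch of the scaled-dual form; the identifiers
\texttt{T} and \texttt{project} are placeholders for $\boldsymbol{T}_j$ and
$\Pi_{\mathcal{P}_{d_j}}\left( \cdot \right)$, not names from the actual
implementation):
\begin{lstlisting}[language=Python]
import numpy as np

def check_node_update(T, c_tilde, z, u, rho, project):
    """One z-/u-update for check node j with over-relaxation."""
    v = T @ c_tilde                # message from the adjacent variable nodes
    v = rho * v + (1.0 - rho) * z  # over-relaxation step
    z_new = project(v + u)         # z-update: projection onto the check polytope
    u_new = u + v - z_new          # u-update: accumulate the constraint violation
    return z_new, u_new
\end{lstlisting}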
The main computational effort in solving the linear program then amounts to
computing the projection operation $\Pi_{\mathcal{P}_{d_j}} \left( \cdot \right) $
onto each check polytope. Various methods to perform this projection
@ -850,14 +858,18 @@ complexity of the algorithm are studied.
\subsection{Choice of Parameters}
The first two parameters to be investigated are the penalty parameter $\mu$
and the over-relaxation parameter $\rho$. \todo{Are these their actual names?}
and the over-relaxation parameter $\rho$.
A first indication of suitable values for these parameters can be obtained
by examining how the decoding performance depends on them.
The \ac{FER} is plotted as a function of $\mu$ and $\rho$ in figure
\ref{fig:admm:mu_rho}, for three different \acp{SNR}.
When varying $\mu$, $\rho$ is set to a constant value of 1 and when varying
The code chosen for this examination is a (3,6) regular \ac{LDPC} code with
$n=204$ and $k=102$ \cite[\text{204.33.484}]{mackay_enc}.
When varying $\mu$, $\rho$ is set to 1 and when varying
$\rho$, $\mu$ is set to 5.
$K$ is set to $200$, and $\epsilon_\text{pri}$ and $\epsilon_\text{dual}$ to
$10^{-5}$.
The behavior that can be observed is very similar to that of the
parameter $\gamma$ in proximal decoding, analyzed in section
\ref{sec:prox:Analysis and Simulation Results}.
@ -942,7 +954,8 @@ approximately equally good.
\end{tikzpicture}
\end{subfigure}
\caption{Dependence of the decoding performance on the parameters $\mu$ and $\rho$.}
\caption{Dependence of the decoding performance on the parameters $\mu$ and $\rho$.
(3,6) regular \ac{LDPC} code with $n=204, k=102$ \cite[\text{204.33.484}]{mackay_enc}}
\label{fig:admm:mu_rho}
\end{figure}%
@ -954,10 +967,11 @@ Figure \ref{fig:admm:mu_rho_iterations} shows the average number of iterations
over $\SI{1000}{}$ decodings, as a function of $\rho$.
This time the \ac{SNR} is kept constant at $\SI{4}{dB}$ and the parameter
$\mu$ is varied.
The values chosen for the rest of the parameters are the same as before.
It can be seen that choosing a large value for $\rho$ together with a small value
for $\mu$ minimizes the average number of iterations and thus the average
runtime of the decoding process.
run time of the decoding process.
%
\begin{figure}[h]
\centering
@ -988,336 +1002,14 @@ runtime of the decoding process.
\end{tikzpicture}
\caption{Dependence of the average number of iterations required on $\mu$ and $\rho$
for $E_b / N_0 = \SI{4}{dB}$.}
for $E_b / N_0 = \SI{4}{dB}$. (3,6) regular \ac{LDPC} code with $n=204, k=102$
\cite[\text{204.33.484}]{mackay_enc}}
\label{fig:admm:mu_rho_iterations}
\end{figure}%
To get an estimate for the parameter $K$, the average error during decoding
can be used.
This is shown in figure \ref{fig:admm:avg_error} as an average of
$\SI{100000}{}$ decodings.
Similarly to the results in section
\ref{sec:prox:Analysis and Simulation Results}, a dip is visible around the
$20$ iteration mark.
This is because, as the number of iterations increases, more and more
decodings converge, leaving only the erroneous ones to be averaged.
The point at which the erroneous decodings become dominant and the
decoding performance no longer improves is largely independent of the
\ac{SNR}, allowing the value of $K$ to be chosen without considering it.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
width=0.6\textwidth,
height=0.45\textwidth,
xlabel={Iteration}, ylabel={Average $\left\Vert \hat{\boldsymbol{c}}
- \boldsymbol{c} \right\Vert$}
]
\addplot[ForestGreen, line width=1pt]
table [col sep=comma, x=k, y=err,
discard if not={SNR}{1.0},
discard if gt={k}{100}]
{res/admm/avg_error_20433484.csv};
\addlegendentry{$E_b / N_0 = \SI{1}{dB}$}
\addplot[RedOrange, line width=1pt]
table [col sep=comma, x=k, y=err,
discard if not={SNR}{2.0},
discard if gt={k}{100}]
{res/admm/avg_error_20433484.csv};
\addlegendentry{$E_b / N_0 = \SI{2}{dB}$}
\addplot[NavyBlue, line width=1pt]
table [col sep=comma, x=k, y=err,
discard if not={SNR}{3.0},
discard if gt={k}{100}]
{res/admm/avg_error_20433484.csv};
\addlegendentry{$E_b / N_0 = \SI{3}{dB}$}
\addplot[RoyalPurple, line width=1pt]
table [col sep=comma, x=k, y=err,
discard if not={SNR}{4.0},
discard if gt={k}{100}]
{res/admm/avg_error_20433484.csv};
\addlegendentry{$E_b / N_0 = \SI{4}{dB}$}
\end{axis}
\end{tikzpicture}
\caption{Average error for $\SI{100000}{}$ decodings\protect\footnotemark{}}
\label{fig:admm:avg_error}
\end{figure}%
%
\footnotetext{(3,6) regular \ac{LDPC} code with $n = 204$, $k = 102$
\cite[\text{204.33.484}]{mackay_enc}; $K=200, \rho=1, \epsilon_\text{pri} = 10^{-5},
\epsilon_\text{dual} = 10^{-5}$
}%
The same behavior can be observed when looking at a number of different codes,
as shown in figure \ref{fig:admm:mu_rho_multiple}.
%
The last two parameters remaining to be examined are the tolerances for the
stopping criterion of the algorithm, $\epsilon_\text{pri}$ and
$\epsilon_\text{dual}$.
These are both set to the same value $\epsilon$.
The effect of their value on the decoding performance is visualized in figure
\ref{fig:admm:epsilon} for a (3,6) regular \ac{LDPC} code with $n=204, k=102$
\cite[\text{204.33.484}]{mackay_enc}.
All parameters except $\epsilon_\text{pri}$ and $\epsilon_\text{dual}$ are
kept constant, with $K=200$, $\mu=5$, $\rho=1$ and $E_b / N_0 = \SI{4}{dB}$.
Lowering the tolerance initially leads to a dramatic decrease in the
\ac{FER}; this effect fades as the tolerance is decreased further.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\epsilon$}, ylabel={\acs{FER}},
ymode=log,
xmode=log,
x dir=reverse,
width=0.6\textwidth,
height=0.45\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=epsilon, y=FER,
discard if not={SNR}{3.0},]
{res/admm/fer_epsilon_20433484.csv};
\end{axis}
\end{tikzpicture}
\caption{Effect of the value of the parameters $\epsilon_\text{pri}$ and
$\epsilon_\text{dual}$ on the \acs{FER}}
\label{fig:admm:epsilon}
\end{figure}%
In conclusion, the parameters $\mu$ and $\rho$ should be chosen comparatively
small and large, respectively, to reduce the average run time of the decoding
process, while keeping them within a range that does not compromise the
decoding performance.
The maximum number of iterations $K$ can be chosen independently
of the \ac{SNR}.
Finally, relatively small values should be given to the parameters
$\epsilon_{\text{pri}}$ and $\epsilon_{\text{dual}}$ to achieve the lowest
possible error rate.
\subsection{Decoding Performance}
In figure \ref{fig:admm:results}, the simulation results for the ``Margulis''
\ac{LDPC} code ($n=2640$, $k=1320$) presented by Barman et al. in
\cite{original_admm} are compared to the results from the simulations
conducted in the context of this thesis.
The parameters chosen were $\mu=3.3$, $\rho=1.9$, $K=1000$,
$\epsilon_\text{pri}=10^{-5}$ and $\epsilon_\text{dual}=10^{-5}$,
the same as in \cite{original_admm};
the two \ac{FER} curves are practically identical.
Also shown is the curve resulting from \ac{BP} decoding with 1000 iterations.
The two algorithms perform relatively similarly, coming within $\SI{0.5}{dB}$
of one another.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$E_b / N_0 \left( \text{dB} \right) $}, ylabel={\acs{FER}},
ymode=log,
width=0.6\textwidth,
height=0.45\textwidth,
legend style={at={(0.5,-0.57)},anchor=south},
legend cell align={left},
]
\addplot[Turquoise, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,
discard if gt={SNR}{2.2},
]
{res/admm/fer_paper_margulis.csv};
\addlegendentry{\acs{ADMM} (Barman et al.)}
\addplot[NavyBlue, densely dashed, line width=1pt, mark=triangle]
table [col sep=comma, x=SNR, y=FER,]
{res/admm/ber_margulis264013203.csv};
\addlegendentry{\acs{ADMM} (Own results)}
\addplot[RoyalPurple, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER, discard if gt={SNR}{2.2},]
{res/generic/fer_bp_mackay_margulis.csv};
\addlegendentry{\acs{BP} (Barman et al.)}
\end{axis}
\end{tikzpicture}
\caption{Comparison of datapoints from Barman et al. with own simulation results.
``Margulis'' \ac{LDPC} code with $n = 2640$, $k = 1320$
\cite[\text{Margulis2640.1320.3}]{mackay_enc}\protect\footnotemark{}}
\label{fig:admm:results}
\end{figure}%
%
\footnotetext{$K=1000, \mu = 3.3, \rho=1.9,
\epsilon_{\text{pri}} = 10^{-5}, \epsilon_{\text{dual}} = 10^{-5}$
}%
%
In figure \ref{fig:admm:ber_fer}, the \ac{BER} and \ac{FER} for \ac{LP} decoding
using \ac{ADMM} and \ac{BP} are shown for a (3,6) regular \ac{LDPC} code with
$n=204$.
To ensure comparability, in both cases the number of iterations was set to
$K=200$.
The values of the other parameters were chosen as $\mu = 5$, $\rho = 1$,
$\epsilon_\text{pri} = 10^{-5}$ and $\epsilon_\text{dual} = 10^{-5}$.
Comparing figures \ref{fig:admm:results} and \ref{fig:admm:ber_fer} it is
apparent that the difference in decoding performance depends on the code being
considered.
More simulation results are presented in figure \ref{fig:comp:prox_admm_dec}
in section \ref{sec:comp:res}.
\begin{figure}[h]
\centering
\begin{subfigure}[c]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$E_b / N_0 \left( \text{dB} \right) $}, ylabel={\acs{BER}},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
ymax=1.5, ymin=3e-7,
]
\addplot[Turquoise, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=BER,
discard if not={mu}{5.0},
discard if gt={SNR}{4.5}]
{res/admm/ber_2d_20433484.csv};
\addplot[RoyalPurple, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=BER,
discard if gt={SNR}{4.5}]
{res/generic/bp_20433484.csv};
\end{axis}
\end{tikzpicture}
\end{subfigure}%
\hfill%
\begin{subfigure}[c]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$E_b / N_0 \left( \text{dB} \right) $}, ylabel={\acs{FER}},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
ymax=1.5, ymin=3e-7,
]
\addplot[Turquoise, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,
discard if not={mu}{5.0},
discard if gt={SNR}{4.5}]
{res/admm/ber_2d_20433484.csv};
\addplot[RoyalPurple, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,
discard if gt={SNR}{4.5}]
{res/generic/bp_20433484.csv};
\end{axis}
\end{tikzpicture}
\end{subfigure}%
\begin{subfigure}[t]{\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[hide axis,
xmin=10, xmax=50,
ymin=0, ymax=0.4,
legend columns=3,
legend style={draw=white!15!black,legend cell align=left}]
\addlegendimage{Turquoise, line width=1pt, mark=*}
\addlegendentry{\acs{LP} decoding using \acs{ADMM}}
\addlegendimage{RoyalPurple, line width=1pt, mark=*}
\addlegendentry{BP (20 iterations)}
\end{axis}
\end{tikzpicture}
\end{subfigure}
\caption{Comparison of the decoding performance of \acs{LP} decoding using
\acs{ADMM} and \acs{BP}. (3,6) regular \ac{LDPC} code with $n = 204$, $k = 102$
\cite[\text{204.33.484}]{mackay_enc}}
\label{fig:admm:ber_fer}
\end{figure}%
In summary, the decoding performance of \ac{LP} decoding using \ac{ADMM} comes
close to that of \ac{BP}, the difference remaining within approximately
$\SI{0.5}{dB}$, depending on the code in question.
\subsection{Computational Performance}
\label{subsec:admm:comp_perf}
In terms of time complexity, the three steps of the decoding algorithm
in equations (\ref{eq:admm:c_update}) - (\ref{eq:admm:u_update}) have to be
considered.
The $\tilde{\boldsymbol{c}}$- and $\boldsymbol{u}_j$-update steps are
$\mathcal{O}\left( n \right)$ \cite[Sec. III. C.]{original_admm}.
The complexity of the $\boldsymbol{z}_j$-update step depends on the projection
algorithm employed.
Since the implementation developed for this work uses the projection algorithm
presented in \cite{original_admm}, the $\boldsymbol{z}_j$-update step
also has linear time complexity.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[grid=both,
xlabel={$n$}, ylabel={Time per frame (s)},
width=0.6\textwidth,
height=0.45\textwidth,
legend style={at={(0.5,-0.42)},anchor=south},
legend cell align={left},]
\addplot[NavyBlue, only marks, mark=triangle*]
table [col sep=comma, x=n, y=spf]
{res/admm/fps_vs_n.csv};
\end{axis}
\end{tikzpicture}
\caption{Timing requirements of the \ac{LP} decoding using \ac{ADMM} implementation}
\label{fig:admm:time}
\end{figure}%
Simulation results from a range of different codes can be used to verify this
analysis.
Figure \ref{fig:admm:time} shows the average time needed to decode one
frame as a function of its length.
\todo{List codes used}
The results are necessarily skewed because the codes considered vary not only
in their length, but also in their construction scheme and rate.
Additionally, different optimization opportunities arise depending on the
length of a code, since for smaller codes dynamic memory allocation can be
completely omitted.
This may explain why the datapoint at $n=504$ is higher than would be expected
from linear behavior.
Nonetheless, the simulation results roughly match the expected behavior
following from the theoretical considerations.
\textbf{Game Plan}
\begin{itemize}
\item Choice of Parameters (Take decomposition paper as guide)
\begin{itemize}
\item epsilon pri / epsilon dual
\end{itemize}
\end{itemize}
\begin{figure}[h]
\centering
@ -1513,5 +1205,318 @@ following from the theoretical considerations.
\end{subfigure}
\caption{Dependence of the \ac{BER} on the value of the parameter $\gamma$ for various codes}
\label{fig:prox:results_3d_multiple}
\label{fig:admm:mu_rho_multiple}
\end{figure}
To get an estimate for the parameter $K$, the average error during decoding
can be used.
This is shown in figure \ref{fig:admm:avg_error} as an average of
$\SI{100000}{}$ decodings.
$\mu$ is set to $5$, $\rho$ to $1$, and the rest of the parameters are again
chosen as $K=200$, $\epsilon_\text{pri}=10^{-5}$ and $\epsilon_\text{dual}=10^{-5}$.
Similarly to the results in section
\ref{sec:prox:Analysis and Simulation Results}, a dip is visible around the
$20$ iteration mark.
This is because, as the number of iterations increases, more and more
decodings converge, leaving only the erroneous ones to be averaged.
The point at which the erroneous decodings become dominant and the
decoding performance no longer improves is largely independent of the
\ac{SNR}, allowing the value of $K$ to be chosen without considering it.
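Such an average-error curve can be accumulated during simulation along the
following lines (a minimal Python sketch; \texttt{decode\_trace} is an assumed
helper returning the iterate $\hat{\boldsymbol{c}}^{(k)}$ for every iteration,
not part of the actual implementation):
\begin{lstlisting}[language=Python]
import numpy as np

def average_error_curve(decode_trace, frames, codewords, K):
    # Mean of ||c_hat - c|| per iteration, averaged over all decodings.
    total = np.zeros(K)
    for y, c in zip(frames, codewords):
        for k, c_hat in enumerate(decode_trace(y, K)):
            total[k] += np.linalg.norm(c_hat - c)
    return total / len(frames)
\end{lstlisting}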
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
width=0.6\textwidth,
height=0.45\textwidth,
xlabel={Iteration}, ylabel={Average $\left\Vert \hat{\boldsymbol{c}}
- \boldsymbol{c} \right\Vert$}
]
\addplot[ForestGreen, line width=1pt]
table [col sep=comma, x=k, y=err,
discard if not={SNR}{1.0},
discard if gt={k}{100}]
{res/admm/avg_error_20433484.csv};
\addlegendentry{$E_b / N_0 = \SI{1}{dB}$}
\addplot[RedOrange, line width=1pt]
table [col sep=comma, x=k, y=err,
discard if not={SNR}{2.0},
discard if gt={k}{100}]
{res/admm/avg_error_20433484.csv};
\addlegendentry{$E_b / N_0 = \SI{2}{dB}$}
\addplot[NavyBlue, line width=1pt]
table [col sep=comma, x=k, y=err,
discard if not={SNR}{3.0},
discard if gt={k}{100}]
{res/admm/avg_error_20433484.csv};
\addlegendentry{$E_b / N_0 = \SI{3}{dB}$}
\addplot[RoyalPurple, line width=1pt]
table [col sep=comma, x=k, y=err,
discard if not={SNR}{4.0},
discard if gt={k}{100}]
{res/admm/avg_error_20433484.csv};
\addlegendentry{$E_b / N_0 = \SI{4}{dB}$}
\end{axis}
\end{tikzpicture}
\caption{Average error for $\SI{100000}{}$ decodings. (3,6)
regular \ac{LDPC} code with $n=204, k=102$ \cite[\text{204.33.484}]{mackay_enc}}
\label{fig:admm:avg_error}
\end{figure}%
The last two parameters remaining to be examined are the tolerances for the
stopping criterion of the algorithm, $\epsilon_\text{pri}$ and
$\epsilon_\text{dual}$.
These are both set to the same value $\epsilon$.
The effect of their value on the decoding performance is visualized in figure
\ref{fig:admm:epsilon}.
All parameters except $\epsilon_\text{pri}$ and $\epsilon_\text{dual}$ are
kept constant, with $K=200$, $\mu=5$, $\rho=1$ and $E_b / N_0 = \SI{4}{dB}$.
Lowering the tolerance initially leads to a dramatic decrease in the
\ac{FER}; this effect fades as the tolerance is decreased further.
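The role of the two tolerances can be made concrete by sketching the usual
\ac{ADMM} stopping test based on the primal and dual residuals
\cite[Sec. 3.3]{distr_opt_book} (a Python sketch; the stacked transfer matrix
\texttt{T\_all} and the remaining names are assumptions, not taken from the
actual implementation):
\begin{lstlisting}[language=Python]
import numpy as np

def should_stop(T_all, c_tilde, z_new, z_old, mu, eps_pri, eps_dual):
    r = T_all @ c_tilde - z_new          # primal residual: violation of T_j c = z_j
    s = mu * T_all.T @ (z_new - z_old)   # dual residual, scaled by the penalty mu
    return (np.linalg.norm(r) <= eps_pri) and (np.linalg.norm(s) <= eps_dual)
\end{lstlisting}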
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\epsilon$}, ylabel={\acs{FER}},
ymode=log,
xmode=log,
x dir=reverse,
width=0.6\textwidth,
height=0.45\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=epsilon, y=FER,
discard if not={SNR}{3.0},]
{res/admm/fer_epsilon_20433484.csv};
\end{axis}
\end{tikzpicture}
\caption{Effect of the value of the parameters $\epsilon_\text{pri}$ and
$\epsilon_\text{dual}$ on the \acs{FER}. (3,6) regular \ac{LDPC} code with
$n=204, k=102$ \cite[\text{204.33.484}]{mackay_enc}}
\label{fig:admm:epsilon}
\end{figure}%
In conclusion, the parameters $\mu$ and $\rho$ should be chosen comparatively
small and large, respectively, to reduce the average run time of the decoding
process, while keeping them within a range that does not compromise the
decoding performance.
The maximum number of iterations $K$ can be chosen independently
of the \ac{SNR}.
Finally, relatively small values should be given to the parameters
$\epsilon_{\text{pri}}$ and $\epsilon_{\text{dual}}$ to achieve the lowest
possible error rate.
\subsection{Decoding Performance}
In figure \ref{fig:admm:results}, the simulation results for the ``Margulis''
\ac{LDPC} code ($n=2640$, $k=1320$) presented by Barman et al. in
\cite{original_admm} are compared to the results from the simulations
conducted in the context of this thesis.
The parameters chosen were $\mu=3.3$, $\rho=1.9$, $K=1000$,
$\epsilon_\text{pri}=10^{-5}$ and $\epsilon_\text{dual}=10^{-5}$,
the same as in \cite{original_admm};
the two \ac{FER} curves are practically identical.
Also shown is the curve resulting from \ac{BP} decoding with 1000 iterations.
The two algorithms perform relatively similarly, coming within $\SI{0.5}{dB}$
of one another.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$E_b / N_0 \left( \text{dB} \right) $}, ylabel={\acs{FER}},
ymode=log,
width=0.6\textwidth,
height=0.45\textwidth,
legend style={at={(0.5,-0.57)},anchor=south},
legend cell align={left},
]
\addplot[Turquoise, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,
discard if gt={SNR}{2.2},
]
{res/admm/fer_paper_margulis.csv};
\addlegendentry{\acs{ADMM} (Barman et al.)}
\addplot[NavyBlue, densely dashed, line width=1pt, mark=triangle]
table [col sep=comma, x=SNR, y=FER,]
{res/admm/ber_margulis264013203.csv};
\addlegendentry{\acs{ADMM} (Own results)}
\addplot[RoyalPurple, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER, discard if gt={SNR}{2.2},]
{res/generic/fer_bp_mackay_margulis.csv};
\addlegendentry{\acs{BP} (Barman et al.)}
\end{axis}
\end{tikzpicture}
\caption{Comparison of datapoints from Barman et al. with own simulation results.
``Margulis'' \ac{LDPC} code with $n = 2640$, $k = 1320$
\cite[\text{Margulis2640.1320.3}]{mackay_enc}\protect\footnotemark{}}
\label{fig:admm:results}
\end{figure}%
%
\footnotetext{$K=1000, \mu = 3.3, \rho=1.9,
\epsilon_{\text{pri}} = 10^{-5}, \epsilon_{\text{dual}} = 10^{-5}$
}%
%
In figure \ref{fig:admm:ber_fer}, the \ac{BER} and \ac{FER} for \ac{LP} decoding
using \ac{ADMM} and \ac{BP} are shown for a (3,6) regular \ac{LDPC} code with
$n=204$.
To ensure comparability, in both cases the number of iterations was set to
$K=200$.
The values of the other parameters were chosen as $\mu = 5$, $\rho = 1$,
$\epsilon_\text{pri} = 10^{-5}$ and $\epsilon_\text{dual} = 10^{-5}$.
Comparing figures \ref{fig:admm:results} and \ref{fig:admm:ber_fer} it is
apparent that the difference in decoding performance depends on the code being
considered.
More simulation results are presented in figure \ref{fig:comp:prox_admm_dec}
in section \ref{sec:comp:res}.
\begin{figure}[h]
\centering
\begin{subfigure}[c]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$E_b / N_0 \left( \text{dB} \right) $}, ylabel={\acs{BER}},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
ymax=1.5, ymin=3e-7,
]
\addplot[Turquoise, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=BER,
discard if not={mu}{5.0},
discard if gt={SNR}{4.5}]
{res/admm/ber_2d_20433484.csv};
\addplot[RoyalPurple, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=BER,
discard if gt={SNR}{4.5}]
{res/generic/bp_20433484.csv};
\end{axis}
\end{tikzpicture}
\end{subfigure}%
\hfill%
\begin{subfigure}[c]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$E_b / N_0 \left( \text{dB} \right) $}, ylabel={\acs{FER}},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
ymax=1.5, ymin=3e-7,
]
\addplot[Turquoise, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,
discard if not={mu}{5.0},
discard if gt={SNR}{4.5}]
{res/admm/ber_2d_20433484.csv};
\addplot[RoyalPurple, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,
discard if gt={SNR}{4.5}]
{res/generic/bp_20433484.csv};
\end{axis}
\end{tikzpicture}
\end{subfigure}%
\begin{subfigure}[t]{\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[hide axis,
xmin=10, xmax=50,
ymin=0, ymax=0.4,
legend columns=3,
legend style={draw=white!15!black,legend cell align=left}]
\addlegendimage{Turquoise, line width=1pt, mark=*}
\addlegendentry{\acs{LP} decoding using \acs{ADMM}}
\addlegendimage{RoyalPurple, line width=1pt, mark=*}
\addlegendentry{BP (200 iterations)}
\end{axis}
\end{tikzpicture}
\end{subfigure}
\caption{Comparison of the decoding performance of \acs{LP} decoding using
\acs{ADMM} and \acs{BP}. (3,6) regular \ac{LDPC} code with $n = 204$, $k = 102$
\cite[\text{204.33.484}]{mackay_enc}}
\label{fig:admm:ber_fer}
\end{figure}%
In summary, the decoding performance of \ac{LP} decoding using \ac{ADMM} comes
close to that of \ac{BP}, the difference remaining within approximately
$\SI{0.5}{dB}$, depending on the code in question.
\subsection{Computational Performance}
\label{subsec:admm:comp_perf}
In terms of time complexity, the three steps of the decoding algorithm
in equations (\ref{eq:admm:c_update}) - (\ref{eq:admm:u_update}) have to be
considered.
The $\tilde{\boldsymbol{c}}$- and $\boldsymbol{u}_j$-update steps are
$\mathcal{O}\left( n \right)$ \cite[Sec. III. C.]{original_admm}.
The complexity of the $\boldsymbol{z}_j$-update step depends on the projection
algorithm employed.
Since the implementation developed for this work uses the projection algorithm
presented in \cite{original_admm}, the $\boldsymbol{z}_j$-update step
also has linear time complexity.
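The expected linear scaling can be checked empirically by measuring the
average decoding time per frame, for example along the following lines (a
minimal sketch; \texttt{decode} stands for the decoder under test, and its
interface is an assumption):
\begin{lstlisting}[language=Python]
import time

def seconds_per_frame(decode, frames):
    # Average wall-clock time needed to decode one frame.
    start = time.perf_counter()
    for y in frames:
        decode(y)
    return (time.perf_counter() - start) / len(frames)
\end{lstlisting}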
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[grid=both,
xlabel={$n$}, ylabel={Time per frame (s)},
width=0.6\textwidth,
height=0.45\textwidth,
legend style={at={(0.5,-0.42)},anchor=south},
legend cell align={left},]
\addplot[NavyBlue, only marks, mark=triangle*]
table [col sep=comma, x=n, y=spf]
{res/admm/fps_vs_n.csv};
\end{axis}
\end{tikzpicture}
\caption{Timing requirements of the \ac{LP} decoding using \ac{ADMM} implementation}
\label{fig:admm:time}
\end{figure}%
Simulation results from a range of different codes can be used to verify this
analysis.
Figure \ref{fig:admm:time} shows the average time needed to decode one
frame as a function of its length.
\todo{List codes used}
The results are necessarily skewed because the codes considered vary not only
in their length, but also in their construction scheme and rate.
Additionally, different optimization opportunities arise depending on the
length of a code, since for smaller codes dynamic memory allocation can be
completely omitted.
This may explain why the datapoint at $n=504$ is higher than would be expected
from linear behavior.
Nonetheless, the simulation results roughly match the expected behavior
following from the theoretical considerations.


@ -401,7 +401,7 @@ while the newly generated ones are shown with dashed lines.
table [x=SNR, y=BER, col sep=comma,
discard if gt={SNR}{3.5}]
{res/generic/bp_20433484.csv};
\addlegendentry{BP (20 iterations)}
\addlegendentry{BP (200 iterations)}
\end{axis}
\end{tikzpicture}