Done with first version of content except conclusion etc.

Andreas Tsouchlos 2023-04-23 07:20:31 +02:00
parent 488949c0a9
commit 7fa0ee80d3
15 changed files with 729 additions and 239 deletions


@ -219,11 +219,16 @@
doi={10.1109/TIT.1962.1057683}
}
@online{lautern_channelcodes,
	author = {Helmling, Michael and Scholl, Stefan and Gensheimer, Florian and Dietz, Tobias and Kraft, Kira and Ruzika, Stefan and Wehn, Norbert},
	title = {{D}atabase of {C}hannel {C}odes and {ML} {S}imulation {R}esults},
	url = {https://www.uni-kl.de/channel-codes},
	date = {2023-04}
}
@online{mackay_enc,
author = {MacKay, David J.C.},
title = {Encyclopedia of Sparse Graph Codes},
date = {2023-04},
url = {http://www.inference.org.uk/mackay/codes/data.html}
}


@ -20,7 +20,7 @@ differences are interpreted based on their theoretical structure.
proximal operators \cite[Sec. 4.4]{proximal_algorithms}.
When using \ac{ADMM} as an optimization method to solve the \ac{LP} decoding
problem specifically, this is not quite possible because of the multiple
constraints: each parity check contributes its own constraint set, so the
update steps cannot be expressed through a single proximal operator.
In spite of that, the two algorithms still show some striking similarities.
To see the first of these similarities, the \ac{LP} decoding problem in
@ -227,7 +227,6 @@ return $\tilde{\boldsymbol{c}}$
\label{fig:comp:message_passing}
\end{figure}%
%
This message passing structure means that both algorithms can be implemented
very efficiently, as the update steps can be performed in parallel for all
\acp{CN} and for all \acp{VN}, respectively.
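The parallel structure described above can be sketched as follows. This is a minimal illustration using the standard sum-product (\ac{BP}) update rules rather than the proximal or \ac{ADMM} updates themselves; the function name and interface are hypothetical. The point is structural: within one iteration, all check-node updates are mutually independent, as are all variable-node updates, so each group could be dispatched in parallel.

```python
import numpy as np

def decode_message_passing(llr, H, num_iters=10):
    """Skeleton of a message-passing decoder (sum-product rules, shown only
    to illustrate the structure): within one iteration, all check-node (CN)
    updates are independent of one another, as are all variable-node (VN)
    updates, so each group can run in parallel."""
    m, n = H.shape
    # one message per edge (j, i) of the Tanner graph, seeded from the channel
    v2c = {(j, i): float(llr[i]) for j in range(m) for i in range(n) if H[j, i]}
    c2v = dict.fromkeys(v2c, 0.0)
    for _ in range(num_iters):
        # CN update: check j only reads the messages on its own edges
        for j in range(m):
            idx = [i for i in range(n) if H[j, i]]
            for i in idx:
                p = np.prod([np.tanh(v2c[j, k] / 2) for k in idx if k != i])
                c2v[j, i] = 2 * np.arctanh(np.clip(p, -0.999999, 0.999999))
        # VN update: variable i only reads the messages on its own edges
        for i in range(n):
            jdx = [j for j in range(m) if H[j, i]]
            for j in jdx:
                v2c[j, i] = llr[i] + sum(c2v[k, i] for k in jdx if k != j)
    # hard decision on the total LLR per variable (positive LLR -> bit 0)
    total = llr + np.array([sum(c2v[j, i] for j in range(m) if H[j, i])
                            for i in range(n)])
    return (total < 0).astype(int)
```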
@ -249,9 +248,11 @@ respect to $n$ and are heavily parallelisable.
The decoding performance of the two algorithms is shown in figure
\ref{fig:comp:prox_admm_dec} in the form of the \ac{FER}.
Shown as well is the performance of the improved proximal decoding
algorithm presented in section \ref{sec:prox:Improved Implementation}.
Additionally, the \ac{FER} resulting from decoding using \ac{BP} and,
wherever available, the \ac{ML} decoding \ac{FER} taken from
\cite{lautern_channelcodes} are plotted as a reference.
The parameters chosen for the proximal and improved proximal decoders are
$\gamma=0.05$, $\omega=0.05$, $K=200$, $\eta = 1.5$ and $N=12$.
The parameters chosen for \ac{LP} decoding using \ac{ADMM} are $\mu = 5$,
@ -259,7 +260,7 @@ $\rho = 1$, $K=200$, $\epsilon_\text{pri} = 10^{-5}$ and
$\epsilon_\text{dual} = 10^{-5}$.
For all codes considered in the scope of this work, \ac{LP} decoding using
\ac{ADMM} consistently outperforms both proximal decoding and the improved
version, reaching very similar performance to \ac{BP}.
The decoding gain heavily depends on the code, evidently becoming greater for
codes with larger $n$ and reaching values of up to $\SI{2}{dB}$.
@ -273,7 +274,7 @@ calculations performed in each case.
With proximal decoding, the calculations are approximate, leading
to the constraints never being quite satisfied.
With \ac{LP} decoding using \ac{ADMM}
the constraints are fulfilled for each parity check individually after each
iteration of the decoding process.
The timing requirements of the decoding algorithms are visualized in figure
@ -281,13 +282,14 @@ The timing requirements of the decoding algorithms are visualized in figure
The datapoints have been generated by evaluating the metadata from \ac{FER}
and \ac{BER} simulations using the parameters mentioned earlier when
discussing the decoding performance.
While the \ac{ADMM} implementation seems to be faster than the proximal
decoding and improved proximal decoding implementations, inferring some
general behavior is difficult in this case.
This is because actual implementations are being compared, making the
results dependent on factors such as the degree of optimization of each
implementation.
Nevertheless, the run time of both the proximal decoding and the \ac{LP}
decoding using \ac{ADMM} implementations is similar and both are
reasonably performant, owing to the parallelisable structure of the
algorithms.
%
@ -350,13 +352,16 @@ algorithms.
\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/hybrid/2d_ber_fer_dfr_963965.csv};
\addplot[Turquoise, line width=1pt, mark=*]
table [x=SNR, y=FER, col sep=comma, discard if not={mu}{3.0}]
%{res/hybrid/2d_ber_fer_dfr_963965.csv};
{res/admm/ber_2d_963965.csv};
\addplot[Black, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,]
{res/generic/fer_ml_9633965.csv};
\addplot [RoyalPurple, mark=*, line width=1pt]
table [x=SNR, y=FER, col sep=comma]
{res/generic/bp_963965.csv};
\end{axis}
\end{tikzpicture}
@ -383,15 +388,17 @@ algorithms.
\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/hybrid/2d_ber_fer_dfr_bch_31_26.csv};
\addplot[Turquoise, line width=1pt, mark=*]
table [x=SNR, y=FER, col sep=comma, discard if not={mu}{3.0}]
{res/admm/ber_2d_bch_31_26.csv};
\addplot[Black, line width=1pt, mark=*]
table [x=SNR, y=FER, col sep=comma,
discard if gt={SNR}{5.5},
discard if lt={SNR}{1},]
{res/generic/fer_ml_bch_31_26.csv};
\addplot [RoyalPurple, mark=*, line width=1pt]
table [x=SNR, y=FER, col sep=comma]
{res/generic/bp_bch_31_26.csv};
\end{axis}
\end{tikzpicture}
@ -419,9 +426,10 @@ algorithms.
discard if gt={SNR}{5.5}]
{res/proximal/2d_ber_fer_dfr_20433484.csv};
\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05},
discard if gt={SNR}{5.5}]
{res/hybrid/2d_ber_fer_dfr_20433484.csv};
\addplot[Turquoise, line width=1pt, mark=*]
table [x=SNR, y=FER, col sep=comma,
discard if not={mu}{3.0},
discard if gt={SNR}{5.5}]
@ -430,6 +438,9 @@ algorithms.
table [col sep=comma, x=SNR, y=FER,
discard if gt={SNR}{5.5}]
{res/generic/fer_ml_20433484.csv};
\addplot [RoyalPurple, mark=*, line width=1pt]
table [x=SNR, y=FER, col sep=comma]
{res/generic/bp_20433484.csv};
\end{axis}
\end{tikzpicture}
@ -456,9 +467,13 @@ algorithms.
\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/hybrid/2d_ber_fer_dfr_20455187.csv};
\addplot[Turquoise, line width=1pt, mark=*]
table [x=SNR, y=FER, col sep=comma, discard if not={mu}{3.0}]
{res/admm/ber_2d_20455187.csv};
\addplot [RoyalPurple, mark=*, line width=1pt,
discard if gt={SNR}{5}]
table [x=SNR, y=FER, col sep=comma]
{res/generic/bp_20455187.csv};
\end{axis}
\end{tikzpicture}
@ -487,9 +502,13 @@ algorithms.
\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/hybrid/2d_ber_fer_dfr_40833844.csv};
\addplot[Turquoise, line width=1pt, mark=*]
table [x=SNR, y=FER, col sep=comma, discard if not={mu}{3.0}]
{res/admm/ber_2d_40833844.csv};
\addplot [RoyalPurple, mark=*, line width=1pt,
discard if gt={SNR}{3}]
table [x=SNR, y=FER, col sep=comma]
{res/generic/bp_40833844.csv};
\end{axis}
\end{tikzpicture}
@ -516,9 +535,13 @@ algorithms.
\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/hybrid/2d_ber_fer_dfr_pegreg252x504.csv};
\addplot[Turquoise, line width=1pt, mark=*]
table [x=SNR, y=FER, col sep=comma, discard if not={mu}{3.0}]
{res/admm/ber_2d_pegreg252x504.csv};
\addplot [RoyalPurple, mark=*, line width=1pt]
table [x=SNR, y=FER, col sep=comma,
discard if gt={SNR}{3}]
{res/generic/bp_pegreg252x504.csv};
\end{axis}
\end{tikzpicture}
@ -544,9 +567,12 @@ algorithms.
\addlegendimage{RedOrange, line width=1pt, mark=triangle, densely dashed}
\addlegendentry{Improved proximal decoding}
\addlegendimage{Turquoise, line width=1pt, mark=*}
\addlegendentry{\acs{LP} decoding using \acs{ADMM}}
\addlegendimage{RoyalPurple, line width=1pt, mark=*, solid}
\addlegendentry{\acs{BP} (20 iterations)}
\addlegendimage{Black, line width=1pt, mark=*, solid}
\addlegendentry{\acs{ML} decoding}
\end{axis}
@ -558,4 +584,3 @@ algorithms.
\label{fig:comp:prox_admm_dec}
\end{figure}


@ -663,20 +663,20 @@ The same is true for the updates of the individual components of $\tilde{\boldsy
This representation can be slightly simplified by substituting
$\boldsymbol{\lambda}_j = \mu \cdot \boldsymbol{u}_j \,\forall\,j\in\mathcal{J}$:%
%
\begin{alignat}{3}
	\tilde{c}_i &\leftarrow \frac{1}{d_i} \left(
	\sum_{j\in N_v\left( i \right) } \Big( \left( \boldsymbol{z}_j \right)_i
	- \left( \boldsymbol{u}_j \right)_i \Big)
	- \frac{\gamma_i}{\mu} \right)
	\hspace{3mm} && \forall i\in\mathcal{I} \label{eq:admm:c_update}\\
	\boldsymbol{z}_j &\leftarrow \Pi_{\mathcal{P}_{d_j}}\left(
	\boldsymbol{T}_j\tilde{\boldsymbol{c}} + \boldsymbol{u}_j \right)
	\hspace{3mm} && \forall j\in\mathcal{J} \label{eq:admm:z_update}\\
	\boldsymbol{u}_j &\leftarrow \boldsymbol{u}_j
	+ \boldsymbol{T}_j\tilde{\boldsymbol{c}}
	- \boldsymbol{z}_j
	\hspace{3mm} && \forall j\in\mathcal{J} \label{eq:admm:u_update}
.\end{alignat}
%
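The three update steps can be sketched directly from the equations above. This is a simplified illustration, not a working \ac{LP} decoder: the $\boldsymbol{z}_j$-update should project onto the check polytope $\mathcal{P}_{d_j}$, but as a loudly-labeled stand-in the sketch projects onto the box $[0,1]^{d_j}$; the function name and interface are hypothetical.

```python
import numpy as np

def admm_lp_decode(gamma, checks, mu=5.0, num_iters=50):
    """Sketch of the three ADMM update steps for LP decoding.
    gamma:  vector of channel LLRs (length n, positive favors bit 0)
    checks: list of index arrays; checks[j] = variables of parity check j
    NOTE: the true z-update projects onto the check polytope P_dj; here a
    box projection onto [0,1]^d is used as a stand-in, so this sketch is
    NOT a full LP decoder."""
    n = len(gamma)
    d = np.zeros(n)                       # variable degrees d_i
    for idx in checks:
        d[idx] += 1
    z = [np.full(len(idx), 0.5) for idx in checks]
    u = [np.zeros(len(idx)) for idx in checks]
    c = np.zeros(n)
    for _ in range(num_iters):
        # c-update: average of (z_j - u_j) over incident checks, minus gamma/mu
        acc = np.zeros(n)
        for j, idx in enumerate(checks):
            acc[idx] += z[j] - u[j]
        c = (acc - gamma / mu) / d
        for j, idx in enumerate(checks):
            # z-update: projection (box [0,1] stand-in for Pi_{P_dj})
            z[j] = np.clip(c[idx] + u[j], 0.0, 1.0)
            # u-update: dual ascent on the consensus constraint T_j c = z_j
            u[j] = u[j] + c[idx] - z[j]
    return (c > 0.5).astype(int)
```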
The reason \ac{ADMM} is able to perform so well is due to the relocation of the constraints
@ -732,7 +732,7 @@ computing the projection operation $\Pi_{\mathcal{P}_{d_j}} \left( \cdot \right)
onto each check polytope. Various different methods to perform this projection
have been proposed (e.g., in \cite{original_admm}, \cite{efficient_lp_dec_admm},
\cite{lautern}).
The method chosen here is the one presented in \cite{original_admm}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@ -793,6 +793,7 @@ Defining%
\boldsymbol{s} := \sum_{j\in\mathcal{J}} \boldsymbol{T}_j^\text{T}
\left( \boldsymbol{z}_j - \boldsymbol{u}_j \right)
\end{align*}%
\todo{Rename $\boldsymbol{D}$}%
%
the $\tilde{\boldsymbol{c}}$ update can then be rewritten as%
%
@ -801,7 +802,6 @@ the $\tilde{\boldsymbol{c}}$ update can then be rewritten as%
\left( \boldsymbol{s} - \frac{1}{\mu}\boldsymbol{\gamma} \right)
.\end{align*}
%
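The rewritten update can be checked numerically against the componentwise form: with $\boldsymbol{D}$ taken as the diagonal matrix of variable degrees $d_i$, both expressions must agree term by term. The instance below uses small hypothetical values chosen only for the check.

```python
import numpy as np

# Numeric check that the vectorized c-update with
#   s = sum_j T_j^T (z_j - u_j)   and   D = diag(d_i)
# matches the componentwise form
#   c_i = ( sum_{j in N_v(i)} ((z_j)_i - (u_j)_i) - gamma_i / mu ) / d_i.
# Hypothetical instance: n = 4 variables, two checks.
mu = 5.0
gamma = np.array([0.4, -0.2, 0.1, 0.3])
checks = [np.array([0, 1, 2]), np.array([1, 2, 3])]
z = [np.array([0.2, 0.7, 0.1]), np.array([0.6, 0.3, 0.9])]
u = [np.array([0.0, 0.1, -0.1]), np.array([0.05, 0.0, 0.2])]

n = len(gamma)
d = np.zeros(n)
s = np.zeros(n)
for j, idx in enumerate(checks):
    d[idx] += 1                      # variable degrees (diagonal of D)
    s[idx] += z[j] - u[j]            # s = sum_j T_j^T (z_j - u_j)
c_rewritten = (s - gamma / mu) / d   # c = D^{-1} (s - gamma / mu)

# componentwise form, evaluated directly
c_direct = np.array([
    (sum((z[j] - u[j])[list(checks[j]).index(i)]
         for j in range(2) if i in checks[j]) - gamma[i] / mu) / d[i]
    for i in range(n)
])
```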
This modified version of the decoding process is depicted in algorithm \ref{alg:admm:mod}.
\begin{genericAlgorithm}[caption={\ac{LP} decoding using \ac{ADMM} algorithm with rewritten
@ -835,137 +835,37 @@ return $\tilde{\boldsymbol{c}}$
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Analysis and Simulation Results}%
\label{sec:lp:Analysis and Simulation Results}
In this section, \ac{LP} decoding using \ac{ADMM} is examined based on
simulation results for various codes.
First, the effect of the different parameters is investigated, along with
how their values should be chosen.
Subsequently, the decoding performance is observed and compared to that of
\ac{BP}.
Finally, the computational performance of the implementation and time
complexity of the algorithm are studied.
\subsection{Choice of Parameters}
The first two parameters to be investigated are the penalty parameter $\mu$
and the over-relaxation parameter $\rho$.
A first approach to get some indication of the values that might be chosen
for these parameters is to look at how the decoding performance depends
on them.
The \ac{FER} is plotted as a function of $\mu$ and $\rho$ in figure
\ref{fig:admm:mu_rho}, for three different \acp{SNR}.
When varying $\mu$, $\rho$ is set to a constant value of 1 and when varying
$\rho$, $\mu$ is set to 5.
The behavior that can be observed is very similar to that of the
parameter $\gamma$ in proximal decoding, analyzed in section
\ref{sec:prox:Analysis and Simulation Results}.
A single value giving optimal performance does not exist; rather,
as long as the value is chosen within a certain range, the performance
remains approximately the same.
\begin{figure}[h]
\centering
\begin{subfigure}[c]{0.48\textwidth}
@ -1029,6 +929,7 @@ return $\tilde{\boldsymbol{c}}$
\begin{axis}[hide axis,
xmin=10, xmax=50,
ymin=0, ymax=0.4,
legend columns=3,
legend style={draw=white!15!black,legend cell align=left}]
\addlegendimage{ForestGreen, line width=1pt, densely dashed, mark=*}
@ -1041,83 +942,72 @@ return $\tilde{\boldsymbol{c}}$
\end{tikzpicture}
\end{subfigure}
\caption{Dependence of the decoding performance on the parameters $\mu$ and $\rho$.}
\label{fig:admm:mu_rho}
\end{figure}%
%
To aid in the choice of the parameters, an additional criterion can be used:
the number of iterations performed for a decoding operation.
This is directly related to the time needed to decode a received vector
$\boldsymbol{y}$, which is to be minimized.
Figure \ref{fig:admm:mu_rho_iterations} shows the average number of iterations
over $\SI{1000}{}$ decodings, as a function of $\rho$.
This time the \ac{SNR} is kept constant at $\SI{4}{dB}$ and the parameter
$\mu$ is varied.
It is visible that choosing a large value for $\rho$ as well as a small value
for $\mu$ minimizes the average number of iterations and thus the average
runtime of the decoding process.
\begin{figure}[h]
\centering
\begin{subfigure}[c]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\mu$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=mu, y=k_avg,
discard if not={rho}{0.5000000000000001},]
{res/admm/mu_rho_kavg_20433484.csv};
\addlegendentry{$\rho = 0.5$}
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=mu, y=k_avg,
discard if not={rho}{1.1000000000000003},]
{res/admm/mu_rho_kavg_20433484.csv};
\addlegendentry{$\rho = 1.1$}
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=mu, y=k_avg,
discard if not={rho}{1.9000000000000004},]
{res/admm/mu_rho_kavg_20433484.csv};
\addlegendentry{$\rho = 1.9$}
\end{axis}
\end{tikzpicture}
\end{subfigure}%
\hfill%
\begin{subfigure}[c]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\addlegendentry{$\mu = 2$}
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\addlegendentry{$\mu = 5$}
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\addlegendentry{$\mu = 9$}
\end{axis}
\end{tikzpicture}
\end{subfigure}%
\caption{Dependence of the average number of iterations required on $\mu$ and $\rho$
for $E_b / N_0 = \SI{4}{dB}$.}
\label{fig:admm:mu_rho_iterations}
\end{figure}%
To get an estimate for the parameter $K$, the average error during decoding
can be used.
This is shown in figure \ref{fig:admm:avg_error} as an average of
$\SI{100000}{}$ decodings.
Similarly to the results in section
\ref{sec:prox:Analysis and Simulation Results}, a dip is visible around the
$20$ iteration mark.
This is due to the fact that, as the number of iterations increases,
more and more decodings converge, leaving only the erroneous ones to be
averaged.
The point at which the erroneous decodings start to dominate and the
decoding performance no longer increases is largely independent of
the \ac{SNR}, allowing the value of $K$ to be chosen without considering the
\ac{SNR}.
\begin{figure}[h]
\centering
\begin{tikzpicture}
@ -1159,7 +1049,7 @@ return $\tilde{\boldsymbol{c}}$
\end{tikzpicture}
\caption{Average error for $\SI{100000}{}$ decodings\protect\footnotemark{}}
\label{fig:admm:avg_error}
\end{figure}%
%
\footnotetext{(3,6) regular \ac{LDPC} code with $n = 204$, $k = 102$
@ -1168,4 +1058,460 @@ return $\tilde{\boldsymbol{c}}$
}%
%
The last two parameters remaining to be examined are the tolerances for the
stopping criterion of the algorithm, $\epsilon_\text{pri}$ and
$\epsilon_\text{dual}$.
Both are assigned the same value.
The effect of their value on the decoding performance is visualized in figure
\ref{fig:admm:epsilon} for a (3,6) regular \ac{LDPC} code with $n=204, k=102$
\cite[\text{204.33.484}]{mackay_enc}.
All parameters except $\epsilon_\text{pri}$ and $\epsilon_\text{dual}$ are
kept constant, with $K=200$, $\mu=5$, $\rho=1$ and $E_b / N_0 = \SI{4}{dB}$.
A lower value for the tolerance initially leads to a dramatic decrease in the
\ac{FER}, with this effect fading as the tolerance is decreased further.
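The role of the two tolerances can be sketched with the standard ADMM stopping test: decoding stops once both the primal residual $\boldsymbol{T}_j\tilde{\boldsymbol{c}} - \boldsymbol{z}_j$ and the dual residual, driven by the change in $\boldsymbol{z}_j$ between iterations, are small in norm. The exact residual scaling used in the thesis implementation is an assumption here; the function is a hypothetical helper.

```python
import numpy as np

def should_stop(c, checks, z, z_old, eps_pri=1e-5, eps_dual=1e-5):
    """Sketch of the standard ADMM stopping criterion (assumed form; the
    actual implementation may scale the residuals differently).
    Primal residual: stacked T_j*c - z_j over all checks.
    Dual residual:   driven by the change z_j - z_j_old."""
    r = np.concatenate([c[idx] - z[j] for j, idx in enumerate(checks)])
    s = np.concatenate([z[j] - z_old[j] for j in range(len(checks))])
    return np.linalg.norm(r) < eps_pri and np.linalg.norm(s) < eps_dual
```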
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\epsilon$}, ylabel={\acs{FER}},
ymode=log,
xmode=log,
x dir=reverse,
width=0.6\textwidth,
height=0.45\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=epsilon, y=FER,
discard if not={SNR}{3.0},]
{res/admm/fer_epsilon_20433484.csv};
\end{axis}
\end{tikzpicture}
\caption{Effect of the value of the parameters $\epsilon_\text{pri}$ and
$\epsilon_\text{dual}$ on the \acs{FER}}
\label{fig:admm:epsilon}
\end{figure}%
In conclusion, the parameters $\mu$ and $\rho$ should be chosen comparatively
small and large, respectively, to reduce the average runtime of the decoding
process, while keeping them within a certain range so as not to compromise the
decoding performance.
The maximum number of iterations $K$ performed can be chosen independently
of the \ac{SNR}.
Finally, relatively small values should be given to the parameters
$\epsilon_{\text{pri}}$ and $\epsilon_{\text{dual}}$ to achieve the lowest
possible error rate.
\subsection{Decoding Performance}
In figure \ref{fig:admm:results}, the simulation results for the ``Margulis''
\ac{LDPC} code ($n=2640$, $k=1320$) presented by Barman et al. in
\cite{original_admm} are compared to the results from the simulations
conducted in the context of this thesis.
The parameters chosen were $\mu=3.3$, $\rho=1.9$, $K=1000$,
$\epsilon_\text{pri}=10^{-5}$ and $\epsilon_\text{dual}=10^{-5}$,
the same as in \cite{original_admm};
the two \ac{FER} curves are practically identical.
Also shown is the curve resulting from \ac{BP} decoding, performing
1000 iterations.
The two algorithms perform relatively similarly, coming within $\SI{0.5}{dB}$
of one another.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$E_b / N_0 \left( \text{dB} \right) $}, ylabel={\acs{FER}},
ymode=log,
width=0.6\textwidth,
height=0.45\textwidth,
legend style={at={(0.5,-0.57)},anchor=south},
legend cell align={left},
]
\addplot[Turquoise, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,
discard if gt={SNR}{2.2},
]
{res/admm/fer_paper_margulis.csv};
\addlegendentry{\acs{ADMM} (Barman et al.)}
\addplot[NavyBlue, densely dashed, line width=1pt, mark=triangle]
table [col sep=comma, x=SNR, y=FER,]
{res/admm/ber_margulis264013203.csv};
\addlegendentry{\acs{ADMM} (Own results)}
\addplot[RoyalPurple, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER, discard if gt={SNR}{2.2},]
{res/generic/fer_bp_mackay_margulis.csv};
\addlegendentry{\acs{BP} (Barman et al.)}
\end{axis}
\end{tikzpicture}
\caption{Comparison of datapoints from Barman et al. with own simulation results.
``Margulis'' \ac{LDPC} code with $n = 2640$, $k = 1320$
\cite[\text{Margulis2640.1320.3}]{mackay_enc}\protect\footnotemark{}}
\label{fig:admm:results}
\end{figure}%
%
\footnotetext{$K=200, \mu = 3.3, \rho=1.9,
	\epsilon_{\text{pri}} = 10^{-5}, \epsilon_{\text{dual}} = 10^{-5}$
}%
%
In figure \ref{fig:admm:ber_fer}, the \ac{BER} and \ac{FER} for \ac{LP} decoding
using \ac{ADMM} and \ac{BP} are shown for a (3,6) regular \ac{LDPC} code with
$n=204$.
To ensure comparability, in both cases the number of iterations was set to
$K=200$.
The values of the other parameters were chosen as $\mu = 5$, $\rho = 1$,
$\epsilon_\text{pri} = 10^{-5}$ and $\epsilon_\text{dual} = 10^{-5}$.
Comparing figures \ref{fig:admm:results} and \ref{fig:admm:ber_fer}, it is
apparent that the difference in decoding performance depends on the code being
considered.
More simulation results are presented in figure \ref{fig:comp:prox_admm_dec}
in section \ref{sec:comp:res}.
\begin{figure}[h]
\centering
\begin{subfigure}[c]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$E_b / N_0 \left( \text{dB} \right) $}, ylabel={\acs{BER}},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
ymax=1.5, ymin=3e-7,
]
\addplot[Turquoise, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=BER,
discard if not={mu}{5.0},
discard if gt={SNR}{4.5}]
{res/admm/ber_2d_20433484.csv};
\addplot[RoyalPurple, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=BER,
discard if gt={SNR}{4.5}]
{/home/andreas/bp_20433484.csv};
\end{axis}
\end{tikzpicture}
\end{subfigure}%
\hfill%
\begin{subfigure}[c]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$E_b / N_0 \left( \text{dB} \right) $}, ylabel={\acs{FER}},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
ymax=1.5, ymin=3e-7,
]
\addplot[Turquoise, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,
discard if not={mu}{5.0},
discard if gt={SNR}{4.5}]
{res/admm/ber_2d_20433484.csv};
\addplot[RoyalPurple, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,
discard if gt={SNR}{4.5}]
{/home/andreas/bp_20433484.csv};
\end{axis}
\end{tikzpicture}
\end{subfigure}%
\begin{subfigure}[t]{\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[hide axis,
xmin=10, xmax=50,
ymin=0, ymax=0.4,
legend columns=3,
legend style={draw=white!15!black,legend cell align=left}]
\addlegendimage{Turquoise, line width=1pt, mark=*}
\addlegendentry{\acs{LP} decoding using \acs{ADMM}}
\addlegendimage{RoyalPurple, line width=1pt, mark=*}
\addlegendentry{\acs{BP} (20 iterations)}
\end{axis}
\end{tikzpicture}
\end{subfigure}
\caption{Comparison of the decoding performance of \acs{LP} decoding using
\acs{ADMM} and \acs{BP}. (3,6) regular \ac{LDPC} code with $n = 204$, $k = 102$
\cite[\text{204.33.484}]{mackay_enc}}
\label{fig:admm:ber_fer}
\end{figure}%
In summary, the decoding performance of \ac{LP} decoding using \ac{ADMM} comes
close to that of \ac{BP}, with the difference staying within approximately
$\SI{0.5}{dB}$, depending on the code in question.
\subsection{Computational Performance}
\label{subsec:admm:comp_perf}
In terms of time complexity, the three steps of the decoding algorithm
in equations (\ref{eq:admm:c_update}) - (\ref{eq:admm:u_update}) have to be
considered.
The $\tilde{\boldsymbol{c}}$- and $\boldsymbol{u}_j$-update steps are
$\mathcal{O}\left( n \right)$ \cite[Sec. III. C.]{original_admm}.
The complexity of the $\boldsymbol{z}_j$-update step depends on the projection
algorithm employed.
Since the implementation created for this work uses the projection algorithm
presented in \cite{original_admm}, the $\boldsymbol{z}_j$-update step
also has linear time complexity.
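As a flavor of the sort-based machinery involved, the classic Euclidean projection onto the probability simplex is sketched below. This is only a related standard building block, not the parity-polytope projection of \cite{original_admm}, which is more involved and is not reproduced here.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {x : x >= 0, sum(x) = 1}, via the classic sort-based O(d log d) method.
    Shown as an illustration of sorting-based projections; NOT the exact
    check-polytope projection used in the ADMM decoder."""
    u = np.sort(v)[::-1]                              # sort descending
    css = np.cumsum(u) - 1.0
    # largest index where the running threshold is still exceeded
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)
```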
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[grid=both,
xlabel={$n$}, ylabel={Time per frame (s)},
width=0.6\textwidth,
height=0.45\textwidth,
legend style={at={(0.5,-0.42)},anchor=south},
legend cell align={left},]
\addplot[NavyBlue, only marks, mark=triangle*]
table [col sep=comma, x=n, y=spf]
{res/admm/fps_vs_n.csv};
\end{axis}
\end{tikzpicture}
\caption{Timing requirements of the \ac{LP} decoding using \ac{ADMM} implementation}
\label{fig:admm:time}
\end{figure}%
Simulation results from a range of different codes can be used to verify this
analysis.
Figure \ref{fig:admm:time} shows the average time needed to decode one
frame as a function of its length.
\todo{List codes used}
The results are necessarily skewed because the codes considered vary not only
in their length, but also in their construction scheme and rate.
Additionally, different optimization opportunities arise depending on the
length of a code, since for smaller codes dynamic memory allocation can be
completely omitted.
This may explain why the datapoint at $n=504$ is higher than would be expected
with linear behavior.
Nonetheless, the simulation results roughly match the expected behavior
following from the theoretical considerations.
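A rough check of this kind of linear trend can be done with a least-squares fit of time per frame against $n$. The datapoints below are hypothetical placeholders (they are not the measured values behind the figure); the sketch only shows the fitting procedure.

```python
import numpy as np

# Hypothetical (n, time-per-frame) datapoints standing in for simulation
# metadata -- NOT measured values. The point at n=504 carries an artificial
# offset, mimicking an outlier above the linear trend.
n_vals = np.array([96.0, 204.0, 408.0, 504.0, 2640.0])
t_vals = 2e-5 * n_vals + np.array([0.0, 1e-5, -1e-5, 3e-4, 0.0])

# Least-squares fit t ~ a*n + b; a positive slope with small residuals
# (outliers aside) is consistent with linear time complexity.
a, b = np.polyfit(n_vals, t_vals, 1)
residuals = t_vals - (a * n_vals + b)
```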
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_963965.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_963965.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_963965.csv};
\end{axis}
\end{tikzpicture}
\caption{$\left( 3, 6 \right)$-regular \ac{LDPC} code with $n=96, k=48$
\cite[\text{96.3.965}]{mackay_enc}}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_bch_31_26.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_bch_31_26.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_bch_31_26.csv};
\end{axis}
\end{tikzpicture}
\caption{BCH code with $n=31, k=26$\\[2\baselineskip]}
\end{subfigure}
\vspace{3mm}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\end{axis}
\end{tikzpicture}
\caption{$\left( 3, 6 \right)$-regular \ac{LDPC} code with $n=204, k=102$
\cite[\text{204.33.484}]{mackay_enc}}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_20455187.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_20455187.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_20455187.csv};
\end{axis}
\end{tikzpicture}
\caption{$\left( 5, 10 \right)$-regular \ac{LDPC} code with $n=204, k=102$
\cite[\text{204.55.187}]{mackay_enc}}
\end{subfigure}%
\vspace{3mm}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_40833844.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_40833844.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_40833844.csv};
\end{axis}
\end{tikzpicture}
\caption{$\left( 3, 6 \right)$-regular \ac{LDPC} code with $n=408, k=204$
\cite[\text{408.33.844}]{mackay_enc}}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_pegreg252x504.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_pegreg252x504.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_pegreg252x504.csv};
\end{axis}
\end{tikzpicture}
\caption{\ac{LDPC} code (progressive edge growth construction) with $n=504, k=252$
\cite[\text{PEGReg252x504}]{mackay_enc}}
\end{subfigure}%
\vspace{5mm}
\begin{subfigure}[t]{\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[hide axis,
xmin=10, xmax=50,
ymin=0, ymax=0.4,
legend style={draw=white!15!black,legend cell align=left}]
\addlegendimage{NavyBlue, line width=1.5pt, densely dashed, mark=*}
\addlegendentry{$\mu = 9$};
\addlegendimage{RedOrange, line width=1.5pt, densely dashed, mark=*}
\addlegendentry{$\mu = 5$};
\addlegendimage{ForestGreen, line width=1.5pt, densely dashed, mark=*}
\addlegendentry{$\mu = 2$};
\end{axis}
\end{tikzpicture}
\end{subfigure}
\caption{Dependence of the average number of iterations on the parameter $\rho$ for various codes and different values of $\mu$}
\label{fig:prox:results_3d_multiple}
\end{figure}
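The keys \texttt{discard if not} and \texttt{discard if gt} used in the plots
above are not built-in pgfplots keys; they are assumed to be provided by a
coordinate-filter definition in the preamble, for instance along the following
lines (a sketch, not necessarily the definition actually used):
%
\begin{verbatim}
% Drop a table row unless column #1 equals #2 (discard if not),
% or whenever column #1 exceeds #2 (discard if gt).
\pgfplotsset{
    discard if not/.style 2 args={
        x filter/.append code={
            \edef\tempa{\thisrow{#1}}
            \edef\tempb{#2}
            \ifx\tempa\tempb\else\def\pgfmathresult{inf}\fi
        }
    },
    discard if gt/.style 2 args={
        x filter/.append code={
            \pgfmathparse{ifthenelse(\thisrow{#1}>#2, inf, \pgfmathresult)}
        }
    },
}
\end{verbatim}
%
Points whose filter result is \texttt{inf} are treated as unbounded
coordinates and are discarded by pgfplots.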


@ -300,7 +300,7 @@ the gradient can be written as%
\begin{align*}
\nabla h\left( \tilde{\boldsymbol{x}} \right) =
4\left( \tilde{\boldsymbol{x}}^{\circ 3} - \tilde{\boldsymbol{x}} \right)
+ 2\tilde{\boldsymbol{x}}^{\circ -1} \circ \boldsymbol{H}^\text{T}
+ 2\tilde{\boldsymbol{x}}^{\circ \left( -1 \right) } \circ \boldsymbol{H}^\text{T}
\boldsymbol{v}
,\end{align*}
%
@ -359,6 +359,7 @@ while the newly generated ones are shown with dashed lines.
xlabel={$E_b / N_0$ (dB)}, ylabel={BER},
ymode=log,
legend style={at={(0.5,-0.7)},anchor=south},
legend cell align={left},
width=0.6\textwidth,
height=0.45\textwidth,
ymax=1.2, ymin=0.8e-4,
@ -397,8 +398,10 @@ while the newly generated ones are shown with dashed lines.
\addlegendentry{$\gamma = 0.05$ (Own results)}
\addplot [RoyalPurple, mark=*, line width=1pt]
table [x=SNR, y=BP, col sep=comma] {res/proximal/ber_paper.csv};
\addlegendentry{BP (Wadayama et al.)}
table [x=SNR, y=BER, col sep=comma,
discard if gt={SNR}{3.5}]
{res/generic/bp_20433484.csv};
\addlegendentry{BP (20 iterations)}
\end{axis}
\end{tikzpicture}
@ -1209,7 +1212,7 @@ $\SI{2.80}{GHz}$ and utilizing all cores.
height=0.45\textwidth,
legend cell align={left},]
\addplot[RedOrange, only marks, mark=*]
\addplot[RedOrange, only marks, mark=square*]
table [col sep=comma, x=n, y=spf]
{res/proximal/fps_vs_n.csv};
\end{axis}
@ -1509,12 +1512,12 @@ theoretical considerations.
legend style={at={(0.05,0.77)},anchor=south west},
legend cell align={left},]
\addplot[RedOrange, only marks, mark=*]
\addplot[RedOrange, only marks, mark=square*]
table [col sep=comma, x=n, y=spf]
{res/proximal/fps_vs_n.csv};
\addlegendentry{proximal}
\addplot[RoyalPurple, only marks, mark=triangle*]
\addplot[Gray, only marks, mark=*]
table [col sep=comma, x=n, y=spf]
{res/hybrid/fps_vs_n.csv};
\addlegendentry{improved ($N = 12$)}


@ -0,0 +1,9 @@
SNR,BER,FER,DFR,num_iterations
1.0,0.06409994553376906,0.7013888888888888,0.7013888888888888,144.0
1.5,0.03594771241830065,0.45495495495495497,0.47297297297297297,222.0
2.0,0.014664163537755528,0.2148936170212766,0.2297872340425532,470.0
2.5,0.004731522238525039,0.07634164777021919,0.08163265306122448,1323.0
3.0,0.000911436423803915,0.016994783779236074,0.01749957933703517,5943.0
3.5,0.00011736135227863285,0.002537369677176234,0.002587614621278734,39805.0
4.0,1.0686274509803922e-05,0.00024,0.00023,100000.0
4.5,4.411764705882353e-07,1e-05,1e-05,100000.0


@ -0,0 +1,10 @@
{
"duration": 46.20855645602569,
"name": "ber_20433484",
"platform": "Linux-6.2.10-arch1-1-x86_64-with-glibc2.37",
"K": 200,
"epsilon_pri": 1e-05,
"epsilon_dual": 1e-05,
"max_frame_errors": 100,
"end_time": "2023-04-22 19:15:37.252176"
}


@ -0,0 +1,31 @@
SNR,epsilon,BER,FER,DFR,num_iterations,k_avg
1.0,1e-06,0.294328431372549,0.722,0.0,1000.0,0.278
1.0,1e-05,0.294328431372549,0.722,0.0,1000.0,0.278
1.0,0.0001,0.3059313725490196,0.745,0.0,1000.0,0.254
1.0,0.001,0.3059558823529412,0.748,0.0,1000.0,0.254
1.0,0.01,0.30637254901960786,0.786,0.0,1000.0,0.254
1.0,0.1,0.31590686274509805,0.9,0.0,1000.0,0.925
2.0,1e-06,0.11647058823529412,0.298,0.0,1000.0,0.702
2.0,1e-05,0.11647058823529412,0.298,0.0,1000.0,0.702
2.0,0.0001,0.11647058823529412,0.298,0.0,1000.0,0.702
2.0,0.001,0.11652450980392157,0.306,0.0,1000.0,0.702
2.0,0.01,0.09901470588235294,0.297,0.0,1000.0,0.75
2.0,0.1,0.10512745098039215,0.543,0.0,1000.0,0.954
3.0,1e-06,0.004563725490196078,0.012,0.0,1000.0,0.988
3.0,1e-05,0.004563725490196078,0.012,0.0,1000.0,0.988
3.0,0.0001,0.005730392156862745,0.015,0.0,1000.0,0.985
3.0,0.001,0.005730392156862745,0.015,0.0,1000.0,0.985
3.0,0.01,0.007200980392156863,0.037,0.0,1000.0,0.981
3.0,0.1,0.009151960784313726,0.208,0.0,1000.0,0.995
4.0,1e-06,0.0,0.0,0.0,1000.0,1.0
4.0,1e-05,0.0,0.0,0.0,1000.0,1.0
4.0,0.0001,0.0,0.0,0.0,1000.0,1.0
4.0,0.001,0.0002598039215686275,0.001,0.0,1000.0,0.999
4.0,0.01,4.901960784313725e-05,0.01,0.0,1000.0,1.0
4.0,0.1,0.0006862745098039216,0.103,0.0,1000.0,1.0
5.0,1e-06,0.0,0.0,0.0,1000.0,1.0
5.0,1e-05,0.0,0.0,0.0,1000.0,1.0
5.0,0.0001,0.0,0.0,0.0,1000.0,1.0
5.0,0.001,0.0,0.0,0.0,1000.0,1.0
5.0,0.01,9.803921568627451e-06,0.002,0.0,1000.0,1.0
5.0,0.1,0.0005245098039215687,0.097,0.0,1000.0,1.0


@ -0,0 +1,10 @@
{
"duration": 15.407989402010571,
"name": "rho_kavg_20433484",
"platform": "Linux-6.2.10-arch1-1-x86_64-with-glibc2.37",
"K": 200,
"rho": 1,
"mu": 5,
"max_frame_errors": 100000,
"end_time": "2023-04-23 06:39:23.561294"
}


@ -0,0 +1,9 @@
SNR,BER,FER,num_iterations
1,0.0315398614535635,0.598802395209581,334
1.5,0.0172476397966594,0.352733686067019,567
2,0.00668670591018522,0.14194464158978,1409
2.5,0.00168951075575861,0.0388349514563107,5150
3,0.000328745799468201,0.00800288103717338,24991
3.5,4.21796065456326e-05,0.00109195935727272,183157
4,4.95098039215686e-06,0.000134,500000
4.5,2.94117647058824e-07,1.2e-05,500000


@ -0,0 +1,10 @@
SNR,BER,FER,num_iterations
1,0.0592497868712702,0.966183574879227,207
1.5,0.0465686274509804,0.854700854700855,234
2,0.0326898561282098,0.619195046439629,323
2.5,0.0171613765211425,0.368324125230203,543
3,0.00553787541776455,0.116346713205352,1719
3.5,0.00134027952469441,0.0275900124155056,7249
4,0.000166480738027721,0.0034201480924124,58477
4.5,1.19607843137255e-05,0.000252,500000
5,9.50980392156863e-07,2e-05,500000


@ -0,0 +1,6 @@
SNR,BER,FER,num_iterations
1,0.0303507766743061,0.649350649350649,308
1.5,0.0122803195352215,0.296296296296296,675
2,0.00284899516991547,0.0733137829912024,2728
2.5,0.000348582879119279,0.00916968502131952,21811
3,2.52493009763634e-05,0.000760274154860243,263063


@ -0,0 +1,9 @@
SNR,BER,FER,num_iterations
1,0.0352267331433998,0.56980056980057,351
1.5,0.0214331413947537,0.383877159309021,521
2,0.0130737396538751,0.225733634311512,886
2.5,0.00488312520012808,0.0960614793467819,2082
3,0.00203045475576707,0.0396589331746976,5043
3.5,0.000513233833401836,0.010275380189067,19464
4,0.000107190497363908,0.0025408117893667,78715
4.5,3.2074433522095e-05,0.00092605883251763,215969


@ -0,0 +1,11 @@
SNR,BER,FER,num_iterations
1,0.0584002878042931,0.743494423791822,269
1.5,0.0458084740620892,0.626959247648903,319
2,0.0318872821653689,0.459770114942529,435
2.5,0.0248431459534825,0.392927308447937,509
3,0.0158453558113146,0.251256281407035,796
3.5,0.0115615186982586,0.176991150442478,1130
4,0.00708558642550811,0.115606936416185,1730
4.5,0.00389714705036542,0.0614817091915155,3253
5,0.00221548053104734,0.0345125107851596,5795
5.5,0.00106888328072207,0.0172131852999398,11619


@ -0,0 +1,6 @@
SNR,BER,FER,num_iterations
1,0.0294665331073098,0.647249190938511,309
1.5,0.0104905437352246,0.265957446808511,752
2,0.0021089358705197,0.05616399887672,3561
2.5,0.000172515567971134,0.00498840196543037,40093
3,6.3531746031746e-06,0.0002,500000


@ -9,9 +9,9 @@
\thesisTitle{Application of Optimization Algorithms for Channel Decoding}
\thesisType{Bachelor's Thesis}
\thesisAuthor{Andreas Tsouchlos}
%\thesisAdvisor{Prof. Dr.-Ing. Laurent Schmalen}
\thesisAdvisor{Prof. Dr.-Ing. Laurent Schmalen}
%\thesisHeadOfInstitute{Prof. Dr.-Ing. Laurent Schmalen}
\thesisSupervisor{Name of assistant}
\thesisSupervisor{Dr.-Ing. Holger Jäkel}
\thesisStartDate{24.10.2022}
\thesisEndDate{24.04.2023}
\thesisSignatureDate{Signature date} % TODO: Signature date