1 Commits

Author SHA1 Message Date
96099b7fd2 Added compiled presentation 2023-04-20 11:09:13 +02:00
18 changed files with 393 additions and 918 deletions

Binary file not shown.


@@ -45,6 +45,15 @@
% eprint = {https://doi.org/10.1080/24725854.2018.1550692}
}
@online{mackay_enc,
author = {MacKay, David J.C.},
title = {Encyclopedia of Sparse Graph Codes},
date = {2023-01},
url = {http://www.inference.org.uk/mackay/codes/data.html}
}
@article{proximal_algorithms,
author = {Parikh, Neal and Boyd, Stephen},
title = {Proximal Algorithms},
@@ -210,16 +219,11 @@
doi={10.1109/TIT.1962.1057683}
}
@online{lautern_channelcodes,
author = {Helmling, Michael and Scholl, Stefan and Gensheimer, Florian and Dietz, Tobias and Kraft, Kira and Ruzika, Stefan and Wehn, Norbert},
title = {{D}atabase of {C}hannel {C}odes and {ML} {S}imulation {R}esults},
url = {https://www.uni-kl.de/channel-codes},
date = {2023-04}
}


@@ -2,9 +2,14 @@
\label{chapter:comparison}
In this chapter, proximal decoding and \ac{LP} decoding using \ac{ADMM} are compared.
First, the two algorithms are compared on a theoretical basis.
Subsequently, their respective simulation results are examined, and their
differences are interpreted on the basis of their theoretical structure.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@@ -13,11 +18,12 @@ differences are interpreted based on their theoretical structure.
\ac{ADMM} and the proximal gradient method can both be expressed in terms of
proximal operators \cite[Sec. 4.4]{proximal_algorithms}.
When using \ac{ADMM} as an optimization method to solve the \ac{LP} decoding
problem specifically, this is not quite possible because of the multiple
constraints.
In spite of that, the two algorithms still show some striking similarities.
To see the first of these similarities, the \ac{LP} decoding problem in
equation (\ref{eq:lp:relaxed_formulation}) can be slightly rewritten using the
\textit{indicator functions} $g_j : \mathbb{R}^{d_j} \rightarrow
\left\{ 0, +\infty \right\} \hspace{1mm}, j\in\mathcal{J}$ for the polytopes
@@ -34,11 +40,10 @@ by moving the constraints into the objective function, as shown in figure
\ref{fig:ana:theo_comp_alg:admm}.
Both algorithms are composed of an iterative approach consisting of two
alternating steps.
The objective functions of both problems are similar in that they
both comprise two parts: one associated with the likelihood that a given
codeword was sent and one associated with the constraints to which the
decoding process is subject.
%
\begin{figure}[h]
@@ -118,11 +123,15 @@ Their major difference is that while with proximal decoding the constraints
are regarded in a global context, considering all parity checks at the same
time, with \ac{ADMM} each parity check is
considered separately and in a more local context (line 4 in both algorithms).
This difference means that while with proximal decoding the alternating
minimization of the two parts of the objective function inevitably leads to
oscillatory behavior (as explained in section
\ref{subsec:prox:conv_properties}), this is not the case with \ac{ADMM}, which
partly explains the disparate decoding performance of the two methods.
Furthermore, while with proximal decoding the step considering the constraints
is realized using gradient descent, amounting to an approximation,
with \ac{ADMM} it reduces to a number of projections onto the parity polytopes
$\mathcal{P}_{d_j}$, which always provide exact results.
The contrasting treatment of the constraints (global and approximate with
proximal decoding as opposed to local and exact with \ac{LP} decoding using
@@ -133,27 +142,34 @@ calculation, whereas with \ac{LP} decoding it occurs due to the approximate
formulation of the constraints, independent of the optimization method itself.
The advantage which arises because of this when employing \ac{LP} decoding is
that it can be detected when the algorithm gets stuck: it returns a solution
corresponding to a pseudocodeword, the components of which are fractional.
Moreover, when a valid codeword is returned, it is also the \ac{ML} codeword.
This means that additional redundant parity-checks can be added successively
until the codeword returned is valid and thus the \ac{ML} solution is found
\cite[Sec. IV.]{alp}.
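The detection mentioned above boils down to an integrality check on the LP solution. A minimal sketch (the function name and tolerance are our assumptions, not taken from the thesis implementation):

```python
import numpy as np

def is_integral(c, tol=1e-5):
    """Return True if every component of the LP solution is numerically 0 or 1.

    A fractional component indicates a pseudocodeword, i.e. the algorithm got
    stuck in a vertex introduced by the relaxation.
    """
    return bool(np.all((np.abs(c) < tol) | (np.abs(c - 1.0) < tol)))
```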
In terms of time complexity, the two decoding algorithms are comparable.
Each of the operations required for proximal decoding can be performed
in linear time for \ac{LDPC} codes (see section \ref{subsec:prox:comp_perf}).
The same is true for the $\tilde{\boldsymbol{c}}$- and $\boldsymbol{u}$-update
steps of \ac{LP} decoding using \ac{ADMM}, while
the projection step has a worst-case time complexity of
$\mathcal{O}\left( n^2 \right)$ and an average complexity of
$\mathcal{O}\left( n \right)$ (see section \ref{subsec:admm:comp_perf},
\cite[Sec. VIII.]{lautern}).
Both algorithms can be understood as message-passing algorithms, \ac{LP}
decoding using \ac{ADMM} similarly to \cite[Sec. III. D.]{original_admm}
or \cite[Sec. II. B.]{efficient_lp_dec_admm}, and proximal decoding by
starting with algorithm \ref{alg:prox},
substituting for the gradient of the code-constraint polynomial and separating
it into two parts.
The algorithms in their message-passing form are depicted in figure
\ref{fig:comp:message_passing}.
$M_{j\to i}$ denotes a message transmitted from \ac{CN} $j$ to \ac{VN} $i$.
$M_{j\to}$ signifies the special case where a \ac{CN} transmits the same
message to all \acp{VN}.
%
\begin{figure}[h]
\centering
@@ -168,14 +184,14 @@ Initialize $\boldsymbol{r}, \boldsymbol{s}, \omega, \gamma$
while stopping criterion unfulfilled do
for j in $\mathcal{J}$ do
$p_j \leftarrow \prod_{i\in N_c\left( j \right) } r_i $
$M_{j\to} \leftarrow p_j^2 - p_j$|\Suppressnumber|
|\vspace{0.22mm}\Reactivatenumber|
end for
for i in $\mathcal{I}$ do
$s_i \leftarrow s_i + \gamma \left[ 4\left( s_i^2 - 1 \right)s_i
\phantom{\frac{4}{s_i}}\right.$|\Suppressnumber|
|\Reactivatenumber|$\left.+ \frac{4}{s_i}\sum_{j\in N_v\left( i \right) }
M_{j\to} \right] $
$r_i \leftarrow r_i + \omega \left( s_i - y_i \right)$
end for
end while
@@ -221,9 +237,15 @@ return $\tilde{\boldsymbol{c}}$
\label{fig:comp:message_passing}
\end{figure}%
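The proximal message-passing updates shown in figure \ref{fig:comp:message_passing} can be written out in code. The following is a minimal illustrative sketch, not the thesis implementation: the initialization $\boldsymbol{r} = \boldsymbol{s} = \boldsymbol{y}$, the stopping rule, and the final hard decision (bipolar $\pm 1$ signal domain assumed) are our assumptions.

```python
import numpy as np

def proximal_decode_sketch(y, H, gamma=0.05, omega=0.05, K=200):
    """Illustrative proximal decoding updates (bipolar +1/-1 domain assumed).

    y: received vector (length n), H: binary parity-check matrix (m x n).
    """
    m, n = H.shape
    s = y.astype(float).copy()  # current estimate (assumed initialization)
    r = y.astype(float).copy()  # auxiliary variable (assumed initialization)
    for _ in range(K):
        # check-node step: one shared message M_{j->} per check node j
        M = np.empty(m)
        for j in range(m):
            p_j = np.prod(r[H[j] == 1])
            M[j] = p_j**2 - p_j
        # variable-node step (note the 1/s_i factor: s_i must stay nonzero)
        for i in range(n):
            Mi = M[H[:, i] == 1].sum()
            s[i] += gamma * (4.0 * (s[i]**2 - 1.0) * s[i] + 4.0 / s[i] * Mi)
            r[i] += omega * (s[i] - y[i])
        if np.all(np.abs(np.abs(s) - 1.0) < 1e-3):  # assumed stopping criterion
            break
    return (s < 0).astype(int)  # hard decision: -1 maps to bit 1
```

The check-node loop computes a single message per check, which is then broadcast to all adjacent variable nodes, mirroring line 5 of figure \ref{fig:comp:message_passing:proximal}.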
%
This message passing structure means that both algorithms can be implemented
very efficiently, as the update steps can be performed in parallel for all
\acp{CN} and for all \acp{VN}, respectively.
It is evident that while the two algorithms are very similar in their general
structure, with \ac{LP} decoding using \ac{ADMM}, multiple messages have to be
computed for each check node (line 6 in figure
\ref{fig:comp:message_passing:admm}), whereas
with proximal decoding, the same message is transmitted to all \acp{VN}
(line 5 of figure \ref{fig:comp:message_passing:proximal}).
This means that while both algorithms have an average time complexity of
$\mathcal{O}\left( n \right)$, more arithmetic operations are required in the
\ac{ADMM} case.
In conclusion, the two algorithms have a very similar structure, where the
parts of the objective function relating to the likelihood and to the
@@ -231,8 +253,9 @@ constraints are minimized in an alternating fashion.
With proximal decoding this minimization is performed for all constraints at once
in an approximate manner, while with \ac{LP} decoding using \ac{ADMM} it is
performed for each constraint individually and with exact results.
In terms of time complexity, both algorithms are, on average, linear with
respect to $n$, although for \ac{LP} decoding using \ac{ADMM} significantly
more arithmetic operations are necessary in each iteration.
@@ -240,98 +263,24 @@ respect to $n$ and are heavily parallelisable.
\section{Comparison of Simulation Results}%
\label{sec:comp:res}
The decoding performance of the two algorithms is compared in figure
\ref{fig:comp:prox_admm_dec} in the form of the \ac{FER}.
Shown as well is the performance of the improved proximal decoding
algorithm presented in section \ref{sec:prox:Improved Implementation}.
The \ac{FER} resulting from decoding using \ac{BP} and,
wherever available, the \ac{FER} of \ac{ML} decoding taken from
\cite{lautern_channelcodes} are plotted as a reference.
The parameters chosen for the proximal and improved proximal decoders are
$\gamma=0.05$, $\omega=0.05$, $K=200$, $\eta = 1.5$ and $N=12$.
The parameters chosen for \ac{LP} decoding using \ac{ADMM} are $\mu = 5$,
$\rho = 1$, $K=200$, $\epsilon_\text{pri} = 10^{-5}$ and
$\epsilon_\text{dual} = 10^{-5}$.
For all codes considered in the scope of this work, \ac{LP} decoding using
\ac{ADMM} consistently outperforms both proximal decoding and the improved
version, reaching very similar performance to \ac{BP}.
The decoding gain heavily depends on the code, evidently becoming greater for
codes with larger $n$ and reaching values of up to $\SI{2}{dB}$.
These simulation results can be interpreted with regard to the theoretical
structure of the decoding methods, as analyzed in section \ref{sec:comp:theo}.
The worse performance of proximal decoding is somewhat surprising, considering
the global treatment of the constraints in contrast to the local treatment
in the case of \ac{LP} decoding using \ac{ADMM}.
It may be explained, however, by the nature of the
calculations performed in each case.
With proximal decoding, the calculations are approximate, leading
to the constraints never being quite satisfied.
With \ac{LP} decoding using \ac{ADMM},
the constraints are fulfilled for each parity check individually after each
iteration of the decoding process.
It should be noted that while in this thesis proximal decoding was
examined with respect to its performance in \ac{AWGN} channels, in
\cite{proximal_paper} it is presented as a method applicable to non-trivial
channel models such as \ac{LDPC}-coded massive \ac{MIMO} channels, perhaps
broadening its usefulness beyond what is shown here.
The timing requirements of the decoding algorithms are visualized in figure
\ref{fig:comp:time}.
The datapoints have been generated by evaluating the metadata from \ac{FER}
and \ac{BER} simulations using the parameters mentioned earlier when
discussing the decoding performance.
The codes considered are the same as in sections \ref{subsec:prox:comp_perf}
and \ref{subsec:admm:comp_perf}.
While the \ac{ADMM} implementation seems to be faster than the proximal
decoding and improved proximal decoding implementations, inferring some
general behavior is difficult in this case.
This is because actual implementations are being compared, making the
results dependent on factors such as the degree of optimization of each
implementation.
Nevertheless, the run time of both the proximal decoding and the \ac{LP}
decoding using \ac{ADMM} implementations is similar and both are
reasonably performant, owing to the parallelisable structure of the
algorithms.
%
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[grid=both,
xlabel={$n$}, ylabel={Time per frame (ms)},
width=0.6\textwidth,
height=0.45\textwidth,
legend style={at={(0.5,-0.52)},anchor=south},
legend cell align={left},]
\addplot[RedOrange, only marks, mark=square*]
table [col sep=comma, x=n, y=spf,
y expr=\thisrow{spf} * 1000]
{res/proximal/fps_vs_n.csv};
\addlegendentry{Proximal decoding}
\addplot[Gray, only marks, mark=*]
table [col sep=comma, x=n, y=spf,
y expr=\thisrow{spf} * 1000]
{res/hybrid/fps_vs_n.csv};
\addlegendentry{Improved proximal decoding ($N=12$)}
\addplot[NavyBlue, only marks, mark=triangle*]
table [col sep=comma, x=n, y=spf,
y expr=\thisrow{spf} * 1000]
{res/admm/fps_vs_n.csv};
\addlegendentry{\acs{LP} decoding using \acs{ADMM}}
\end{axis}
\end{tikzpicture}
\caption{Comparison of the timing requirements of the different decoder implementations}
\label{fig:comp:time}
\end{figure}%
%
%
\begin{figure}[H]
\centering
\begin{subfigure}[t]{0.48\textwidth}
@@ -350,19 +299,13 @@ algorithms.
\addplot[RedOrange, line width=1pt, mark=*, solid]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/proximal/2d_ber_fer_dfr_963965.csv};
\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/hybrid/2d_ber_fer_dfr_963965.csv};
\addplot[NavyBlue, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={mu}{3.0}]
{res/admm/ber_2d_963965.csv};
\addplot[PineGreen, line width=1pt, mark=triangle]
table [col sep=comma, x=SNR, y=FER,]
{res/generic/fer_ml_9633965.csv};
\addplot [RoyalPurple, mark=*, line width=1pt]
table [x=SNR, y=FER, col sep=comma]
{res/generic/bp_963965.csv};
\end{axis}
\end{tikzpicture}
@@ -386,20 +329,15 @@ algorithms.
\addplot[RedOrange, line width=1pt, mark=*, solid]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/proximal/2d_ber_fer_dfr_bch_31_26.csv};
\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/hybrid/2d_ber_fer_dfr_bch_31_26.csv};
\addplot[NavyBlue, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={mu}{3.0}]
{res/admm/ber_2d_bch_31_26.csv};
\addplot[PineGreen, line width=1pt, mark=triangle*]
table [x=SNR, y=FER, col sep=comma,
discard if gt={SNR}{5.5},
discard if lt={SNR}{1},
]
{res/generic/fer_ml_bch_31_26.csv};
\addplot [RoyalPurple, mark=*, line width=1pt]
table [x=SNR, y=FER, col sep=comma]
{res/generic/bp_bch_31_26.csv};
\end{axis}
\end{tikzpicture}
@@ -426,22 +364,15 @@ algorithms.
discard if not={gamma}{0.05},
discard if gt={SNR}{5.5}]
{res/proximal/2d_ber_fer_dfr_20433484.csv};
\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05},
discard if gt={SNR}{5.5}]
{res/hybrid/2d_ber_fer_dfr_20433484.csv};
\addplot[NavyBlue, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma,
discard if not={mu}{3.0},
discard if gt={SNR}{5.5}]
{res/admm/ber_2d_20433484.csv};
\addplot[PineGreen, line width=1pt, mark=triangle, solid]
table [col sep=comma, x=SNR, y=FER,
discard if gt={SNR}{5.5}]
{res/generic/fer_ml_20433484.csv};
\addplot [RoyalPurple, mark=*, line width=1pt]
table [x=SNR, y=FER, col sep=comma]
{res/generic/bp_20433484.csv};
\end{axis}
\end{tikzpicture}
@@ -465,16 +396,9 @@ algorithms.
\addplot[RedOrange, line width=1pt, mark=*, solid]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/proximal/2d_ber_fer_dfr_20455187.csv};
\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/hybrid/2d_ber_fer_dfr_20455187.csv};
\addplot[NavyBlue, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={mu}{3.0}]
{res/admm/ber_2d_20455187.csv};
\addplot [RoyalPurple, mark=*, line width=1pt,
discard if gt={SNR}{5}]
table [x=SNR, y=FER, col sep=comma]
{res/generic/bp_20455187.csv};
\end{axis}
\end{tikzpicture}
@@ -500,16 +424,9 @@ algorithms.
\addplot[RedOrange, line width=1pt, mark=*, solid]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/proximal/2d_ber_fer_dfr_40833844.csv};
\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/hybrid/2d_ber_fer_dfr_40833844.csv};
\addplot[NavyBlue, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={mu}{3.0}]
{res/admm/ber_2d_40833844.csv};
\addplot [RoyalPurple, mark=*, line width=1pt,
discard if gt={SNR}{3}]
table [x=SNR, y=FER, col sep=comma]
{res/generic/bp_40833844.csv};
\end{axis}
\end{tikzpicture}
@@ -533,16 +450,9 @@ algorithms.
\addplot[RedOrange, line width=1pt, mark=*, solid]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/proximal/2d_ber_fer_dfr_pegreg252x504.csv};
\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/hybrid/2d_ber_fer_dfr_pegreg252x504.csv};
\addplot[NavyBlue, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={mu}{3.0}]
{res/admm/ber_2d_pegreg252x504.csv};
\addplot [RoyalPurple, mark=*, line width=1pt]
table [x=SNR, y=FER, col sep=comma,
discard if gt={SNR}{3}]
{res/generic/bp_pegreg252x504.csv};
\end{axis}
\end{tikzpicture}
@@ -560,21 +470,14 @@ algorithms.
xmin=10, xmax=50,
ymin=0, ymax=0.4,
legend columns=1,
legend cell align={left},
legend style={draw=white!15!black}]
\addlegendimage{RedOrange, line width=1pt, mark=*, solid}
\addlegendentry{Proximal decoding}
\addlegendimage{RedOrange, line width=1pt, mark=triangle, densely dashed}
\addlegendentry{Improved proximal decoding}
\addlegendimage{NavyBlue, line width=1pt, mark=triangle, densely dashed}
\addlegendentry{\acs{LP} decoding using \acs{ADMM}}
\addlegendimage{RoyalPurple, line width=1pt, mark=*, solid}
\addlegendentry{\acs{BP} (20 iterations)}
\addlegendimage{PineGreen, line width=1pt, mark=triangle*, solid}
\addlegendentry{\acs{ML} decoding}
\end{axis}
\end{tikzpicture}
@@ -585,3 +488,32 @@ algorithms.
\label{fig:comp:prox_admm_dec}
\end{figure}
%


@@ -1,55 +1,8 @@
\chapter{Conclusion}%
\label{chapter:conclusion}
In the context of this thesis, two decoding algorithms were considered:
proximal decoding and \ac{LP} decoding using \ac{ADMM}.
The two algorithms were first analyzed individually and then compared
based on simulation results as well as their theoretical structure.
For proximal decoding, the effect of each parameter on the behavior of the
decoder was examined, leading to an approach to choosing the value of each
of the parameters.
The convergence properties of the algorithm were investigated in the context
of the relatively high decoding failure rate, to derive an approach to correct
possibly wrong components of the estimate.
Based on this approach, an improvement over proximal decoding was suggested,
leading to a decoding gain of up to $\SI{1}{dB}$, depending on the code and
the parameters considered.
For \ac{LP} decoding using \ac{ADMM}, the circumstances brought about by the
relaxation used in formulating the \ac{LP} decoding problem were first explored.
The decomposable nature arising from the relocation of the constraints into
the objective function itself was recognized as the major driver enabling
the efficient implementation of the decoding algorithm.
Based on simulation results, general guidelines for choosing each parameter
were again derived.
The decoding performance of the algorithm, in the form of the \ac{FER}, was
analyzed, observing that \ac{LP} decoding using \ac{ADMM} nearly reaches that
of \ac{BP}, staying within approximately $\SI{0.5}{dB}$ depending on the code
in question.
Finally, strong parallels were discovered with regard to the theoretical
structure of the two algorithms, both in the constitution of their respective
objective functions and in the iterative approaches used to minimize them.
One difference noted was the approximate nature of the minimization in the
case of proximal decoding, leading to the constraints never being truly
satisfied.
In conjunction with the alternating minimization with respect to the same
variable leading to oscillatory behavior, this was identified as the
root cause of its comparatively worse decoding performance.
Furthermore, both algorithms were expressed as message passing algorithms,
justifying their similar computational performance.
While the modified proximal decoding algorithm presented in section
\ref{sec:prox:Improved Implementation} shows some promising results, further
investigation is required to determine how different choices of parameters
affect the decoding performance.
Additionally, a more mathematically rigorous foundation for determining the
potentially wrong components of the estimate is desirable.
Another area benefiting from future work is the expansion of the \ac{ADMM}
based \ac{LP} decoder into a decoder approximating \ac{ML} performance,
using \textit{adaptive \ac{LP} decoding}.
With this method, the successive addition of redundant parity checks is used
to mitigate the decoder becoming stuck in erroneous solutions introduced due
to the relaxation of the constraints of the \ac{LP} decoding problem \cite{alp}.


@@ -33,6 +33,13 @@ examined with respect to its performance in \ac{AWGN} channels, in
channel models such as \ac{LDPC}-coded massive \ac{MIMO} channels, perhaps
broadening its usefulness beyond what is shown here.
While the modified proximal decoding algorithm presented in section
\ref{sec:prox:Improved Implementation} shows some promising results, further
investigation is required to determine how different choices of parameters
affect the decoding performance.
Additionally, a more mathematically rigorous foundation for determining the
potentially wrong components of the estimate is desirable.
Another interesting approach might be the combination of proximal and \ac{LP}
decoding.
Performing an initial number of iterations using proximal decoding to obtain


@@ -663,20 +663,20 @@ The same is true for the updates of the individual components of $\tilde{\boldsy
This representation can be slightly simplified by substituting
$\boldsymbol{\lambda}_j = \mu \cdot \boldsymbol{u}_j \,\forall\,j\in\mathcal{J}$:%
%
\begin{alignat*}{3}
\tilde{c}_i &\leftarrow \frac{1}{d_i} \left(
\sum_{j\in N_v\left( i \right) } \Big( \left( \boldsymbol{z}_j \right)_i
- \left( \boldsymbol{u}_j \right)_i \Big)
- \frac{\gamma_i}{\mu} \right)
\hspace{3mm} && \forall i\in\mathcal{I} \\
\boldsymbol{z}_j &\leftarrow \Pi_{\mathcal{P}_{d_j}}\left(
\boldsymbol{T}_j\tilde{\boldsymbol{c}} + \boldsymbol{u}_j \right)
\hspace{3mm} && \forall j\in\mathcal{J} \\
\boldsymbol{u}_j &\leftarrow \boldsymbol{u}_j
+ \boldsymbol{T}_j\tilde{\boldsymbol{c}}
- \boldsymbol{z}_j
\hspace{3mm} && \forall j\in\mathcal{J}
.\end{alignat*}
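These three updates can be sketched compactly in code. The following is an illustrative sketch under stated assumptions, not the thesis implementation: the parity-polytope projection is left abstract and supplied by the caller, and the initialization of $\boldsymbol{z}_j$ and $\boldsymbol{u}_j$ is our assumption.

```python
import numpy as np

def admm_lp_decode_sketch(gamma, T_list, project, mu=5.0, K=200):
    """Illustrative ADMM updates for LP decoding.

    gamma: LLR vector (length n); T_list[j]: selection matrix T_j (d_j x n)
    picking out the variables of check j; project(v): projection onto the
    parity polytope P_{d_j} (supplied by the caller).
    """
    n = gamma.shape[0]
    # d_i = number of checks containing variable i (column sums over all T_j)
    d = np.sum([T.sum(axis=0) for T in T_list], axis=0)
    z = [np.full(T.shape[0], 0.5) for T in T_list]  # assumed initialization
    u = [np.zeros(T.shape[0]) for T in T_list]
    c = np.zeros(n)
    for _ in range(K):
        # c~-update: average the replicas (z_j - u_j), shifted by gamma/mu
        s = np.zeros(n)
        for T, zj, uj in zip(T_list, z, u):
            s += T.T @ (zj - uj)
        c = (s - gamma / mu) / d
        # z- and u-updates, one parity check at a time
        for j, T in enumerate(T_list):
            z[j] = project(T @ c + u[j])
            u[j] = u[j] + T @ c - z[j]
    return (c > 0.5).astype(int)  # hard decision
```

The per-check loop makes the local treatment of the constraints explicit: each check $j$ touches only its own replica $\boldsymbol{z}_j$ and scaled dual variable $\boldsymbol{u}_j$, so the loop can be parallelized over checks.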
%
The reason \ac{ADMM} is able to perform so well is due to the relocation of the constraints
@@ -690,6 +690,7 @@ This can also be understood by interpreting the decoding process as a message-pa
algorithm \cite[Sec. III. D.]{original_admm}, \cite[Sec. II. B.]{efficient_lp_dec_admm},
depicted in algorithm \ref{alg:admm}.
\todo{How are the variables being initialized?}
\begin{genericAlgorithm}[caption={\ac{LP} decoding using \ac{ADMM} interpreted
as a message passing algorithm\protect\footnotemark{}}, label={alg:admm},
@@ -726,21 +727,12 @@ a check-node update step (lines $3$-$6$) and the $\tilde{c}_i$-updates can be un
a variable-node update step (lines $7$-$9$ in figure \ref{alg:admm}).
The updates for each variable- and check-node can be performed in parallel.
A technique called \textit{over-relaxation} can be employed to further improve
convergence, introducing the over-relaxation parameter $\rho$.
This consists of computing the term
$\rho \boldsymbol{T}_j \tilde{\boldsymbol{c}} + \left( 1 - \rho \right)\boldsymbol{z}_j$
before the $\boldsymbol{z}_j$ and $\boldsymbol{u}_j$ update steps (lines 4 and
5 of algorithm \ref{alg:admm}) and
subsequently replacing $\boldsymbol{T}_j \tilde{\boldsymbol{c}}$ with the
computed value in the two updates \cite[Sec. 3.4.3]{distr_opt_book}.
The main computational effort in solving the linear program then amounts to
computing the projection operation $\Pi_{\mathcal{P}_{d_j}} \left( \cdot \right) $
onto each check polytope. Various different methods to perform this projection
have been proposed (e.g., in \cite{original_admm}, \cite{efficient_lp_dec_admm},
\cite{lautern}).
The method chosen here is the one presented in \cite{lautern}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@@ -801,7 +793,6 @@ Defining%
\boldsymbol{s} := \sum_{j\in\mathcal{J}} \boldsymbol{T}_j^\text{T}
\left( \boldsymbol{z}_j - \boldsymbol{u}_j \right)
\end{align*}%
%
the $\tilde{\boldsymbol{c}}$ update can then be rewritten as%
%
@@ -810,6 +801,7 @@ the $\tilde{\boldsymbol{c}}$ update can then be rewritten as%
\left( \boldsymbol{s} - \frac{1}{\mu}\boldsymbol{\gamma} \right)
.\end{align*}
%
This modified version of the decoding process is depicted in algorithm \ref{alg:admm:mod}.
\begin{genericAlgorithm}[caption={\ac{LP} decoding using \ac{ADMM} algorithm with rewritten
@@ -843,41 +835,137 @@ return $\tilde{\boldsymbol{c}}$
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Results}%
\label{sec:lp:Results}
In this section, \ac{LP} decoding using \ac{ADMM} is examined based on
simulation results for various codes.
First, the effects of the different parameters are investigated, along with
how their values should be chosen.
Subsequently, the decoding performance is observed and compared to that of
\ac{BP}.
Finally, the computational performance of the implementation and time
complexity of the algorithm are studied.
\subsection{Choice of Parameters}
The first two parameters to be investigated are the penalty parameter $\mu$
and the over-relaxation parameter $\rho$.
A first approach to get some indication of the values that might be chosen
for these parameters is to look at how the decoding performance depends
on them.
The \ac{FER} is plotted as a function of $\mu$ and $\rho$ in figure
\ref{fig:admm:mu_rho}, for three different \acp{SNR}.
The code chosen for this examination is a (3,6) regular \ac{LDPC} code with
$n=204$ and $k=102$ \cite[\text{204.33.484}]{mackay_enc}.
When varying $\mu$, $\rho$ is set to 1 and when varying
$\rho$, $\mu$ is set to 5.
$K$ is set to 200 and $\epsilon_\text{dual}$ and $\epsilon_\text{pri}$ to
$10^{-5}$.
The behavior that can be observed is very similar to that of the
parameter $\gamma$ in proximal decoding, analyzed in section
\ref{sec:prox:Analysis and Simulation Results}.
A single value giving optimal performance does not exist; rather,
as long as the value is chosen within a certain range, the performance is
approximately equally good.
\begin{figure}[H]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$E_b / N_0 \left( \text{dB} \right) $}, ylabel={\acs{FER}},
ymode=log,
width=0.6\textwidth,
height=0.45\textwidth,
legend style={at={(0.5,-0.57)},anchor=south},
]
\addplot[RedOrange, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,
discard if gt={SNR}{2.2},
]
{res/admm/fer_paper_margulis.csv};
\addlegendentry{\acs{ADMM} (Barman et al.)}
\addplot[NavyBlue, densely dashed, line width=1pt, mark=triangle]
table [col sep=comma, x=SNR, y=FER,]
{res/admm/ber_margulis264013203.csv};
\addlegendentry{\acs{ADMM} (Own results)}
\addplot[RoyalPurple, line width=1pt, mark=triangle]
table [col sep=comma, x=SNR, y=FER, discard if gt={SNR}{2.2},]
{res/generic/fer_bp_mackay_margulis.csv};
\addlegendentry{\acs{BP} (Barman et al.)}
\end{axis}
\end{tikzpicture}
\caption{Comparison of datapoints from Barman et al. with own simulation results%
\protect\footnotemark{}}
\label{fig:admm:results}
\end{figure}%
%
\footnotetext{``Margulis'' \ac{LDPC} code with $n = 2640$, $k = 1320$
\cite[\text{Margulis2640.1320.3}]{mackay_enc}; $K=200, \mu = 3.3, \rho=1.9,
\epsilon_{\text{pri}} = 10^{-5}, \epsilon_{\text{dual}} = 10^{-5}$
}%
%
\begin{figure}[H]
\centering
\begin{subfigure}[c]{0.48\textwidth}
        \centering
        \begin{tikzpicture}
\begin{axis}[hide axis,
xmin=10, xmax=50,
ymin=0, ymax=0.4,
legend columns=3,
legend style={draw=white!15!black,legend cell align=left}]
\addlegendimage{ForestGreen, line width=1pt, densely dashed, mark=*}
            \end{axis}
\end{tikzpicture}
\end{subfigure}
\caption{Dependence of the decoding performance on the parameters $\mu$ and $\rho$.
(3,6) regular \ac{LDPC} code with $n=204, k=102$ \cite[\text{204.33.484}]{mackay_enc}}
\label{fig:admm:mu_rho}
\end{figure}%
To aid in the choice of the parameters, an additional criterion can be used:
the number of iterations performed for a decoding operation.
This is directly related to the time needed to decode a received vector
$\boldsymbol{y}$, which is to be minimized.
Figure \ref{fig:admm:mu_rho_iterations} shows the average number of iterations
over $\SI{1000}{}$ decodings, as a function of $\rho$.
This time the \ac{SNR} is kept constant at $\SI{4}{dB}$ and the parameter
$\mu$ is varied.
The values chosen for the rest of the parameters are the same as before.
It can be seen that choosing a large value for $\rho$ and a small value for
$\mu$ minimizes the average number of iterations and thus the average
run time of the decoding process.
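The joint influence of $\mu$ and $\rho$ on the iteration count can be illustrated on a toy problem. The following sketch is purely hypothetical and not the decoder implementation: \texttt{admm\_box} applies scaled \ac{ADMM} with over-relaxation to a simple box-constrained quadratic problem, so only the roles of $\mu$, $\rho$, $\epsilon_\text{pri}$ and $\epsilon_\text{dual}$ carry over (in the decoder, the projection step targets the parity polytopes instead of the unit cube).

```python
import math

def admm_box(y, mu=2.0, rho=1.9, K=200, eps_pri=1e-5, eps_dual=1e-5):
    """Scaled ADMM with over-relaxation for the toy problem
    min 0.5*||x - y||^2  s.t.  x in [0, 1]^n.
    mu is the penalty parameter, rho the over-relaxation parameter."""
    n = len(y)
    z = [0.0] * n
    u = [0.0] * n
    for k in range(1, K + 1):
        # x-update: closed-form minimization of the augmented Lagrangian in x
        x = [(y[i] + mu * (z[i] - u[i])) / (1.0 + mu) for i in range(n)]
        # over-relaxation: blend the new x-iterate with the previous z
        x_hat = [rho * x[i] + (1.0 - rho) * z[i] for i in range(n)]
        z_old = z
        # z-update: projection onto the feasible set (here the unit cube;
        # in the decoder, the parity polytopes of the checks)
        z = [min(1.0, max(0.0, x_hat[i] + u[i])) for i in range(n)]
        # u-update: scaled dual variable
        u = [u[i] + x_hat[i] - z[i] for i in range(n)]
        # primal and dual residual norms decide termination
        r = math.sqrt(sum((x[i] - z[i]) ** 2 for i in range(n)))
        s = mu * math.sqrt(sum((z[i] - z_old[i]) ** 2 for i in range(n)))
        if r < eps_pri and s < eps_dual:
            break
    return z, k

z, it_fast = admm_box([1.7, -0.4, 0.3, 0.9], mu=2.0, rho=1.9)
_, it_slow = admm_box([1.7, -0.4, 0.3, 0.9], mu=9.0, rho=0.5)
```

On this toy problem the small-$\mu$, large-$\rho$ combination terminates in far fewer iterations than the opposite choice, mirroring the behavior described above.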
%
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=0.6\textwidth,
height=0.45\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\addlegendentry{$\mu = 9$}
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\addlegendentry{$\mu = 5$}
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\addlegendentry{$\mu = 2$}
\end{axis}
\end{tikzpicture}
\caption{Dependence of the average number of iterations required on $\mu$ and $\rho$
for $E_b / N_0 = \SI{4}{dB}$. (3,6) regular \ac{LDPC} code with $n=204, k=102$
\cite[\text{204.33.484}]{mackay_enc}}
\label{fig:admm:mu_rho_iterations}
\end{figure}%
%
The same behavior can be observed when looking at a number of different codes,
as shown in figure \ref{fig:admm:mu_rho_multiple}.
%
\begin{figure}[h]
\centering
    \begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
            xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_963965.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_963965.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_963965.csv};
\end{axis}
\end{tikzpicture}
\caption{$\left( 3, 6 \right)$-regular \ac{LDPC} code with $n=96, k=48$
\cite[\text{96.3.965}]{mackay_enc}}
\end{subfigure}%
    \hfill
    \begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
            xlabel={$\rho$}, ylabel={Average \# of iterations},
            ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_bch_31_26.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_bch_31_26.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_bch_31_26.csv};
\end{axis}
\end{tikzpicture}
\caption{BCH code with $n=31, k=26$\\[2\baselineskip]}
\end{subfigure}
\vspace{3mm}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\end{axis}
\end{tikzpicture}
\caption{$\left( 3, 6 \right)$-regular \ac{LDPC} code with $n=204, k=102$
\cite[\text{204.33.484}]{mackay_enc}}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_20455187.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_20455187.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_20455187.csv};
\end{axis}
\end{tikzpicture}
\caption{$\left( 5, 10 \right)$-regular \ac{LDPC} code with $n=204, k=102$
\cite[\text{204.55.187}]{mackay_enc}}
\end{subfigure}%
\vspace{3mm}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_40833844.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_40833844.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_40833844.csv};
\end{axis}
\end{tikzpicture}
\caption{$\left( 3, 6 \right)$-regular \ac{LDPC} code with $n=408, k=204$
\cite[\text{408.33.844}]{mackay_enc}}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_pegreg252x504.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_pegreg252x504.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_pegreg252x504.csv};
\end{axis}
\end{tikzpicture}
\caption{LDPC code (progressive edge growth construction) with $n=504, k=252$
\cite[\text{PEGReg252x504}]{mackay_enc}}
\end{subfigure}%
\vspace{5mm}
    \begin{subfigure}[t]{\textwidth}
        \centering
        \begin{tikzpicture}
            \begin{axis}[hide axis,
                xmin=10, xmax=50,
                ymin=0, ymax=0.4,
                legend style={draw=white!15!black,legend cell align=left}]
                \addlegendimage{NavyBlue, line width=1.5pt, densely dashed, mark=*}
                \addlegendentry{$\mu = 9$};
                \addlegendimage{RedOrange, line width=1.5pt, densely dashed, mark=*}
                \addlegendentry{$\mu = 5$};
                \addlegendimage{ForestGreen, line width=1.5pt, densely dashed, mark=*}
                \addlegendentry{$\mu = 2$};
            \end{axis}
        \end{tikzpicture}
    \end{subfigure}
    \caption{Dependence of the average number of iterations on the parameters
    $\mu$ and $\rho$ for various codes}
    \label{fig:admm:mu_rho_multiple}
\end{figure}
To get an estimate for the parameter $K$, the average error during decoding
can be used.
This is shown in figure \ref{fig:admm:avg_error} as an average of
$\SI{100000}{}$ decodings.
$\mu$ is set to $5$, $\rho$ to $1$, and the remaining parameters are again
chosen as $K=200$, $\epsilon_\text{pri}=10^{-5}$ and $\epsilon_\text{dual}=10^{-5}$.
Similarly to the results in section
\ref{sec:prox:Analysis and Simulation Results}, a dip is visible around the
$20$ iteration mark.
This is due to the fact that, as the number of iterations increases, more
and more decodings converge, leaving only the erroneous ones to be
averaged.
The point at which the erroneous decodings become dominant and the
decoding performance no longer increases is largely independent of the
\ac{SNR}, allowing the value of $K$ to be chosen without considering the
\ac{SNR}.
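This observation suggests a simple data-driven rule for selecting $K$: pick the iteration at which the averaged error curve stops improving. The sketch below is purely illustrative; \texttt{choose\_K}, the tolerance and the synthetic plateau curve are hypothetical and not taken from the simulations.

```python
def choose_K(avg_err, tol=1e-3):
    """Return the smallest iteration index after which the averaged error
    curve improves by less than `tol` (relative change per iteration)."""
    for k in range(1, len(avg_err)):
        if abs(avg_err[k] - avg_err[k - 1]) / avg_err[k - 1] < tol:
            return k
    return len(avg_err)

# hypothetical averaged error curve: rapid decrease, then a plateau
curve = [max(0.5 * 0.7 ** k, 0.01) for k in range(60)]
K = choose_K(curve)
```

Applied to measured average-error curves such as the one in the figure below, the returned index marks the point where additional iterations no longer pay off.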
\begin{figure}[H]
\centering
\begin{tikzpicture}
\end{tikzpicture}
\caption{Average error for $\SI{100000}{}$ decodings. (3,6)
regular \ac{LDPC} code with $n=204, k=102$ \cite[\text{204.33.484}]{mackay_enc}}
\label{fig:admm:avg_error}
\end{figure}%
The last two parameters remaining to be examined are the tolerances for the
stopping criterion of the algorithm, $\epsilon_\text{pri}$ and
$\epsilon_\text{dual}$.
These are both set to the same value $\epsilon$.
The effect of their value on the decoding performance is visualized in figure
\ref{fig:admm:epsilon}.
All parameters except $\epsilon_\text{pri}$ and $\epsilon_\text{dual}$ are
kept constant, with $K=200$, $\mu=5$, $\rho=1$ and $E_b / N_0 = \SI{4}{dB}$.
A lower tolerance initially leads to a dramatic decrease in the \ac{FER},
with this effect fading as the tolerance is lowered further.
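For reference, a common refinement found in the \ac{ADMM} literature combines an absolute and a relative term in the stopping tolerances instead of the fixed values used here. The sketch below is an illustration of that generic scaled-\ac{ADMM} convention, not the decoder implementation; with the relative term set to zero it reduces to a fixed pair proportional to a single $\epsilon$.

```python
import math

def norm(v):
    """Euclidean norm of a plain Python vector."""
    return math.sqrt(sum(x * x for x in v))

def tolerances(x, z, u, mu, eps_abs=1e-5, eps_rel=0.0):
    """Stopping tolerances for scaled ADMM: an absolute floor scaled by
    sqrt(n) plus an optional relative term scaled by the iterate sizes.
    The dual tolerance is scaled by mu because u is the *scaled* dual."""
    root_n = math.sqrt(len(x))
    eps_pri = root_n * eps_abs + eps_rel * max(norm(x), norm(z))
    eps_dual = root_n * eps_abs + eps_rel * mu * norm(u)
    return eps_pri, eps_dual
```

The iteration then stops once the primal residual falls below \texttt{eps\_pri} and the dual residual below \texttt{eps\_dual}.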
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\epsilon$}, ylabel={\acs{FER}},
ymode=log,
xmode=log,
x dir=reverse,
width=0.6\textwidth,
height=0.45\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=epsilon, y=FER,
discard if not={SNR}{3.0},]
{res/admm/fer_epsilon_20433484.csv};
\end{axis}
\end{tikzpicture}
\caption{Effect of the value of the parameters $\epsilon_\text{pri}$ and
$\epsilon_\text{dual}$ on the \acs{FER}. (3,6) regular \ac{LDPC} code with
$n=204, k=102$ \cite[\text{204.33.484}]{mackay_enc}}
\label{fig:admm:epsilon}
\end{figure}%
In conclusion, the parameters $\mu$ and $\rho$ should be chosen comparatively
small and large, respectively, to reduce the average runtime of the decoding
process, while keeping them within a certain range so as not to compromise the
decoding performance.
The maximum number of iterations $K$ can be chosen independently of the
\ac{SNR}.
Finally, relatively small values should be given to the parameters
$\epsilon_{\text{pri}}$ and $\epsilon_{\text{dual}}$ to achieve the lowest
possible error rate.
\subsection{Decoding Performance}
In figure \ref{fig:admm:results}, the simulation results for the ``Margulis''
\ac{LDPC} code ($n=2640$, $k=1320$) presented by Barman et al. in
\cite{original_admm} are compared to the results from the simulations
conducted in the context of this thesis.
The parameters chosen were $\mu=3.3$, $\rho=1.9$, $K=1000$,
$\epsilon_\text{pri}=10^{-5}$ and $\epsilon_\text{dual}=10^{-5}$,
the same as in \cite{original_admm}.
The two \ac{FER} curves are practically identical.
Also shown is the curve resulting from \ac{BP} decoding, performing
1000 iterations.
The two algorithms perform relatively similarly, coming within $\SI{0.5}{dB}$
of one another.
%
In figure \ref{fig:admm:ber_fer}, the \ac{BER} and \ac{FER} for \ac{LP} decoding
using \ac{ADMM} and for \ac{BP} are shown for a (3, 6) regular \ac{LDPC} code with
$n=204$.
To ensure comparability, the number of iterations was set to $K=200$ in both
cases.
The values of the other parameters were chosen as $\mu = 5$, $\rho = 1$,
$\epsilon_\text{pri} = 10^{-5}$ and $\epsilon_\text{dual} = 10^{-5}$.
Comparing figures \ref{fig:admm:results} and \ref{fig:admm:ber_fer} it is
apparent that the difference in decoding performance depends on the code being
considered.
More simulation results are presented in figure \ref{fig:comp:prox_admm_dec}
in section \ref{sec:comp:res}.
\begin{figure}[h]
\centering
\begin{subfigure}[c]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
                xlabel={$E_b / N_0 \left( \text{dB} \right) $}, ylabel={\acs{BER}},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
ymax=1.5, ymin=3e-7,
]
\addplot[Turquoise, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=BER,
discard if not={mu}{5.0},
discard if gt={SNR}{4.5}]
{res/admm/ber_2d_20433484.csv};
\addplot[RoyalPurple, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=BER,
discard if gt={SNR}{4.5}]
                    {res/generic/bp_20433484.csv};
\end{axis}
\end{tikzpicture}
\end{subfigure}%
\hfill%
\begin{subfigure}[c]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
                xlabel={$E_b / N_0 \left( \text{dB} \right) $}, ylabel={\acs{FER}},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
ymax=1.5, ymin=3e-7,
]
\addplot[Turquoise, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,
discard if not={mu}{5.0},
discard if gt={SNR}{4.5}]
{res/admm/ber_2d_20433484.csv};
\addplot[RoyalPurple, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,
discard if gt={SNR}{4.5}]
                    {res/generic/bp_20433484.csv};
\end{axis}
\end{tikzpicture}
\end{subfigure}%
\begin{subfigure}[t]{\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[hide axis,
xmin=10, xmax=50,
ymin=0, ymax=0.4,
legend columns=3,
legend style={draw=white!15!black,legend cell align=left}]
\addlegendimage{Turquoise, line width=1pt, mark=*}
\addlegendentry{\acs{LP} decoding using \acs{ADMM}}
\addlegendimage{RoyalPurple, line width=1pt, mark=*}
\addlegendentry{BP (200 iterations)}
\end{axis}
\end{tikzpicture}
\end{subfigure}
\caption{Comparison of the decoding performance of \acs{LP} decoding using
\acs{ADMM} and \acs{BP}. (3,6) regular \ac{LDPC} code with $n = 204$, $k = 102$
\cite[\text{204.33.484}]{mackay_enc}}
\label{fig:admm:ber_fer}
\end{figure}%
In summary, the decoding performance of \ac{LP} decoding using \ac{ADMM} comes
close to that of \ac{BP}, with the difference remaining within approximately
$\SI{0.5}{dB}$, depending on the code in question.
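\ac{FER} and \ac{BER} points such as those shown above are typically estimated by Monte Carlo simulation: transmit frames over a \ac{BPSK}/\ac{AWGN} channel, decode, and count errors until enough frame errors have accumulated. The following sketch is hypothetical; \texttt{simulate\_fer}, the hard-decision placeholder decoder and all parameter values are illustrative stand-ins for the actual simulation framework, not part of it.

```python
import math
import random

def simulate_fer(decode, n, rate, ebn0_db, max_frames=2000,
                 max_frame_errors=50, seed=1):
    """Estimate FER and BER for BPSK over AWGN, transmitting the all-zero
    codeword (sufficient for linear codes with symmetric decoders).
    `decode` maps the received real vector to hard bit decisions."""
    rng = random.Random(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * rate * ebn0))  # noise std for unit symbol energy
    frames = frame_errors = bit_errors = 0
    while frames < max_frames and frame_errors < max_frame_errors:
        y = [1.0 + rng.gauss(0.0, sigma) for _ in range(n)]  # bit 0 -> +1
        x_hat = decode(y)
        errs = sum(x_hat)            # any decided 1 is a bit error
        bit_errors += errs
        frame_errors += (errs > 0)
        frames += 1
    return frame_errors / frames, bit_errors / (frames * n)

# placeholder decoder: coordinate-wise hard decision, no code structure
hard = lambda y: [1 if yi < 0.0 else 0 for yi in y]
fer, ber = simulate_fer(hard, n=204, rate=0.5, ebn0_db=4.0)
```

Because the placeholder ignores the code structure entirely, its error rates are far worse than those of the decoders compared above; the harness itself is what carries over.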
\subsection{Computational Performance}
\label{subsec:admm:comp_perf}
In terms of time complexity, the three steps of the decoding algorithm
in equations (\ref{eq:admm:c_update}) - (\ref{eq:admm:u_update}) have to be
considered.
The $\tilde{\boldsymbol{c}}$- and $\boldsymbol{u}_j$-update steps are
$\mathcal{O}\left( n \right)$ \cite[Sec. III. C.]{original_admm}.
The complexity of the $\boldsymbol{z}_j$-update step depends on the projection
algorithm employed.
Since the implementation developed for this work uses the projection algorithm
presented in \cite{original_admm}, the $\boldsymbol{z}_j$-update step also has
linear time complexity.
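The structure underlying this projection can be sketched as follows. The code below is a simplified, approximate variant (cube clipping plus at most one facet projection) that illustrates the odd-set facet description of the parity polytope; it is explicitly not the exact projection algorithm of \cite{original_admm}, and the function name is hypothetical.

```python
def project_parity_polytope_approx(z):
    """Simplified (approximate) projection onto the parity polytope
    PP_d = conv{ b in {0,1}^d : sum(b) even }: clip to the unit cube,
    then project once onto the most violated odd-set facet
        sum_{i in S} v_i - sum_{i not in S} v_i <= |S| - 1,  |S| odd.
    A sketch of the facet structure, not the exact algorithm."""
    d = len(z)
    v = [min(1.0, max(0.0, zi)) for zi in z]
    # candidate for the most violated set: coordinates above 1/2 ...
    S = {i for i in range(d) if v[i] > 0.5}
    if len(S) % 2 == 0:
        # ... adjusted to odd cardinality by toggling the coordinate
        # closest to the 1/2 threshold
        j = min(range(d), key=lambda i: abs(v[i] - 0.5))
        S.symmetric_difference_update({j})
    f = [1.0 if i in S else -1.0 for i in range(d)]
    viol = sum(f[i] * v[i] for i in range(d)) - (len(S) - 1)
    if viol > 0.0:  # outside PP_d: project onto the facet hyperplane
        v = [min(1.0, max(0.0, v[i] - viol * f[i] / d)) for i in range(d)]
    return v
```

Even-weight vertices pass through unchanged, while the all-ones vector of odd length is pulled back onto the nearest facet; both checks are linear in the check degree $d$, matching the overall $\mathcal{O}\left( n \right)$ cost per iteration.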
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[grid=both,
xlabel={$n$}, ylabel={Time per frame (s)},
width=0.6\textwidth,
height=0.45\textwidth,
legend style={at={(0.5,-0.42)},anchor=south},
legend cell align={left},]
\addplot[NavyBlue, only marks, mark=triangle*]
table [col sep=comma, x=n, y=spf]
{res/admm/fps_vs_n.csv};
\end{axis}
\end{tikzpicture}
    \caption{Timing requirements of the implementation of \ac{LP} decoding using \ac{ADMM}}
\label{fig:admm:time}
\end{figure}%
Simulation results from a range of different codes can be used to verify this
analysis.
Figure \ref{fig:admm:time} shows the average time needed to decode one
frame as a function of its length.
The codes used for this consideration are the same as in section
\ref{subsec:prox:comp_perf}.
The results are necessarily somewhat skewed, because these codes vary not only
in their length, but also in their construction scheme and rate.
Additionally, different optimization opportunities arise depending on the
length of a code, since for smaller codes dynamic memory allocation can be
omitted entirely.
This may explain why the datapoint at $n=504$ is higher than would be expected
with linear behavior.
Nonetheless, the simulation results roughly match the expected behavior
following from the theoretical considerations.
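The linear trend suggested by the timing figure can be quantified with an ordinary least-squares line fit. The sketch below is illustrative; \texttt{fit\_line} is a generic helper and the datapoints are hypothetical placeholders for the measured times per frame, not the actual measurements.

```python
def fit_line(ns, ts):
    """Ordinary least squares fit t ~ a*n + b, returning (a, b)."""
    m = len(ns)
    nbar = sum(ns) / m
    tbar = sum(ts) / m
    a = sum((n - nbar) * (t - tbar) for n, t in zip(ns, ts)) / \
        sum((n - nbar) ** 2 for n in ns)
    return a, tbar - a * nbar

# hypothetical (n, seconds-per-frame) datapoints standing in for figure data
ns = [31, 96, 204, 204, 408, 504]
ts = [0.004, 0.011, 0.023, 0.024, 0.045, 0.061]
a, b = fit_line(ns, ts)
```

A small fitted intercept together with a good fit of the slope supports the expected $\mathcal{O}\left( n \right)$ scaling per frame.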
@@ -300,7 +300,7 @@ the gradient can be written as%
\begin{align*}
\nabla h\left( \tilde{\boldsymbol{x}} \right) =
4\left( \tilde{\boldsymbol{x}}^{\circ 3} - \tilde{\boldsymbol{x}} \right)
+ 2\tilde{\boldsymbol{x}}^{\circ \left( -1 \right) } \circ \boldsymbol{H}^\text{T}
+ 2\tilde{\boldsymbol{x}}^{\circ -1} \circ \boldsymbol{H}^\text{T}
\boldsymbol{v}
,\end{align*}
%
@@ -350,7 +350,7 @@ $\gamma$ are shown, as well as the curve resulting from decoding
using a \ac{BP} decoder, as a reference.
The results from Wadayama et al. are shown with solid lines,
while the newly generated ones are shown with dashed lines.
%
\begin{figure}[h]
\centering
@@ -359,7 +359,6 @@ while the newly generated ones are shown with dashed lines.
xlabel={$E_b / N_0$ (dB)}, ylabel={BER},
ymode=log,
legend style={at={(0.5,-0.7)},anchor=south},
legend cell align={left},
width=0.6\textwidth,
height=0.45\textwidth,
ymax=1.2, ymin=0.8e-4,
@@ -398,19 +397,21 @@ while the newly generated ones are shown with dashed lines.
\addlegendentry{$\gamma = 0.05$ (Own results)}
\addplot [RoyalPurple, mark=*, line width=1pt]
table [x=SNR, y=BER, col sep=comma,
discard if gt={SNR}{3.5}]
{res/generic/bp_20433484.csv};
\addlegendentry{BP (200 iterations)}
table [x=SNR, y=BP, col sep=comma] {res/proximal/ber_paper.csv};
\addlegendentry{BP (Wadayama et al.)}
\end{axis}
\end{tikzpicture}
\caption{Comparison of datapoints from Wadayama et al. with own simulation results.
(3, 6) regular \ac{LDPC} code with $n=204, k=102$ \cite[\text{204.33.484}]{mackay_enc}}
\caption{Comparison of datapoints from Wadayama et al. with own simulation results%
\protect\footnotemark{}}
\label{fig:prox:results}
\end{figure}
%
It is noticeable that for a moderately chosen value of $\gamma$
\footnotetext{(3,6) regular \ac{LDPC} code with $n = 204$, $k = 102$
\cite[\text{204.33.484}]{mackay_enc}; $\omega = 0.05, K=200, \eta=1.5$
}%
%
\noindent It is noticeable that for a moderately chosen value of $\gamma$
($\gamma = 0.05$) the decoding performance is better than for low
($\gamma = 0.01$) or high ($\gamma = 0.15$) values.
The question arises whether there is some optimal value maximizing the decoding
@@ -468,11 +469,14 @@ rather a certain interval in which it stays largely unchanged.
\end{tikzpicture}
\caption{Visualization of the relationship between the decoding performance
and the parameter $\gamma$. (3,6) regular \ac{LDPC} code with
$n=204, k=102$ \cite[\text{204.33.484}]{mackay_enc}}
\protect\footnotemark{} and the parameter $\gamma$}
\label{fig:prox:results_3d}
\end{figure}%
%
\footnotetext{(3,6) regular \ac{LDPC} code with n = 204, k = 102
\cite[\text{204.33.484}]{mackay_enc}; $\omega = 0.05, K=200, \eta=1.5$
}%
%
This indicates that while the choice of the parameter $\gamma$
significantly affects the decoding performance, there is not much benefit
attainable in undertaking an extensive search for an exact optimum.
@@ -534,11 +538,15 @@ depicted in figure \ref{fig:prox:gamma_omega_multiple}.
\end{axis}
\end{tikzpicture}
\caption{The \ac{BER} as a function of the two step sizes. (3,6) regular
\ac{LDPC} code with $n=204, k=102$ \cite[\text{204.33.484}]{mackay_enc}}
\caption{The \ac{BER} as a function of the two step sizes\protect\footnotemark{}}
\label{fig:prox:gamma_omega}
\end{figure}%
%
\footnotetext{(3,6) regular \ac{LDPC} code with n = 204, k = 102
\cite[\text{204.33.484}]{mackay_enc}; $E_b / N_0=\SI{4}{dB}, K=100, \eta=1.5$
}%
%
To better understand how to determine the optimal value for the parameter $K$,
the average error is inspected.
@@ -594,10 +602,13 @@ stabilizes.
\end{axis}
\end{tikzpicture}
\caption{Average error for $\SI{500000}{}$ decodings. (3,6) regular \ac{LDPC} code
with $n=204, k=102$ \cite[\text{204.33.484}]{mackay_enc}}
\caption{Average error for $\SI{500000}{}$ decodings\protect\footnotemark{}}
\end{figure}%
%
\footnotetext{(3,6) regular \ac{LDPC} code with n = 204, k = 102
\cite[\text{204.33.484}]{mackay_enc}; $\gamma=0.05, \omega = 0.05, K=200, \eta=1.5$
}%
%
Changing the parameter $\eta$ does not appear to have a significant effect on
the decoding performance when keeping the value within a reasonable window
@@ -691,11 +702,14 @@ means to bring about numerical stability.
\end{axis}
\end{tikzpicture}
\caption{Comparison of \ac{FER}, \ac{BER} and decoding failure rate. (3,6) regular
\ac{LDPC} code with $n=204, k=102$ \cite[\text{204.33.484}]{mackay_enc}}
\caption{Comparison of \ac{FER}, \ac{BER} and decoding failure rate\protect\footnotemark{}}
\label{fig:prox:ber_fer_dfr}
\end{figure}%
%
\footnotetext{(3,6) regular \ac{LDPC} code with n = 204, k = 102
\cite[\text{204.33.484}]{mackay_enc}; $\omega = 0.05, K=100, \eta=1.5$
}%
%
Until now, only the \ac{BER} has been considered to gauge the decoding
performance.
@@ -741,7 +755,7 @@ iteration ($\boldsymbol{r}$ and $\boldsymbol{s}$ are counted as different
estimates and their values are interwoven to obtain the shown result), as well
as the gradients of the negative log-likelihood and the code-constraint
polynomial, which influence the next estimate.
%
\begin{figure}[h]
\begin{subfigure}[t]{0.48\textwidth}
\centering
@@ -947,11 +961,16 @@ polynomial, which influence the next estimate.
\end{tikzpicture}
\end{subfigure}
\caption{Visualization of a single decoding operation. BCH$\left( 7, 4 \right)$ code}
\caption{Visualization of a single decoding operation\protect\footnotemark{}
for a code with $n=7$}
\label{fig:prox:convergence}
\end{figure}%
%
\footnotetext{BCH$\left( 7,4 \right) $ code; $\gamma = 0.05, \omega = 0.05, K=200,
\eta = 1.5, E_b / N_0 = \SI{5}{dB}$
}%
%
It is evident that in all cases, past a certain number of
\noindent It is evident that in all cases, past a certain number of
iterations, the estimate starts to oscillate around a particular value.
Jointly, the two gradients stop further approaching the value
zero.
@@ -1137,11 +1156,16 @@ an invalid codeword.
\end{axis}
\end{tikzpicture}
\caption{Visualization of a single decoding operation. (3,6) regular \ac{LDPC} code
with $n=204, k=102$ \cite[\text{204.33.484}]{mackay_enc}}
\caption{Visualization of a single decoding operation\protect\footnotemark{}
for a code with $n=204$}
\label{fig:prox:convergence_large_n}
\end{figure}%
%
\footnotetext{(3,6) regular \ac{LDPC} code with n = 204, k = 102
\cite[\text{204.33.484}]{mackay_enc}; $\gamma=0.05, \omega = 0.05, K=200, \eta=1.5,
E_b / N_0 = \SI{5}{dB}$
}%
%
\subsection{Computational Performance}
@@ -1167,10 +1191,6 @@ practical since it is the same as that of $\ac{BP}$.
This theoretical analysis is also corroborated by the practical results shown
in figure \ref{fig:prox:time_comp}.
The codes considered are the BCH(31, 11) and BCH(31, 26) codes, a number of (3, 6)
regular \ac{LDPC} codes (\cite[\text{96.3.965, 204.33.484, 408.33.844}]{mackay_enc}),
a (5,10) regular \ac{LDPC} code (\cite[\text{204.55.187}]{mackay_enc}) and a
progressive edge growth construction code (\cite[\text{PEGReg252x504}]{mackay_enc}).
Some deviations from linear behavior are unavoidable because not all codes
considered are actually \ac{LDPC} codes, or \ac{LDPC} codes constructed
according to the same scheme.
@@ -1189,16 +1209,23 @@ $\SI{2.80}{GHz}$ and utilizing all cores.
height=0.45\textwidth,
legend cell align={left},]
\addplot[RedOrange, only marks, mark=square*]
\addplot[RedOrange, only marks, mark=*]
table [col sep=comma, x=n, y=spf]
{res/proximal/fps_vs_n.csv};
\end{axis}
\end{tikzpicture}
\caption{Timing requirements of the proximal decoding imlementation}
\caption{Timing requirements of the proximal decoding imlementation%
\protect\footnotemark{}}
\label{fig:prox:time_comp}
\end{figure}%
%
\footnotetext{The datapoints depicted were calculated by evaluating the
metadata of \ac{FER} simulation results from the following codes:
BCH (31, 11); BCH (31, 26); \cite[\text{96.3.965; 204.33.484; 204.55.187;
408.33.844; PEGReg252x504}]{mackay_enc}
}%
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@@ -1230,7 +1257,7 @@ section \ref{subsec:prox:conv_properties} and shown in figure
\ref{fig:prox:convergence_large_n}) and the probability of having a bit
error are strongly correlated, a relationship depicted in figure
\ref{fig:prox:correlation}.
%
\begin{figure}[h]
\centering
@@ -1253,17 +1280,22 @@ error are strongly correlated, a relationship depicted in figure
\end{tikzpicture}
\caption{Correlation between the occurrence of a bit error and the
amplitude of oscillation of the gradient of the code-constraint polynomial.
(3,6) regular \ac{LDPC} code with $n=204, k=102$ \cite[\text{204.33.484}]{mackay_enc}}
amplitude of oscillation of the gradient of the code-constraint polynomial%
\protect\footnotemark{}}
\label{fig:prox:correlation}
\end{figure}%
%
The y-axis depicts whether there is a bit error and the x-axis the
\footnotetext{(3,6) regular \ac{LDPC} code with n = 204, k = 102
\cite[\text{204.33.484}]{mackay_enc}; $\gamma = 0.05, \omega = 0.05, K=100, \eta=1.5$
}%
%
\noindent The y-axis depicts whether there is a bit error and the x-axis the
variance in $\nabla h\left( \tilde{\boldsymbol{x}} \right)$ after the
100th iteration.
While this is not exactly the magnitude of the oscillation, it is
proportional and easier to compute.
The datapoints are taken from a single decoding operation.
The datapoints are taken from a single decoding operation
Using this observation as a rule to determine the $N\in\mathbb{N}$ most
probably wrong bits, all variations of the estimate with those bits modified
@@ -1447,9 +1479,7 @@ In some cases, a gain of up to $\SI{1}{dB}$ or higher can be achieved.
\end{axis}
\end{tikzpicture}
\caption{Comparison of the decoding performance between proximal decoding and the
improved implementation. (3,6) regular \ac{LDPC} code with $n=204, k=102$
\cite[\text{204.33.484}]{mackay_enc}}
\caption{Simulation results for $\gamma = 0.05, \omega = 0.05, K=200, N=12$}
\label{fig:prox:improved_results}
\end{figure}
@@ -1479,12 +1509,12 @@ theoretical considerations.
legend style={at={(0.05,0.77)},anchor=south west},
legend cell align={left},]
\addplot[RedOrange, only marks, mark=square*]
\addplot[RedOrange, only marks, mark=*]
table [col sep=comma, x=n, y=spf]
{res/proximal/fps_vs_n.csv};
\addlegendentry{proximal}
\addplot[Gray, only marks, mark=*]
\addplot[RoyalPurple, only marks, mark=triangle*]
table [col sep=comma, x=n, y=spf]
{res/hybrid/fps_vs_n.csv};
\addlegendentry{improved ($N = 12$)}
@@ -1492,10 +1522,16 @@ theoretical considerations.
\end{tikzpicture}
\caption{Comparison of the timing requirements of the implementations of proximal
decoding and the improved algorithm}
decoding and the improved algorithm\protect\footnotemark{}}
\label{fig:prox:time_complexity_comp}
\end{figure}%
%
\footnotetext{The datapoints depicted were calculated by evaluating the
metadata of \ac{FER} simulation results from the following codes:
BCH (31, 11); BCH (31, 26); \cite[\text{96.3.965; 204.33.484; 204.55.187;
408.33.844; PEGReg252x504}]{mackay_enc}
}%
%
In conclusion, the decoding performance of proximal decoding can be improved
by appending an ML-in-the-List step when the algorithm does not produce a
@@ -1,9 +0,0 @@
SNR,BER,FER,DFR,num_iterations
1.0,0.06409994553376906,0.7013888888888888,0.7013888888888888,144.0
1.5,0.03594771241830065,0.45495495495495497,0.47297297297297297,222.0
2.0,0.014664163537755528,0.2148936170212766,0.2297872340425532,470.0
2.5,0.004731522238525039,0.07634164777021919,0.08163265306122448,1323.0
3.0,0.000911436423803915,0.016994783779236074,0.01749957933703517,5943.0
3.5,0.00011736135227863285,0.002537369677176234,0.002587614621278734,39805.0
4.0,1.0686274509803922e-05,0.00024,0.00023,100000.0
4.5,4.411764705882353e-07,1e-05,1e-05,100000.0
@@ -1,10 +0,0 @@
{
"duration": 46.20855645602569,
"name": "ber_20433484",
"platform": "Linux-6.2.10-arch1-1-x86_64-with-glibc2.37",
"K": 200,
"epsilon_pri": 1e-05,
"epsilon_dual": 1e-05,
"max_frame_errors": 100,
"end_time": "2023-04-22 19:15:37.252176"
}
@@ -1,31 +0,0 @@
SNR,epsilon,BER,FER,DFR,num_iterations,k_avg
1.0,1e-06,0.294328431372549,0.722,0.0,1000.0,0.278
1.0,1e-05,0.294328431372549,0.722,0.0,1000.0,0.278
1.0,0.0001,0.3059313725490196,0.745,0.0,1000.0,0.254
1.0,0.001,0.3059558823529412,0.748,0.0,1000.0,0.254
1.0,0.01,0.30637254901960786,0.786,0.0,1000.0,0.254
1.0,0.1,0.31590686274509805,0.9,0.0,1000.0,0.925
2.0,1e-06,0.11647058823529412,0.298,0.0,1000.0,0.702
2.0,1e-05,0.11647058823529412,0.298,0.0,1000.0,0.702
2.0,0.0001,0.11647058823529412,0.298,0.0,1000.0,0.702
2.0,0.001,0.11652450980392157,0.306,0.0,1000.0,0.702
2.0,0.01,0.09901470588235294,0.297,0.0,1000.0,0.75
2.0,0.1,0.10512745098039215,0.543,0.0,1000.0,0.954
3.0,1e-06,0.004563725490196078,0.012,0.0,1000.0,0.988
3.0,1e-05,0.004563725490196078,0.012,0.0,1000.0,0.988
3.0,0.0001,0.005730392156862745,0.015,0.0,1000.0,0.985
3.0,0.001,0.005730392156862745,0.015,0.0,1000.0,0.985
3.0,0.01,0.007200980392156863,0.037,0.0,1000.0,0.981
3.0,0.1,0.009151960784313726,0.208,0.0,1000.0,0.995
4.0,1e-06,0.0,0.0,0.0,1000.0,1.0
4.0,1e-05,0.0,0.0,0.0,1000.0,1.0
4.0,0.0001,0.0,0.0,0.0,1000.0,1.0
4.0,0.001,0.0002598039215686275,0.001,0.0,1000.0,0.999
4.0,0.01,4.901960784313725e-05,0.01,0.0,1000.0,1.0
4.0,0.1,0.0006862745098039216,0.103,0.0,1000.0,1.0
5.0,1e-06,0.0,0.0,0.0,1000.0,1.0
5.0,1e-05,0.0,0.0,0.0,1000.0,1.0
5.0,0.0001,0.0,0.0,0.0,1000.0,1.0
5.0,0.001,0.0,0.0,0.0,1000.0,1.0
5.0,0.01,9.803921568627451e-06,0.002,0.0,1000.0,1.0
5.0,0.1,0.0005245098039215687,0.097,0.0,1000.0,1.0
@@ -1,10 +0,0 @@
{
"duration": 15.407989402010571,
"name": "rho_kavg_20433484",
"platform": "Linux-6.2.10-arch1-1-x86_64-with-glibc2.37",
"K": 200,
"rho": 1,
"mu": 5,
"max_frame_errors": 100000,
"end_time": "2023-04-23 06:39:23.561294"
}
@@ -1,9 +0,0 @@
SNR,BER,FER,num_iterations
1,0.0315398614535635,0.598802395209581,334
1.5,0.0172476397966594,0.352733686067019,567
2,0.00668670591018522,0.14194464158978,1409
2.5,0.00168951075575861,0.0388349514563107,5150
3,0.000328745799468201,0.00800288103717338,24991
3.5,4.21796065456326e-05,0.00109195935727272,183157
4,4.95098039215686e-06,0.000134,500000
4.5,2.94117647058824e-07,1.2e-05,500000
@@ -1,10 +0,0 @@
SNR,BER,FER,num_iterations
1,0.0592497868712702,0.966183574879227,207
1.5,0.0465686274509804,0.854700854700855,234
2,0.0326898561282098,0.619195046439629,323
2.5,0.0171613765211425,0.368324125230203,543
3,0.00553787541776455,0.116346713205352,1719
3.5,0.00134027952469441,0.0275900124155056,7249
4,0.000166480738027721,0.0034201480924124,58477
4.5,1.19607843137255e-05,0.000252,500000
5,9.50980392156863e-07,2e-05,500000
@@ -1,6 +0,0 @@
SNR,BER,FER,num_iterations
1,0.0303507766743061,0.649350649350649,308
1.5,0.0122803195352215,0.296296296296296,675
2,0.00284899516991547,0.0733137829912024,2728
2.5,0.000348582879119279,0.00916968502131952,21811
3,2.52493009763634e-05,0.000760274154860243,263063
@@ -1,9 +0,0 @@
SNR,BER,FER,num_iterations
1,0.0352267331433998,0.56980056980057,351
1.5,0.0214331413947537,0.383877159309021,521
2,0.0130737396538751,0.225733634311512,886
2.5,0.00488312520012808,0.0960614793467819,2082
3,0.00203045475576707,0.0396589331746976,5043
3.5,0.000513233833401836,0.010275380189067,19464
4,0.000107190497363908,0.0025408117893667,78715
4.5,3.2074433522095e-05,0.00092605883251763,215969
@@ -1,11 +0,0 @@
SNR,BER,FER,num_iterations
1,0.0584002878042931,0.743494423791822,269
1.5,0.0458084740620892,0.626959247648903,319
2,0.0318872821653689,0.459770114942529,435
2.5,0.0248431459534825,0.392927308447937,509
3,0.0158453558113146,0.251256281407035,796
3.5,0.0115615186982586,0.176991150442478,1130
4,0.00708558642550811,0.115606936416185,1730
4.5,0.00389714705036542,0.0614817091915155,3253
5,0.00221548053104734,0.0345125107851596,5795
5.5,0.00106888328072207,0.0172131852999398,11619
@@ -1,6 +0,0 @@
SNR,BER,FER,num_iterations
1,0.0294665331073098,0.647249190938511,309
1.5,0.0104905437352246,0.265957446808511,752
2,0.0021089358705197,0.05616399887672,3561
2.5,0.000172515567971134,0.00498840196543037,40093
3,6.3531746031746e-06,0.0002,500000
@@ -9,9 +9,9 @@
\thesisTitle{Application of Optimization Algorithms for Channel Decoding}
\thesisType{Bachelor's Thesis}
\thesisAuthor{Andreas Tsouchlos}
\thesisAdvisor{Prof. Dr.-Ing. Laurent Schmalen}
%\thesisAdvisor{Prof. Dr.-Ing. Laurent Schmalen}
%\thesisHeadOfInstitute{Prof. Dr.-Ing. Laurent Schmalen}
\thesisSupervisor{Dr.-Ing. Holger Jäkel}
\thesisSupervisor{Name of assistant}
\thesisStartDate{24.10.2022}
\thesisEndDate{24.04.2023}
\thesisSignatureDate{Signature date} % TODO: Signature date
@@ -218,7 +218,7 @@
\include{chapters/proximal_decoding}
\include{chapters/lp_dec_using_admm}
\include{chapters/comparison}
% \include{chapters/discussion}
\include{chapters/discussion}
\include{chapters/conclusion}
\include{chapters/appendix}