\chapter{Discussion}%
\label{chapter:discussion}
A modification of the implementation to reduce its memory requirements, even
at some cost in running time, would allow longer codes to be examined.
This in turn would make it possible to study the behavior of the decoding
algorithms covered here in error-rate regions where traditional approaches
exhibit an error floor.
The decoding algorithms could then be assessed for use in very
high reliability applications, where traditional methods such as \ac{BP} or
the min-sum algorithm fall short.
As mentioned in section \ref{subsec:prox:conv_properties}, alternating
between gradient steps on the two parts of the objective function in the
proximal decoding algorithm leads to an oscillation after a number of
iterations.
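The following minimal Python sketch illustrates one iteration of such an
alternating scheme; the step sizes \texttt{gamma} and \texttt{omega}, the
abstract penalty gradient \texttt{grad\_penalty}, and the clipping to the
bipolar box are illustrative assumptions, not the exact implementation
examined in this thesis.
\begin{verbatim}
import numpy as np

def proximal_iteration(x, y, grad_penalty, gamma=0.05, omega=0.05):
    # Gradient step on the negative log-likelihood (pulls x towards
    # the received vector y; valid for a bipolar AWGN model)
    x = x + gamma * (y - x)
    # Gradient step on the code-constraint penalty (pulls x towards
    # the code); alternating the two steps can end up oscillating
    x = x - omega * grad_penalty(x)
    # Keep the estimate inside the bipolar box [-1, 1]
    return np.clip(x, -1.0, 1.0)
\end{verbatim}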
One approach to alleviate this problem might be to use \ac{ADMM} instead of
the proximal gradient method to solve the optimization problem.
This is because, owing to the introduction of the dual variable, the two
parts of the objective function would no longer be minimized with respect to
exactly the same variable.
Additionally, ``\ac{ADMM} will converge even when the x- and z-minimization
steps are not carried out exactly [\ldots]''
\cite[Sec. 3.4.4]{distr_opt_book}, which is advantageous, as the
constraints are never truly satisfied, not even after the minimization step
dealing with the constraint part of the objective function.
Despite this, an initial examination by Yanxia Lu in
\cite[Sec. 4.2.4]{yanxia_lu_thesis} shows only limited success.
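To make the structural difference concrete, a generic scaled-form \ac{ADMM}
iteration is sketched below; \texttt{prox\_f} and \texttt{prox\_g} are
placeholders for the proximal operators of the likelihood part and the
constraint part, not the operators of a concrete decoder.
\begin{verbatim}
import numpy as np

def admm(prox_f, prox_g, n, rho=1.0, iters=200):
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        # x-update: this minimization touches only x
        x = prox_f(z - u, rho)
        # z-update: this minimization touches only z
        z = prox_g(x + u, rho)
        # The scaled dual variable u couples the two steps, so the
        # two parts are not minimized over the same variable
        u = u + x - z
    return x, z
\end{verbatim}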
It is also important to note that while this thesis examined proximal
decoding with respect to its performance in \ac{AWGN} channels,
\cite{proximal_paper} presents it as a method applicable to non-trivial
channel models such as \ac{LDPC}-coded massive \ac{MIMO} channels, perhaps
broadening its usefulness beyond what is shown here.
While the modified proximal decoding algorithm presented in section
\ref{sec:prox:Improved Implementation} shows some promising results, further
investigation is required to determine how different choices of parameters
affect the decoding performance.
Additionally, a more mathematically rigorous foundation for determining the
potentially wrong components of the estimate is desirable.
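One conceivable heuristic, sketched below purely for illustration, is to
flag the components of the estimate whose magnitude is small, since their
sign decisions are the least reliable; the threshold value is an assumption
without the rigorous justification called for above.
\begin{verbatim}
import numpy as np

def suspect_components(x_hat, threshold=0.3):
    # Components close to zero carry the least reliable sign
    # (bit) decisions and are candidates for being wrong
    return np.flatnonzero(np.abs(x_hat) < threshold)
\end{verbatim}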
Another interesting approach might be the combination of proximal and \ac{LP}
decoding.
Performing an initial number of iterations using proximal decoding to obtain
a rough first estimate, and subsequently applying \ac{LP} decoding with only
the violated constraints, may be a way to achieve a shorter overall running
time because of the low-complexity nature of proximal decoding.
This could be useful, for example, to mitigate the slow convergence of
\ac{ADMM} \cite[Sec. 3.2.2]{distr_opt_book}.
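A two-stage scheme along these lines could take the following shape;
\texttt{proximal\_decode} and \texttt{lp\_decode} are hypothetical stand-ins
for the two decoders, and only the syndrome computation is concrete.
\begin{verbatim}
import numpy as np

def violated_checks(H, x_hard):
    # Syndrome over GF(2); nonzero entries mark violated checks
    return np.flatnonzero(H.dot(x_hard) % 2)

def two_stage_decode(y, H, proximal_decode, lp_decode, warm_iters=20):
    # Stage 1: a few cheap proximal iterations for a rough estimate
    x_hat = proximal_decode(y, iters=warm_iters)
    x_hard = (x_hat < 0).astype(int)  # bipolar to binary decision
    # Stage 2: LP decoding restricted to the violated checks
    active = violated_checks(H, x_hard)
    return lp_decode(y, H[active], x_init=x_hat)
\end{verbatim}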
Subsequently introducing additional parity checks might be a way of combining
the best properties of proximal decoding, \ac{LP} decoding using \ac{ADMM} and
\textit{adaptive \ac{LP} decoding} \cite{alp} to obtain a decoder efficiently
approximating \ac{ML} performance.
\todo{It turns out that ADMM is more computationally efficient than proximal
decoding.
Find a way to combine them that still makes sense (maybe exploiting the
fact that the BER is so much better than the FER, in contrast to ADMM)}