From 3661ccb23a0e31bb1f0ca87e0f0302e42f7a9503 Mon Sep 17 00:00:00 2001
From: Andreas Tsouchlos
Date: Sun, 7 Jan 2024 22:48:15 +0100
Subject: [PATCH] Correct Improved Algorithm, round 1

---
 letter.tex | 57 ++++++++++++++++++++++++++++--------------------------
 1 file changed, 30 insertions(+), 27 deletions(-)

diff --git a/letter.tex b/letter.tex
index 5e04c63..cbb88b8 100644
--- a/letter.tex
+++ b/letter.tex
@@ -327,14 +327,14 @@ presented in Algorithm \ref{alg:proximal_decoding}.
 \section{Improved algorithm}
 %%%%%%%%%%%%%%%%%%%%%
-\subsection{Analysis of Convergence Behavior}
+\subsection{Analysis of the Convergence Behavior}
 
-In figure \ref{fig:fer vs ber}, the \textit{frame error rate} (FER),
+In Fig. \ref{fig:fer vs ber}, the \textit{frame error rate} (FER),
 \textit{bit error rate} (BER) and \textit{decoding failure rate} (DFR) of
 proximal decoding are shown for an LDPC code with $n=204$ and $k=102$
 \cite[204.33.484]{mackay}.
-A decoding failure is defined as a decoding operation, the result of which is
-not a valid codeword, i.e., as non-convergence of the algorithm.
+A decoding failure is defined as a decoding operation that does not return a
+valid codeword, i.e., as non-convergence of the algorithm.
 The parameters chosen for this simulation are $\gamma=0.05, \omega=0.05,
 \eta=1.5$ and $K=200$.
 They were determined to offer the best performance in a preliminary examination,
@@ -345,9 +345,9 @@ This means that most frame errors are not due to the algorithm converging to
 the wrong codeword, but due to the algorithm not converging at all.
 
 As proximal decoding is an optimization-based decoding method, one possible
-explanation for this effect might be that during the decoding process convergence
-on the final codeword is often not achieved, although the estimate is moving in
-the right general direction.
+explanation for this effect might be that during the decoding process,
+convergence to the final codeword is often not achieved, although the estimate
+is moving in the right direction.
 This would suggest that most frame errors occur due to only a few incorrectly
 decoded bits.%
 %
@@ -395,23 +395,24 @@ decoded bits.%
 \label{fig:fer vs ber}
 \end{figure}%
 %
+
 An approach for lowering the FER might then be to append an ``ML-in-the-list''
 \cite{ml_in_the_list} step to the decoding process shown in Algorithm
 \ref{alg:proximal_decoding}.
-This step would consist of determining the $N \in \mathbb{N}$ most probably
-wrong bits, finding all variations of the current estimate with those bits
+This step consists of determining the $N \in \mathbb{N}$ bits most likely to be
+erroneous, finding all variations of the current estimate with those bits
 modified, and performing ML decoding on this list.
-This approach crucially relies on identifying the most probably wrong bits.
+This approach crucially relies on identifying the most likely erroneous bits.
 Therefore, the convergence properties of proximal decoding are investigated.
-Considering equations (\ref{eq:s_update}) and (\ref{eq:r_update}), figure
+Considering (\ref{eq:s_update}) and (\ref{eq:r_update}), Fig.
 \ref{fig:grad} shows the two gradients along which the minimization is
 performed for a repetition code with $n=2$.
 It is apparent that a net movement will result as long as the two gradients
 have a common component.
 As soon as this common component is exhausted, they will work in opposing
-directions and an oscillation of the estimate will take place.
-This behavior matches the conjecture that the reason for the high DFR is a
+directions, resulting in an oscillation of the estimate.
+This behavior supports the conjecture that the reason for the high DFR is a
 failure to converge to the correct codeword in the final steps of the
 optimization process.%
 %
@@ -514,11 +515,13 @@ optimization process.%
 \label{fig:grad}
 \end{figure}%
 %
-In figure \ref{fig:prox:convergence_large_n}, only component
-$\left(\tilde{\boldsymbol{x}}\right)_1$ of the estimate is considered during a
-decoding operation for an LDPC code with $n=204$ and $k=102$.
+
+In Fig. \ref{fig:prox:convergence_large_n}, we consider only component
+$\left(\tilde{\boldsymbol{x}}\right)_1$ of the estimate during a
+decoding operation for the same LDPC code as in Fig. \ref{fig:fer vs ber}.
 Two qualities may be observed.
-First, the average values of the two gradients are equal, except for their sign,
+First, the average values of the two gradients are equal in magnitude but
+opposite in sign,
 leading to the aforementioned oscillation.
 Second, the gradient of the code constraint polynomial itself starts to
 oscillate after a certain number of iterations.%
@@ -567,11 +570,11 @@ oscillate after a certain number of iterations.%
 \end{figure}%
 
 %%%%%%%%%%%%%%%%%%%%%
-\subsection{Improvement using ``ML-in-the-list'' step}
+\subsection{Improvement Using ``ML-in-the-List'' Step}
 
-Considering the magnitude of oscillation of the gradient of the code constraint
+Considering the magnitude of the oscillation of the gradient of the code constraint
 polynomial, some interesting behavior may be observed.
-Figure \ref{fig:p_error} shows the probability that a component of the estimate
+Fig. \ref{fig:p_error} shows the probability that a component of the estimate
 is wrong, determined through a Monte Carlo simulation, when the components of
 $\boldsymbol{c}$ are ordered from smallest to largest oscillation of
 $\left(\nabla h\right)_i$.
@@ -601,20 +604,20 @@ the probability that a given component was decoded incorrectly.%
 \end{tikzpicture}
 
 \caption{Probability that a component of the estimated codeword
-    $\hat{\boldsymbol{c}}\in \mathbb{F}_2^n$ is wrong for a (3,6) regular
+    $\hat{\boldsymbol{c}}\in \mathbb{F}_2^n$ is erroneous for a (3,6) regular
     LDPC code with $n=204, k=102$ \cite[\text{204.33.484}]{mackay}.
     The indices $i'$ are ordered such that the amplitude of oscillation of
     $\left(\nabla h\right)_{i'}$ increases with $i'$.
-    Parameters used for simulation: $\gamma = 0.05, \omega = 0.05,
+    Parameters used for the simulation: $\gamma = 0.05, \omega = 0.05,
     \eta = 1.5, E_b/N_0 = \SI{4}{dB}$.
-    Simulated with $\SI{100000000}{}$ iterations.}
+    Simulated with $\SI{100000000}{}$ iterations using the all-zeros codeword.}
 \label{fig:p_error}
 \end{figure}
 
-The complete improved algorithm is depicted in Algorithm \ref{alg:improved}.
+The complete improved algorithm is given in Algorithm \ref{alg:improved}.
 First, the proximal decoding algorithm is applied.
-If a valid codeword has been reached, i.e., if the algorithm has converged, this
-is the solution returned.
+If a valid codeword has been reached, i.e., if the algorithm has converged,
+we return this solution.
 Otherwise, $N \in \mathbb{N}$ components are selected based on the criterion
 presented above.
 Beginning with the recent estimate $\hat{\boldsymbol{c}} \in \mathbb{F}_2^n$,
@@ -661,7 +664,7 @@ generated and an ``ML-in-the-list'' step is performed.
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 \section{Simulation Results \& Discussion}
 
-Figure \ref{fig:results} shows the FER and BER resulting from applying
+Fig. \ref{fig:results} shows the FER and BER resulting from applying
 proximal decoding as presented in \cite{proximal_paper} and the improved
 algorithm presented here when applied to a $\left( 3,6 \right)$-regular LDPC
 code with $n=204$ and $k=102$ \cite[204.33.484]{mackay}.
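
Note (illustration only, outside the diff): the ``ML-in-the-list'' step edited
above could be sketched roughly as follows, assuming BPSK transmission over an
AWGN channel and using the oscillation amplitude of $\left(\nabla h\right)_i$
as the selection criterion. All identifiers are hypothetical; only the overall
procedure (select the $N$ components whose gradient oscillates most, enumerate
all $2^N$ bit-flip variations of the current estimate, keep the valid
codewords, and return the most likely one) follows the text of the patch.

import numpy as np
from itertools import product

def ml_in_the_list(c_hat, y, H, osc_amplitude, N=6):
    # c_hat:         hard-decision estimate from proximal decoding, in {0, 1}
    # y:             received channel values (BPSK over AWGN assumed)
    # H:             binary parity-check matrix as a NumPy integer array
    # osc_amplitude: per-component oscillation amplitude of (grad h)_i
    # N:             number of most probably erroneous components to vary

    # Select the N components whose gradient oscillates the most; these are
    # the components most likely to have been decoded incorrectly.
    idx = np.argsort(osc_amplitude)[-N:]

    best, best_metric = None, -np.inf
    # Enumerate all 2^N variations of the current estimate.
    for flips in product((0, 1), repeat=N):
        cand = c_hat.copy()
        cand[idx] ^= np.array(flips, dtype=cand.dtype)
        # Keep only valid codewords (zero syndrome over GF(2)).
        if np.any(H @ cand % 2):
            continue
        # ML metric for BPSK over AWGN: correlation of the modulated
        # candidate (0 -> +1, 1 -> -1) with the received values.
        metric = np.dot(1.0 - 2.0 * cand, y)
        if metric > best_metric:
            best, best_metric = cand, metric
    return best  # None if no variation is a valid codeword

In the improved algorithm described in the patch, such a step would only be
invoked when proximal decoding terminates without reaching a valid codeword;
if it returns a codeword, that codeword replaces the non-converged estimate.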