Correct Improved Algorithm, round 1

This commit is contained in:
Andreas Tsouchlos 2024-01-07 22:48:15 +01:00
parent ad354f8f02
commit 3661ccb23a


@@ -327,14 +327,14 @@ presented in Algorithm \ref{alg:proximal_decoding}.
\section{Improved algorithm}
%%%%%%%%%%%%%%%%%%%%%
\subsection{Analysis of the Convergence Behavior}
In Fig. \ref{fig:fer vs ber}, the \textit{frame error rate} (FER),
\textit{bit error rate} (BER) and \textit{decoding failure rate} (DFR) of
proximal decoding are shown for an LDPC code with $n=204$ and $k=102$
\cite[204.33.484]{mackay}.
A decoding failure is defined as a decoding operation returning an invalid
codeword, i.e., as non-convergence of the algorithm.
The parameters chosen for this simulation are $\gamma=0.05, \omega=0.05,
\eta=1.5$ and $K=200$.
They were determined to offer the best performance in a preliminary examination,
@@ -345,9 +345,9 @@ This means that most frame errors are not due to the algorithm converging
to the wrong codeword, but due to the algorithm not converging at all.
As proximal decoding is an optimization-based decoding method, one possible
explanation for this effect might be that during the decoding process,
convergence to the final codeword is often not achieved, although the
estimate is moving in the right direction.
This would suggest that most frame errors occur due to only a few incorrectly
decoded bits.%
%
@@ -395,23 +395,24 @@ decoded bits.%
\label{fig:fer vs ber}
\end{figure}%
%
An approach for lowering the FER might then be to append an ``ML-in-the-list''
\cite{ml_in_the_list} step to the decoding process shown in Algorithm
\ref{alg:proximal_decoding}.
This step consists of determining the $N \in \mathbb{N}$ most probable
erroneous bits, finding all variations of the current estimate with those
bits modified, and performing ML decoding on this list.
This approach crucially relies on identifying the most probable erroneous bits.
Therefore, the convergence properties of proximal decoding are investigated.
Considering (\ref{eq:s_update}) and (\ref{eq:r_update}), Fig.
\ref{fig:grad} shows the two gradients along which the minimization is
performed for a repetition code with $n=2$.
It is apparent that a net movement will result as long as the two gradients
have a common component.
As soon as this common component is exhausted, they will work in opposing
directions, resulting in an oscillation of the estimate.
This behavior supports the conjecture that the reason for the high DFR is a
failure to converge to the correct codeword in the final steps of the
optimization process.%
% %
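The two-gradient oscillation can be reproduced in a toy alternating-gradient iteration. This is a sketch only: the quadratic data term and the penalty $h(x) = (x_1 x_2 - 1)^2$ stand in for the actual proximal formulation of (\ref{eq:s_update}) and (\ref{eq:r_update}); only $\gamma$ and $\omega$ follow the values quoted above.

```python
import numpy as np

# Toy model (NOT the paper's exact objective): n = 2 repetition code under
# BPSK, codewords (+1, +1) and (-1, -1); the parity constraint is mimicked
# by the penalty h(x) = (x1*x2 - 1)^2.
gamma, omega = 0.05, 0.05
y = np.array([1.0, 0.2])              # noisy observation of codeword (+1, +1)

def lik_step(x):
    return -omega * (x - y)           # gradient step toward the observation

def con_step(x):
    p = x[0] * x[1] - 1.0             # parity residual
    return -gamma * 2.0 * p * np.array([x[1], x[0]])  # step toward x1*x2 = 1

x = y.copy()
for _ in range(1000):                 # alternate the two gradient steps
    a = lik_step(x); x = x + a
    b = con_step(x); x = x + b

# The sub-steps settle into equal and opposite values: the estimate hops
# between two points instead of converging.
print(np.linalg.norm(a), np.linalg.norm(a + b))
```

Once the common component of the two gradients is exhausted, each step is undone by the next: the net movement vanishes while the individual steps stay finite.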
@@ -514,11 +515,13 @@ optimization process.%
\label{fig:grad}
\end{figure}%
%
In Fig. \ref{fig:prox:convergence_large_n}, we consider only component
$\left(\tilde{\boldsymbol{x}}\right)_1$ of the estimate during a
decoding operation for the LDPC code also used for Fig. \ref{fig:fer vs ber}.
Two qualities may be observed.
First, the average values of the two gradients are equal in magnitude but
opposite in sign, leading to the aforementioned oscillation.
Second, the gradient of the code constraint polynomial itself starts to
oscillate after a certain number of iterations.%
@@ -567,11 +570,11 @@ oscillate after a certain number of iterations.%
\end{figure}%
%%%%%%%%%%%%%%%%%%%%%
\subsection{Improvement Using an ``ML-in-the-List'' Step}
Considering the magnitude of the oscillation of the gradient of the code
constraint polynomial, some interesting behavior may be observed.
Fig. \ref{fig:p_error} shows the probability that a component of the estimate
is wrong, determined through a Monte Carlo simulation, when the components of
$\boldsymbol{c}$ are ordered from smallest to largest oscillation of
$\left(\nabla h\right)_i$.
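One way to operationalize this ordering, sketched here under the assumption that the gradient of the code constraint polynomial is recorded over the last iterations (the function name and history layout are illustrative, not the implementation used in the text):

```python
import numpy as np

def rank_by_oscillation(grad_history, N):
    """Return the N indices whose components of (grad h) oscillate most.

    grad_history: array of shape (K_last, n) holding the gradient of the
    code constraint polynomial over the last K_last iterations.
    """
    # Peak-to-peak amplitude of each component over the recorded window.
    amplitude = grad_history.max(axis=0) - grad_history.min(axis=0)
    # Largest amplitude first: these are the most probable erroneous bits.
    return np.argsort(amplitude)[::-1][:N]

# Example: component 2 oscillates most, then component 1.
g = np.array([[0.0, 1.0, -3.0],
              [0.1, -1.0, 3.0]])
print(rank_by_oscillation(g, 2))      # -> [2 1]
```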
@@ -601,20 +604,20 @@ the probability that a given component was decoded incorrectly.%
\end{tikzpicture}
\caption{Probability that a component of the estimated codeword
$\hat{\boldsymbol{c}}\in \mathbb{F}_2^n$ is erroneous for a (3,6) regular
LDPC code with $n=204, k=102$ \cite[\text{204.33.484}]{mackay}.
The indices $i'$ are ordered such that the amplitude of oscillation of
$\left(\nabla h\right)_{i'}$ increases with $i'$.
Parameters used for the simulation: $\gamma = 0.05, \omega = 0.05,
\eta = 1.5, E_b/N_0 = \SI{4}{dB}$.
Simulated with \num{100000000} iterations using the all-zeros codeword.}
\label{fig:p_error}
\end{figure}
The complete improved algorithm is given in Algorithm \ref{alg:improved}.
First, the proximal decoding algorithm is applied.
If a valid codeword has been reached, i.e., if the algorithm has converged,
we return this solution.
Otherwise, $N \in \mathbb{N}$ components are selected based on the criterion
presented above.
Beginning with the most recent estimate $\hat{\boldsymbol{c}} \in \mathbb{F}_2^n$,
@@ -661,7 +664,7 @@ generated and an ``ML-in-the-list'' step is performed.
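The list generation and ML selection described above can be sketched as follows, assuming BPSK over an AWGN channel and a parity-check matrix $H$ (all names are hypothetical; the squared Euclidean distance is the ML metric under these assumptions):

```python
import itertools
import numpy as np

def ml_in_the_list(c_hat, y, H, suspect_idx):
    """Try all modifications of the suspect bits and pick the ML codeword.

    c_hat:       current hard estimate in {0,1}^n (possibly invalid)
    y:           received channel values (BPSK mapping c -> 1 - 2c, AWGN)
    H:           parity-check matrix over GF(2)
    suspect_idx: indices of the N most probably erroneous bits
    """
    best, best_metric = None, np.inf
    # Enumerate all 2^N flip patterns of the suspect positions.
    for flips in itertools.product((0, 1), repeat=len(suspect_idx)):
        cand = c_hat.copy()
        cand[list(suspect_idx)] ^= np.array(flips, dtype=cand.dtype)
        if np.any(H @ cand % 2):      # keep only valid codewords
            continue
        metric = np.sum((y - (1 - 2 * cand)) ** 2)  # ML metric for AWGN
        if metric < best_metric:
            best, best_metric = cand, metric
    return best                        # None if the list holds no codeword

# Example: length-3 repetition code, first bit of the estimate suspect.
H = np.array([[1, 1, 0], [0, 1, 1]])
c_hat = np.array([1, 0, 0])            # invalid estimate
y = np.array([0.9, 0.8, 1.1])          # observation near codeword (0, 0, 0)
print(ml_in_the_list(c_hat, y, H, [0]))   # -> [0 0 0]
```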
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Simulation Results \& Discussion}
Fig. \ref{fig:results} shows the FER and BER obtained when applying
proximal decoding as presented in \cite{proximal_paper} and the improved
algorithm presented here to a $\left( 3,6 \right)$-regular LDPC
code with $n=204$ and $k=102$ \cite[204.33.484]{mackay}.