Add changes made before submission for review

Andreas Tsouchlos 2024-06-13 17:41:59 +02:00
parent 7211d63889
commit adb7321b93


@ -56,7 +56,7 @@
\pgfplotsset{colorscheme/cel}
\newcommand{\figwidth}{\columnwidth}
\newcommand{\figheight}{0.7\columnwidth}
\pgfplotsset{
    FERPlot/.style={
@ -107,17 +107,17 @@
\begin{abstract}
In this letter, the proximal decoding algorithm described in, e.g.,
\cite{proximal_paper}, is considered within the
context of \textit{additive white Gaussian noise} (AWGN) channels.
An analysis of the convergence behavior of the algorithm shows that
proximal decoding inherently causes the estimate to oscillate
after a certain number of iterations.
Due to this oscillation, frame errors arising during decoding can often
be attributed to only a few remaining wrongly decoded bit positions.
An improvement of the proximal decoding algorithm is proposed
by appending an additional step, in which an attempt is made to correct
these erroneous positions.
We suggest an empirical rule with which the components most likely needing
correction can be determined.
Using this insight and performing a subsequent ``ML-in-the-list'' decoding,
a gain of up to $\SI{1}{dB}$ is achieved compared to conventional
@ -151,10 +151,12 @@ While the established decoders for LDPC codes, such as belief propagation (BP)
and the min-sum algorithm, offer good decoding performance, they are generally
not optimal and exhibit an error floor for high
\textit{signal-to-noise ratios} (SNRs) \cite{channel_codes_book}, making them
inadequate for applications with extreme reliability requirements.
Optimization-based decoding algorithms are an entirely different way of
approaching the decoding problem;
they map the decoding problem onto an optimization problem in order to
leverage the vast knowledge from the field of optimization theory.
A number of such algorithms have been introduced.
The field of \textit{linear programming} (LP) decoding \cite{feldman_paper},
for example, represents one class of such algorithms, based on a relaxation
@ -167,26 +169,26 @@ Proximal decoding relies on a non-convex optimization formulation
of the \textit{maximum a posteriori} (MAP) decoding problem.
The aim of this work is to improve upon the performance of proximal decoding by
first presenting an analysis of the algorithm's behavior and then suggesting
an approach to mitigate some of its flaws.
This analysis is performed for
\textit{additive white Gaussian noise} (AWGN) channels.
We first observe that the algorithm initially moves the estimate in
the right direction; however, in the final steps of the decoding process,
convergence to the correct codeword is often not achieved.
Subsequently, we attribute this behavior to the nature
of the decoding algorithm itself, which comprises two separate gradient descent
steps working adversarially.
We thus propose a method to mitigate this effect by appending an
additional step to the iterative decoding process.
In this additional step, the components of the estimate with the highest
probability of being erroneous are identified.
New codewords are then generated, over which an ``ML-in-the-list''
\cite{ml_in_the_list} decoding is performed.
A process to conduct this identification is proposed in this letter.
Using the improved algorithm, a gain of up to
$\SI{1}{dB}$ can be achieved compared to conventional proximal decoding,
depending on the decoder parameters and the code.
@ -200,7 +202,7 @@ When considering binary linear codes, data words are mapped onto
codewords, the lengths of which are denoted by $k \in \mathbb{N}$
and $n \in \mathbb{N}$, respectively, with $k \le n$.
The set of codewords $\mathcal{C} \subset \mathbb{F}_2^n$ of a binary linear
code can be characterized using the parity-check matrix
$\boldsymbol{H} \in \mathbb{F}_2^{m \times n}$, where $m$ represents the
number of parity-checks:
%
@ -230,7 +232,7 @@ estimate of the transmitted codeword, denoted as
$\hat{\boldsymbol{c}} \in \mathbb{F}_2^n$.
A distinction is made between $\boldsymbol{x} \in \left\{\pm 1\right\}^n$
and $\tilde{\boldsymbol{x}} \in \mathbb{R}^n$,
the former denoting the BPSK symbols transmitted over the channel and
the latter being used as a variable during the optimization process.
The posterior probability of having transmitted $\boldsymbol{x}$ when receiving
$\boldsymbol{y}$ is expressed as a \textit{probability mass function} (PMF)
@ -267,8 +269,8 @@ One such expression, formulated under the assumption of BPSK, is the
.\end{align*}%
%
Its intent is to penalize vectors far from a codeword.
It comprises two terms: one representing the bipolar constraint imposed by
BPSK transmission and one representing the parity constraint, incorporating all
information regarding the code.
The channel model can be considered using the negative log-likelihood
@ -279,7 +281,7 @@ The channel model can be considered using the negative log-likelihood
\boldsymbol{y} \mid \tilde{\boldsymbol{x}} \mright) \mright)
.\end{align*}
%
Then, the information about the channel and the code is consolidated in the
objective function \cite{proximal_paper}
%
\begin{align*}
@ -305,17 +307,17 @@ introduced, describing the result of each of the two steps:
.\end{alignat}
%
An equation for determining $\nabla h(\boldsymbol{r})$ is given in
\cite{proximal_paper}, where it is also proposed to initialize
$\boldsymbol{s}=\boldsymbol{0}$.
It should be noted that the variables $\boldsymbol{r}$ and $\boldsymbol{s}$
represent $\tilde{\boldsymbol{x}}$ during different
stages of the decoding process.
As the gradient of the code-constraint polynomial can attain very large values
in some cases, an additional step is introduced in \cite{proximal_paper} to
ensure numerical stability:
every estimate $\boldsymbol{s}$ is projected onto
$\left[-\eta, \eta\right]^n$ by a projection
$\Pi_\eta : \mathbb{R}^n \rightarrow \left[-\eta, \eta\right]^n$, where $\eta$
is a positive constant larger than one, e.g., $\eta = 1.5$.
The resulting decoding process as described in \cite{proximal_paper} is
presented in Algorithm \ref{alg:proximal_decoding}.
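For illustration, here is a minimal Python sketch of this loop, assuming NumPy
and a user-supplied function grad_h for the gradient of the code-constraint
polynomial (not specified here); it is a sketch, not the reference
implementation:

import numpy as np

def proximal_decode(y, H, grad_h, gamma=0.05, omega=0.05, eta=1.5, K=200):
    # Sketch of the proximal decoding loop under the stated assumptions.
    s = np.zeros_like(y)        # initialization s = 0, as proposed in the paper
    grads = []                  # record grad h(r) per iteration (used later)
    for _ in range(K):
        r = s - omega * (s - y)                # gradient step on the likelihood term
        g = grad_h(r)
        grads.append(g)
        s = np.clip(r - gamma * g, -eta, eta)  # step on h, then projection onto [-eta, eta]^n
        c_hat = (s <= 0).astype(int)           # hard decision: s <= 0 maps to bit 1
        if not np.any(H @ c_hat % 2):          # all parity checks satisfied
            return c_hat, True, np.array(grads)
    return c_hat, False, np.array(grads)       # non-convergence: decoding failure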
@ -346,13 +348,12 @@ presented in Algorithm \ref{alg:proximal_decoding}.
\subsection{Analysis of the Convergence Behavior}
In Fig. \ref{fig:fer vs ber}, the \textit{frame error rate} (FER),
\textit{bit error rate} (BER), and \textit{decoding failure rate} (DFR) of
proximal decoding are shown for an LDPC code with $n=204$ and $k=102$
\cite[204.33.484]{mackay}.
Here, a \emph{decoding failure} is defined as the decoder returning an
\emph{invalid codeword}, i.e., as non-convergence of the algorithm.
The parameters chosen for this simulation are $\gamma=0.05, \omega=0.05,
\eta=1.5$, and $K=200$ ($K$ denoting the maximum number of iterations).
They were determined to offer the best performance in a preliminary examination,
in which the effect of varying multiple parameters was simulated over a wide
range of values.
@ -367,7 +368,7 @@ the right direction.
This would suggest that most frame errors occur due to only a few incorrectly
decoded bits.%
%
\begin{figure}[t]
\centering
@ -417,14 +418,13 @@ decoded bits.%
\end{figure}%
%
An approach for lowering the FER might then be to add an ``ML-in-the-list''
\cite{ml_in_the_list} step to the decoding process shown in Algorithm
\ref{alg:proximal_decoding}.
This step consists of determining the set $\mathcal{I}'$ of the
$N \in \mathbb{N}$ bit positions most likely to be erroneous, generating a list
of $2^N$ codeword candidates from the current estimate $\hat{\boldsymbol{c}}$
with the bits in $\mathcal{I}'$ adopting all possible values, i.e.,
$\mathcal{L}'=\left\{ \hat{\boldsymbol{c}}'\in\mathbb{F}_2^n:
\hat{c}'_i=\hat{c}_i,\, i\notin \mathcal{I}' \text{ and }
\hat{c}'_i\in\mathbb{F}_2,\, i\in \mathcal{I}' \right\}$,
and performing ML decoding on this list.
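As a brief, non-authoritative Python sketch (with idx playing the role of
$\mathcal{I}'$), the list $\mathcal{L}'$ could be generated as follows:

import itertools
import numpy as np

def candidate_list(c_hat, idx):
    # Enumerate all 2^N value patterns for the bits at the positions in idx,
    # keeping every other bit of the estimate c_hat fixed.
    candidates = []
    for bits in itertools.product((0, 1), repeat=len(idx)):
        c = c_hat.copy()
        c[list(idx)] = bits
        candidates.append(c)
    return candidates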
This approach crucially relies on identifying the most likely erroneous bits.
Therefore, the convergence properties of proximal decoding are investigated.
Considering (\ref{eq:s_update}) and (\ref{eq:r_update}), Fig.
\ref{fig:grad} shows the two gradients along which the minimization is
@ -437,7 +437,7 @@ This behavior supports the conjecture that the reason for the high DFR is a
failure to converge to the correct codeword in the final steps of the
optimization process.%
%
\begin{figure}[t]
\centering
\ifoverleaf
@ -538,7 +538,7 @@ optimization process.%
$\nabla L\left(\boldsymbol{y} \mid \tilde{\boldsymbol{x}}\right)$
and $\nabla h \left( \tilde{\boldsymbol{x}} \right)$ for a repetition
code with $n=2$.
Shown for $\boldsymbol{y} = \begin{pmatrix} -0.5 & 0.8 \end{pmatrix}$.
}
\label{fig:grad}
\end{figure}%
@ -548,7 +548,7 @@ In Fig. \ref{fig:prox:convergence_large_n}, we consider only component
$\left(\tilde{\boldsymbol{x}}\right)_1$ of the estimate during a
decoding operation for the LDPC code also used for Fig. \ref{fig:fer vs ber}.
Two properties may be observed.
First, we observe that the average absolute values of the two gradients are
equal; however, they have opposing signs,
leading to the aforementioned oscillation.
Second, the gradient of the code constraint polynomial itself starts to
@ -605,16 +605,16 @@ oscillate after a certain number of iterations.%
\subsection{Improvement Using ``ML-in-the-List'' Step}
Considering the magnitude of the oscillation of the gradient of the code
constraint polynomial, some interesting behavior may be observed.
Let $\boldsymbol{i}'=(i'_1, \ldots, i'_n)$ be a permutation of $\{1,\ldots, n\}$
such that $\left(\nabla h\right)_{i'}$ is arranged according to increasing
variance of oscillation of its magnitude, i.e.,
$\text{Var}_\text{iter}(|\left(\nabla h\right)_{i'_1}|)\leq \cdots \leq
\text{Var}_\text{iter}(|\left(\nabla h\right)_{i'_n}|)$, with
$\text{Var}_\text{iter}(\cdot)$ denoting the empirical variance along the
iterations.
Fig. \ref{fig:p_error} shows Monte Carlo simulations of the probability that
the decoded bit $\hat{c}_{i'}$ at position $i'$ of the estimated codeword is
wrong.
It can be observed that lower magnitudes of oscillation correlate with a higher
probability that the corresponding bit was not decoded correctly.
Thus, this magnitude may serve as an indicator for identifying erroneously
decoded bit positions as $\mathcal{I}'=\{i'_1, \ldots, i'_N\}$.%
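A small Python sketch of this selection rule, assuming the per-iteration
gradients $\nabla h(\boldsymbol{r})$ were recorded as in the decoding sketch
above, might read:

import numpy as np

def select_suspect_bits(grad_history, N):
    # grad_history: (K, n) array of grad h(r) over the iterations.
    # A low variance of |(grad h)_i| along the iterations correlates with a
    # higher probability that bit i was decoded incorrectly.
    variances = np.var(np.abs(grad_history), axis=0)
    return np.argsort(variances)[:N]    # the N positions oscillating least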
%
\begin{figure}[H]
\centering
@ -640,10 +640,9 @@ the probability that a given component was decoded incorrectly.%
\fi
\caption{Probability that a component of the estimated codeword
$\boldsymbol{\hat{c}}\in \mathbb{F}_2^n$ is erroneous for a (3,6) regular
LDPC code with $n=204, k=102$ \cite[\text{204.33.484}]{mackay}.
Indices $i'$ are ordered such that
$\text{Var}_\text{iter}(|\left(\nabla h\right)_{i'_1}|)\leq \cdots \leq
\text{Var}_\text{iter}(|\left(\nabla h\right)_{i'_n}|)$.
Parameters used for the simulation: $\gamma = 0.05, \omega = 0.05,
\eta = 1.5, E_b/N_0 = \SI{4}{dB}$.
Simulated with $10^8$ iterations using the all-zeros codeword.}
@ -656,25 +655,10 @@ If a valid codeword has been reached, i.e., if the algorithm has converged,
we return this solution.
Otherwise, $N \in \mathbb{N}$ components are selected based on the criterion
presented above.
Starting from the estimate $\boldsymbol{\hat{c}} \in \mathbb{F}_2^n$ obtained
by proximal decoding, the list $\mathcal{L}'$ of codeword candidates with the
bits in $\mathcal{I}'$ modified is generated and an ``ML-in-the-list'' step is
performed.
%
\begin{algorithm}
\caption{Improved proximal decoding algorithm.
}
@ -685,24 +669,62 @@ generated and an ``ML-in-the-list'' step is performed.
\STATE \textbf{for} $K$ iterations \textbf{do}
\STATE \hspace{5mm} $\boldsymbol{r} \leftarrow \boldsymbol{s} - \omega \left( \boldsymbol{s} - \boldsymbol{y} \right) $
\STATE \hspace{5mm} $\boldsymbol{s} \leftarrow \Pi_\eta \left(\boldsymbol{r} - \gamma \nabla h\left( \boldsymbol{r} \right) \right)$
\STATE \hspace{5mm} $\boldsymbol{\hat{c}} \leftarrow \mathds{1}_{ \left\{ \boldsymbol{s} \leq 0 \right\}}$
\STATE \hspace{5mm} \textbf{if} $\boldsymbol{H}\boldsymbol{\hat{c}} = \boldsymbol{0}$ \textbf{then}
\STATE \hspace{10mm} \textbf{return} $\boldsymbol{\hat{c}}$
\STATE \hspace{5mm} \textbf{end if}
\STATE \textbf{end for}
\STATE $\textcolor{KITblue}{\text{$\mathcal{I}'\leftarrow \{i'_1,\ldots, i'_N\}$ (indices of the $N$ probably wrong bits)}}$
\STATE $\textcolor{KITblue}{\text{$\mathcal{L}'\leftarrow\left\{ \boldsymbol{\hat{c}}'\in\mathbb{F}_2^n: \hat{c}'_i=\hat{c}_i,\, i\notin \mathcal{I}' \text{ and } \hat{c}'_i\in\mathbb{F}_2,\, i\in \mathcal{I}' \right\}$}}$\vspace{1mm}
\STATE $\textcolor{KITblue}{\textbf{return ML\textunderscore in\textunderscore the\textunderscore list}\left(\mathcal{L}'\right)}$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{ML-in-the-List algorithm.}
\label{alg:ml-in-the-list}
\begin{algorithmic}
\STATE $\mathcal{L}'_\text{valid} \leftarrow \{ \boldsymbol{\hat{c}}'\in\mathcal{L}': \boldsymbol{H}\boldsymbol{\hat{c}}'=\boldsymbol{0}\}$ (select valid codewords)
\STATE \textbf{if} $\mathcal{L}'_\text{valid}\neq\emptyset$ \textbf{then}
\STATE \hspace{5mm} \textbf{return} $\arg\max \{ \langle \boldsymbol{1}-2\boldsymbol{\hat{c}}', \boldsymbol{y} \rangle : \boldsymbol{\hat{c}}'\in\mathcal{L}'_\text{valid}\}$
\STATE \textbf{else}
\STATE \hspace{5mm} \textbf{return} $\arg\max \{ \langle \boldsymbol{1}-2\boldsymbol{\hat{c}}', \boldsymbol{y} \rangle : \boldsymbol{\hat{c}}'\in\mathcal{L}'\}$
\STATE \textbf{end if}
\end{algorithmic}
\end{algorithm}%
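The following Python sketch renders this list decoding step, reusing the
(assumed) helpers from the earlier sketches; the correlation
$\langle \boldsymbol{1}-2\boldsymbol{\hat{c}}', \boldsymbol{y} \rangle$ is the
ML metric for BPSK over the AWGN channel:

import numpy as np

def ml_in_the_list(candidates, y, H):
    # Prefer valid codewords; within the chosen pool, return the candidate
    # maximizing the correlation <1 - 2c', y>.
    valid = [c for c in candidates if not np.any(H @ c % 2)]
    pool = valid if valid else candidates
    return max(pool, key=lambda c: np.dot(1 - 2 * c, y))

# Hypothetical end-to-end use of the sketches (N = 8 as in the simulations):
# c_hat, converged, grads = proximal_decode(y, H, grad_h)
# if not converged:
#     idx = select_suspect_bits(grads, N=8)
#     c_hat = ml_in_the_list(candidate_list(c_hat, idx), y, H)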
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Simulation Results \& Discussion}
Fig. \ref{fig:results} shows the FER and BER of proximal decoding as presented
in \cite{proximal_paper} and of the improved algorithm presented in this work,
both applied to a $\left( 3,6 \right)$-regular LDPC
code with $n=204$ and $k=102$ \cite[204.33.484]{mackay}.
The parameters chosen for the simulation are
$\gamma = 0.05, \omega=0.05, \eta=1.5, K=200$.
@ -711,7 +733,7 @@ as a preliminary examination
showed that they provide the best results for proximal decoding as well as
the improved algorithm.
All points were generated by simulating at least 100 frame errors.
The number of possibly wrong components was set to $N=8$,
since this provides a reasonable gain without requiring an excessive amount
of memory and computational resources.
%
@ -740,8 +762,18 @@ of memory and computational resources.
width=\figwidth,
height=\figheight,
legend pos=north east,
ylabel={BER (\lineintext{}), FER (\lineintext{dashed})},
]
\addplot+[FERPlot, mark=o, mark options={solid}, scol0, forget plot]
table [x=SNR, y=FER, col sep=comma,
discard if gt={SNR}{9}]
{res/bp_20433484.csv};
\addplot+[BERPlot, mark=*, scol0]
table [x=SNR, y=BER, col sep=comma,
discard if gt={SNR}{7.5}]
{res/bp_20433484.csv};
\addlegendentry{BP};
\addplot+[FERPlot, mark=o, mark options={solid}, scol1, forget plot]
table [x=SNR, y=FER, col sep=comma,
@ -785,12 +817,12 @@ of memory and computational resources.
A noticeable improvement can be observed in both the FER and the BER.
The gain varies significantly
with the SNR, which is to be expected since higher SNR values result in fewer
bit errors, making the correction of those errors in the
``ML-in-the-list'' step more likely.
For an FER of $10^{-6}$, the gain is approximately $\SI{1}{dB}$.
Similar behavior was observed with a number of different codes, e.g.,
\cite[\text{PEGReg252x504, 204.55.187, 96.3.965}]{mackay}.
Furthermore, no immediate relationship between the code length and the gain
was observed during our examinations.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%