Correct Introduction, round 1

This commit is contained in:
Andreas Tsouchlos 2024-01-07 20:55:54 +01:00
parent 5af7e62f5d
commit 174548f4c4


@@ -102,7 +102,7 @@ attempted to be corrected.
We suggest an empirical rule with which the components most likely needing
correction can be determined.
Using this insight and performing a subsequent ``ML-in-the-list'' decoding,
-a gain of up to approximately 1 dB is achieved compared to conventional
+a gain of up to 1 dB is achieved compared to conventional
proximal decoding, depending on the decoder parameters and the code.
\end{abstract}
@@ -126,7 +126,7 @@ the reliability of data by detecting and correcting any errors that may occur
during its transmission or storage.
One class of binary linear codes, \textit{low-density parity-check} (LDPC)
codes, has become especially popular due to its ability to reach arbitrarily
-small probabilities of error at code rates up to the capacity of the channel
+small error probabilities at code rates up to the capacity of the channel
\cite{mackay99}, while retaining a structure that allows for very efficient
decoding.
While the established decoders for LDPC codes, such as belief propagation (BP)
@@ -139,37 +139,37 @@ Optimization based decoding algorithms are an entirely different way of
approaching the decoding problem.
A number of different such algorithms have been introduced.
The field of \textit{linear programming} (LP) decoding \cite{feldman_paper},
-for example, represents one class of such algorithms, based on a reformulation
+for example, represents one class of such algorithms, based on a relaxation
of the \textit{maximum likelihood} (ML) decoding problem as a linear program.
Many different optimization algorithms can be used to solve the resulting
-problem \cite{interior_point_decoding, ADMM, adaptive_lp_decoding}.
+problem \cite{ADMM, adaptive_lp_decoding, interior_point_decoding}.
Recently, proximal decoding for LDPC codes was presented by
-Wadayama et al. \cite{proximal_paper}.
-It is a novel approach and relies on a non-convex optimization formulation
+Wadayama \textit{et al.} \cite{proximal_paper}.
+Proximal decoding relies on a non-convex optimization formulation
of the \textit{maximum a posteriori} (MAP) decoding problem.
The aim of this work is to improve upon the performance of proximal decoding by
first presenting an examination of the algorithm's behavior and then suggesting
an approach to mitigate some of its flaws.
-This analysis is performed within the context of
+This analysis is performed for
\textit{additive white Gaussian noise} (AWGN) channels.
-It is first observed that, while the algorithm initially moves the estimate in
-the right direction, in the final steps of the decoding process convergence to
-the correct codeword is often not achieved.
+We first observe that the algorithm initially moves the estimate in
+the right direction; however, in the final steps of the decoding process,
+convergence to the correct codeword is often not achieved.
-Furthermore, it is suggested that the reason for this behavior is the nature
+Furthermore, we suggest that the reason for this behavior is the nature
of the decoding algorithm itself, comprising two separate gradient descent
steps working adversarially.
-A method to mitigate this effect is proposed by appending an additional step
-to the decoding process.
+We propose a method to mitigate this effect by appending an
+additional step to the decoding process.
In this additional step, the components of the estimate with the highest
probability of being erroneous are identified.
New codewords are then generated, over which an ``ML-in-the-list''
\cite{ml_in_the_list} decoding is performed.
A process to conduct this identification is proposed in this paper.
Using the improved algorithm, a gain of up to
-approximately 1 dB can be achieved compared to proximal decoding, depending on
-the parameters chosen and the code considered.
+1 dB can be achieved compared to conventional proximal decoding,
+depending on the decoder parameters and the code.
%%%%%%%%%%%%%%%%%%%%%%%%%%%
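As a rough illustration of the appended ``ML-in-the-list'' step described above (a sketch, not the paper's implementation): for a BPSK-modulated AWGN channel, picking the most likely codeword from a candidate list reduces to choosing the candidate whose modulated form has minimum Euclidean distance to the received vector. The candidate list construction (flipping the components most likely to be erroneous) is assumed to have happened already; the function name and interface here are hypothetical.

```python
import numpy as np

def ml_in_the_list(y, candidates):
    """Select the most likely codeword from a candidate list (AWGN channel).

    y          : received real-valued vector (BPSK mapping: bit b -> 1 - 2b)
    candidates : iterable of binary codeword arrays, e.g. produced by
                 flipping the least reliable components of the estimate

    For AWGN, maximizing the likelihood over the list is equivalent to
    minimizing the Euclidean distance between y and the modulated candidate.
    """
    best, best_dist = None, np.inf
    for c in candidates:
        x = 1.0 - 2.0 * np.asarray(c, dtype=float)  # BPSK modulation
        dist = np.sum((y - x) ** 2)                 # squared distance to y
        if dist < best_dist:
            best, best_dist = np.asarray(c), dist
    return best

# Toy example: a noisy observation of the all-zero codeword (all +1 after BPSK)
y = np.array([0.9, 1.1, 0.8, 1.2])
cands = [np.zeros(4, dtype=int), np.array([1, 0, 0, 0])]
print(ml_in_the_list(y, cands))  # -> [0 0 0 0]
```

In practice the list stays small (one candidate per flipped component pattern), so this exhaustive comparison is cheap relative to the preceding gradient-descent iterations.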