Remove part of Conclusion; Limit lines to 80 cols; Lessen figure legend spacing

Andreas Tsouchlos 2023-12-28 21:31:45 +01:00
parent 28a914b127
commit f408b139b7


@@ -265,8 +265,8 @@ function \cite{proximal_paper}
 The objective function is minimized using the proximal gradient method, which
 amounts to iteratively performing two gradient-descent steps \cite{proximal_paper}
 with the given objective function and considering AWGN channels.
-To this end, two helper variables, $\boldsymbol{r}$ and $\boldsymbol{s}$, are introduced,
-describing the result of each of the two steps:
+To this end, two helper variables, $\boldsymbol{r}$ and $\boldsymbol{s}$, are
+introduced, describing the result of each of the two steps:
 %
 \begin{alignat}{3}
   \boldsymbol{r} &\leftarrow \boldsymbol{s}
@@ -285,7 +285,8 @@ stages of the decoding process.
 As the gradient of the code-constraint polynomial can attain very large values
 in some cases, an additional step is introduced to ensure numerical stability:
-every current estimate $\boldsymbol{s}$ is projected onto $\left[-\eta, \eta\right]^n$ by a projection
+every current estimate $\boldsymbol{s}$ is projected onto
+$\left[-\eta, \eta\right]^n$ by a projection
 $\Pi_\eta : \mathbb{R}^n \rightarrow \left[-\eta, \eta\right]^n$, where $\eta$
 is a positive constant slightly larger than one, e.g., $\eta = 1.5$.
 The resulting decoding process as described in \cite{proximal_paper} is
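The two-step proximal gradient iteration with the stability projection described in this hunk can be sketched roughly as follows. This is a simplified illustration, not the paper's exact formulation: the gradient callables `grad_L` and `grad_h`, the way the step sizes `gamma` and `omega` enter, and the fixed iteration count `K` are all assumptions.

```python
import numpy as np

def project(s, eta):
    """Projection Pi_eta onto [-eta, eta]^n (componentwise clipping)."""
    return np.clip(s, -eta, eta)

def proximal_decode(y, grad_L, grad_h, gamma=0.05, omega=0.05, eta=1.5, K=200):
    """Hedged sketch of the two-step proximal gradient iteration.

    y      : received AWGN channel values
    grad_L : gradient of the negative log-likelihood term (assumed given)
    grad_h : gradient of the code-constraint polynomial h (assumed given)
    """
    s = np.zeros_like(y)
    for _ in range(K):
        r = s - gamma * grad_L(s, y)   # first gradient-descent step -> r
        s = r - omega * grad_h(r)      # second step on the code constraint -> s
        s = project(s, eta)            # numerical-stability projection
    return np.sign(s)                  # hard decision on the final estimate
```

For an AWGN channel the likelihood gradient is simply `s - y` (up to scaling), so with the code constraint switched off the iterate converges toward the received word.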
@@ -578,7 +579,8 @@ oscillate after a certain number of iterations.%
 Considering the magnitude of oscillation of the gradient of the code constraint
 polynomial, some interesting behavior may be observed.
 Figure \ref{fig:p_error} shows the probability that a component of the estimate
-is wrong, determined through a Monte Carlo simulation, when the components of $\boldsymbol{c}$ are ordered from smallest to largest oscillation of
+is wrong, determined through a Monte Carlo simulation, when the components of
+$\boldsymbol{c}$ are ordered from smallest to largest oscillation of
 $\left(\nabla h\right)_i$.
 The lower the magnitude of the oscillation, the higher the probability that the
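Ordering the components by the oscillation of $\left(\nabla h\right)_i$ could be sketched as below. The peak-to-peak range over a window of recent iterations is used as the oscillation measure here; that choice, and the helper's name, are assumptions for illustration rather than the paper's exact criterion.

```python
import numpy as np

def oscillation_ranking(grad_history):
    """Rank estimate components by oscillation of (grad h)_i.

    grad_history : array of shape (T, n) holding grad h over the last T
                   iterations. Oscillation is measured per component as the
                   peak-to-peak range (an assumed, simplified metric).
    Returns component indices ordered from smallest to largest oscillation.
    """
    osc = grad_history.max(axis=0) - grad_history.min(axis=0)
    return np.argsort(osc)
```

Components at the front of this ordering (small oscillation) are, per the observation above, the ones most likely to be wrong.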
@@ -666,15 +668,15 @@ generated and an ``ML-in-the-list'' step is performed.
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 \section{Simulation Results \& Discussion}
-Figure \ref{fig:results} shows the FER and BER resulting from applying proximal
-decoding as presented in \cite{proximal_paper} and the improved algorithm
-presented here when applied to a $\left( 3,6 \right)$-regular LDPC code with $n=204$ and
-$k=102$ \cite[204.33.484]{mackay}.
+Figure \ref{fig:results} shows the FER and BER resulting from applying
+proximal decoding as presented in \cite{proximal_paper} and the improved
+algorithm presented here when applied to a $\left( 3,6 \right)$-regular LDPC
+code with $n=204$ and $k=102$ \cite[204.33.484]{mackay}.
 The parameters chosen for the simulation are
 $\gamma = 0.05, \omega=0.05, \eta=1.5, K=200$.
 Again, these parameters were chosen,%
 %
-\begin{figure}[H]
+\begin{figure}[ht]
 \centering
 \begin{tikzpicture}
@@ -703,7 +705,7 @@ Again, these parameters were chosen,%
   legend columns=2,
   legend style={draw=white!15!black,
                 legend cell align=left,
-                at={(0.5,-0.5)},anchor=south}
+                at={(0.5,-0.44)},anchor=south}
 ]
 \addplot+[ProxPlot, scol1]
@ -772,17 +774,11 @@ Wadayama et al. \cite{proximal_paper} is introduced for AWGN channels.
It relies on the fact that most errors observed in proximal decoding stem It relies on the fact that most errors observed in proximal decoding stem
from only a few components of the estimate being wrong. from only a few components of the estimate being wrong.
These few erroneous components can mostly be corrected by appending an These few erroneous components can mostly be corrected by appending an
additional step to the original algorithm that is only executed if the algorithm has not converged. additional step to the original algorithm that is only executed if the
algorithm has not converged.
A gain of up to $\sim\SI{1}{dB}$ can be observed, depending on the code, A gain of up to $\sim\SI{1}{dB}$ can be observed, depending on the code,
the parameters considered, and the SNR. the parameters considered, and the SNR.
While this work serves to introduce an approach to improve proximal decoding
by appending an ``ML-in-the-list'' step, the method used to detect the most
probably wrong components of the estimate is based mainly on empirical
observation and a more mathematically rigorous foundation for determining these
components could be beneficial.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Acknowledgements} \section{Acknowledgements}
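The ``ML-in-the-list'' step summarized in the conclusion hunk above can be sketched as follows: flip small subsets of the components suspected to be wrong, and among the resulting candidates keep the one closest to the received word, which is the ML choice for an AWGN channel. The function name, the `max_flips` parameter, and the omission of a code-membership check are simplifications assumed here for illustration.

```python
from itertools import combinations

import numpy as np

def ml_in_the_list(x_hat, y, suspect_idx, max_flips=2):
    """Hedged sketch of an ``ML-in-the-list'' correction step.

    x_hat       : hard-decision estimate in {-1, +1}^n from proximal decoding
    y           : received AWGN channel values
    suspect_idx : indices of the components most likely to be wrong
    Builds candidates by flipping up to max_flips suspect components and
    returns the candidate closest to y in Euclidean distance (ML for AWGN;
    a real implementation would restrict the list to valid codewords).
    """
    best, best_dist = x_hat, np.sum((y - x_hat) ** 2)
    for k in range(1, max_flips + 1):
        for idx in combinations(suspect_idx, k):
            cand = x_hat.copy()
            cand[list(idx)] *= -1          # flip the chosen suspect components
            dist = np.sum((y - cand) ** 2)
            if dist < best_dist:
                best, best_dist = cand, dist
    return best
```

Because the list only covers a few suspect positions, the step stays cheap and is only invoked when the main iteration has not converged.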