Remove part of Conclusion; Limit lines to 80 cols; Lessen figure legend spacing
This commit is contained in:
parent 28a914b127
commit f408b139b7
letter.tex (32 changed lines)
@@ -265,8 +265,8 @@ function \cite{proximal_paper}
 The objective function is minimized using the proximal gradient method, which
 amounts to iteratively performing two gradient-descent steps \cite{proximal_paper}
 with the given objective function and considering AWGN channels.
-To this end, two helper variables, $\boldsymbol{r}$ and $\boldsymbol{s}$, are introduced,
-describing the result of each of the two steps:
+To this end, two helper variables, $\boldsymbol{r}$ and $\boldsymbol{s}$, are
+introduced, describing the result of each of the two steps:
 %
 \begin{alignat}{3}
 \boldsymbol{r} &\leftarrow \boldsymbol{s}
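The two-step update this hunk describes can be sketched as follows. Since the `alignat` environment with the exact update rules is cut off by the diff context, the concrete form of both steps is an assumption here: a gradient step on an AWGN data-fidelity term $\frac{1}{2}\lVert \boldsymbol{y}-\boldsymbol{s}\rVert^2$ with step size $\gamma$, followed by a gradient step on the code-constraint polynomial $h$ with step size $\omega$ (the parameter names match those used later in the simulation).

```python
import numpy as np

def proximal_decode_step(s, y, grad_h, gamma=0.05, omega=0.05):
    """One iteration of the two gradient-descent steps (hypothetical form).

    Assumes the AWGN data-fidelity term f(s) = ||y - s||^2 / 2, whose
    gradient is (s - y); grad_h is the gradient of the code-constraint
    polynomial h.  The exact updates in the paper may differ.
    """
    r = s - gamma * (s - y)      # helper variable r: step on the likelihood term
    s = r - omega * grad_h(r)    # helper variable s: step on the code constraint
    return s
```

With `grad_h` identically zero, repeated application contracts the estimate toward the received vector `y`, which is the expected behavior of the data-fidelity step alone.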
@@ -285,7 +285,8 @@ stages of the decoding process.
 
 As the gradient of the code-constraint polynomial can attain very large values
 in some cases, an additional step is introduced to ensure numerical stability:
-every current estimate $\boldsymbol{s}$ is projected onto $\left[-\eta, \eta\right]^n$ by a projection
+every current estimate $\boldsymbol{s}$ is projected onto
+$\left[-\eta, \eta\right]^n$ by a projection
 $\Pi_\eta : \mathbb{R}^n \rightarrow \left[-\eta, \eta\right]^n$, where $\eta$
 is a positive constant slightly larger than one, e.g., $\eta = 1.5$.
 The resulting decoding process as described in \cite{proximal_paper} is
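The projection $\Pi_\eta$ onto the hypercube $[-\eta,\eta]^n$ is a componentwise clipping operation; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def project_box(s, eta=1.5):
    """Projection onto [-eta, eta]^n: clip each component of the
    estimate s to the interval [-eta, eta]."""
    return np.clip(s, -eta, eta)
```

Clipping is exactly the Euclidean projection onto a box, since each coordinate can be projected independently.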
@@ -578,7 +579,8 @@ oscillate after a certain number of iterations.%
 Considering the magnitude of oscillation of the gradient of the code constraint
 polynomial, some interesting behavior may be observed.
 Figure \ref{fig:p_error} shows the probability that a component of the estimate
-is wrong, determined through a Monte Carlo simulation, when the components of $\boldsymbol{c}$ are ordered from smallest to largest oscillation of
+is wrong, determined through a Monte Carlo simulation, when the components of
+$\boldsymbol{c}$ are ordered from smallest to largest oscillation of
 $\left(\nabla h\right)_i$.
 
 The lower the magnitude of the oscillation, the higher the probability that the
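Ordering the components of the estimate by the oscillation of $(\nabla h)_i$ can be sketched as below. How the paper quantifies "oscillation" is not visible in this diff; measuring it as the per-component peak-to-peak range of the gradient over the last iterations is an assumption:

```python
import numpy as np

def oscillation_order(grad_history):
    """Order components by the oscillation of (grad h)_i.

    grad_history: array of shape (T, n) holding grad h at the last
    T iterations.  Returns component indices sorted from smallest to
    largest oscillation, measured (an assumption) as the
    peak-to-peak range of each component over those iterations.
    """
    osc = grad_history.max(axis=0) - grad_history.min(axis=0)
    return np.argsort(osc)
```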
@@ -666,15 +668,15 @@ generated and an ``ML-in-the-list'' step is performed.
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 \section{Simulation Results \& Discussion}
 
-Figure \ref{fig:results} shows the FER and BER resulting from applying proximal
-decoding as presented in \cite{proximal_paper} and the improved algorithm
-presented here when applied to a $\left( 3,6 \right)$-regular LDPC code with $n=204$ and
-$k=102$ \cite[204.33.484]{mackay}.
+Figure \ref{fig:results} shows the FER and BER resulting from applying
+proximal decoding as presented in \cite{proximal_paper} and the improved
+algorithm presented here when applied to a $\left( 3,6 \right)$-regular LDPC
+code with $n=204$ and $k=102$ \cite[204.33.484]{mackay}.
 The parameters chosen for the simulation are
 $\gamma = 0.05, \omega=0.05, \eta=1.5, K=200$.
 Again, these parameters were chosen,%
 %
-\begin{figure}[H]
+\begin{figure}[ht]
 \centering
 
 \begin{tikzpicture}
@@ -703,7 +705,7 @@ Again, these parameters were chosen,%
 legend columns=2,
 legend style={draw=white!15!black,
 legend cell align=left,
-at={(0.5,-0.5)},anchor=south}
+at={(0.5,-0.44)},anchor=south}
 ]
 
 \addplot+[ProxPlot, scol1]
@@ -772,17 +774,11 @@ Wadayama et al. \cite{proximal_paper} is introduced for AWGN channels.
 It relies on the fact that most errors observed in proximal decoding stem
 from only a few components of the estimate being wrong.
 These few erroneous components can mostly be corrected by appending an
-additional step to the original algorithm that is only executed if the algorithm has not converged.
+additional step to the original algorithm that is only executed if the
+algorithm has not converged.
 A gain of up to $\sim\SI{1}{dB}$ can be observed, depending on the code,
 the parameters considered, and the SNR.
 
-While this work serves to introduce an approach to improve proximal decoding
-by appending an ``ML-in-the-list'' step, the method used to detect the most
-probably wrong components of the estimate is based mainly on empirical
-observation and a more mathematically rigorous foundation for determining these
-components could be beneficial.
-
-
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 \section{Acknowledgements}
 
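The ``ML-in-the-list'' step summarized in this hunk can be sketched as follows. The details (how the candidate list is built, the parity-check matrix `H`, and the suspect-component selection) are not shown in the diff, so this is only one plausible reading: flip the suspected components of the hard decision in every combination, keep the candidates that satisfy all parity checks, and return the most likely one, i.e., the candidate closest to the received vector `y`, which is the ML choice for an AWGN channel.

```python
import itertools
import numpy as np

def ml_in_the_list(y, s, suspect_idx, H):
    """Hypothetical 'ML-in-the-list' step.

    y: received vector, s: final estimate, suspect_idx: indices of the
    components most likely to be wrong, H: parity-check matrix.
    Flips every subset of the suspected components of the hard decision
    (BPSK, +1/-1), keeps candidates satisfying H, and returns the one
    closest to y in Euclidean distance (ML for AWGN).
    """
    c_hat = np.sign(s)                    # hard decision in {-1, +1}
    best, best_dist = None, np.inf
    for flips in itertools.product([1, -1], repeat=len(suspect_idx)):
        cand = c_hat.copy()
        cand[list(suspect_idx)] *= flips
        bits = (1 - cand) // 2            # map +1 -> 0, -1 -> 1
        if np.any(H @ bits % 2):          # reject if a parity check fails
            continue
        dist = np.linalg.norm(y - cand)
        if dist < best_dist:
            best, best_dist = cand, dist
    return best
```

Because only a few components are suspected, the list stays small (2^k candidates for k suspects), which keeps this extra step cheap compared to a full ML search.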