Added footnote about discrete -> continuous; Added quotation marks; minor other changes

This commit is contained in:
Andreas Tsouchlos 2023-03-12 20:23:22 +01:00
parent 6513fd2297
commit 52ba8c67ee
2 changed files with 18 additions and 9 deletions


@@ -97,6 +97,11 @@
long = probability density function
}
\DeclareAcronym{PMF}{
short = PMF,
long = probability mass function
}
%
% V
%


@@ -25,14 +25,14 @@ available optimization algorithms.
Generally, the original decoding problem considered is either the \ac{MAP} or
the \ac{ML} decoding problem:%
%
\begin{align*}
\begin{align}
\hat{\boldsymbol{c}}_{\text{\ac{MAP}}} &= \argmax_{\boldsymbol{c} \in \mathcal{C}}
p_{\boldsymbol{C} \mid \boldsymbol{Y}} \left(\boldsymbol{c} \mid \boldsymbol{y}
\right)\\
\right) \label{eq:dec:map}\\
\hat{\boldsymbol{c}}_{\text{\ac{ML}}} &= \argmax_{\boldsymbol{c} \in \mathcal{C}}
f_{\boldsymbol{Y} \mid \boldsymbol{C}} \left( \boldsymbol{y} \mid \boldsymbol{c}
\right)
.\end{align*}%
\right) \label{eq:dec:ml}
.\end{align}%
%
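Since the equal-probability assumption on $\mathcal{C}$ is invoked further below, it is
worth recalling that the two rules coincide under a uniform prior, where
$p_{\boldsymbol{C}}$ denotes the prior \ac{PMF} of the transmitted codeword:%
%
\begin{align*}
\argmax_{\boldsymbol{c} \in \mathcal{C}}
p_{\boldsymbol{C} \mid \boldsymbol{Y}} \left( \boldsymbol{c} \mid \boldsymbol{y} \right)
&= \argmax_{\boldsymbol{c} \in \mathcal{C}}
\frac{f_{\boldsymbol{Y} \mid \boldsymbol{C}} \left( \boldsymbol{y} \mid \boldsymbol{c} \right)
p_{\boldsymbol{C}} \left( \boldsymbol{c} \right)}{f_{\boldsymbol{Y}} \left( \boldsymbol{y} \right)}
= \argmax_{\boldsymbol{c} \in \mathcal{C}}
f_{\boldsymbol{Y} \mid \boldsymbol{C}} \left( \boldsymbol{y} \mid \boldsymbol{c} \right)
\hspace{5mm}\text{if } p_{\boldsymbol{C}} \left( \boldsymbol{c} \right)
= \tfrac{1}{\left| \mathcal{C} \right|}
.\end{align*}%
%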
The goal is to arrive at a formulation where an objective function
$g : \mathbb{R}^n \rightarrow \mathbb{R}$ must be minimized under certain constraints:%
@@ -707,7 +707,11 @@ non-convex optimization formulation of the \ac{MAP} decoding problem.
In order to derive the objective function, the authors begin with the
\ac{MAP} decoding rule, expressed as a continuous maximization problem%
\footnote{The }%
\footnote{The extension of the domain to a continuous one does not constitute a
material difference.
The only change is that what were previously \acp{PMF} must now be expressed
in terms of \acp{PDF}.}
over $\boldsymbol{x}$
:%
%
\begin{align}
@@ -726,7 +730,7 @@ The likelihood $f_{\boldsymbol{Y} \mid \tilde{\boldsymbol{X}}}
determined by the channel model.
The prior \ac{PDF} $f_{\tilde{\boldsymbol{X}}}\left( \tilde{\boldsymbol{x}} \right)$ is also
known, as the equal probability assumption is made on
$\mathcal{C}\left( \boldsymbol{H} \right)$.
$\mathcal{C}$.
However, since the considered domain is continuous,
the prior \ac{PDF} cannot be ignored as a constant during the minimization,
as is often done, and it has a rather unwieldy representation:%
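The representation itself is not shown in this hunk; as a rough illustration of why it is
unwieldy (and of the \ac{PMF}-to-\ac{PDF} shift noted in the footnote above), a uniform
prior over a finite set, written as a density, takes the form of a sum of Dirac impulses,
e.g.%
%
\begin{align*}
f_{\tilde{\boldsymbol{X}}} \left( \tilde{\boldsymbol{x}} \right)
= \frac{1}{\left| \mathcal{C} \right|} \sum_{\boldsymbol{c} \in \mathcal{C}}
\delta \left( \tilde{\boldsymbol{x}} - \boldsymbol{x} \left( \boldsymbol{c} \right) \right)
,\end{align*}%
%
where $\boldsymbol{x} \left( \boldsymbol{c} \right)$ schematically denotes the signal
point associated with codeword $\boldsymbol{c}$; the concrete representation used in
\cite{proximal_paper} may differ.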
@@ -843,14 +847,14 @@ The second step thus becomes%
\hspace{5mm}\gamma > 0,\text{ small}
.\end{align*}
%
While the approximation of the prior \ac{PDF} made in \ref{eq:prox:prior_pdf_approx}
While the approximation of the prior \ac{PDF} made in equation (\ref{eq:prox:prior_pdf_approx})
theoretically becomes better
with larger $\gamma$, the constraint that $\gamma$ be small is important,
as it keeps the effect of $h\left( \boldsymbol{x} \right) $ on the landscape
of the objective function small.
Otherwise, unwanted stationary points, including local minima, are introduced.
The authors say that in practice, the value of $\gamma$ should be adjusted
according to the decoding performance \cite[Sec. 3.1]{proximal_paper}.
The authors say that ``in practice, the value of $\gamma$ should be adjusted
according to the decoding performance'' \cite[Sec. 3.1]{proximal_paper}.
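As a purely illustrative one-dimensional example of this trade-off (with a toy penalty
standing in for the code-constraint polynomial of \cite{proximal_paper}), minimizing
$\left( x - y \right)^2 + \gamma \left( x^2 - 1 \right)^2$ leads to the stationarity
condition%
%
\begin{align*}
2 \left( x - y \right) + 4 \gamma x \left( x^2 - 1 \right) = 0
,\end{align*}%
%
which has a single solution for $\gamma < \tfrac{1}{2}$, since the left-hand side is then
strictly increasing in $x$, but up to three solutions, and hence spurious stationary
points, once $\gamma$ grows.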
%The components of the gradient of the code-constraint polynomial can be computed as follows:%
%%