Minor changes to LP decoding; new proposed structure

This commit is contained in:
Andreas Tsouchlos 2023-02-16 21:16:39 +01:00
parent 3886074ee1
commit b5c5140582
2 changed files with 52 additions and 13 deletions

@ -142,6 +142,7 @@ which minimizes the objective function $f$ (as shown in figure \ref{fig:dec:spac
\label{sec:dec:LP Decoding using ADMM}
\Ac{LP} decoding is an approach introduced by Feldman et
al.~\cite{feldman_paper}. They reframed the decoding problem as an
\textit{integer linear program} and subsequently presented a relaxation into
a \textit{linear program}, lifting the integer requirement.
@ -152,27 +153,22 @@ work is the \ac{ADMM}.
Feldman et al. begin by looking at the \ac{ML} decoding problem%
\footnote{They assume that all codewords are equally likely to be transmitted,
making the \ac{ML} and \ac{MAP} decoding problems essentially equivalent.}%
%
\begin{align*}
\hat{\boldsymbol{c}} = \argmax_{\boldsymbol{c} \in \mathcal{C}}
f_{\boldsymbol{Y} \mid \boldsymbol{C}} \left( \boldsymbol{y} \mid \boldsymbol{c} \right)
.\end{align*}
%
They suggest that maximizing the likelihood
$f_{\boldsymbol{Y} \mid \boldsymbol{C}}\left( \boldsymbol{y} \mid \boldsymbol{c} \right)$
is equivalent to minimizing the negative log-likelihood.
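Since the logarithm is strictly increasing, the same estimate is obtained by%
%
\begin{align*}
\hat{\boldsymbol{c}} = \argmin_{\boldsymbol{c} \in \mathcal{C}}
\left( - \log f_{\boldsymbol{Y} \mid \boldsymbol{C}}
\left( \boldsymbol{y} \mid \boldsymbol{c} \right) \right)
.\end{align*}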
\ldots (Explain arriving at the cost function from the \ac{ML} decoding problem)
Based on this, they propose their cost function%
\footnote{In this context, \textit{cost function} and \textit{objective function}
mean the same thing.}
for the \ac{LP} decoding problem:%
%
@ -184,8 +180,29 @@ for the \ac{LP} decoding problem:%
\begin{align*}
\gamma_i = \log \left( \frac{ \Pr \left( Y_i = y_i | C_i = 0 \right) }
{ \Pr \left( Y_i = y_i | C_i = 1 \right) } \right)
.\end{align*}
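To see why this cost recovers \ac{ML} decoding on a memoryless channel, note
that each per-symbol negative log-likelihood is affine in
$c_i \in \left\{ 0, 1 \right\} $:%
%
\begin{align*}
- \log \Pr \left( Y_i = y_i | C_i = c_i \right)
= \gamma_i c_i - \log \Pr \left( Y_i = y_i | C_i = 0 \right)
,\end{align*}
%
so the linear objective and the negative log-likelihood differ only by a
constant independent of $\boldsymbol{c}$.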
%
%
The exact integer linear program formulation of \ac{ML}
decoding is the following:%
%
\begin{align*}
\text{minimize }\hspace{2mm} &\sum_{i=1}^{n} \gamma_i c_i \\
\text{subject to }\hspace{2mm} &\boldsymbol{c} \in \mathcal{C}
.\end{align*}%
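As an illustrative sketch (not from the thesis), the equivalence of the \ac{ML}
and integer-program formulations can be checked numerically on a toy repetition
code over a binary symmetric channel; the code, channel parameters, and names
below are all hypothetical:

```python
import math

# Toy (3,1) repetition code: codewords 000 and 111.
code = [(0, 0, 0), (1, 1, 1)]

# Hypothetical binary symmetric channel with crossover probability p.
p = 0.1
def lik(y, c):
    # Per-symbol channel law P(Y_i = y | C_i = c).
    return 1 - p if y == c else p

y = (1, 0, 1)  # received word (illustrative)

# ML decoding: maximize the product of per-symbol likelihoods.
ml = max(code, key=lambda cw: math.prod(lik(yi, ci) for yi, ci in zip(y, cw)))

# Integer-program objective: minimize sum_i gamma_i * c_i, where
# gamma_i = log(P(y_i | 0) / P(y_i | 1)) is the log-likelihood ratio.
gamma = [math.log(lik(yi, 0) / lik(yi, 1)) for yi in y]
ilp = min(code, key=lambda cw: sum(g * ci for g, ci in zip(gamma, cw)))

print(ml, ilp)  # both formulations select the same codeword
```

For larger codes the exhaustive search is of course infeasible, which is what
motivates the relaxation into a linear program.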
%
\ldots (\ac{LP} relaxation)
%They go on to define the constraints under which this minimization is to be
%accomplished.
%They define the concept of the \textit{codeword polytope} as a linear
%combination of all possible codewords, forming their convex hull:%
%%
%\begin{align*}
% \text{poly}\left( \mathcal{C} \right) = \left\{
% \sum_{c \in \mathcal{C}} \lambda_{\boldsymbol{c}} \boldsymbol{c}
% \text{ : } \lambda_{\boldsymbol{c}} \ge 0,
% \sum_{\boldsymbol{c} \in \mathcal{C}} \lambda_{\boldsymbol{c}} = 1 \right\}
%.\end{align*}
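A sketch of the relaxed program, following Feldman et al., with the integer
constraint replaced by membership in the convex hull
$\text{poly}\left( \mathcal{C} \right) $ of the codewords:%
%
\begin{align*}
\text{minimize }\hspace{2mm} &\sum_{i=1}^{n} \gamma_i c_i \\
\text{subject to }\hspace{2mm} &\boldsymbol{c} \in
\text{poly}\left( \mathcal{C} \right)
.\end{align*}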
\begin{itemize}
\item Equivalent \ac{ML} optimization problem

@ -179,7 +179,29 @@
% 7. Conclusion
% - Summary of results
% - Future work
% Proposed new structure:
%
% 1. Introduction
%
% 2. Theoretical Background
% \ldots
%
% 3. Proximal Decoding
% 3.1 Theory
% 3.2 Implementation details
% 3.3 Results
% 3.x Improved implementation
%
% 4. LP Decoding using ADMM
% 4.1 Theory
% 4.2 Implementation details
% 4.3 Results and comparison with proximal
%
% 5. Discussion
%
% 6. Conclusion
\tableofcontents