%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{LP Decoding}%
\label{sec:dec:LP Decoding}

\Ac{LP} decoding is a subject area introduced by Feldman et al.
\todo{Space before citation?}
decoding and one which is an approximation with a more manageable
representation.
To solve the resulting linear program, various optimization methods can be
used.

Feldman et al. begin by looking at the \ac{ML} decoding problem%
\footnote{They assume that all codewords are equally likely to be transmitted,
making the \ac{ML} and \ac{MAP} decoding problems equivalent.}%
%
\begin{align}
\hat{\boldsymbol{c}} = \argmax_{\boldsymbol{c} \in \mathcal{C}}
f_{\boldsymbol{Y} \mid \boldsymbol{C}} \left( \boldsymbol{y} \mid \boldsymbol{c} \right)%
\label{eq:lp:ml}
.\end{align}%
%
Assuming a memoryless channel, the likelihood in \ref{eq:lp:ml} factors into
per-symbol terms; taking the negative logarithm and dropping all terms that do
not depend on $\boldsymbol{c}$, the problem can be rewritten in terms of the
\acp{LLR} $\gamma_i$ \cite[Sec 2.5]{feldman_thesis}:%
%
\begin{align*}
\hat{\boldsymbol{c}} = \argmin_{\boldsymbol{c}\in\mathcal{C}}
\sum_{i=1}^{n} \gamma_i c_i,%
\hspace{5mm} \gamma_i = \ln\left(
\frac{f_{\boldsymbol{Y} \mid \boldsymbol{C}}
\left( Y_i = y_i \mid C_i = 0 \right) }
{f_{\boldsymbol{Y} \mid \boldsymbol{C}}
\left( Y_i = y_i \mid C_i = 1 \right) } \right)
.\end{align*}
%
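The minimization of $\sum_{i} \gamma_i c_i$ can be made concrete with a small numerical sketch (not from the text; function names, the codebook, and the crossover probability are illustrative): \acp{LLR} for a binary symmetric channel, followed by brute-force evaluation over a toy codebook.

```python
import math

def bsc_llrs(y, p):
    """LLR gamma_i = ln(P(y_i | c_i = 0) / P(y_i | c_i = 1)) for a BSC
    with crossover probability p: positive when y_i = 0, negative when y_i = 1."""
    return [math.log((1 - p) / p) if yi == 0 else math.log(p / (1 - p))
            for yi in y]

def ml_decode(codebook, gammas):
    """Brute-force ML decoding: argmin over codewords of sum_i gamma_i * c_i."""
    return min(codebook, key=lambda c: sum(g * ci for g, ci in zip(gammas, c)))

# Illustrative toy setup: length-3 repetition code, middle bit flipped.
codebook = [(0, 0, 0), (1, 1, 1)]
y = (1, 0, 1)
gammas = bsc_llrs(y, p=0.1)
print(ml_decode(codebook, gammas))  # majority wins: (1, 1, 1)
```

Enumerating the codebook is of course only feasible for tiny codes; the point is that the decision depends on the received word solely through the LLRs $\gamma_i$.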
The authors propose the following cost function%
\footnote{In this context, \textit{cost function} and \textit{objective function}
have the same meaning.}
for the \ac{LP} decoding problem:%
%
\begin{align*}
\sum_{i=1}^{n} \gamma_i c_i
.\end{align*}
%
With this cost function, the exact integer linear program formulation of \ac{ML}
decoding is the following:%
%
\begin{align*}
\text{minimize }\hspace{2mm} &\sum_{i=1}^{n} \gamma_i c_i \\
\text{subject to }\hspace{2mm} &\boldsymbol{c} \in \mathcal{C}
.\end{align*}%
%
\todo{$\boldsymbol{c}$ or some other variable name? e.g. $\boldsymbol{c}^{*}$.
Especially for the continuous consideration in LP decoding}

As solving integer linear programs is generally NP-hard, this decoding problem
has to be approximated by one with looser constraints.
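For intuition, the exact integer program above can be written out and solved for a toy code (tractable only at this scale). This sketch is not from the text: it assumes SciPy's \texttt{milp} solver (SciPy $\geq 1.9$) and encodes each mod-2 parity check $H\boldsymbol{c} \equiv \boldsymbol{0}$ via an auxiliary integer vector $\boldsymbol{k}$ with $H\boldsymbol{c} = 2\boldsymbol{k}$; all names are illustrative.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def ilp_ml_decode(H, gammas):
    """Exact ML decoding as an integer linear program:
    minimize gamma^T c subject to H c = 2k (even parity per check),
    with c binary and k a nonnegative integer auxiliary vector."""
    H = np.asarray(H, dtype=float)
    m, n = H.shape
    cost = np.concatenate([np.asarray(gammas, dtype=float), np.zeros(m)])
    A = np.hstack([H, -2.0 * np.eye(m)])          # H c - 2 k = 0
    cons = LinearConstraint(A, 0.0, 0.0)          # equality constraint
    integrality = np.ones(n + m)                  # all variables integer
    bounds = Bounds(np.zeros(n + m),
                    np.concatenate([np.ones(n), H.sum(axis=1)]))
    res = milp(cost, constraints=cons, integrality=integrality, bounds=bounds)
    return res.x[:n].round().astype(int)

# Illustrative toy code: length-3 repetition code (checks c1=c2 and c2=c3).
H = [[1, 1, 0], [0, 1, 1]]
gammas = np.array([-2.2, 2.2, -2.2])  # LLRs favouring bits 1 and 3 being one
print(ilp_ml_decode(H, gammas))       # ML codeword: [1 1 1]
```

The branch-and-bound search inside the solver is exactly what becomes intractable as $n$ grows, motivating the relaxation that follows.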
these represent erroneous non-codeword solutions to the linear program and
correspond to the so-called \textit{pseudocodewords} introduced in
\cite{feldman_paper}.
However, since for \ac{LDPC} codes $Q$ scales linearly with $n$ instead of
exponentially, it is far more tractable for practical applications.

The resulting formulation of the relaxed optimization problem
(called \ac{LCLP} by the authors) is the following:%
%
\begin{align*}
\text{minimize }\hspace{2mm} &\sum_{i=1}^{n} \gamma_i c_i \\
\text{subject to }\hspace{2mm} &\ldots
.\end{align*}%
%
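The constraint set is left open above. Purely as an illustration (not the text's own derivation), the following sketch assumes the standard relaxation from the LP decoding literature, in which each parity check contributes Feldman's forbidden-set inequalities $\sum_{i \in V} c_i - \sum_{i \in N(j) \setminus V} c_i \leq |V| - 1$ for every odd-sized $V \subseteq N(j)$, and solves the resulting linear program with SciPy's \texttt{linprog}; all names are illustrative.

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def check_inequalities(neighborhood, n):
    """Forbidden-set inequalities for one parity check: for every
    odd-sized subset V of the check's neighborhood N,
    sum_{i in V} x_i - sum_{i in N\\V} x_i <= |V| - 1."""
    rows, rhs = [], []
    for size in range(1, len(neighborhood) + 1, 2):
        for V in combinations(neighborhood, size):
            a = np.zeros(n)
            for i in neighborhood:
                a[i] = 1.0 if i in V else -1.0
            rows.append(a)
            rhs.append(len(V) - 1)
    return rows, rhs

def lp_decode(H, gammas):
    """Relaxed LP decoding: minimize gamma^T x over the intersection of
    the per-check polytopes, with box constraints 0 <= x_i <= 1."""
    n = len(gammas)
    A, b = [], []
    for row in H:
        nb = [i for i in range(n) if row[i]]
        rows, rhs = check_inequalities(nb, n)
        A += rows
        b += rhs
    res = linprog(gammas, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x

# Illustrative toy example: a single parity check on three bits.
H = [[1, 1, 1]]
gammas = [-3.0, -1.0, 2.0]
print(np.round(lp_decode(H, gammas), 3))  # optimum at the codeword (1, 1, 0)
```

For a single check the relaxation is exact (the polytope is the convex hull of the even-weight vectors); with several overlapping checks the polytope can acquire the fractional vertices discussed above.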

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{LP Decoding using ADMM}%
\label{sec:dec:LP Decoding using ADMM}

\begin{itemize}
\item Why ADMM?
\item Adaptive Linear Programming?
\item How ADMM is adapted to LP decoding
\end{itemize}
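Since this section is still an outline, the following is only a generic sketch of the scaled-form \ac{ADMM} iteration (not the decoder described in the literature): it minimizes a linear cost over a deliberately simplified feasible set, the box $[0,1]^n$, whereas an actual ADMM decoder projects onto the per-check parity polytopes. All names are illustrative.

```python
import numpy as np

def admm_box_lp(gamma, rho=1.0, iters=200):
    """Scaled-form ADMM for: minimize gamma^T x subject to x in [0,1]^n,
    split as f(x) = gamma^T x, g(z) = indicator of the box, constraint x = z.
    Only the x/z/u update pattern matters here; in ADMM-based LP decoding
    the z-update projects onto parity polytopes instead of a plain box."""
    n = len(gamma)
    z = np.full(n, 0.5)
    u = np.zeros(n)
    for _ in range(iters):
        x = z - u - gamma / rho       # argmin_x gamma^T x + (rho/2)||x - z + u||^2
        z = np.clip(x + u, 0.0, 1.0)  # projection onto the feasible set
        u = u + x - z                 # scaled dual update
    return z

gamma = np.array([-2.0, 0.5, 1.5])
print(admm_box_lp(gamma))  # picks x_i = 1 exactly where gamma_i < 0: [1. 0. 0.]
```

Each iteration only needs a cheap local minimization and a projection, which is what makes the method attractive for the large, structured linear programs that arise in decoding.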