Wrote beginning of LP decoding theory

This commit is contained in:
Andreas Tsouchlos 2023-02-16 01:07:21 +01:00
parent 773f3b1109
commit 3886074ee1
4 changed files with 88 additions and 28 deletions


@@ -40,16 +40,6 @@
long = frame error rate
}
%
% M
%
@@ -64,6 +54,20 @@
long = maximum likelihood
}
%
% L
%
\DeclareAcronym{LP}{
short = LP,
long = linear programming
}
\DeclareAcronym{LDPC}{
short = LDPC,
long = low-density parity-check
}
%
% P
%


@@ -25,11 +25,15 @@ the \ac{ML} decoding problem:%
%
\begin{align*}
\hat{\boldsymbol{c}}_{\text{\ac{MAP}}} &= \argmax_{\boldsymbol{c} \in \mathcal{C}}
f_{\boldsymbol{C} \mid \boldsymbol{Y}} \left( \boldsymbol{c} \mid \boldsymbol{y} \right)\\
\hat{\boldsymbol{c}}_{\text{\ac{ML}}} &= \argmax_{\boldsymbol{c} \in \mathcal{C}}
f_{\boldsymbol{Y} \mid \boldsymbol{C}} \left( \boldsymbol{y} \mid \boldsymbol{c} \right)
.\end{align*}%
%
\todo{Note about these generally being the same thing when the a priori
probability is uniformly distributed}%
\todo{Here the two problems are written in terms of $\hat{\boldsymbol{c}}$; below MAP
decoding is applied in terms of $\hat{\boldsymbol{x}}$. Is that a problem?}%
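%
A sketch of why the two problems coincide when the prior is uniform
(i.e., $f_{\boldsymbol{C}}\left( \boldsymbol{c} \right)
= \frac{1}{\left| \mathcal{C} \right|}$ for all
$\boldsymbol{c} \in \mathcal{C}$): by Bayes' rule,%
%
\begin{align*}
f_{\boldsymbol{C} \mid \boldsymbol{Y}} \left( \boldsymbol{c} \mid \boldsymbol{y} \right)
= \frac{f_{\boldsymbol{Y} \mid \boldsymbol{C}}
\left( \boldsymbol{y} \mid \boldsymbol{c} \right)
f_{\boldsymbol{C}} \left( \boldsymbol{c} \right)}
{f_{\boldsymbol{Y}} \left( \boldsymbol{y} \right)}
,\end{align*}%
%
and for uniform $f_{\boldsymbol{C}}$ neither
$f_{\boldsymbol{C}}\left( \boldsymbol{c} \right)$ nor
$f_{\boldsymbol{Y}}\left( \boldsymbol{y} \right)$ depends on the maximizing
$\boldsymbol{c}$, so both problems return the same $\hat{\boldsymbol{c}}$.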
The goal is to arrive at a formulation in which a certain objective function
$f$ is minimized under certain constraints:%
%
@@ -41,7 +45,7 @@ $f$ is minimized under certain constraints:%
In contrast to the established message-passing decoding algorithms,
the viewpoint then changes from observing the decoding process in its
Tanner graph representation (as shown in figure \ref{fig:dec:tanner})
to a spatial representation, where the codewords are some of the vertices
of a hypercube and the goal is to find the point $\boldsymbol{x}$,
\todo{$\boldsymbol{x}$? Or some other variable?}
which minimizes the objective function $f$ (as shown in figure \ref{fig:dec:spacial}).
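%
As a concrete instance (a sketch, using the $\left\{ 0,1 \right\}$
representation of the codewords): for the length-3 single parity-check code%
%
\begin{align*}
\mathcal{C} = \left\{ 000,\, 011,\, 101,\, 110 \right\}
\subset \left\{ 0, 1 \right\}^3
,\end{align*}%
%
the four codewords are vertices of the unit cube, and the decoder searches
this cube for the point minimizing the objective function $f$.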
@@ -127,6 +131,8 @@ which minimizes the objective function $f$ (as shown in figure \ref{fig:dec:spac
\caption{Spatial representation of a single parity-check code}
\label{fig:dec:spacial}
\end{subfigure}%
\caption{Different representations of the decoding problem}
\end{figure}
@@ -135,6 +141,52 @@ which minimizes the objective function $f$ (as shown in figure \ref{fig:dec:spac
\section{LP Decoding using ADMM}%
\label{sec:dec:LP Decoding using ADMM}
\Ac{LP} decoding is a subject area introduced by Feldman et al.
\cite{feldman_paper}. They reframed the decoding problem as an
\textit{integer linear program} and subsequently presented a relaxation to
a \textit{linear program}, lifting the integrality requirement.
The optimization method for solving this problem that is examined in this
work is the \ac{ADMM}.
\todo{With or without 'the'?}
\todo{Why choose ADMM?}
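%
As general background (a generic sketch in standard \ac{ADMM} notation,
independent of the decoding application, and assuming an \textup{argmin}
operator typeset analogously to the existing \textup{argmax}), \ac{ADMM}
addresses problems of the form%
%
\begin{align*}
\min_{\boldsymbol{x}, \boldsymbol{z}}\,
f\left( \boldsymbol{x} \right) + g\left( \boldsymbol{z} \right)
\hspace{5mm} \text{s.t.} \hspace{2mm}
\boldsymbol{A} \boldsymbol{x} + \boldsymbol{B} \boldsymbol{z} = \boldsymbol{b}
\end{align*}%
%
by alternating the updates%
%
\begin{align*}
\boldsymbol{x}^{k+1} &= \argmin_{\boldsymbol{x}}
L_{\rho}\left( \boldsymbol{x}, \boldsymbol{z}^{k}, \boldsymbol{u}^{k} \right)\\
\boldsymbol{z}^{k+1} &= \argmin_{\boldsymbol{z}}
L_{\rho}\left( \boldsymbol{x}^{k+1}, \boldsymbol{z}, \boldsymbol{u}^{k} \right)\\
\boldsymbol{u}^{k+1} &= \boldsymbol{u}^{k}
+ \boldsymbol{A} \boldsymbol{x}^{k+1}
+ \boldsymbol{B} \boldsymbol{z}^{k+1} - \boldsymbol{b}
,\end{align*}%
%
where $L_{\rho}$ is the augmented Lagrangian in scaled form and
$\boldsymbol{u}$ the scaled dual variable.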
Feldman et al. begin by looking at the \ac{ML} decoding problem%
\footnote{They assume that all codewords are equally likely to be transmitted,
making the \ac{ML} and \ac{MAP} decoding problems essentially equivalent}%
\todo{Dot after footnote?}%
%
\begin{align*}
\hat{\boldsymbol{x}} = \argmax_{\boldsymbol{x} \in
\left\{ \left( -1 \right)^{\boldsymbol{c}}
\text{ : } \boldsymbol{c} \in \mathcal{C} \right\} }
f_{\boldsymbol{Y} \mid \boldsymbol{X}} \left( \boldsymbol{y} \mid \boldsymbol{x} \right)
.\end{align*}
%
\todo{Define $\mathcal{X}$ as $\left\{ \left( -1 \right)
^{\boldsymbol{c}} : \boldsymbol{c}\in \mathcal{C} \right\} $?}%
They suggest that, since the logarithm is strictly monotonically increasing,
maximizing the likelihood
$f_{\boldsymbol{Y} \mid \boldsymbol{X}}\left( \boldsymbol{y} \mid \boldsymbol{x} \right)$
is equivalent to minimizing the negative log-likelihood.
\ldots
Based on this, they propose their cost function%
\footnote{In this context, \textit{cost function} and \textit{objective function}
mean the same thing}
for the \ac{LP} decoding problem:%
%
\begin{align*}
\sum_{i=1}^{n} \gamma_i c_i,
\hspace{5mm} \gamma_i = \ln\left(
\frac{f_{\boldsymbol{Y} \mid \boldsymbol{C}}
\left( Y_i = y_i \mid C_i = 0 \right) }
{f_{\boldsymbol{Y} \mid \boldsymbol{C}}
\left( Y_i = y_i \mid C_i = 1 \right) } \right)
.\end{align*}
%
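For instance, assuming \ac{BPSK} modulation with
$x_i = \left( -1 \right)^{c_i}$ and an \ac{AWGN} channel with noise variance
$\sigma^2$, so that
$f_{\boldsymbol{Y} \mid \boldsymbol{C}}\left( Y_i = y_i \mid C_i = c_i \right)
\propto \exp\left( - \left( y_i - \left( -1 \right)^{c_i} \right)^2
/ \left( 2 \sigma^2 \right) \right)$,
the coefficients reduce to%
%
\begin{align*}
\gamma_i = \frac{\left( y_i + 1 \right)^2 - \left( y_i - 1 \right)^2}
{2 \sigma^2}
= \frac{2 y_i}{\sigma^2}
.\end{align*}%
%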
\begin{itemize}
\item Equivalent \ac{ML} optimization problem
\item \Ac{LP} relaxation
@@ -305,6 +357,7 @@ The components of the gradient of the code-constraint polynomial can be computed
- \prod_{j\in\mathcal{A}\left( i \right) }x_j \right)
.\end{align*}%
\todo{Only multiplication?}%
\todo{$x_k$: $k$ or some other indexing variable?}%
%
In the case of \ac{AWGN}, the likelihood
$f_{\boldsymbol{Y} \mid \boldsymbol{X}}\left( \boldsymbol{y} \mid \boldsymbol{x} \right)$


@@ -24,6 +24,20 @@ Lastly, the optimization methods utilized are described.
\item Probabilistic quantities (random variables, \acp{PDF}, \ldots)
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Preliminaries: Channel Model and Modulation}
\label{sec:theo:Preliminaries: Channel Model and Modulation}
%
% TODOs
%
\begin{itemize}
\item \Ac{AWGN}
\item \Ac{BPSK}
\end{itemize}
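%
A minimal sketch of the model these two items refer to (assuming the
\ac{BPSK} mapping $x_i = \left( -1 \right)^{c_i}$ also used in the decoding
chapter):%
%
\begin{align*}
y_i = x_i + n_i,
\hspace{5mm} x_i = \left( -1 \right)^{c_i},
\hspace{5mm} n_i \sim \mathcal{N}\left( 0, \sigma^2 \right)
.\end{align*}%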
%
% Figure showing notation for entire coding / decoding process
%
@@ -58,17 +72,6 @@ Lastly, the optimization methods utilized are described.
\caption{Overview of notation}
\label{fig:notation}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


@@ -122,12 +122,12 @@
% - Results summary
%
% 2. Theoretical Background
% 2.1 Preliminaries: Channel Model and Modulation
% - AWGN
% - BPSK
% 2.2 Notation
% - General remarks on notation (matrices, PDF, etc.)
% - Diagram from midterm presentation
% 2.3 Channel Coding with LDPC Codes
% - Introduction
% - Binary linear codes