diff --git a/latex/thesis/chapters/decoding_techniques.tex b/latex/thesis/chapters/decoding_techniques.tex
index f74f6d4..dfafca7 100644
--- a/latex/thesis/chapters/decoding_techniques.tex
+++ b/latex/thesis/chapters/decoding_techniques.tex
@@ -30,25 +30,24 @@ the \ac{ML} decoding problem:%
     f_{\boldsymbol{Y} \mid \boldsymbol{C}} \left( \boldsymbol{y} \mid \boldsymbol{c} \right)
 .\end{align*}%
 %
-\todo{Note about these generally being the same thing, when the a priori probability
-is uniformly distributed}%
-\todo{Here the two problems are written in terms of $\hat{\boldsymbol{c}}$; below MAP
-decoding is applied in terms of $\hat{\boldsymbol{x}}$. Is that a problem?}%
 The goal is to arrive at a formulation, where a certain objective function $f$
 has to be minimized under certain constraints:%
 %
 \begin{align*}
-    \text{minimize } f\left( \boldsymbol{x} \right)\\
-    \text{subject to \ldots}
-.\end{align*}
+    \text{minimize } f\left( \boldsymbol{c} \right)\\
+    \text{subject to $\boldsymbol{c} \in D$}
+,\end{align*}%
+%
+where $D$ is the set of admissible values for $\boldsymbol{c}$ and represents
+the constraints.
 In contrast to the established message-passing decoding algorithms, the
 viewpoint then changes from observing the decoding process in its tanner
 graph representation (as shown in figure \ref{fig:dec:tanner})
-to a spacial representation, where the codewords are some of the edges
-of a hypercube and the goal is to find that point $\boldsymbol{x}$,
-\todo{$\boldsymbol{x}$? Or some other variable?}
-which minimizes the objective function $f$ (as shown in figure \ref{fig:dec:spacial}).
+to a spacial representation (figure \ref{fig:dec:spacial}),
+where the codewords are some of the vertices of a hypercube.
+The goal is to find the point $\boldsymbol{c}$
+that minimizes the objective function $f$.
 %
 % Figure showing decoding space
@@ -143,11 +142,11 @@ which minimizes the objective function $f$ (as shown in figure \ref{fig:dec:spac
         \node[color=KITblue, right=0cm of c110] {$\left( 1, 1, 0 \right) $};
         \node[color=KITblue, above=0cm of c011] {$\left( 0, 1, 1 \right) $};
 
-        % x
+        % c
         \node[color=KITgreen, fill=KITgreen,
-            draw, circle, inner sep=0pt, minimum size=4pt] (f) at (0.9, 0.7, 1) {};
-        \node[color=KITgreen, right=0cm of f] {$\boldsymbol{x}$};
+            draw, circle, inner sep=0pt, minimum size=4pt] (c) at (0.9, 0.7, 1) {};
+        \node[color=KITgreen, right=0cm of c] {$\boldsymbol{c}$};
 
     \end{tikzpicture}
     \caption{Spacial representation of a single parity-check code}
@@ -164,7 +163,6 @@ which minimizes the objective function $f$ (as shown in figure \ref{fig:dec:spac
 \label{sec:dec:LP Decoding}
 
 \Ac{LP} decoding is a subject area introduced by Feldman et al.
-\todo{Space before citation?}
 \cite{feldman_paper}.
 They reframe the decoding problem as an \textit{integer linear program} and
 subsequently present two relaxations into \textit{linear programs}, one
 representing a formulation of exact \ac{LP}
@@ -179,7 +177,8 @@ making the \ac{ML} and \ac{MAP} decoding problems equivalent.}%
 %
 \begin{align}
     \hat{\boldsymbol{c}} = \argmax_{\boldsymbol{c} \in \mathcal{C}}
-        f_{\boldsymbol{Y} \mid \boldsymbol{C}} \left( \boldsymbol{y} \mid \boldsymbol{c} \right)%
+        f_{\boldsymbol{Y} \mid \boldsymbol{C}}
+        \left( \boldsymbol{y} \mid \boldsymbol{c} \right)%
     \label{eq:lp:ml}
 .\end{align}%
 %
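
Note on the minimize/subject-to template introduced in this change: one concrete instance, following the standard formulation of Feldman et al. \cite{feldman_paper}, pairs a cost that is linear in the per-symbol log-likelihood ratios with a relaxed constraint set. The block below is only a sketch; the block length $n$, the costs $\lambda_i$, the per-symbol channel law $f_{Y_i \mid C_i}$, and the polytope $\mathcal{P}$ are not defined in the patched text and are used here purely for illustration:

\begin{align*}
    \text{minimize }   & \sum_{i=1}^{n} \lambda_i c_i ,
        \qquad \lambda_i = \log
            \frac{f_{Y_i \mid C_i} \left( y_i \mid 0 \right)}
                 {f_{Y_i \mid C_i} \left( y_i \mid 1 \right)}\\
    \text{subject to } & \boldsymbol{c} \in \mathcal{P} ,
        \qquad \operatorname{conv} \left( \mathcal{C} \right)
            \subseteq \mathcal{P} \subseteq \left[ 0, 1 \right]^{n}
.\end{align*}

Choosing $D = \mathcal{C}$ gives the integer linear program, i.e.\ exact \ac{ML} decoding, while relaxing the constraint set to a polytope $\mathcal{P} \supseteq \operatorname{conv} \left( \mathcal{C} \right)$, such as Feldman's fundamental polytope, yields the \ac{LP} decoder treated in section \ref{sec:dec:LP Decoding}.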