Done with first draft of thesis structure

This commit is contained in:
Andreas Tsouchlos 2023-02-13 14:12:59 +01:00
parent 01c38db5fc
commit 619927925e
10 changed files with 251 additions and 222 deletions

View File

@ -1,44 +0,0 @@
\chapter*{Abstract (Master's Theses only)}
This is an abstract. It should be about half a page to a page in length.
It mostly serves as clarification for what to expect when reading the thesis:
What is your topic, what are the most striking results, and how were they
obtained.
It is \emph{not} an introduction to the topic (that's what the introduction
chapter is for). Now, for a few lines of content:
Polar codes are the first codes to asymptotically achieve channel capacity with low complexity encoders and decoders.
They were first introduced by Erdal Arikan in 2009 \cite{polar:arikan09}.
Channel coding has always been a challenging task because it consumes considerable computational resources, especially in software implementations.
Software radio is becoming more prominent because it offers several advantages, among them higher flexibility and better maintainability.
Future radio systems are aimed at being run on virtualized servers instead of dedicated hardware in base stations \cite{cloudran:2015}.
Polar codes may be a promising candidate for future radio systems if they can be implemented efficiently in software.
In this thesis, the theory behind polar codes and a polar code implementation in GNU Radio are presented.
This implementation is then evaluated regarding parameterization options and their impact on error correction performance.
The evaluation includes a comparison to state-of-the-art \ac{LDPC} codes.
\begin{figure}[h]
\begin{subfigure}[t]{.49\textwidth}
\begin{center}
\def\dist{1.5}
\def\power{3}
\input{figures/polar_nbit_encoder_natural}
\caption{8-bit polar encoder}
\label{abs:polar_8bit_encoder_natural}
\end{center}
\end{subfigure}\,%
\begin{subfigure}[t]{.49\textwidth}
\begin{center}
\def\dist{1.5}
\def\power{3}
\input{figures/polar_nbit_decoder}
\caption{8-bit polar decoder}
\label{abs:polar_8bit_decoder}
\end{center}
\end{subfigure}%
\caption{Polar code encoding and decoding}
\label{abs:encoder-decoder}
\end{figure}
The polar encoder is shown in Fig.~\ref{abs:polar_8bit_encoder_natural}.

View File

@ -0,0 +1,30 @@
\chapter{Analysis of Results}%
\label{chapter:Analysis of Results}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{LP Decoding using ADMM}%
\label{sec:ana:LP Decoding using ADMM}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Proximal Decoding}%
\label{sec:ana:Proximal Decoding}
\begin{itemize}
\item Parameter choice
\item FER
\item Improved implementation
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Comparison of BP, Proximal Decoding and LP Decoding using ADMM}%
\label{sec:ana:Comparison of BP, Proximal Decoding and LP Decoding using ADMM}
\begin{itemize}
\item Decoding performance
\item Complexity \& runtime (mention difficulty in reaching conclusive
results when comparing implementations)
\end{itemize}

View File

@ -1,10 +1,8 @@
\chapter{Conclusion}\label{chapter:conclusion}
So you made it!
This is the last part of your thesis.
Tell everyone what happened.
You did something... and you could show that ... followed.
In the end make a personal statement.
Why would one consider this thesis to be useful?
\chapter{Conclusion}%
\label{chapter:conclusion}
\begin{itemize}
\item Summary of results
\item Future work
\end{itemize}

View File

@ -0,0 +1,33 @@
\chapter{Decoding Techniques}%
\label{chapter:decoding_techniques}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Decoding using Optimization Methods}%
\label{sec:dec:Decoding using Optimization Methods}
\begin{itemize}
\item General methodology
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{LP Decoding using ADMM}%
\label{sec:dec:LP Decoding using ADMM}
\begin{itemize}
\item Equivalent ML optimization problem
\item LP relaxation
\item ADMM as a solver
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Proximal Decoding}%
\label{sec:dec:Proximal Decoding}
\begin{itemize}
\item Formulation of optimization problem
\item Proximal gradient method as a solver
\end{itemize}

View File

@ -0,0 +1,8 @@
\chapter{Discussion}%
\label{chapter:discussion}
\begin{itemize}
\item Proximal decoding improvement limitations
\end{itemize}
% - Improvement pitfalls

View File

@ -1,27 +1,10 @@
\chapter{Introduction}
This is the introductory chapter.
It is usually a page or two.
Tell a story about the objectives, explain them briefly and outline the structure of your thesis.
\chapter{Introduction}%
\label{chapter:introduction}
\section{Structuring Your Thesis}
An example structure would be:
\begin{enumerate}
\item Introduction
\item Theoretical basis
\item Your work
\item Measurement results
\item Conclusion
\end{enumerate}
This is just an example, choose a structure that fits the nature of your work.
\section{References}
Citing references is always good. Plagiarizing, however, is strictly forbidden!
\section{Images}
If possible use vector graphics. Only use pixel graphics for photos. TikZ is also an interesting option for creating all sorts of images.
And don't forget \SI{10.0815}{\giga\byte} of data is quite a lot.
This is an example how to use the siunitx package.
\begin{itemize}
\item Problem definition
\item Motivation
\item Results summary
\end{itemize}

View File

@ -0,0 +1,34 @@
\chapter{Methodology and Implementation}%
\label{chapter:methodology_and_implementation}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{General Implementation Process}%
\label{sec:impl:General implementation process}
\begin{itemize}
\item First Python using NumPy
\item Then C++ using Eigen
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{LP Decoding using ADMM}%
\label{sec:impl:LP Decoding using ADMM}
\begin{itemize}
\item Choice of parameters
\item Selected projection algorithm
\item Adaptive linear programming decoding?
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Proximal Decoding}%
\label{sec:impl:Proximal Decoding}
\begin{itemize}
\item Choice of parameters
\item Road to improved implementation
\end{itemize}

View File

@ -1,139 +0,0 @@
\acresetall
% This is an example chapter from 'Polar Codes for Software Radio'. Do not use it but delete it! It serves as an example!
% Titles should be in title case (https://en.wikipedia.org/wiki/Title_case)
\chapter{An Example Chapter}\label{chapter:systemmodel}
Polar codes are defined for a specific system model.
The objective of this chapter is to introduce the key concepts.
Notations are introduced and important terms are revisited in order to refer to them.
\section{Key Channel Coding Concepts}
The system model used throughout this thesis follows the remarks in~\cite{Richardson:2008:MCT} and~\cite{polar:arikan09}.
It is intended to define the domain for which polar codes are developed.
The objective of channel coding is to transmit information from a source to a sink over a point-to-point connection with as few errors as possible.
A source wants to transmit binary data $u \in \mathcal{U} = \{0, 1\}$ to a sink where $u$ represents one draw of a binary uniformly distributed random variable.
The source symbols are encoded, transmitted over a channel and decoded afterwards in order to pass an estimate $\hat{u}$ to a sink.
This thesis uses a common notation for vectors, which is briefly introduced here.
A variable $x$ may assume any value in an alphabet $\mathcal{X}$, i.e. $x \in \mathcal{X}$.
Multiple variables are combined into a (row) vector $\bm{x}^n = (x_0, \ldots x_{n-1})$ of size $n$ with $\bm{x}^n \in \mathcal{X}^n$.
A subvector of $\bm{x}^n$ is denoted $\bm{x}_i^j = (x_i, \ldots x_{j-1})$ where $0 \leq i \leq j \leq n$.
A vector where $i=j$ is an empty vector.
A vector $\bm{x}^n$ ($n$ even) may be split into even and odd subvectors which are denoted $\bm{x}_{0,\mathrm{e}}^{n} = (x_0, x_2, \ldots x_{n-2})$, $\bm{x}_{0,\mathrm{o}}^{n} = (x_1, x_3, \ldots x_{n-1})$.
This numbering convention is in accordance with~\cite{dijkstra:zerocounting}, where the author makes a strong point for this exact notation and some papers on polar codes follow it too, e.g.~\cite{polar:talvardy:howtoCC}.
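As a brief illustration of this notation, for $n = 4$ one obtains $\bm{x}^4 = (x_0, x_1, x_2, x_3)$, the subvector $\bm{x}_1^3 = (x_1, x_2)$, and the even and odd subvectors $\bm{x}_{0,\mathrm{e}}^{4} = (x_0, x_2)$ and $\bm{x}_{0,\mathrm{o}}^{4} = (x_1, x_3)$.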
\subsection{Encoder}
The encoder takes a frame $\bm{u}^k$ and maps it to a binary codeword $\bm{x}^n$, where $k$ and $n$ denote the vector sizes of a frame and a codeword respectively, with $k \leq n$.
The set of all valid codewords for an encoder is a code $\mathcal{C}$.
It should be noted that $|\mathcal{C}| = 2^k$ must hold in order for the code to be able to represent every possible frame.
Not all $2^n$ possible words from $\mathcal{X}^n$ are used for transmission.
The difference between the $2^n$ possible words and the $2^k$ codewords actually used is called redundancy.
With those two values, the code rate is defined as $r = \frac{k}{n}$.
It is a measure of efficient channel usage.
The encoder is assumed to be linear and to perform a one-to-one mapping of frames to codewords.
A code is linear if $\alpha \bm{x} + \alpha^\prime \bm{x}^\prime \in \mathcal{C}$ holds for all $\bm{x}, \bm{x}^\prime \in \mathcal{C}$ and all $\alpha, \alpha^\prime \in \mathbb{F}$.
It should be noted that all operations are done over the Galois field GF(2) or $\mathbb{F} = \{0, 1\}$ unless stated otherwise.
Then the expression can be simplified to
\begin{equation}
\bm{x} + \bm{x}^\prime \in \mathcal{C} \quad \forall \bm{x}, \bm{x}^\prime \in \mathcal{C}.
\end{equation}
A linear combination of two codewords must yield a codeword again.
For linear codes it is possible to find a generator matrix $\bm{G} \in \mathbb{F}^{k \times n}$ and obtain a codeword from a frame with $\bm{x}^n = \bm{u}^k \bm{G}^{k \times n}$.
Every linear code can, possibly after a permutation of codeword positions, be transformed into systematic form with $\bm{G} = (\bm{I}_k\ \bm{P})$. Therein, $\bm{I}_k$ is the identity matrix of size $k \times k$.
If $\bm{G}$ is systematic, all elements of a frame $\bm{u}^k$ are also elements of the codeword $\bm{x}^n$.
Also, a parity check matrix $\bm{H} = (-\bm{P}^\top\ \bm{I}_{n-k})$ of size $\dim\bm{H} = (n-k) \times n$ can be obtained from the systematic $\bm{G}$.
The parity check matrix can be used to define the code, as $\bm{H} \bm{x}^\top = \bm{0}^\top$, $\forall \bm{x} \in \mathcal{C}$.
Thus, a parity check matrix can be used to verify correct codeword reception and, furthermore, serves as the basis for error correction.
A code can be characterized by the minimum distance between any two codewords.
In order to obtain this value we use the Hamming distance.
This distance $d(\bm{v}^n,\bm{x}^n)$ equals the number of positions in $\bm{v}^n$ that differ from $\bm{x}^n$.
The minimum distance of a code is then defined by $d(\mathcal{C}) = \min\{d(\bm{x},\bm{v}): \bm{x},\bm{v} \in \mathcal{C}, \bm{x} \neq \bm{v}\}$.
For linear codes, this computation can be simplified by comparing all codewords to the zero codeword, since the difference of any two codewords is itself a codeword: $d(\mathcal{C}) = \min\{d(\bm{x},\bm{0}): \bm{x} \in \mathcal{C}, \bm{x} \neq \bm{0}\}$.
\subsection{Channel Model}\label{sec:channel_model}
Channel coding relies on a generic channel model.
Its input is $x \in \mathcal{X}$ and its distorted output is $y \in \mathcal{Y}$.
A channel is denoted by $W: \mathcal{X} \rightarrow \mathcal{Y}$ along with its transition probability $W(y|x), x \in \mathcal{X}, y \in \mathcal{Y}$.
A \ac{DMC} does not have memory, thus every symbol transmission is independent of any other.
Combined with a binary input alphabet it is called a \ac{BDMC}.
For a symmetric channel, $P(y|1) = P(-y|-1)$ must hold for an output alphabet $y \in \mathcal{Y}, \mathcal{Y} \subset \mathbb{R}$~\cite{Richardson:2008:MCT}.
Assuming symmetry for a \ac{BDMC} leads to a symmetric \ac{BDMC}.
In Sec.~\ref{theory:channels}, several examples of such channels are discussed.
This channel concept may be extended to vector channels.
A vector channel $W^n$ corresponds to $n$ independent uses of a channel $W$ and is denoted as $W^n : \mathcal{X}^n \rightarrow \mathcal{Y}^n$.
Also, vector transition probabilities are denoted $W^n(\bm{y}^n|\bm{x}^n) = \prod_{i=0}^{n-1} W(y_i|x_i)$.
\subsection{Decoder}
A decoder receives a possibly corrupted word $\bm{y}$ and checks its validity by asserting $\bm{H} \bm{y}^\top = \bm{0}^\top$, thus performing error detection.
A more sophisticated decoder tries to correct errors by using redundant information transmitted in a codeword.
An optimal decoder strategy is to maximize the \emph{a posteriori} probability.
Given the probability of each codeword $P(\bm{x})$ and the channel transition probability $P(\bm{y}|\bm{x})$, the task at hand is to find the most likely transmitted codeword $\bm{x}$ under the observation $\bm{y}$, $P(\bm{x}|\bm{y})$.
This is denoted
\begin{equation}
\hat{\bm{x}}^{\text{MAP}} = \argmax_{\bm{x} \in \mathcal{C}} P(\bm{x}|\bm{y}) \stackrel{(i)}{=} \argmax_{\bm{x} \in \mathcal{C}} P(\bm{y}|\bm{x}) \frac{P(\bm{x})}{P(\bm{y})} \stackrel{(ii)}{=} \argmax_{\bm{x} \in \mathcal{C}} P(\bm{y}|\bm{x}) P(\bm{x})
\end{equation}
where we have used Bayes' rule in $(i)$ and the simplification in $(ii)$ is due to the fact that $P(\bm{y})$ is constant and does not change when varying $\bm{x}$.
Assume that every codeword is transmitted with identical probability $P(\bm{x}) = P(\bm{v})$, $\forall \bm{x}, \bm{v} \in \mathcal{C}$.
This simplifies the equation and yields the \ac{ML} decoder
\begin{equation}
\hat{\bm{x}}^{\text{ML}} = \argmax_{\bm{x} \in \mathcal{C}} P(\bm{y}|\bm{x})
\end{equation}
which estimates the most likely codeword to be transmitted given a received possibly erroneous codeword~\cite{Richardson:2008:MCT}.
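As a short worked example of this rule (a sketch only, assuming a \ac{BSC} with crossover probability $\epsilon < \frac{1}{2}$; the symbol $\epsilon$ is chosen purely for illustration), the channel transition probability is $P(\bm{y}|\bm{x}) = \epsilon^{d(\bm{y},\bm{x})} (1-\epsilon)^{n - d(\bm{y},\bm{x})}$, so that
\begin{equation}
\hat{\bm{x}}^{\text{ML}} = \argmax_{\bm{x} \in \mathcal{C}} \epsilon^{d(\bm{y},\bm{x})} (1-\epsilon)^{n - d(\bm{y},\bm{x})} = \argmax_{\bm{x} \in \mathcal{C}} \left(\frac{\epsilon}{1-\epsilon}\right)^{d(\bm{y},\bm{x})},
\end{equation}
where the constant factor $(1-\epsilon)^n$ has been dropped.
Since $\frac{\epsilon}{1-\epsilon} < 1$, the \ac{ML} estimate is simply the codeword closest to $\bm{y}$ in Hamming distance.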
In conclusion, the task at hand is to find a code which inserts redundancy intelligently, so a decoder can use this information to detect and correct transmission errors.
\subsection{Asymptotically Good Codes}\label{theory:repetition_code}
A repetition code is a very simple code which helps clarify certain key concepts in the channel coding domain.
Assume that the encoder and decoder use a repetition code.
For example, a repetition code with $k=1$ and $n = 3$ has two codewords $\mathcal{C} = \{(0,0,0), (1,1,1)\}$.
Thus in this example $r=\frac{1}{3}$.
We can also obtain its generator and parity check matrices as
\begin{equation}
\bm{G} = \begin{pmatrix} 1 & 1 & 1 \end{pmatrix},\qquad \bm{H} = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}\,.
\end{equation}
The parity check matrix $\bm{H}$ can be used to detect whether a transmission error occurred by verifying that $\bm{H} \bm{y}^\top = \bm{0}^\top$ holds for the received word $\bm{y}$.
In the case where an error occurred, an \ac{ML} decoder for a \ac{BSC} carries out a majority decision to estimate the most likely codeword.
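As a minimal numerical sketch (the received word is hypothetical and serves only as an illustration), assume $\bm{y} = (1, 0, 1)$ is received.
The syndrome check yields
\begin{equation}
\bm{H} \bm{y}^\top = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \neq \bm{0}^\top,
\end{equation}
so an error is detected.
The majority decision then selects $\hat{\bm{x}} = (1,1,1)$, i.e. $\hat{u} = 1$, since for a \ac{BSC} with crossover probability below $\frac{1}{2}$ a single bit flip is more likely than two.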
Repetition codes shed light on a problem common to many codes.
Improving the reliability of a code comes at the expense of a lower code rate.
Increasing $n$ decreases $r = \frac{1}{n}$ because $k=1$ for all repetition codes.
Thus, a very reliable repetition code has a vanishing rate, as $\lim_{n \to \infty} r = 0$.
The above result leads to the definition of asymptotically good codes $\mathcal{C}(n_s, k_s, d_s)$ \cite{Friedrichs:2010:error-control-coding}.
Two properties must hold for this class of codes:
\begin{equation}
R = \lim_{s \to \infty} \frac{k_s}{n_s} > 0 \quad \textrm{and} \quad \lim_{s \to \infty} \frac{d_s}{n_s} > 0.
\end{equation}
The first condition requires the asymptotic code rate to remain positive, while the second requires the minimum distance to grow proportionally to the block length $n$.
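For the repetition code family, where $k_s = 1$ and $d_s = n_s$, these limits evaluate to
\begin{equation}
R = \lim_{s \to \infty} \frac{1}{n_s} = 0 \quad \textrm{and} \quad \lim_{s \to \infty} \frac{d_s}{n_s} = 1,
\end{equation}
so the distance condition is met but the rate condition is violated; repetition codes are therefore not asymptotically good.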
\section{Channels}\label{theory:channels}
Several common channel models exist to describe the characteristics of a physical transmission.
Common properties were discussed in Sec.~\ref{sec:channel_model}, whereas this section focuses on their differences.
The three most important channel models for polar codes are presented, namely the \ac{BSC}, the \ac{BEC} and the \ac{AWGN} channel.
\subsection{AWGN Channel}
An \ac{AWGN} channel as used in this thesis has a binary input alphabet and a continuous output alphabet $\mathcal{Y} = \mathbb{R}$.
Each input symbol is affected by Gaussian noise to yield an output symbol.
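One common way to write this model (a sketch only; the antipodal mapping $x \mapsto 1 - 2x$ and the noise variance $\sigma^2$ are conventions assumed here, not necessarily the ones used elsewhere in this text) is
\begin{equation}
y_i = (1 - 2 x_i) + z_i, \qquad z_i \sim \mathcal{N}(0, \sigma^2),
\end{equation}
which yields the transition density
\begin{equation}
W(y|x) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left(-\frac{(y - (1 - 2x))^2}{2 \sigma^2}\right).
\end{equation}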
\subsection{Capacity and Reliability}
Channels are often characterized by two important measures: their capacity and their reliability.
These measures are introduced in this section. The channel capacity for symmetric \acp{BDMC} with input alphabet $\mathcal{X} = \{0,1\}$ can be calculated by
\begin{equation}
I(W) = \frac{1}{2} \sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}} W(y|x) \log_2 \frac{W(y|x)}{\frac{1}{2} (W(y|0) + W(y|1))},
\end{equation}
where we assume equiprobable channel input symbols $P(X=0) = P(X=1) = \frac{1}{2}$, which is the capacity-achieving input distribution for symmetric \acp{BDMC}.
The capacity defines the highest rate at which a reliable transmission (i.e., with a vanishing error probability after decoding) over a channel $W$ can be realized.
For symmetric channels, $I(W)$ coincides with the Shannon capacity~\cite{sha49}.
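As a short worked example, consider a \ac{BEC} with erasure probability $\epsilon$ and output alphabet $\mathcal{Y} = \{0, ?, 1\}$ (the symbol $\epsilon$ is again only illustrative).
For $y \in \{0, 1\}$ only one input has a nonzero transition probability, so each such term contributes $\log_2 2 = 1$ weighted by $1 - \epsilon$, while the erasure output satisfies $W(?|0) = W(?|1) = \epsilon$ and contributes $\log_2 1 = 0$.
The formula thus evaluates to
\begin{equation}
I(W) = \frac{1}{2} \left[ (1 - \epsilon) + (1 - \epsilon) \right] = 1 - \epsilon,
\end{equation}
which is the well-known capacity of the \ac{BEC}.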
The Bhattacharyya parameter
\begin{equation}
Z(W) = \sum_{y \in \mathcal{Y}} \sqrt{W(y|0) W(y|1)}
\end{equation}
is used to quantify a channel's reliability where a lower value for $Z(W)$ indicates higher reliability.
Also, $Z(W)$ is an upper bound on the probability of an \ac{ML} decision error~\cite{polar:arikan09}.
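Two standard examples follow directly from the definition (again with illustrative $\epsilon$): for a \ac{BEC} with erasure probability $\epsilon$ only the erasure output contributes, giving $Z(W) = \sqrt{\epsilon \cdot \epsilon} = \epsilon$, whereas for a \ac{BSC} with crossover probability $\epsilon$
\begin{equation}
Z(W) = \sqrt{(1 - \epsilon)\epsilon} + \sqrt{\epsilon (1 - \epsilon)} = 2 \sqrt{\epsilon (1 - \epsilon)}.
\end{equation}
Both expressions are small when the channel is good ($\epsilon$ close to $0$), matching the interpretation of $Z(W)$ as a reliability measure.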

View File

@ -0,0 +1,54 @@
\chapter{Theoretical Background}%
\label{chapter:theoretical_background}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Preliminaries: Channel Model and Modulation}
\label{sec:theo:Preliminaries: Channel Model and Modulation}
\begin{itemize}
\item AWGN
\item BPSK
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Notation}
\label{sec:theo:Notation}
\begin{itemize}
\item General remarks on notation (matrices, PDF, etc.)
\item Diagram from midterm presentation
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Channel Coding with LDPC Codes}
\label{sec:theo:Channel Coding with LDPC Codes}
\begin{itemize}
\item Introduction
\item Binary linear codes
\item LDPC codes
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Decoding LDPC Codes using Belief Propagation}
\label{sec:theo:Decoding LDPC Codes using Belief Propagation}
\begin{itemize}
\item Introduction to message passing
\item Overview of BP algorithm
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Optimization Methods}
\label{sec:theo:Optimization Methods}
\begin{itemize}
\item ADMM
\item Proximal gradient method
\end{itemize}

View File

@ -37,7 +37,7 @@
\pgfplotsset{compat=newest}
\usepgfplotslibrary{colorbrewer}
\tikzexternalize[prefix=build/]
%\tikzexternalize[prefix=build/]
%
% Generic packages
@ -78,7 +78,7 @@
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% DOCUMENT %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Document %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@ -92,9 +92,9 @@
\maketitle
\makestatutorydeclaration
% \makeCCdeclaration
%\makeCCdeclaration
% \include{chapters/abstract}
%\include{chapters/abstract}
%
% Main Document
@ -103,10 +103,82 @@
\cleardoublepage
\pagenumbering{arabic}
% 1. Introduction
% - Problem definition
% - Motivation
% - Results summary
%
% 2. Theoretical Background
% 2.1 Preliminaries: Channel Model and Modulation
% - AWGN
% - BPSK
% 2.2 Notation
% - General remarks on notation (matrices, PDF, etc.)
% - Diagram from midterm presentation
% 2.3 Channel Coding with LDPC Codes
% - Introduction
% - Binary linear codes
% - LDPC codes
% 2.4 Decoding LDPC Codes using Belief Propagation
% - Introduction to message passing
% - Overview of BP algorithm
% 2.5 Optimization Methods
% - ADMM
% - Proximal gradient method
%
% 3. Decoding Techniques
% 3.1 Decoding using Optimization Methods
% - General methodology
% 3.2 LP Decoding using ADMM
% - Equivalent ML optimization problem
% - LP relaxation
% - ADMM as a solver
% 3.3 Proximal Decoding
% - Formulation of optimization problem
% - Proximal gradient method as a solver
%
% 4. Methodology and Implementation
% 4.1 General Implementation Process
% - First Python using NumPy
% - Then C++ using Eigen
% 4.2 LP Decoding using ADMM
% - Choice of parameters
% - Selected projection algorithm
% - Adaptive linear programming decoding?
% 4.3 Proximal Decoding
% - Choice of parameters
% - Road to improved implementation
%
% 5. Analysis of Results
% 5.1 LP Decoding using ADMM
% 5.2 Proximal Decoding
% - Parameter choice
% - FER
% - Improved implementation
% 5.3 Comparison of BP, Proximal Decoding and LP Decoding using ADMM
% - Decoding performance
% - Complexity & runtime (mention difficulty in reaching conclusive
% results when comparing implementations)
%
% 6. Discussion
% - Proximal decoding improvement limitations
%
% 7. Conclusion
% - Summary of results
% - Future work
\tableofcontents
\cleardoublepage % make sure multipage TOCs are numbered correctly
\include{chapters/introduction}
\include{chapters/systemmodel}
\include{chapters/theoretical_background}
\include{chapters/decoding_techniques}
\include{chapters/methodology_and_implementation}
\include{chapters/analysis_of_results}
\include{chapters/discussion}
\include{chapters/conclusion}
%
@ -114,8 +186,8 @@
%
\appendix
% \listoffigures
% \listoftables
%\listoffigures
%\listoftables
\include{abbreviations}
\printbibliography