diff --git a/latex/thesis/chapters/decoding_techniques.tex b/latex/thesis/chapters/decoding_techniques.tex
index 25d9820..b9e0c36 100644
--- a/latex/thesis/chapters/decoding_techniques.tex
+++ b/latex/thesis/chapters/decoding_techniques.tex
@@ -689,6 +689,13 @@ The resulting formulation of the relaxed optimization problem becomes:%
 \begin{itemize}
   \item Why ADMM?
+  \begin{itemize}
+    \item Distributed nature, making it a competitor to BP
+          (which can also be implemented in a distributed manner)
+          (see the original ADMM paper)
+    \item Computational performance similar to BP has been demonstrated
+          (see the original ADMM paper)
+  \end{itemize}
   \item Adaptive linear programming?
   \item How ADMM is adapted to LP decoding
 \end{itemize}
diff --git a/latex/thesis/chapters/introduction.tex b/latex/thesis/chapters/introduction.tex
index 91c251e..011116b 100644
--- a/latex/thesis/chapters/introduction.tex
+++ b/latex/thesis/chapters/introduction.tex
@@ -5,6 +5,12 @@
 \begin{itemize}
   \item Problem definition
   \item Motivation
+  \begin{itemize}
+    \item Error floor when decoding with BP (it appears to be absent with LP decoding;
+          see the introduction of the original ADMM paper)
+    \item Strong theoretical guarantees that allow for better and better approximations
+          of ML decoding (see the introduction of the original ADMM paper)
+  \end{itemize}
   \item Results summary
 \end{itemize}