Added notes about motivation for ADMM and LP decoding in general
This commit is contained in:
parent e38af48845
commit c4dfcfbb24
@@ -689,6 +689,13 @@ The resulting formulation of the relaxed optimization problem becomes:%
 \begin{itemize}
 \item Why ADMM?
+\begin{itemize}
+\item Distributed nature, making it a competitor to BP
+(which can also be implemented in a distributed manner)
+(see the original ADMM paper)
+\item Computational performance similar to BP has been demonstrated
+(see the original ADMM paper)
+\end{itemize}
 \item Adaptive linear programming?
 \item How ADMM is adapted to LP decoding
 \end{itemize}
@@ -5,6 +5,12 @@
 \begin{itemize}
 \item Problem definition
 \item Motivation
+\begin{itemize}
+\item Error floor when decoding with BP (seems not to be present with LP decoding -
+see the original ADMM paper introduction)
+\item Strong theoretical guarantees that allow for increasingly tight approximations
+of ML decoding (see the original ADMM paper introduction)
+\end{itemize}
 \item Results summary
 \end{itemize}