proximal operators \cite[Sec. 4.4]{proximal_algorithms}.
When using \ac{ADMM} as an optimization method to solve the \ac{LP} decoding
problem specifically, this is not quite possible because of the multiple
constraints. \todo{Elaborate}
In spite of that, the two algorithms still show some striking similarities.

To see the first of these similarities, the \ac{LP} decoding problem in
\label{fig:comp:message_passing}
\end{figure}%
%
\todo{Remove figure caption and add algorithm caption}
This message passing structure means that both algorithms can be implemented
very efficiently, as the update steps can be performed in parallel for all
\acp{CN} and for all \acp{VN}, respectively.
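As an illustration of this parallelism, the per-node loop can be collapsed into one vectorized operation, since each update only reads the messages on that node's own edges. The following toy numpy sketch (the extrinsic-sum rule is a hypothetical placeholder, not the actual \ac{CN} or \ac{VN} update of either decoder) shows the equivalence:

```python
import numpy as np

# Toy illustration of the parallel update structure: each node's update
# depends only on its own incoming messages, so the per-node loop can be
# replaced by a single vectorized (embarrassingly parallel) operation.
# The extrinsic-sum rule below is a placeholder, not either decoder's update.

rng = np.random.default_rng(0)
msgs = rng.normal(size=(6, 4))        # 6 nodes, 4 incoming messages each

# Sequential update, one node at a time.
seq = np.array([m.sum() - m for m in msgs])

# Identical update for all nodes in one vectorized step.
par = msgs.sum(axis=1, keepdims=True) - msgs

assert np.allclose(seq, par)
print("per-node loop and vectorized update agree")
```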

The decoding performance of the two algorithms is shown in figure
\ref{fig:comp:prox_admm_dec} in the form of the \ac{FER}.
Shown as well is the performance of the improved proximal decoding
algorithm presented in section \ref{sec:prox:Improved Implementation}.
Additionally, the \ac{FER} resulting from decoding using \ac{BP} and,
wherever available, the \ac{ML} decoding \ac{FER} taken from
\cite{lautern_channelcodes} are plotted as a reference.
The parameters chosen for the proximal and improved proximal decoders are
$\gamma=0.05$, $\omega=0.05$, $K=200$, $\eta = 1.5$ and $N=12$.
The parameters chosen for \ac{LP} decoding using \ac{ADMM} are $\mu = 5$,
$\rho = 1$, $K=200$, $\epsilon_\text{pri} = 10^{-5}$ and
$\epsilon_\text{dual} = 10^{-5}$.
For all codes considered in the scope of this work, \ac{LP} decoding using
\ac{ADMM} consistently outperforms both proximal decoding and the improved
version, reaching very similar performance to \ac{BP}.
The decoding gain heavily depends on the code, evidently becoming greater for
codes with larger $n$ and reaching values of up to $\SI{2}{dB}$.

With proximal decoding, the calculations are approximate, leading
to the constraints never being quite satisfied.
With \ac{LP} decoding using \ac{ADMM},
the constraints are fulfilled for each parity check individually after each
iteration of the decoding process.

The timing requirements of the decoding algorithms are visualized in figure
The datapoints have been generated by evaluating the metadata from \ac{FER}
and \ac{BER} simulations using the parameters mentioned earlier when
discussing the decoding performance.
While the \ac{ADMM} implementation seems to be faster than the proximal
decoding and improved proximal decoding implementations, inferring some
general behavior is difficult in this case.
This is because actual implementations are being compared, making the
results dependent on factors such as the degree of optimization of each
implementation.
Nevertheless, the run time of both the proximal decoding and the \ac{LP}
decoding using \ac{ADMM} implementations is similar and both are
reasonably performant, owing to the parallelisable structure of the
algorithms.
%
\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/hybrid/2d_ber_fer_dfr_963965.csv};
\addplot[Turquoise, line width=1pt, mark=*]
table [x=SNR, y=FER, col sep=comma, discard if not={mu}{3.0}]
{res/admm/ber_2d_963965.csv};
\addplot[Black, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,]
{res/generic/fer_ml_9633965.csv};
\addplot [RoyalPurple, mark=*, line width=1pt]
table [x=SNR, y=FER, col sep=comma]
{res/generic/bp_963965.csv};
\end{axis}
\end{tikzpicture}

\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/hybrid/2d_ber_fer_dfr_bch_31_26.csv};
\addplot[Turquoise, line width=1pt, mark=*]
table [x=SNR, y=FER, col sep=comma, discard if not={mu}{3.0}]
{res/admm/ber_2d_bch_31_26.csv};
\addplot[Black, line width=1pt, mark=*]
table [x=SNR, y=FER, col sep=comma,
discard if gt={SNR}{5.5},
discard if lt={SNR}{1},]
{res/generic/fer_ml_bch_31_26.csv};
\addplot [RoyalPurple, mark=*, line width=1pt]
table [x=SNR, y=FER, col sep=comma]
{res/generic/bp_bch_31_26.csv};
\end{axis}
\end{tikzpicture}

discard if gt={SNR}{5.5}]
{res/proximal/2d_ber_fer_dfr_20433484.csv};
\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05},
discard if gt={SNR}{5.5}]
{res/hybrid/2d_ber_fer_dfr_20433484.csv};
\addplot[Turquoise, line width=1pt, mark=*]
table [x=SNR, y=FER, col sep=comma,
discard if not={mu}{3.0},
discard if gt={SNR}{5.5}]
table [col sep=comma, x=SNR, y=FER,
discard if gt={SNR}{5.5}]
{res/generic/fer_ml_20433484.csv};
\addplot [RoyalPurple, mark=*, line width=1pt]
table [x=SNR, y=FER, col sep=comma]
{res/generic/bp_20433484.csv};
\end{axis}
\end{tikzpicture}

\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/hybrid/2d_ber_fer_dfr_20455187.csv};
\addplot[Turquoise, line width=1pt, mark=*]
table [x=SNR, y=FER, col sep=comma, discard if not={mu}{3.0}]
{res/admm/ber_2d_20455187.csv};
\addplot [RoyalPurple, mark=*, line width=1pt,
discard if gt={SNR}{5}]
table [x=SNR, y=FER, col sep=comma]
{res/generic/bp_20455187.csv};
\end{axis}
\end{tikzpicture}

\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/hybrid/2d_ber_fer_dfr_40833844.csv};
\addplot[Turquoise, line width=1pt, mark=*]
table [x=SNR, y=FER, col sep=comma, discard if not={mu}{3.0}]
{res/admm/ber_2d_40833844.csv};
\addplot [RoyalPurple, mark=*, line width=1pt,
discard if gt={SNR}{3}]
table [x=SNR, y=FER, col sep=comma]
{res/generic/bp_40833844.csv};
\end{axis}
\end{tikzpicture}

\addplot[RedOrange, line width=1pt, mark=triangle, densely dashed]
table [x=SNR, y=FER, col sep=comma, discard if not={gamma}{0.05}]
{res/hybrid/2d_ber_fer_dfr_pegreg252x504.csv};
\addplot[Turquoise, line width=1pt, mark=*]
table [x=SNR, y=FER, col sep=comma, discard if not={mu}{3.0}]
{res/admm/ber_2d_pegreg252x504.csv};
\addplot [RoyalPurple, mark=*, line width=1pt]
table [x=SNR, y=FER, col sep=comma,
discard if gt={SNR}{3}]
{res/generic/bp_pegreg252x504.csv};
\end{axis}
\end{tikzpicture}

\addlegendimage{RedOrange, line width=1pt, mark=triangle, densely dashed}
\addlegendentry{Improved proximal decoding}

\addlegendimage{Turquoise, line width=1pt, mark=*}
\addlegendentry{\acs{LP} decoding using \acs{ADMM}}

\addlegendimage{RoyalPurple, line width=1pt, mark=*, solid}
\addlegendentry{\acs{BP} (20 iterations)}

\addlegendimage{Black, line width=1pt, mark=*, solid}
\addlegendentry{\acs{ML} decoding}
\end{axis}
\label{fig:comp:prox_admm_dec}
\end{figure}

This representation can be slightly simplified by substituting
$\boldsymbol{\lambda}_j = \mu \cdot \boldsymbol{u}_j \,\forall\,j\in\mathcal{J}$:%
%
\begin{alignat}{3}
\tilde{c}_i &\leftarrow \frac{1}{d_i} \left(
\sum_{j\in N_v\left( i \right) } \Big( \left( \boldsymbol{z}_j \right)_i
- \left( \boldsymbol{u}_j \right)_i \Big)
- \frac{\gamma_i}{\mu} \right)
\hspace{3mm} && \forall i\in\mathcal{I} \label{eq:admm:c_update}\\
\boldsymbol{z}_j &\leftarrow \Pi_{\mathcal{P}_{d_j}}\left(
\boldsymbol{T}_j\tilde{\boldsymbol{c}} + \boldsymbol{u}_j \right)
\hspace{3mm} && \forall j\in\mathcal{J} \label{eq:admm:z_update}\\
\boldsymbol{u}_j &\leftarrow \boldsymbol{u}_j
+ \boldsymbol{T}_j\tilde{\boldsymbol{c}}
- \boldsymbol{z}_j
\hspace{3mm} && \forall j\in\mathcal{J} \label{eq:admm:u_update}
.\end{alignat}
%
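The data flow of these three update steps can be sketched in a few lines. The following Python sketch runs them on a toy parity-check matrix; note that the projection $\Pi_{\mathcal{P}_{d_j}}$ is replaced by simple clipping to $[0,1]^{d_j}$ as a stand-in placeholder, so this illustrates the message-passing structure of the iteration, not the actual decoder.

```python
import numpy as np

# Sketch of the three ADMM updates above for a toy parity-check matrix.
# The true z-update projects onto the check polytope P_{d_j}; as a
# placeholder, this sketch merely clips to [0,1], so it shows the data
# flow only, not the full LP decoder.

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])                       # 2 checks, 4 variables
mu = 5.0
gamma = np.array([1.0, -2.0, 0.5, -0.3])           # example LLR vector
nbr = [np.flatnonzero(row) for row in H]           # variables of check j
deg = H.sum(axis=0).astype(float)                  # d_i: checks per variable

c = np.zeros(4)
z = [np.zeros(len(n)) for n in nbr]
u = [np.zeros(len(n)) for n in nbr]

for _ in range(50):
    # c-update: average the replica values, minus the scaled LLR term.
    acc = -gamma / mu
    for j, n in enumerate(nbr):
        acc[n] += z[j] - u[j]
    c = acc / deg
    # z- and u-updates: one independent block per check j (parallelisable).
    for j, n in enumerate(nbr):
        z[j] = np.clip(c[n] + u[j], 0.0, 1.0)      # placeholder projection
        u[j] += c[n] - z[j]

print(np.round(c, 3))
```

With the box placeholder this reduces to ADMM for the box-constrained LP $\min \boldsymbol{\gamma}^\text{T}\tilde{\boldsymbol{c}}$, so the iterate settles at the componentwise sign pattern of the LLRs.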

The reason \ac{ADMM} is able to perform so well is the relocation of the constraints
computing the projection operation $\Pi_{\mathcal{P}_{d_j}} \left( \cdot \right)$
onto each check polytope. Various methods to perform this projection
have been proposed (e.g., in \cite{original_admm}, \cite{efficient_lp_dec_admm},
\cite{lautern}).
The method chosen here is the one presented in \cite{original_admm}.
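For very small check degrees, such projection implementations can be sanity-checked against a brute-force reference. The sketch below is such a reference (an assumption of this text, not the method of \cite{original_admm}): it runs Dykstra's alternating projections over the halfspace description of the check polytope, which is exponential in $d$ and therefore only usable on toy cases.

```python
import itertools
import numpy as np

def project_parity_polytope(x, iters=200):
    """Euclidean projection onto the parity polytope PP_d (the convex hull
    of all even-weight binary vectors of length d), via Dykstra's
    alternating projections over its halfspace description: the box [0,1]^d
    together with
        sum_{i in V} x_i - sum_{i not in V} x_i <= |V| - 1
    for every odd-size subset V. Brute-force reference for tiny d only;
    the efficient projections in the cited papers replace this entirely.
    """
    d = len(x)
    halves = []
    for mask in itertools.product((1.0, -1.0), repeat=d):
        t = np.array(mask)
        if int((t == 1.0).sum()) % 2 == 1:           # odd-support V
            halves.append((t, (t == 1.0).sum() - 1.0))
    sets = [("box", None)] + [("half", h) for h in halves]
    y = x.astype(float).copy()
    p = [np.zeros(d) for _ in sets]                  # Dykstra corrections
    for _ in range(iters):
        for k, (kind, h) in enumerate(sets):
            w = y + p[k]
            if kind == "box":
                y = np.clip(w, 0.0, 1.0)
            else:
                t, r = h
                viol = t @ w - r
                y = w - (viol / d) * t if viol > 0 else w  # ||t||^2 = d
            p[k] = w - y
    return y

# A point already inside PP_3 is returned unchanged:
inside = np.array([0.5, 0.5, 0.0])   # midpoint of vertices 000 and 110
print(project_parity_polytope(inside))
```

As a second check, the all-ones vector (an odd-weight vertex) projects onto the nearest point of the facet $\sum_i x_i \le d - 1$.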


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Defining%
%
\begin{align*}
\boldsymbol{s} := \sum_{j\in\mathcal{J}} \boldsymbol{T}_j^\text{T}
\left( \boldsymbol{z}_j - \boldsymbol{u}_j \right)
\end{align*}%
\todo{Rename $\boldsymbol{D}$}%
%
the $\tilde{\boldsymbol{c}}$ update can then be rewritten as%
%
\begin{align*}
\tilde{\boldsymbol{c}} \leftarrow \boldsymbol{D}^{-1}
\left( \boldsymbol{s} - \frac{1}{\mu}\boldsymbol{\gamma} \right)
.\end{align*}
%
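A small sketch can confirm that this aggregated form is equivalent to the componentwise update of equation (\ref{eq:admm:c_update}), assuming $\boldsymbol{D} = \operatorname{diag}\left( d_1, \ldots, d_n \right)$ holds the variable degrees (an assumption here, cf. the rename todo):

```python
import numpy as np

# Check of the rewritten c-update: with s = sum_j T_j^T (z_j - u_j) and
# D assumed to be diag(d_1, ..., d_n) (the variable-node degrees), the
# componentwise rule collapses into c <- D^{-1} (s - gamma / mu).

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])              # toy parity-check matrix
nbr = [np.flatnonzero(r) for r in H]      # variables of each check j
deg = H.sum(axis=0).astype(float)         # d_i
mu = 5.0
gamma = np.array([1.0, -2.0, 0.5, -0.3])  # example LLR vector

rng = np.random.default_rng(1)
z = [rng.normal(size=len(n)) for n in nbr]
u = [rng.normal(size=len(n)) for n in nbr]

# Componentwise form, looping over variables and their checks.
c_ref = np.zeros(4)
for i in range(4):
    total = 0.0
    for j, n in enumerate(nbr):
        if i in n:
            k = int(np.flatnonzero(n == i)[0])
            total += z[j][k] - u[j][k]
    c_ref[i] = (total - gamma[i] / mu) / deg[i]

# Aggregated form via s = sum_j T_j^T (z_j - u_j).
s = np.zeros(4)
for j, n in enumerate(nbr):
    s[n] += z[j] - u[j]
c_vec = (s - gamma / mu) / deg

assert np.allclose(c_ref, c_vec)
print("componentwise and aggregated c-updates agree")
```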

This modified version of the decoding process is depicted in algorithm \ref{alg:admm:mod}.

\begin{genericAlgorithm}[caption={\ac{LP} decoding using \ac{ADMM} algorithm with rewritten


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Analysis and Simulation Results}%
\label{sec:lp:Analysis and Simulation Results}

In this section, \ac{LP} decoding using \ac{ADMM} is examined based on
simulation results for various codes.
First, the effect of the different parameters and how their values should be
chosen is investigated.
Subsequently, the decoding performance is observed and compared to that of
\ac{BP}.
Finally, the computational performance of the implementation and time
complexity of the algorithm are studied.
\subsection{Choice of Parameters}

The first two parameters to be investigated are the penalty parameter $\mu$
and the over-relaxation parameter $\rho$.
A first approach to get some indication of suitable values
for these parameters is to look at how the decoding performance depends
on them.
The \ac{FER} is plotted as a function of $\mu$ and $\rho$ in figure
\ref{fig:admm:mu_rho}, for three different \acp{SNR}.
When varying $\mu$, $\rho$ is set to a constant value of 1, and when varying
$\rho$, $\mu$ is set to 5.
The behavior that can be observed is very similar to that of the
parameter $\gamma$ in proximal decoding, analyzed in section
\ref{sec:prox:Analysis and Simulation Results}.
A single value giving optimal performance does not exist; rather,
as long as the value is chosen within a certain range, the performance is
approximately equally good.

\begin{figure}[h]
\centering

\begin{subfigure}[c]{0.48\textwidth}
\centering

\begin{tikzpicture}
\begin{axis}[hide axis,
xmin=10, xmax=50,
ymin=0, ymax=0.4,
legend columns=3,
legend style={draw=white!15!black,legend cell align=left}]

\addlegendimage{ForestGreen, line width=1pt, densely dashed, mark=*}
\end{tikzpicture}
\end{subfigure}

\caption{Dependence of the decoding performance on the parameters $\mu$ and $\rho$.}
\label{fig:admm:mu_rho}
\end{figure}%
%
To aid in the choice of the parameters, an additional criterion can be used:
the number of iterations performed for a decoding operation.
This is directly related to the time needed to decode a received vector
$\boldsymbol{y}$, which is to be minimized.
Figure \ref{fig:admm:mu_rho_iterations} shows the average number of iterations
over $\SI{1000}{}$ decodings, as a function of $\mu$ and $\rho$.
This time, the \ac{SNR} is kept constant at $\SI{4}{dB}$ while $\mu$ and
$\rho$ are varied.
It can be seen that choosing a large value for $\rho$ as well as a small value
for $\mu$ minimizes the average number of iterations and thus the average
runtime of the decoding process.

\begin{figure}[h]
\centering

\begin{subfigure}[c]{0.48\textwidth}
\centering

\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\mu$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=mu, y=k_avg,
discard if not={rho}{0.5000000000000001},]
{res/admm/mu_rho_kavg_20433484.csv};
\addlegendentry{$\rho = 0.5$}
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=mu, y=k_avg,
discard if not={rho}{1.1000000000000003},]
{res/admm/mu_rho_kavg_20433484.csv};
\addlegendentry{$\rho = 1.1$}
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=mu, y=k_avg,
discard if not={rho}{1.9000000000000004},]
{res/admm/mu_rho_kavg_20433484.csv};
\addlegendentry{$\rho = 1.9$}
\end{axis}
\end{tikzpicture}
\end{subfigure}%
\hfill%
\begin{subfigure}[c]{0.48\textwidth}
\centering

\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\addlegendentry{$\mu = 2$}
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\addlegendentry{$\mu = 5$}
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\addlegendentry{$\mu = 9$}
\end{axis}
\end{tikzpicture}
\end{subfigure}%
\caption{Dependence of the average number of iterations required on $\mu$ and $\rho$
for $E_b / N_0 = \SI{4}{dB}$.}
\label{fig:admm:mu_rho_iterations}
\end{figure}%

To get an estimate for the parameter $K$, the average error during decoding
can be used.
This is shown in figure \ref{fig:admm:avg_error} as an average over
$\SI{100000}{}$ decodings.
Similarly to the results in section
\ref{sec:prox:Analysis and Simulation Results}, a dip is visible around the
$20$ iteration mark.
This is because, as the number of iterations increases,
more and more decodings converge, leaving only the erroneous ones to be
averaged.
The point at which the erroneous decodings start to become dominant and the
decoding performance no longer increases is largely independent of
the \ac{SNR}, allowing the value of $K$ to be chosen without considering the
\ac{SNR}.

\begin{figure}[h]
\centering

\begin{tikzpicture}
\end{tikzpicture}

\caption{Average error for $\SI{100000}{}$ decodings\protect\footnotemark{}}
\label{fig:admm:avg_error}
\end{figure}%
%
\footnotetext{(3,6) regular \ac{LDPC} code with $n = 204$, $k = 102$
}%
%

The last two parameters remaining to be examined are the tolerances for the
stopping criterion of the algorithm, $\epsilon_\text{pri}$ and
$\epsilon_\text{dual}$.
Both are assigned the same value here.
The effect of their value on the decoding performance is visualized in figure
\ref{fig:admm:epsilon} for a (3,6) regular \ac{LDPC} code with $n=204, k=102$
\cite[\text{204.33.484}]{mackay_enc}.
All parameters except $\epsilon_\text{pri}$ and $\epsilon_\text{dual}$ are
kept constant, with $K=200$, $\mu=5$, $\rho=1$ and $E_b / N_0 = \SI{4}{dB}$.
A lower value for the tolerance initially leads to a dramatic decrease in the
\ac{FER}, with this effect fading as the tolerance is decreased further.
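The role of the two tolerances can be made concrete: they bound the norms of the primal and dual residuals in the usual ADMM stopping rule. The sketch below assumes the standard residual convention; the exact normalization used in the implementation here is not quoted.

```python
import numpy as np

# Hedged sketch of the stopping rule governed by eps_pri and eps_dual,
# following the standard ADMM primal/dual residual convention (the exact
# normalization of the thesis implementation is assumed, not quoted).

def admm_should_stop(c, z_new, z_old, nbr, mu, eps_pri, eps_dual):
    # Primal residual: how far each replica T_j c is from its z_j.
    r = np.concatenate([c[n] - zj for n, zj in zip(nbr, z_new)])
    # Dual residual: scaled change of the z-blocks between iterations.
    s = mu * np.concatenate([zn - zo for zn, zo in zip(z_new, z_old)])
    return np.linalg.norm(r) <= eps_pri and np.linalg.norm(s) <= eps_dual

nbr = [np.array([0, 1, 3]), np.array([1, 2, 3])]   # toy check neighbourhoods
c = np.array([0.0, 1.0, 0.0, 1.0])
z = [c[n] for n in nbr]                            # replicas agree with c
print(admm_should_stop(c, z, z, nbr, 5.0, 1e-5, 1e-5))  # -> True
```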

\begin{figure}[h]
\centering

\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\epsilon$}, ylabel={\acs{FER}},
ymode=log,
xmode=log,
x dir=reverse,
width=0.6\textwidth,
height=0.45\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=epsilon, y=FER,
discard if not={SNR}{3.0},]
{res/admm/fer_epsilon_20433484.csv};
\end{axis}
\end{tikzpicture}

\caption{Effect of the value of the parameters $\epsilon_\text{pri}$ and
$\epsilon_\text{dual}$ on the \acs{FER}}
\label{fig:admm:epsilon}
\end{figure}%


In conclusion, the parameters $\mu$ and $\rho$ should be chosen comparatively
small and large, respectively, to reduce the average runtime of the decoding
process, while keeping them within a certain range so as not to compromise the
decoding performance.
The maximum number of iterations $K$ can be chosen independently
of the \ac{SNR}.
Finally, relatively small values should be given to the parameters
$\epsilon_{\text{pri}}$ and $\epsilon_{\text{dual}}$ to achieve the lowest
possible error rate.

\subsection{Decoding Performance}

In figure \ref{fig:admm:results}, the simulation results for the ``Margulis''
\ac{LDPC} code ($n=2640$, $k=1320$) presented by Barman et al. in
\cite{original_admm} are compared to the results from the simulations
conducted in the context of this thesis.
The parameters chosen were $\mu=3.3$, $\rho=1.9$, $K=1000$,
$\epsilon_\text{pri}=10^{-5}$ and $\epsilon_\text{dual}=10^{-5}$,
the same as in \cite{original_admm};
the two \ac{FER} curves are practically identical.
Also shown is the curve resulting from \ac{BP} decoding, performing
1000 iterations.
The two algorithms perform relatively similarly, coming within $\SI{0.5}{dB}$
of one another.

\begin{figure}[h]
\centering

\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$E_b / N_0 \left( \text{dB} \right) $}, ylabel={\acs{FER}},
ymode=log,
width=0.6\textwidth,
height=0.45\textwidth,
legend style={at={(0.5,-0.57)},anchor=south},
legend cell align={left},
]
\addplot[Turquoise, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,
discard if gt={SNR}{2.2},
]
{res/admm/fer_paper_margulis.csv};
\addlegendentry{\acs{ADMM} (Barman et al.)}
\addplot[NavyBlue, densely dashed, line width=1pt, mark=triangle]
table [col sep=comma, x=SNR, y=FER,]
{res/admm/ber_margulis264013203.csv};
\addlegendentry{\acs{ADMM} (Own results)}
\addplot[RoyalPurple, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER, discard if gt={SNR}{2.2},]
{res/generic/fer_bp_mackay_margulis.csv};
\addlegendentry{\acs{BP} (Barman et al.)}
\end{axis}
\end{tikzpicture}

\caption{Comparison of datapoints from Barman et al. with own simulation results.
``Margulis'' \ac{LDPC} code with $n = 2640$, $k = 1320$
\cite[\text{Margulis2640.1320.3}]{mackay_enc}\protect\footnotemark{}}
\label{fig:admm:results}
\end{figure}%
%

\footnotetext{$K=200, \mu = 3.3, \rho=1.9,
\epsilon_{\text{pri}} = 10^{-5}, \epsilon_{\text{dual}} = 10^{-5}$
}%
%
In figure \ref{fig:admm:ber_fer}, the \ac{BER} and \ac{FER} for \ac{LP} decoding
using \ac{ADMM} and \ac{BP} are shown for a (3, 6) regular \ac{LDPC} code with
$n=204$.
To ensure comparability, in both cases the number of iterations was set to
$K=200$.
The values of the other parameters were chosen as $\mu = 5$, $\rho = 1$,
$\epsilon_\text{pri} = 10^{-5}$ and $\epsilon_\text{dual} = 10^{-5}$.
Comparing figures \ref{fig:admm:results} and \ref{fig:admm:ber_fer}, it is
apparent that the difference in decoding performance depends on the code being
considered.
More simulation results are presented in figure \ref{fig:comp:prox_admm_dec}
in section \ref{sec:comp:res}.


\begin{figure}[h]
\centering

\begin{subfigure}[c]{0.48\textwidth}
\centering

\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$E_b / N_0 \left( \text{dB} \right) $}, ylabel={\acs{BER}},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
ymax=1.5, ymin=3e-7,
]
\addplot[Turquoise, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=BER,
discard if not={mu}{5.0},
discard if gt={SNR}{4.5}]
{res/admm/ber_2d_20433484.csv};
\addplot[RoyalPurple, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=BER,
discard if gt={SNR}{4.5}]
{res/generic/bp_20433484.csv};
\end{axis}
\end{tikzpicture}
\end{subfigure}%
\hfill%
\begin{subfigure}[c]{0.48\textwidth}
\centering

\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$E_b / N_0 \left( \text{dB} \right) $}, ylabel={\acs{FER}},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
ymax=1.5, ymin=3e-7,
]
\addplot[Turquoise, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,
discard if not={mu}{5.0},
discard if gt={SNR}{4.5}]
{res/admm/ber_2d_20433484.csv};
\addplot[RoyalPurple, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,
discard if gt={SNR}{4.5}]
{res/generic/bp_20433484.csv};
\end{axis}
\end{tikzpicture}
\end{subfigure}%

\begin{subfigure}[t]{\textwidth}
|
||||
\centering
|
||||
|
||||
\begin{tikzpicture}
|
||||
\begin{axis}[hide axis,
|
||||
xmin=10, xmax=50,
|
||||
ymin=0, ymax=0.4,
|
||||
legend columns=3,
|
||||
legend style={draw=white!15!black,legend cell align=left}]
|
||||
|
||||
\addlegendimage{Turquoise, line width=1pt, mark=*}
|
||||
\addlegendentry{\acs{LP} decoding using \acs{ADMM}}
|
||||
\addlegendimage{RoyalPurple, line width=1pt, mark=*}
|
||||
\addlegendentry{BP (20 iterations)}
|
||||
\end{axis}
|
||||
\end{tikzpicture}
|
||||
\end{subfigure}
|
||||
|
||||
\caption{Comparison of the decoding performance of \acs{LP} decoding using
|
||||
\acs{ADMM} and \acs{BP}. (3,6) regular \ac{LDPC} code with $n = 204$, $k = 102$
|
||||
\cite[\text{204.33.484}]{mackay_enc}}
|
||||
\label{fig:admm:ber_fer}
|
||||
\end{figure}%

In summary, the decoding performance of \ac{LP} decoding using \ac{ADMM} comes
close to that of \ac{BP}, with the gap remaining within approximately
$\SI{0.5}{dB}$, depending on the code in question.

\subsection{Computational Performance}
\label{subsec:admm:comp_perf}

In terms of time complexity, the three steps of the decoding algorithm
in equations (\ref{eq:admm:c_update})--(\ref{eq:admm:u_update}) have to be
considered.
The $\tilde{\boldsymbol{c}}$- and $\boldsymbol{u}_j$-update steps are
$\mathcal{O}\left( n \right)$ \cite[Sec. III. C.]{original_admm}.
The complexity of the $\boldsymbol{z}_j$-update step depends on the projection
algorithm employed.
Since the implementation completed for this work uses the projection algorithm
presented in \cite{original_admm}, the $\boldsymbol{z}_j$-update step also has
linear time complexity.
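To make the linear scaling concrete, the following sketch mimics the message-passing structure of the three update steps in Python. It is illustrative only: the names follow the text, but the update formulas are schematic rather than the exact equations (\ref{eq:admm:c_update})--(\ref{eq:admm:u_update}), and the $\boldsymbol{z}_j$-update uses a trivial clipping placeholder instead of the parity-polytope projection of \cite{original_admm}.

```python
import numpy as np

def clip01(v):
    """Placeholder projection: the real z-update projects onto the parity
    polytope; elementwise clipping to [0, 1] stands in for it here."""
    return np.clip(v, 0.0, 1.0)

def admm_iteration(c, z, u, checks, llr, mu):
    """One schematic ADMM LP-decoding iteration (illustrative sketch).

    checks : list of index arrays, one per check node (its variable neighbors)
    z, u   : per-check replica and scaled dual vectors, z[j].shape == checks[j].shape
    Every step touches each edge of the Tanner graph a constant number of
    times, so a full sweep over all checks and variables is O(n) for codes
    with bounded check degree.
    """
    n = c.size
    acc = np.zeros(n)   # accumulated check-to-variable messages
    deg = np.zeros(n)   # variable-node degrees
    for idx, zj, uj in zip(checks, z, u):
        acc[idx] += zj - uj
        deg[idx] += 1.0
    # c-update: average the incoming messages, subtract the scaled LLRs, clip.
    # (Note: in ADMM the new c does not depend on the previous c.)
    c_new = clip01((acc - llr / mu) / np.maximum(deg, 1.0))
    z_new, u_new = [], []
    for idx, zj, uj in zip(checks, z, u):
        zj_new = clip01(c_new[idx] + uj)          # z-update (projection placeholder)
        u_new.append(uj + c_new[idx] - zj_new)    # u-update (scaled dual ascent)
        z_new.append(zj_new)
    return c_new, z_new, u_new
```

Both loops run over the edges of the Tanner graph and carry no dependencies between check nodes, which is why the per-check updates can also be parallelised.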

\begin{figure}[h]
\centering

\begin{tikzpicture}
\begin{axis}[grid=both,
xlabel={$n$}, ylabel={Time per frame (s)},
width=0.6\textwidth,
height=0.45\textwidth,
legend style={at={(0.5,-0.42)},anchor=south},
legend cell align={left},]

\addplot[NavyBlue, only marks, mark=triangle*]
table [col sep=comma, x=n, y=spf]
{res/admm/fps_vs_n.csv};
\end{axis}
\end{tikzpicture}

\caption{Average time per decoded frame of the implementation of \ac{LP}
decoding using \ac{ADMM}}
\label{fig:admm:time}
\end{figure}%

Simulation results from a range of different codes can be used to verify this
analysis.
Figure \ref{fig:admm:time} shows the average time needed to decode one
frame as a function of the code length $n$.
\todo{List codes used}
The results are necessarily skewed, because the codes considered vary not only
in their length, but also in their construction scheme and rate.
Additionally, different optimization opportunities arise depending on the
length of a code: for smaller codes, dynamic memory allocation can be
omitted completely.
This may explain why the data point at $n=504$ is higher than linear behavior
would suggest.
Nonetheless, the simulation results roughly match the behavior expected from
the theoretical considerations.
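The claimed linearity can be checked with an ordinary least-squares fit of the per-frame decoding time against $n$. The numbers below are hypothetical placeholders standing in for the measured values plotted in figure \ref{fig:admm:time} (file \texttt{res/admm/fps\_vs\_n.csv}):

```python
import numpy as np

# Hypothetical (n, seconds-per-frame) measurements, for illustration only.
n   = np.array([31.0, 96.0, 204.0, 408.0, 504.0])
spf = np.array([0.9e-3, 2.2e-3, 4.4e-3, 8.7e-3, 13.9e-3])

# Least-squares line through the data; with truly linear complexity the
# residuals should be small relative to the measurements.
slope, intercept = np.polyfit(n, spf, 1)
pred = slope * n + intercept

# With these placeholder values, the n = 504 point sits above the fitted
# line, mirroring the outlier discussed in the text.
```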

\textbf{Game Plan}

\begin{itemize}
\item Choice of Parameters (Take decomposition paper as guide)
\begin{itemize}
\item $\epsilon^{\text{pri}}$ / $\epsilon^{\text{dual}}$
\end{itemize}
\end{itemize}

\begin{figure}[h]
\centering

\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_963965.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_963965.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_963965.csv};
\end{axis}
\end{tikzpicture}
\caption{$\left( 3, 6 \right)$-regular \ac{LDPC} code with $n=96, k=48$
\cite[\text{96.3.965}]{mackay_enc}}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_bch_31_26.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_bch_31_26.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_bch_31_26.csv};
\end{axis}
\end{tikzpicture}
\caption{BCH code with $n=31, k=26$\\[2\baselineskip]}
\end{subfigure}

\vspace{3mm}

\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_20433484.csv};
\end{axis}
\end{tikzpicture}
\caption{$\left( 3, 6 \right)$-regular \ac{LDPC} code with $n=204, k=102$
\cite[\text{204.33.484}]{mackay_enc}}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_20455187.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_20455187.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_20455187.csv};
\end{axis}
\end{tikzpicture}
\caption{$\left( 5, 10 \right)$-regular \ac{LDPC} code with $n=204, k=102$
\cite[\text{204.55.187}]{mackay_enc}}
\end{subfigure}%

\vspace{3mm}

\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_40833844.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_40833844.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_40833844.csv};
\end{axis}
\end{tikzpicture}
\caption{$\left( 3, 6 \right)$-regular \ac{LDPC} code with $n=408, k=204$
\cite[\text{408.33.844}]{mackay_enc}}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\rho$}, ylabel={Average \# of iterations},
ymode=log,
width=\textwidth,
height=0.75\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{9.0},]
{res/admm/mu_rho_kavg_pegreg252x504.csv};
\addplot[RedOrange, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{5.0},]
{res/admm/mu_rho_kavg_pegreg252x504.csv};
\addplot[ForestGreen, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=rho, y=k_avg,
discard if not={mu}{2.0},]
{res/admm/mu_rho_kavg_pegreg252x504.csv};
\end{axis}
\end{tikzpicture}
\caption{LDPC code (progressive edge growth construction) with $n=504, k=252$
\cite[\text{PEGReg252x504}]{mackay_enc}}
\end{subfigure}%

\vspace{5mm}

\begin{subfigure}[t]{\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[hide axis,
xmin=10, xmax=50,
ymin=0, ymax=0.4,
legend style={draw=white!15!black,legend cell align=left}]
\addlegendimage{NavyBlue, line width=1.5pt, densely dashed, mark=*}
\addlegendentry{$\mu = 9$};
\addlegendimage{RedOrange, line width=1.5pt, densely dashed, mark=*}
\addlegendentry{$\mu = 5$};
\addlegendimage{ForestGreen, line width=1.5pt, densely dashed, mark=*}
\addlegendentry{$\mu = 2$};
\end{axis}
\end{tikzpicture}

\end{subfigure}

\caption{Dependence of the average number of iterations on the parameter
$\rho$ for various codes and values of $\mu$}
\label{fig:admm:mu_rho_kavg}
\end{figure}

@@ -300,7 +300,7 @@ the gradient can be written as%
 \begin{align*}
 \nabla h\left( \tilde{\boldsymbol{x}} \right) =
 4\left( \tilde{\boldsymbol{x}}^{\circ 3} - \tilde{\boldsymbol{x}} \right)
-+ 2\tilde{\boldsymbol{x}}^{\circ -1} \circ \boldsymbol{H}^\text{T}
++ 2\tilde{\boldsymbol{x}}^{\circ \left( -1 \right) } \circ \boldsymbol{H}^\text{T}
 \boldsymbol{v}
 ,\end{align*}
 %

@@ -359,6 +359,7 @@ while the newly generated ones are shown with dashed lines.
xlabel={$E_b / N_0$ (dB)}, ylabel={BER},
ymode=log,
legend style={at={(0.5,-0.7)},anchor=south},
legend cell align={left},
width=0.6\textwidth,
height=0.45\textwidth,
ymax=1.2, ymin=0.8e-4,

@@ -397,8 +398,10 @@ while the newly generated ones are shown with dashed lines.
 \addlegendentry{$\gamma = 0.05$ (Own results)}

 \addplot [RoyalPurple, mark=*, line width=1pt]
-table [x=SNR, y=BP, col sep=comma] {res/proximal/ber_paper.csv};
-\addlegendentry{BP (Wadayama et al.)}
+table [x=SNR, y=BER, col sep=comma,
+discard if gt={SNR}{3.5}]
+{res/generic/bp_20433484.csv};
+\addlegendentry{BP (20 iterations)}
 \end{axis}
 \end{tikzpicture}

@@ -1209,7 +1212,7 @@ $\SI{2.80}{GHz}$ and utilizing all cores.
 height=0.45\textwidth,
 legend cell align={left},]

-\addplot[RedOrange, only marks, mark=*]
+\addplot[RedOrange, only marks, mark=square*]
 table [col sep=comma, x=n, y=spf]
 {res/proximal/fps_vs_n.csv};
 \end{axis}

@@ -1509,12 +1512,12 @@ theoretical considerations.
 legend style={at={(0.05,0.77)},anchor=south west},
 legend cell align={left},]

-\addplot[RedOrange, only marks, mark=*]
+\addplot[RedOrange, only marks, mark=square*]
 table [col sep=comma, x=n, y=spf]
 {res/proximal/fps_vs_n.csv};
 \addlegendentry{proximal}

-\addplot[RoyalPurple, only marks, mark=triangle*]
+\addplot[Gray, only marks, mark=*]
 table [col sep=comma, x=n, y=spf]
 {res/hybrid/fps_vs_n.csv};
 \addlegendentry{improved ($N = 12$)}
Block a user