Moved figures around and fixed caption

Andreas Tsouchlos 2023-04-24 22:55:55 +02:00
parent ca345d7d5b
commit dd15a2affd
3 changed files with 1119 additions and 349 deletions


@ -880,7 +880,7 @@ A single optimal value giving optimal performance does not exist; rather,
as long as the value is chosen within a certain range, the performance is
approximately equally good.
\begin{figure}[h]
\begin{figure}[H]
\centering
\begin{subfigure}[c]{0.48\textwidth}
@ -974,8 +974,9 @@ The values chosen for the rest of the parameters are the same as before.
It can be seen that choosing a large value for $\rho$ and a small value
for $\mu$ minimizes the average number of iterations and thus the average
run time of the decoding process.
The same behavior can be observed when looking at various%
%
\begin{figure}[h]
\begin{figure}[H]
\centering
\begin{tikzpicture}
@ -1010,10 +1011,235 @@ run time of the decoding process.
\label{fig:admm:mu_rho_iterations}
\end{figure}%
%
The same behavior can be observed when looking at various different codes,
as shown in figure \ref{fig:admm:mu_rho_multiple}.
\noindent codes, as shown in figure \ref{fig:admm:mu_rho_multiple}.
To get an estimate for the maximum number of iterations $K$ necessary,
the average error during decoding can be used.
This is shown in figure \ref{fig:admm:avg_error} as an average over
$\SI{100000}{}$ decodings.
$\mu$ is set to $5$, $\rho$ is set to $1$, and the rest of the parameters are
again chosen as $\epsilon_\text{pri}=10^{-5}$ and
$\epsilon_\text{dual}=10^{-5}$.
As with the results in section \ref{subsec:prox:choice}, a dip is
visible around the $20$-iteration mark.
This is because, as the number of iterations increases, more and more
decodings converge, leaving only the erroneous ones to be averaged.
The point at which these erroneous decodings become dominant and the
decoding performance no longer improves is largely independent of
the \ac{SNR}, allowing the maximum number of iterations to be chosen without
considering the \ac{SNR}.
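Written out, the quantity averaged in figure \ref{fig:admm:avg_error} can be
expressed as
\[
\bar{e}^{(k)} = \frac{1}{N} \sum_{i=1}^{N}
\left\Vert \hat{\boldsymbol{c}}_i^{(k)} - \boldsymbol{c}_i \right\Vert,
\]
where $N = \SI{100000}{}$ and $\hat{\boldsymbol{c}}_i^{(k)}$ denotes the
estimate after $k$ iterations of the $i$-th decoding; this notation is
introduced here only for illustration purposes.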
\begin{figure}[H]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
width=0.6\textwidth,
height=0.45\textwidth,
xlabel={Iteration}, ylabel={Average $\left\Vert \hat{\boldsymbol{c}}
- \boldsymbol{c} \right\Vert$}
]
\addplot[ForestGreen, line width=1pt]
table [col sep=comma, x=k, y=err,
discard if not={SNR}{1.0},
discard if gt={k}{100}]
{res/admm/avg_error_20433484.csv};
\addlegendentry{$E_b / N_0 = \SI{1}{dB}$}
\addplot[RedOrange, line width=1pt]
table [col sep=comma, x=k, y=err,
discard if not={SNR}{2.0},
discard if gt={k}{100}]
{res/admm/avg_error_20433484.csv};
\addlegendentry{$E_b / N_0 = \SI{2}{dB}$}
\addplot[NavyBlue, line width=1pt]
table [col sep=comma, x=k, y=err,
discard if not={SNR}{3.0},
discard if gt={k}{100}]
{res/admm/avg_error_20433484.csv};
\addlegendentry{$E_b / N_0 = \SI{3}{dB}$}
\addplot[RoyalPurple, line width=1pt]
table [col sep=comma, x=k, y=err,
discard if not={SNR}{4.0},
discard if gt={k}{100}]
{res/admm/avg_error_20433484.csv};
\addlegendentry{$E_b / N_0 = \SI{4}{dB}$}
\end{axis}
\end{tikzpicture}
\caption{Average error for $\SI{100000}{}$ decodings. (3,6)
regular \ac{LDPC} code with $n=204, k=102$ \cite[\text{204.33.484}]{mackay_enc}}
\label{fig:admm:avg_error}
\end{figure}%
The last two parameters remaining to be examined are the tolerances for the
stopping criterion of the algorithm, $\epsilon_\text{pri}$ and
$\epsilon_\text{dual}$.
These are both set to the same value $\epsilon$.
The effect of their value on the decoding performance is visualized in figure
\ref{fig:admm:epsilon}.
All parameters except $\epsilon_\text{pri}$ and $\epsilon_\text{dual}$ are
kept constant, with $\mu=5$, $\rho=1$, $E_b / N_0 = \SI{4}{dB}$ and a
maximum of $200$ iterations.
Lowering the tolerance initially leads to a dramatic decrease in the
\ac{FER}, but this effect fades as the tolerance is decreased further.
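For orientation, these tolerances play the role of the bounds on the primal
and dual residuals in the standard \ac{ADMM} stopping criterion; in the
generic scaled form popularized by Boyd et al., sketched here with
placeholder residuals $\boldsymbol{r}^{(k)}$ and $\boldsymbol{s}^{(k)}$
rather than the decoder-specific quantities, it reads
\[
\left\Vert \boldsymbol{r}^{(k)} \right\Vert_2 \leq \epsilon_\text{pri}
\qquad \text{and} \qquad
\left\Vert \boldsymbol{s}^{(k)} \right\Vert_2 \leq \epsilon_\text{dual}.
\]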
\begin{figure}[H]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\epsilon$}, ylabel={\acs{FER}},
ymode=log,
xmode=log,
x dir=reverse,
width=0.6\textwidth,
height=0.45\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=epsilon, y=FER,
discard if not={SNR}{3.0},]
{res/admm/fer_epsilon_20433484.csv};
\end{axis}
\end{tikzpicture}
\caption{Effect of the value of the parameters $\epsilon_\text{pri}$ and
$\epsilon_\text{dual}$ on the \acs{FER}. (3,6) regular \ac{LDPC} code with
$n=204, k=102$ \cite[\text{204.33.484}]{mackay_enc}}
\label{fig:admm:epsilon}
\end{figure}%
In conclusion, the parameters $\mu$ and $\rho$ should be chosen comparatively
small and large, respectively, to reduce the average runtime of the decoding
process, while keeping both within a certain range so as not to compromise
the decoding performance.
The maximum number of iterations performed can be chosen independently
of the \ac{SNR}.
Finally, small values should be given to the parameters
$\epsilon_{\text{pri}}$ and $\epsilon_{\text{dual}}$ to achieve the lowest
possible error rate.
\subsection{Decoding Performance}
In figure \ref{fig:admm:results}, the simulation results for the ``Margulis''
\ac{LDPC} code ($n=2640$, $k=1320$) presented by Barman et al. in
\cite{original_admm} are compared to the results from the simulations
conducted in the context of this thesis.
The parameters chosen were $\mu=3.3$, $\rho=1.9$, $K=1000$,
$\epsilon_\text{pri}=10^{-5}$ and $\epsilon_\text{dual}=10^{-5}$,
the same as in \cite{original_admm}.
The two \ac{FER} curves are practically identical.
Also shown is the curve resulting from \ac{BP} decoding with $1000$
iterations.
The two algorithms perform relatively similarly, staying within $\SI{0.5}{dB}$
of one another.
\begin{figure}[H]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$E_b / N_0 \left( \text{dB} \right) $}, ylabel={\acs{FER}},
ymode=log,
width=0.6\textwidth,
height=0.45\textwidth,
legend style={at={(0.5,-0.57)},anchor=south},
legend cell align={left},
]
\addplot[Turquoise, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,
discard if gt={SNR}{2.2},
]
{res/admm/fer_paper_margulis.csv};
\addlegendentry{\acs{ADMM} (Barman et al.)}
\addplot[NavyBlue, densely dashed, line width=1pt, mark=triangle]
table [col sep=comma, x=SNR, y=FER,]
{res/admm/ber_margulis264013203.csv};
\addlegendentry{\acs{ADMM} (Own results)}
\addplot[RoyalPurple, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER, discard if gt={SNR}{2.2},]
{res/generic/fer_bp_mackay_margulis.csv};
\addlegendentry{\acs{BP} (Barman et al.)}
\end{axis}
\end{tikzpicture}
\caption{Comparison of datapoints from Barman et al. with own simulation results.
``Margulis'' \ac{LDPC} code with $n = 2640$, $k = 1320$
\cite[\text{Margulis2640.1320.3}]{mackay_enc}\protect\footnotemark{}}
\label{fig:admm:results}
\end{figure}%
%
\begin{figure}[h]
In figure \ref{fig:admm:bp_multiple}, \ac{FER} curves for \ac{LP} decoding
using \ac{ADMM} and \ac{BP} are shown for various codes.
To ensure comparability, in all cases the number of iterations was set to
$K=200$.
The values of the other parameters were chosen as $\mu = 5$, $\rho = 1$,
$\epsilon_\text{pri} = 10^{-5}$ and $\epsilon_\text{dual}=10^{-5}$.
Comparing the simulation results for the different codes, it is apparent that
the performance gap between the two algorithms depends on the code being
considered.
For all codes considered here, however, the performance of \ac{LP} decoding
using \ac{ADMM} comes close to that of \ac{BP}, again staying within
approximately $\SI{0.5}{dB}$.
\subsection{Computational Performance}
\label{subsec:admm:comp_perf}
In terms of time complexity, the three steps of the decoding algorithm
in equations (\ref{eq:admm:c_update}) to (\ref{eq:admm:u_update}) have to be
considered.
The $\tilde{\boldsymbol{c}}$- and $\boldsymbol{u}_j$-update steps are
$\mathcal{O}\left( n \right)$ \cite[Sec. III. C.]{original_admm}.
The complexity of the $\boldsymbol{z}_j$-update step depends on the projection
algorithm employed.
Since the implementation developed for this work uses the projection
algorithm presented in \cite{original_admm}, the $\boldsymbol{z}_j$-update
step also has linear time complexity.
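As a rough sanity check of this overall linear scaling, assume for the sake
of this estimate a $(d_v, d_c)$-regular code with $m$ check nodes; the
degrees $d_v$ and $d_c$ are introduced here only for illustration.
Each update step then performs a bounded amount of work per edge of the
Tanner graph, so the cost of one iteration is proportional to the number of
edges,
\[
m \, d_c = n \, d_v ,
\]
which is linear in $n$ for fixed node degrees.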
\begin{figure}[H]
\centering
\begin{tikzpicture}
\begin{axis}[grid=both,
xlabel={$n$}, ylabel={Time per frame (s)},
width=0.6\textwidth,
height=0.45\textwidth,
legend style={at={(0.5,-0.42)},anchor=south},
legend cell align={left},]
\addplot[NavyBlue, only marks, mark=triangle*]
table [col sep=comma, x=n, y=spf]
{res/admm/fps_vs_n.csv};
\end{axis}
\end{tikzpicture}
\caption{Timing requirements of the implementation of \ac{LP} decoding using \ac{ADMM}}
\label{fig:admm:time}
\end{figure}%
Simulation results from a range of different codes can be used to verify this
analysis.
Figure \ref{fig:admm:time} shows the average time needed to decode one
frame as a function of its length.
The codes used for this consideration are the same as in section
\ref{subsec:prox:comp_perf}.
The results are necessarily skewed because these codes vary not only
in their length, but also in their construction scheme and rate.
Additionally, different optimization opportunities arise depending on the
length of a code, since for smaller codes dynamic memory allocation can be
completely omitted.
This may explain why the datapoint at $n=504$ is higher than would be expected
with linear behavior.
Nonetheless, the simulation results roughly match the expected behavior
following from the theoretical considerations.
\begin{figure}[H]
\centering
\begin{subfigure}[t]{0.48\textwidth}
@ -1207,187 +1433,12 @@ as shown in figure \ref{fig:admm:mu_rho_multiple}.
\end{subfigure}
\caption{Dependence of the \ac{BER} on the value of the parameter $\gamma$ for various codes}
\caption{Dependence of average number of iterations required on the parameters
$\mu$ and $\rho$ for various codes}
\label{fig:admm:mu_rho_multiple}
\end{figure}
To get an estimate for the maximum number of iterations $K$ necessary,
the average error during decoding can be used.
This is shown in figure \ref{fig:admm:avg_error} as an average over
$\SI{100000}{}$ decodings.
$\mu$ is set to $5$, $\rho$ is set to $1$, and the rest of the parameters are
again chosen as $\epsilon_\text{pri}=10^{-5}$ and
$\epsilon_\text{dual}=10^{-5}$.
As with the results in section \ref{subsec:prox:choice}, a dip is
visible around the $20$-iteration mark.
This is because, as the number of iterations increases, more and more
decodings converge, leaving only the erroneous ones to be averaged.
The point at which these erroneous decodings become dominant and the
decoding performance no longer improves is largely independent of
the \ac{SNR}, allowing the maximum number of iterations to be chosen without
considering the \ac{SNR}.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
width=0.6\textwidth,
height=0.45\textwidth,
xlabel={Iteration}, ylabel={Average $\left\Vert \hat{\boldsymbol{c}}
- \boldsymbol{c} \right\Vert$}
]
\addplot[ForestGreen, line width=1pt]
table [col sep=comma, x=k, y=err,
discard if not={SNR}{1.0},
discard if gt={k}{100}]
{res/admm/avg_error_20433484.csv};
\addlegendentry{$E_b / N_0 = \SI{1}{dB}$}
\addplot[RedOrange, line width=1pt]
table [col sep=comma, x=k, y=err,
discard if not={SNR}{2.0},
discard if gt={k}{100}]
{res/admm/avg_error_20433484.csv};
\addlegendentry{$E_b / N_0 = \SI{2}{dB}$}
\addplot[NavyBlue, line width=1pt]
table [col sep=comma, x=k, y=err,
discard if not={SNR}{3.0},
discard if gt={k}{100}]
{res/admm/avg_error_20433484.csv};
\addlegendentry{$E_b / N_0 = \SI{3}{dB}$}
\addplot[RoyalPurple, line width=1pt]
table [col sep=comma, x=k, y=err,
discard if not={SNR}{4.0},
discard if gt={k}{100}]
{res/admm/avg_error_20433484.csv};
\addlegendentry{$E_b / N_0 = \SI{4}{dB}$}
\end{axis}
\end{tikzpicture}
\caption{Average error for $\SI{100000}{}$ decodings. (3,6)
regular \ac{LDPC} code with $n=204, k=102$ \cite[\text{204.33.484}]{mackay_enc}}
\label{fig:admm:avg_error}
\end{figure}%
The last two parameters remaining to be examined are the tolerances for the
stopping criterion of the algorithm, $\epsilon_\text{pri}$ and
$\epsilon_\text{dual}$.
These are both set to the same value $\epsilon$.
The effect of their value on the decoding performance is visualized in figure
\ref{fig:admm:epsilon}.
All parameters except $\epsilon_\text{pri}$ and $\epsilon_\text{dual}$ are
kept constant, with $\mu=5$, $\rho=1$, $E_b / N_0 = \SI{4}{dB}$ and a
maximum of $200$ iterations.
Lowering the tolerance initially leads to a dramatic decrease in the
\ac{FER}, but this effect fades as the tolerance is decreased further.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$\epsilon$}, ylabel={\acs{FER}},
ymode=log,
xmode=log,
x dir=reverse,
width=0.6\textwidth,
height=0.45\textwidth,
]
\addplot[NavyBlue, line width=1pt, densely dashed, mark=*]
table [col sep=comma, x=epsilon, y=FER,
discard if not={SNR}{3.0},]
{res/admm/fer_epsilon_20433484.csv};
\end{axis}
\end{tikzpicture}
\caption{Effect of the value of the parameters $\epsilon_\text{pri}$ and
$\epsilon_\text{dual}$ on the \acs{FER}. (3,6) regular \ac{LDPC} code with
$n=204, k=102$ \cite[\text{204.33.484}]{mackay_enc}}
\label{fig:admm:epsilon}
\end{figure}%
In conclusion, the parameters $\mu$ and $\rho$ should be chosen comparatively
small and large, respectively, to reduce the average runtime of the decoding
process, while keeping both within a certain range so as not to compromise
the decoding performance.
The maximum number of iterations performed can be chosen independently
of the \ac{SNR}.
Finally, small values should be given to the parameters
$\epsilon_{\text{pri}}$ and $\epsilon_{\text{dual}}$ to achieve the lowest
possible error rate.
\subsection{Decoding Performance}
In figure \ref{fig:admm:results}, the simulation results for the ``Margulis''
\ac{LDPC} code ($n=2640$, $k=1320$) presented by Barman et al. in
\cite{original_admm} are compared to the results from the simulations
conducted in the context of this thesis.
The parameters chosen were $\mu=3.3$, $\rho=1.9$, $K=1000$,
$\epsilon_\text{pri}=10^{-5}$ and $\epsilon_\text{dual}=10^{-5}$,
the same as in \cite{original_admm}.
The two \ac{FER} curves are practically identical.
Also shown is the curve resulting from \ac{BP} decoding with $1000$
iterations.
The two algorithms perform relatively similarly, staying within $\SI{0.5}{dB}$
of one another.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={$E_b / N_0 \left( \text{dB} \right) $}, ylabel={\acs{FER}},
ymode=log,
width=0.6\textwidth,
height=0.45\textwidth,
legend style={at={(0.5,-0.57)},anchor=south},
legend cell align={left},
]
\addplot[Turquoise, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER,
discard if gt={SNR}{2.2},
]
{res/admm/fer_paper_margulis.csv};
\addlegendentry{\acs{ADMM} (Barman et al.)}
\addplot[NavyBlue, densely dashed, line width=1pt, mark=triangle]
table [col sep=comma, x=SNR, y=FER,]
{res/admm/ber_margulis264013203.csv};
\addlegendentry{\acs{ADMM} (Own results)}
\addplot[RoyalPurple, line width=1pt, mark=*]
table [col sep=comma, x=SNR, y=FER, discard if gt={SNR}{2.2},]
{res/generic/fer_bp_mackay_margulis.csv};
\addlegendentry{\acs{BP} (Barman et al.)}
\end{axis}
\end{tikzpicture}
\caption{Comparison of datapoints from Barman et al. with own simulation results.
``Margulis'' \ac{LDPC} code with $n = 2640$, $k = 1320$
\cite[\text{Margulis2640.1320.3}]{mackay_enc}\protect\footnotemark{}}
\label{fig:admm:results}
\end{figure}%
%
In figure \ref{fig:admm:bp_multiple}, \ac{FER} curves for \ac{LP} decoding
using \ac{ADMM} and \ac{BP} are shown for various codes.
To ensure comparability, in all cases the number of iterations was set to
$K=200$.
The values of the other parameters were chosen as $\mu = 5$, $\rho = 1$,
$\epsilon_\text{pri} = 10^{-5}$ and $\epsilon_\text{dual}=10^{-5}$.
Comparing the simulation results for the different codes, it is apparent that
the performance gap between the two algorithms depends on the code being
considered.
For all codes considered here, however, the performance of \ac{LP} decoding
using \ac{ADMM} comes close to that of \ac{BP}, again staying within
approximately $\SI{0.5}{dB}$.
\begin{figure}[h]
\begin{figure}[H]
\centering
\begin{subfigure}[t]{0.48\textwidth}
@ -1581,54 +1632,3 @@ approximately $\SI{0.5}{dB}$.
and \ac{BP} for various codes}
\label{fig:admm:bp_multiple}
\end{figure}
\subsection{Computational Performance}
\label{subsec:admm:comp_perf}
In terms of time complexity, the three steps of the decoding algorithm
in equations (\ref{eq:admm:c_update}) to (\ref{eq:admm:u_update}) have to be
considered.
The $\tilde{\boldsymbol{c}}$- and $\boldsymbol{u}_j$-update steps are
$\mathcal{O}\left( n \right)$ \cite[Sec. III. C.]{original_admm}.
The complexity of the $\boldsymbol{z}_j$-update step depends on the projection
algorithm employed.
Since the implementation developed for this work uses the projection
algorithm presented in \cite{original_admm}, the $\boldsymbol{z}_j$-update
step also has linear time complexity.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[grid=both,
xlabel={$n$}, ylabel={Time per frame (s)},
width=0.6\textwidth,
height=0.45\textwidth,
legend style={at={(0.5,-0.42)},anchor=south},
legend cell align={left},]
\addplot[NavyBlue, only marks, mark=triangle*]
table [col sep=comma, x=n, y=spf]
{res/admm/fps_vs_n.csv};
\end{axis}
\end{tikzpicture}
\caption{Timing requirements of the implementation of \ac{LP} decoding using \ac{ADMM}}
\label{fig:admm:time}
\end{figure}%
Simulation results from a range of different codes can be used to verify this
analysis.
Figure \ref{fig:admm:time} shows the average time needed to decode one
frame as a function of its length.
The codes used for this consideration are the same as in section
\ref{subsec:prox:comp_perf}.
The results are necessarily skewed because these codes vary not only
in their length, but also in their construction scheme and rate.
Additionally, different optimization opportunities arise depending on the
length of a code, since for smaller codes dynamic memory allocation can be
completely omitted.
This may explain why the datapoint at $n=504$ is higher than would be expected
with linear behavior.
Nonetheless, the simulation results roughly match the expected behavior
following from the theoretical considerations.

File diff suppressed because it is too large


@ -222,7 +222,7 @@
\include{chapters/comparison}
% \include{chapters/discussion}
\include{chapters/conclusion}
\include{chapters/appendix}
% \include{chapters/appendix}
%\listoffigures