Added average error figure and changed figure positioning to h

Andreas Tsouchlos 2023-04-11 17:28:46 +02:00
parent 5dbe90e1d4
commit c074d3e034
9 changed files with 8089 additions and 12 deletions


@@ -357,7 +357,7 @@ using a \ac{BP} decoder, as a reference.
The results from Wadayama et al. are shown with solid lines,
while the newly generated ones are shown with dashed lines.
\begin{figure}[H]
\begin{figure}[h]
\centering
\begin{tikzpicture}
@@ -434,12 +434,11 @@ Evidently, while the decoding performance does depend on the value of
$\gamma$, there is no single value offering optimal performance, but
rather a certain interval in which the performance stays largely unchanged.
When examining a number of different codes (figure
\ref{fig:prox:results_3d_multiple}), \todo{Move figure to appendix?}
it is apparent that while the exact
\ref{fig:prox:results_3d_multiple}), it is apparent that while the exact
landscape of the graph depends on the code, the general behaviour is the same
in each case.
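Curves of this kind can be reproduced with a short driver script that sweeps $\gamma$ at a fixed SNR and estimates the resulting BER. The sketch below is a minimal illustration only: simulate(snr_db) and decode(y, gamma) are hypothetical stand-ins for the channel simulation and the proximal decoder, not names taken from the repository.

    import numpy as np

    def ber_vs_gamma(simulate, decode, gammas, snr_db, n_frames=10_000):
        """Estimate the bit error rate for each candidate step size gamma."""
        bers = []
        for gamma in gammas:
            bit_errors, total_bits = 0, 0
            for _ in range(n_frames):
                # simulate/decode are hypothetical stand-ins (see note above)
                c, y = simulate(snr_db)    # true codeword, noisy observation
                c_hat = decode(y, gamma)   # decoding attempt with this step size
                bit_errors += np.count_nonzero(c != c_hat)
                total_bits += len(c)
            bers.append(bit_errors / total_bits)
        return np.array(bers)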
\begin{figure}[H]
\begin{figure}[h]
\centering
\begin{tikzpicture}
@@ -509,7 +508,7 @@ similar values of the two step sizes.
Again, this consideration applies to a multitude of different codes, as
depicted in figure \ref{fig:prox:gamma_omega_multiple}.
\begin{figure}[H]
\begin{figure}[h]
\centering
\begin{tikzpicture}
@@ -528,7 +527,7 @@ depicted in figure \ref{fig:prox:gamma_omega_multiple}.
point meta min=-5.7,
point meta max=-0.5,
colorbar style={
title={$E_b / N_0$},
title={\acs{BER}},
ytick={-5,-4,...,-1},
yticklabels={$10^{-5}$,$10^{-4}$,$10^{-3}$,$10^{-2}$,$10^{-1}$}
}]
@@ -558,18 +557,60 @@ the average error is inspected.
This time $\gamma$ and $\omega$ are held constant and the average error is
observed during each iteration of the decoding process for a number of
different \acp{SNR}.
The plots have been generated by averaging the error over TODO decodings.
The plots have been generated by averaging the error over \num{500000} decodings.
As some decodings go on for more iterations than others, the number of values
that are averaged for each data point varies.
This explains the bump visible around $k=\text{TODO}$, since after
this point more and more correct decodings converge and stop iterating,
This explains the dip visible in all curves around $k=20$, since after
this point more and more correct decodings stop iterating,
leaving a growing share of faulty ones to be averaged.
At this point the decline in the average error stagnates, rendering an
increase in $K$ counterproductive as it only raises the average timing
requirements of the decoding process.
The higher the \ac{SNR}, the fewer decodings remain at each iteration
to be averaged, since a solution is found earlier.
This explains the decreasing smoothness of the lines as the \ac{SNR} rises.
Remarkably, the \ac{SNR} appears to have no impact on the number of
iterations needed to reach the point at which the average error
stabilizes.
Furthermore, the improvement in decoding performance stagnates at a particular
point, rendering any further increase in $K$ counterproductive, as it only
raises the average run time of the decoding process.
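Because decodings stop iterating at different points, the per-iteration average has to be taken over a varying number of error traces, which is what produces the dip described above. A minimal sketch of this ragged averaging, assuming each decoding records its error values as a sequence ending at the iteration where it stopped (the data layout is an assumption):

    import numpy as np

    def average_error_per_iteration(traces, K=200):
        """Average ||c - c_hat|| per iteration over variable-length traces."""
        sums = np.zeros(K)
        counts = np.zeros(K, dtype=int)
        for trace in traces:
            n = min(len(trace), K)       # traces may stop before K iterations
            sums[:n] += trace[:n]
            counts[:n] += 1
        avg = np.full(K, np.nan)         # NaN where no decoding reached iteration k
        reached = counts > 0
        avg[reached] = sums[reached] / counts[reached]
        return avg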
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{axis}[
grid=both,
xlabel={Iterations},
ylabel={Average $\lVert \boldsymbol{c}-\boldsymbol{\hat{c}} \rVert$},
legend pos=outer north east,
width=0.6\textwidth,
height=0.45\textwidth,
]
\addplot [ForestGreen, mark=none, line width=1pt]
table [col sep=comma, discard if not={omega}{0.0774263682681127}, x=k, y=err]
{res/proximal/2d_avg_error_20433484_1db.csv};
\addlegendentry{$E_b / N_0 = \SI{1}{dB}$}
\addplot [NavyBlue, mark=none, line width=1pt]
table [col sep=comma, discard if not={omega}{0.0774263682681127}, x=k, y=err]
{res/proximal/2d_avg_error_20433484_3db.csv};
\addlegendentry{$E_b / N_0 = \SI{3}{dB}$}
\addplot [RedOrange, mark=none, line width=1pt]
table [col sep=comma, discard if not={omega}{0.052233450742668434}, x=k, y=err]
{res/proximal/2d_avg_error_20433484_5db.csv};
\addlegendentry{$E_b / N_0 = \SI{5}{dB}$}
\addplot [RoyalPurple, mark=none, line width=1pt]
table [col sep=comma, discard if not={omega}{0.052233450742668434}, x=k, y=err]
{res/proximal/2d_avg_error_20433484_8db.csv};
\addlegendentry{$E_b / N_0 = \SI{8}{dB}$}
\end{axis}
\end{tikzpicture}
\caption{Average error for \num{500000} decodings\protect\footnotemark{}}
\end{figure}%
%
\footnotetext{(3,6) regular \ac{LDPC} code with $n = 204$, $k = 102$
\cite[\text{204.33.484}]{mackay_enc}; $\gamma = 0.05$, $\omega = 0.05$, $K = 200$, $\eta = 1.5$
}%
%
Changing the parameter $\eta$ does not appear to have a significant effect on
the decoding performance when keeping the value within a reasonable window

File diff suppressed because it is too large


@@ -0,0 +1,8 @@
{
"duration": 397.47346705402015,
"name": "2d_avg_error_20433484",
"platform": "Linux-6.1.6-arch1-3-x86_64-with-glibc2.36",
"SNR": 1,
"K": 200,
"end_time": "2023-01-24 23:34:34.589607"
}
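The metadata above can be used to check which run produced which curve when regenerating the plots. A short sketch; the file path follows the naming of the CSVs referenced in the TikZ code and is an assumption:

    import json

    # Path inferred from the CSV names used in the figure; an assumption.
    with open("res/proximal/2d_avg_error_20433484_1db.json") as f:
        meta = json.load(f)

    print(f"{meta['name']}: SNR = {meta['SNR']} dB, K = {meta['K']}, "
          f"runtime = {meta['duration'] / 60:.1f} min on {meta['platform']}")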

File diff suppressed because it is too large


@@ -0,0 +1,8 @@
{
"duration": 297.83671288599726,
"name": "2d_avg_error_20433484",
"platform": "Linux-6.1.6-arch1-3-x86_64-with-glibc2.36",
"SNR": 3,
"K": 200,
"end_time": "2023-01-24 23:26:41.141614"
}

File diff suppressed because it is too large


@@ -0,0 +1,8 @@
{
"duration": 555.086431461008,
"name": "2d_avg_error_20433484",
"platform": "Linux-6.1.6-arch1-3-x86_64-with-glibc2.36",
"SNR": 5,
"K": 200,
"end_time": "2023-01-24 23:17:21.185610"
}

File diff suppressed because it is too large


@@ -0,0 +1,8 @@
{
"duration": 870.158950668003,
"name": "2d_avg_error_20433484",
"platform": "Linux-6.1.6-arch1-3-x86_64-with-glibc2.36",
"SNR": 8,
"K": 200,
"end_time": "2023-01-24 23:06:24.712365"
}