Add whole decoding line to max_iter plot
@@ -1167,7 +1167,13 @@ reimplementation in Rust to achieve higher simulation speeds due to
the compiled nature of the language.
We reimplemented both the window splitting and the decoders themselves.

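The window splitting mentioned above can be sketched as follows. This is an illustrative sketch only, not the actual Rust implementation; the names `split_into_windows`, `window_size` (W), and `commit_size` (F) are ours, and we assume the syndrome history is simply a list of per-round syndromes:

```python
def split_into_windows(syndrome_rounds, window_size, commit_size):
    """Split a syndrome history into overlapping decoding windows.

    Each window covers up to `window_size` rounds; after decoding a
    window, only its first `commit_size` rounds are committed and the
    window slides forward by that amount, so consecutive windows
    overlap in `window_size - commit_size` rounds.
    """
    windows = []
    start = 0
    while start < len(syndrome_rounds):
        windows.append(syndrome_rounds[start:start + window_size])
        start += commit_size
    return windows

# e.g. 6 rounds with W=3, F=2 yields windows over rounds
# [0,1,2], [2,3,4], [4,5]
```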
% Simulation setup
% Global experimental setup
% - Code
% - # SE rounds
% - Noise model
% - Per-round LER as figure of merit
% - Detector definition
% - # simulated error frames

We chose to carry out our simulations on \ac{bb} codes, as they have
recently emerged as particularly promising candidates for practical
@@ -1194,13 +1200,60 @@ generated by simulating at least $200$ logical error events.
\subsection{Belief Propagation}
\label{subsec:Belief Propagation}

% Intro
% Local experimental setup
% - BP variant

We began our investigation by using \ac{bp} with no further
modifications as the inner decoder.
We chose the min-sum variant of \ac{bp} due to its low computational complexity.
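The check-node update that distinguishes min-sum from standard \ac{bp} can be sketched as follows; this is a generic illustration of the variant, not the decoder used in our simulations, and the function name is ours:

```python
def min_sum_check_update(incoming):
    """Min-sum check-to-variable messages.

    For each outgoing edge, the magnitude is the minimum |m| over all
    *other* incoming messages and the sign is the product of their
    signs -- replacing the costly tanh/atanh product of standard BP
    with comparisons and sign flips.
    """
    outgoing = []
    for i in range(len(incoming)):
        others = incoming[:i] + incoming[i + 1:]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        outgoing.append(sign * min(abs(m) for m in others))
    return outgoing
```

In practice the min-sum approximation slightly overestimates message magnitudes, which is why normalized or offset variants are common; the plain form shown here keeps the sketch minimal.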

% Whole decoding as a lower bound on the error rate
% [Thread] Get impression for max gain
% - More global = better -> Compare windowed vs. whole

% [Description] Figure 4.8
% - Parameters
% - # BP iterations
% - W,F
% - Physical error rates
% - Windowed (cold start) vs whole decoding
% - (?) Semilog y axis
% - Figure description
% - TODO:

% [Interpretation] Figure 4.8
% - Larger window -> better, because more global decoding
% - Diminishing returns as the window becomes larger
% - As expected, whole works best

% [Thread] First comparison with warm start
% - Compare performance of warm start to cold start

% [Description] Figure 4.9
% - Parameters
% - # BP iterations
% - W,F
% - Physical error rates
% - Warm vs cold start
% - Figure description
% - TODO:

% [Interpretation] Figure 4.9
% - Generally better performance with warm start, as expected
% - It is surprising that warm start performs better than whole

% [Thread] Warm start is better than whole due to more effective iterations

% [Description] Figure 4.10
% - Parameters
% - # BP iterations
% - W,F
% - Physical error rates
% - Warm vs cold start
% - Figure description
% - TODO:

% [Interpretation] Figure 4.10
% -

We initially wanted to gain an impression of the performance gain we could
expect from a modification to the sliding-window decoding procedure.
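The warm-start idea from the outline above, carrying decoder state across windows instead of re-initializing every window from the channel priors, can be roughly illustrated as follows; the names and the posterior-reuse structure are our own illustrative choices, not the thesis implementation:

```python
def init_window_llrs(channel_llrs, previous_posteriors, warm_start):
    """Choose the LLRs a window's BP run starts from.

    Cold start: every window begins from the channel priors.
    Warm start: positions shared with the previous window reuse the
    posteriors BP already converged to there, so iterations are not
    spent re-deriving the same information.
    """
    if not warm_start or previous_posteriors is None:
        return list(channel_llrs)
    llrs = list(channel_llrs)
    for idx, posterior in previous_posteriors.items():
        llrs[idx] = posterior  # overlap region inherits prior window's belief
    return llrs
```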
@@ -1529,6 +1582,15 @@ though the process is less global.

\addlegendentryexpanded{$W = \W$}
}

\addplot+[mark=*, solid, mark options={fill=black}, black]
    table[
        col sep=comma, x=max_iter,
        y=LER_per_round,
    ]
    {res/sim/max_iter/SyndromeMinSumDecoder/p_0.0025/LERs.csv};

\addlegendentry{Whole}
\end{axis}
\end{tikzpicture}
