@@ -1201,82 +1201,23 @@ generated by simulating at least $200$ logical error events.

\label{subsec:Belief Propagation}

% Local experimental setup
% - BP variant

We began our investigation by using \ac{bp} with no further
modifications as the inner decoder.
We chose the min-sum variant of \ac{bp} due to its low
computational complexity.
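Min-sum replaces the hyperbolic-tangent products of sum-product \ac{bp} with sign and minimum operations. As a minimal illustration (a hypothetical helper, not the implementation used for our simulations), the following sketch computes the extrinsic check-to-variable messages of a single check node from the incoming variable-to-check LLRs:

```python
def min_sum_check_update(incoming):
    """Min-sum check-to-variable messages for one check node.

    `incoming` holds the variable-to-check LLRs from all neighboring
    variable nodes; entry j of the result is extrinsic, i.e., it is
    computed from all incoming messages except message j.
    """
    out = []
    for j in range(len(incoming)):
        others = incoming[:j] + incoming[j + 1:]
        # Sign: parity of the other messages' signs.
        sign = 1.0
        for m in others:
            sign = -sign if m < 0 else sign
        # Magnitude: minimum of the other messages' magnitudes.
        out.append(sign * min(abs(m) for m in others))
    return out
```

Each outgoing message costs only comparisons and sign flips, which is what makes min-sum attractive as a low-complexity inner decoder.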

We initially wanted to gain an impression of the performance gain we
could expect from a modification to the sliding-window decoding
procedure.
To this end, we began by analyzing the decoding performance of the
original process, without our warm-start modification.
We will call this \emph{cold-start} decoding in the following.
We examined the decoding performance for different window sizes $W$
and compared it against the performance when decoding on the whole
detector error matrix at once, i.e., without windowing.

Because we expected more global decoding to work better (the inner
decoder then has access to a larger portion of the long-range
correlations encoded in the detector error matrix before any commit
is made), we initially decided to use decoding on the whole detector
error matrix as a proxy for the attainable decoding performance.
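For concreteness, the windowing schedule underlying cold-start decoding can be sketched as follows (a hypothetical helper; the boundary and commit conventions of the actual implementation may differ). Each window spans $W$ syndrome rounds, only the oldest $F$ rounds are committed before the window advances, and the final window commits everything that remains:

```python
def sliding_windows(num_rounds, W, F):
    """Yield (window_rounds, commit_rounds) for sliding-window decoding.

    Each window covers W consecutive syndrome rounds; after decoding,
    only the first F rounds of the window are committed, and the window
    advances by F. The final window commits all of its remaining rounds.
    """
    start = 0
    while start + W < num_rounds:
        yield (list(range(start, start + W)),
               list(range(start, start + F)))
        start += F
    # Last window: commit everything that is left.
    yield (list(range(start, num_rounds)),
           list(range(start, num_rounds)))
```

For $12$ rounds with $W = 3$ and $F = 1$ this yields ten windows, the last of which covers and commits rounds $9$ to $11$.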

\begin{figure}[t]
    \centering
@@ -1329,33 +1270,64 @@ provides the best performance.
        \end{tikzpicture}

    \caption{
        Comparison of the decoding performance of the $\llbracket
        144,12,12 \rrbracket$ \ac{bb} code under min-sum decoding
        ($200$ iterations) for different window sizes.
        The step size was fixed to $F=1$, $12$ rounds of syndrome
        extraction were performed, and the noise model is
        standard circuit-based depolarizing noise.
        \red{\lipsum[2]}
    }
    \label{fig:whole_vs_cold}
\end{figure}

\Cref{fig:whole_vs_cold} shows the simulation results for this
initial investigation.
The three colored curves correspond to cold-start sliding-window
decoding with window sizes $W \in \{3, 4, 5\}$, all with the step
size fixed to $F = 1$, while the black curve gives the per-round
\ac{ler} obtained when decoding on the whole detector error matrix
at once.
In all cases, the inner \ac{bp} decoder was allowed a maximum of
$200$ iterations, and the physical error rate was swept from
$p = 0.001$ to $p = 0.004$ in steps of $0.0005$.
Across the entire range of physical error rates, all curves exhibit
the expected monotonic increase in logical error rate with
increasing physical noise.
The $W = 3$ decoder consistently yields the highest \ac{ler},
performing roughly an order of magnitude worse than the baseline at
low physical error rates.
Increasing the window size to $W = 4$ substantially closes this gap,
and the $W = 5$ curve nearly coincides with the whole-block decoder
across the full range of physical error rates.

% [Interpretation] Figure 4.7

This behavior is consistent with the intuition behind sliding-window
decoding.
The detector error matrix encodes correlations between detection
events that span the full syndrome extraction history, so errors
lying in the commit region of an early window are in general
constrained by check nodes that only become visible in subsequent
windows.
Larger windows expose the inner decoder to more of these constraints
before any commit is made, leading to better-informed decisions and
a lower per-round \ac{ler}.
Decoding the whole matrix at once represents the limiting case of
this trend and, as expected, achieves the strongest performance.
The fact that the $W = 5$ curve is already very close to the
whole-block decoder indicates that the marginal benefit of enlarging
the window saturates after a certain point.
From a practical standpoint, the choice of $W$ thus represents a
trade-off between decoding latency and accuracy: larger windows
delay the start of decoding by requiring more syndrome extraction
rounds to be collected upfront, while the diminishing returns above
$W = 4$ suggest that growing the window much further yields little
additional accuracy in return.

% [Thread] First comparison with warm start

Next, we additionally generated error rate curves for warm-start
sliding-window decoding to assess how much of the gap between
cold-start and whole-block decoding can be recovered by our
modification.
We chose the same window sizes as before, so that the warm- and
cold-start curves can be compared directly at matching values of $W$.

\begin{figure}[t]
    \centering
@@ -1429,53 +1401,107 @@ though the process is less global.
        \end{tikzpicture}

    \caption{
        Comparison of the decoding performance of cold- and
        warm-start decoding for the $\llbracket 144,12,12 \rrbracket$
        \ac{bb} code for different window sizes.
        Decoding was performed using the min-sum algorithm ($200$
        iterations).
        The step size was fixed to $F=1$, $12$ rounds of syndrome
        extraction were performed, and the noise model is
        standard circuit-based depolarizing noise.
        \red{\lipsum[2]}
    }
    \label{fig:whole_vs_cold_vs_warm}
\end{figure}

% [Experimental parameters] Figure 4.8

\Cref{fig:whole_vs_cold_vs_warm} extends the previous comparison by
additionally including the warm-start variant of sliding-window
decoding.
The dashed colored curves reproduce the cold-start results from
\Cref{fig:whole_vs_cold}, while the solid colored curves show the
corresponding warm-start runs for the same window sizes
$W \in \{3, 4, 5\}$.
The remaining experimental parameters are unchanged:
the step size is fixed to $F = 1$, the inner \ac{bp} decoder is
allowed up to $200$ iterations per window invocation, the black
curve again gives the whole-block reference, and the physical error
rate is swept from $p = 0.001$ to $p = 0.004$ in steps of $0.0005$.

% [Description] Figure 4.8

For each window size, the warm-start variant consistently
outperforms its cold-start counterpart, with the dashed cold-start
curves lying above the corresponding solid warm-start curves across
the entire range of physical error rates.
The performance gap between the two approaches is most pronounced
for the largest window ($W = 5$) and gradually narrows as the window
size decreases.
Additionally, the gap between the cold- and warm-start curves
generally widens as the physical error rate decreases.

% [Interpretation] Figure 4.8

The improvement of warm-start over cold-start decoding matches the
motivation for the modification:
by reusing the messages from the previous window in the overlap
region, the next window invocation has additional information at its
disposal about the reliability of the \acp{vn} and \acp{cn}.
The widening of the gap towards larger window sizes is consistent
with this picture, since with $F$ fixed to $1$ the overlap between
consecutive windows spans $W - F = W - 1$ syndrome rounds, so a
larger $W$ implies that more messages are carried over and a larger
fraction of the next window starts in a warm state.
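Schematically, the warm-start initialization can be pictured as follows (a hypothetical helper using an edge-keyed message dictionary; the real decoder stores messages on the Tanner graph of the window): edges in the overlap inherit the messages left over by the previous window, while all newly entering edges start cold.

```python
def warm_start_messages(prev_msgs, window_edges, overlap_edges):
    """Initialize the message state for the next window invocation.

    Edges in the overlap region inherit the messages left over by the
    previous window (warm start); all other edges start from zero LLR,
    which corresponds to a cold, uninformative initialization.
    """
    msgs = {edge: 0.0 for edge in window_edges}
    for edge in overlap_edges:
        if edge in prev_msgs:
            msgs[edge] = prev_msgs[edge]
    return msgs
```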

% TODO: Possibly insert explanation for higher gain at lower error rates
A perhaps surprising observation is that the warm-start curve for
$W = 5$ actually lies below the whole-block reference across the
entire range of physical error rates, even though warm-start
sliding-window decoding is, by construction, more local than
whole-block decoding.
A possible explanation for this effect is discussed in the following.

% [Thread] Warm start is better than whole due to more effective iterations

A possible explanation for this surprising behavior lies in the
number of \ac{bp} iterations effectively spent on the \acp{vn}
inside the overlap region.
Each \ac{vn} in such an overlap is processed by multiple consecutive
window invocations, and because every new window resumes from the
messages left over by its predecessor, these invocations effectively
accumulate iterations on the same \acp{vn} rather than restarting
from scratch.
The whole-block decoder, by contrast, performs only a single run of
at most $200$ iterations on the entire detector error matrix, so
each of its \acp{vn} receives at most that many iterations.
It seems this larger effective iteration budget on the overlap
regions can outweigh the loss of globality incurred by windowing.
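A back-of-envelope count makes this concrete (a hypothetical helper, assuming windows starting at rounds $0, F, 2F, \dots$ that are clamped to the end of the syndrome history): the number of window invocations whose span contains a given syndrome round approaches $W/F$ in the interior of the history, so with $W = 5$, $F = 1$, and up to $200$ iterations per invocation, an interior \ac{vn} can accumulate up to $1000$ effective iterations, compared to at most $200$ under whole-block decoding.

```python
def invocations_covering_round(t, num_rounds, W, F):
    """Number of window invocations whose span contains syndrome round t.

    Window w covers rounds [w*F, w*F + W); the last window is clamped
    to the end of the syndrome history.
    """
    count = 0
    w = 0
    while w * F < num_rounds:
        lo = w * F
        hi = min(lo + W, num_rounds)
        if lo <= t < hi:
            count += 1
        if hi == num_rounds:
            break
        w += 1
    return count
```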

A natural way to test this hypothesis is to raise the maximum number
of \ac{bp} iterations of the whole-block decoder until its per-round
\ac{ler} saturates.
If the above interpretation is correct, the resulting saturation
level should constitute a lower bound that no windowed scheme,
irrespective of its initialization, can beat, since by construction
whole-block decoding has access to the full set of constraints
available to any window.

% [Description] Figure 4.9
% - Parameters
%   - # BP iterations
%   - W,F
%   - Physical error rates
%   - Warm vs cold start
% - Figure description
% - TODO:

% [Interpretation] Figure 4.9
% -

% At some later point
\content{When looking at max iterations: Callback to diminishing
    returns with growing window size: More iterations more beneficial
    than larger window (+1 for warm-start)}

\begin{figure}[t]
    \centering
    \begin{tikzpicture}
        \def\spyxmin{32}
        \def\spyxmax{512}
        \def\spyymin{5e-3}
        \def\spyymax{7e-2}

        \newcommand{\plotcurves}{%
            \foreach \W/\col/\mark in
                {3/KITred/triangle,4/KITblue/diamond,5/KITorange/square} {
                \edef\temp{\noexpand
                    \addplot+[mark=\mark, densely dashed,
                        forget plot, \col]
                    table[
                        col sep=comma, x=max_iter,
                        y=LER_per_round,
                    ]
                    {res/sim/max_iter/WindowingSyndromeMinSumDecoder/p_0.0025/pass_soft_info_False/F_1/W_\W/LERs.csv};
                }
                \temp
            }

            \foreach \W/\col/\mark in
                {3/KITred/triangle*,4/KITblue/diamond*,5/KITorange/square*} {
                \edef\temp{\noexpand
                    \addplot+[mark=\mark, solid, mark
                        options={fill=\col}, \col, forget plot]
                    table[
                        col sep=comma, x=max_iter,
                        y=LER_per_round,
                    ]
                    {res/sim/max_iter/WindowingSyndromeMinSumDecoder/p_0.0025/pass_soft_info_True/F_1/W_\W/LERs.csv};
                }
                \temp
            }

            \addplot+[mark=*, solid, mark options={fill=black}, black,
                forget plot]
                table[col sep=comma, x=max_iter, y=LER_per_round]
                {res/sim/max_iter/SyndromeMinSumDecoder/p_0.0025/LERs.csv};
        }

        \begin{axis}[
            name=main,
            width=\figwidth,
            height=\figheight,
            ymode=log,
            enlargelimits=false,
            ymin=1e-3, ymax=1e-1,
            grid=both,
            legend pos=north east,
            xtick={32, 512, 1024, 1536, 2048, 2560, 3072, 3584, 4096},
            xticklabels={$32$,$512$,$1{,}024$,,$2{,}048$,,$3{,}072$,,$4{,}096$},
            xticklabel style={/pgf/number format/fixed},
            scaled x ticks=false,
            xlabel={Number of BP iterations},
            ylabel={Per-round LER},
            extra description/.code={
                \node[rotate=90, anchor=south]
                    at ([xshift=10mm]current axis.east)
                    {Warm s. (---), Cold s. (- - -)};
            },
        ]

            \plotcurves

            \addlegendimage{KITred, mark=triangle*}
            \addlegendentry{$W = 3$}
            \addlegendimage{KITblue, mark=diamond*}
            \addlegendentry{$W = 4$}
            \addlegendimage{KITorange, mark=square*}
            \addlegendentry{$W = 5$}
            \addlegendimage{black, mark=*}
            \addlegendentry{Whole}

            \node[draw=black, fit={(axis cs:\spyxmin,\spyymin) (axis
                cs:\spyxmax,\spyymax)}, inner sep=0pt, name=spybox] {};
        \end{axis}

        \begin{axis}[
            name=inset,
            at={(main.north)},
            anchor=south,
            xshift=0mm, yshift=6mm,
            width=6.5cm, height=4.875cm,
            ymode=log,
            enlargelimits=false,
            xmin=\spyxmin, xmax=\spyxmax,
            ymin=\spyymin, ymax=\spyymax,
            xtick={32,128,256,512},
            yticklabels={\empty},
            xticklabels={\empty},
            grid=both,
            axis background/.style={fill=white},
        ]
            \plotcurves
        \end{axis}

        \draw (spybox.north east) -- (inset.south west);
    \end{tikzpicture}

    \caption{
        Comparison of the decoding performance of cold- and
        warm-start decoding for the $\llbracket 144,12,12 \rrbracket$
        \ac{bb} code as a function of the maximum number of min-sum
        iterations, for different window sizes.
        The step size was fixed to $F=1$, $12$ rounds of syndrome
        extraction were performed, the physical error probability was
        fixed to $0.0025$, and the noise model is standard
        circuit-based depolarizing noise.
        \red{\lipsum[2]}
    }
\end{figure}
\begin{figure}[t]
    \centering
    \begin{subfigure}{0.48\textwidth}
@@ -1674,14 +1842,7 @@ though the process is less global.
    \end{subfigure}

    \caption{
        Comparison of cold- and warm-start sliding-window
        min-sum decoding for the $\llbracket 144, 12, 12 \rrbracket$
        \ac{bb} code under circuit-level noise.
        $12$ rounds of syndrome extraction were performed and
        standard circuit-based depolarizing noise was chosen as the
        noise model.
        The physical error probability was fixed to $0.0025$.
        \red{\lipsum[2]}
    }
\end{figure}

@@ -1830,16 +1991,7 @@ though the process is less global.
    \end{subfigure}

    \caption{
        Comparison of the decoding performance of cold- and
        warm-start decoding for the $\llbracket 144,12,12
        \rrbracket$ \ac{bb} code.
        Decoding was performed using the \ac{bpgd} algorithm with
        $T=1$ and no limit on the number of outer iterations.
        The information used for the warm-start initialization
        included both the messages on the Tanner graph and decimation
        information.
        $12$ rounds of syndrome extraction were performed and
        standard circuit-based depolarizing noise was chosen as the
        noise model.
        \red{\lipsum[2]}
    }
\end{figure}

@@ -1990,19 +2142,7 @@ though the process is less global.
    \end{subfigure}

    \caption{
        Comparison of the decoding performance of cold- and
        warm-start decoding for the $\llbracket 144,12,12
        \rrbracket$ \ac{bb} code under circuit-level noise.
        Decoding was performed using the \ac{bpgd} algorithm with
        $T=1$.
        The number of iterations refers to the outer \ac{bpgd}
        iterations, i.e., the number of decimations.
        The information used for the warm-start initialization
        included only the messages on the Tanner graph.
        $12$ rounds of syndrome extraction were performed and
        standard circuit-based depolarizing noise was chosen as the
        noise model.
        The physical error probability was fixed to $0.0025$.
        \red{\lipsum[2]}
    }
\end{figure}

@@ -2147,16 +2287,7 @@ though the process is less global.
    \end{subfigure}

    \caption{
        Comparison of the decoding performance of cold- and
        warm-start decoding for the $\llbracket 144,12,12
        \rrbracket$ \ac{bb} code under circuit-level noise.
        Decoding was performed using the \ac{bpgd} algorithm with
        $T=1$ and no limit on the number of outer iterations.
        The information used for the warm-start initialization
        included only the messages on the Tanner graph.
        $12$ rounds of syndrome extraction were performed and
        standard circuit-based depolarizing noise was chosen as the
        noise model.
        \red{\lipsum[2]}
    }
\end{figure}

@@ -2307,19 +2438,7 @@ though the process is less global.
    \end{subfigure}

    \caption{
        Comparison of the decoding performance of cold- and
        warm-start decoding for the $\llbracket 144,12,12
        \rrbracket$ \ac{bb} code under circuit-level noise.
        Decoding was performed using the \ac{bpgd} algorithm with
        $T=1$.
        The number of iterations refers to the outer \ac{bpgd}
        iterations, i.e., the number of decimations.
        The information used for the warm-start initialization
        included only the messages on the Tanner graph.
        $12$ rounds of syndrome extraction were performed and
        standard circuit-based depolarizing noise was chosen as the
        noise model.
        The physical error probability was fixed to $0.0025$.
        \red{\lipsum[2]}
    }
\end{figure}

@@ -4,4 +4,5 @@
\content{Softer way of decimating VNs}
\content{Systematic study on using different inner decoders (AED,
    SED, BPGD, ...)}
\content{Investigate SC-LDPC window decoding wave-like effects}

@@ -28,6 +28,7 @@
\usepackage{nicematrix}
\usepackage{colortbl}
\usepackage{cleveref}
\usepackage{lipsum}

\usetikzlibrary{calc, positioning, arrows, fit}
\usetikzlibrary{external}