low physical error rates similarly mirrors the patterns already
observed in
\Cref{fig:whole_vs_cold_vs_warm,fig:bp_w_over_iter}.

The coincidence of all three cold-start curves at
$n_\text{iter} = 32$ is a direct consequence of the cold-start
initialization: because each new window starts from a uniform prior
regardless of $F$, the per-window decoding problem is essentially
identical for every step size, and the corresponding \acp{ler} agree
as long as the inner decoder has too few iterations to propagate
information beyond the local syndrome structure within a single
window.
This is also the regime in which the warm-start advantage is most
valuable, and indeed it is where the warm-start curves spread out
most strongly with $F$.
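
The limited reach of a small iteration budget can be made concrete with a toy example. The following sketch is not the decoder used in our experiments: it is a plain min-sum \ac{bp} on a repetition-code Tanner graph with hypothetical parameters. It illustrates that after $k$ iterations a \ac{vn}'s marginal is unaffected by syndrome bits more than about $k$ message hops away, so cold-start windows face the same local problem regardless of $F$.

```python
import math

def minsum_bp(H, syndrome, prior, n_iter, v2c=None):
    """Plain min-sum BP for syndrome decoding (illustrative sketch).

    H: dense 0/1 parity-check matrix (list of rows); prior: channel LLRs;
    v2c: optional variable-to-check messages from a previous window
    (warm start).  None means a cold start from the uniform prior.
    Returns (marginals, v2c) so messages can be handed to the next window.
    """
    m, n = len(H), len(H[0])
    if v2c is None:
        v2c = [[prior[j] if H[i][j] else 0.0 for j in range(n)] for i in range(m)]
    else:
        v2c = [row[:] for row in v2c]
    c2v = [[0.0] * n for _ in range(m)]
    for _ in range(n_iter):
        for i in range(m):  # check-node update, sign flipped by the syndrome bit
            idx = [j for j in range(n) if H[i][j]]
            for j in idx:
                others = [v2c[i][k] for k in idx if k != j]
                sign = -1 if syndrome[i] else 1
                for x in others:
                    sign = -sign if x < 0 else sign
                c2v[i][j] = sign * min(abs(x) for x in others)
        for j in range(n):  # variable-node update
            idx = [i for i in range(m) if H[i][j]]
            for i in idx:
                v2c[i][j] = prior[j] + sum(c2v[k][j] for k in idx if k != i)
    marginals = [prior[j] + sum(c2v[i][j] for i in range(m) if H[i][j])
                 for j in range(n)]
    return marginals, v2c

# Length-12 repetition-code chain: check i connects variables i and i+1.
n = 12
H = [[1 if j in (i, i + 1) else 0 for j in range(n)] for i in range(n - 1)]
prior = [math.log(0.95 / 0.05)] * n  # uniform prior, hypothetical p = 5%
clean, _ = minsum_bp(H, [0] * (n - 1), prior, n_iter=3)
far, _ = minsum_bp(H, [0] * (n - 2) + [1], prior, n_iter=3)
# Three iterations cannot carry the distant syndrome flip to variable 0:
assert far[0] == clean[0]
```

Passing the returned `v2c` messages back in for the next, overlapping window is exactly the warm start; a cold start discards them and reinitializes from the prior.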

A noteworthy methodological point is that, in contrast to the window
size $W$, the step size $F$ has no effect on decoding latency.
The time at which the inner decoder for a given window can begin
decoding is determined solely by when the syndromes for the rounds
covered by that window have been collected, which is independent of
how much the window overlaps with its predecessor.
Similarly, assuming the decoder is fast enough to keep up with the
incoming syndrome measurements corresponding to the \acp{cn} of
subsequent windows, the time at which decoding completes depends only
on the time spent decoding the very last window.
A smaller $F$ thus costs only additional total compute, not
additional latency, which is favorable for a warm-start
sliding-window implementation:
the regime in which the warm-start modification helps most, namely
large overlap and therefore small $F$, is precisely the regime in
which the cost of that overlap shows up only in the compute budget
and not in the latency budget.
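
The latency argument can be checked with a small scheduling model. The sketch below uses hypothetical timings and parameters and deliberately models nothing beyond syndrome arrival and sequential window decoding: as long as each window is decoded faster than the syndromes for the next $F$ rounds arrive, the end-to-end latency equals the decode time of the last window, independent of $F$, while total compute scales with the number of windows.

```python
def schedule(n_rounds, W, F, t_round, t_dec):
    """Toy timing model: window i covers rounds [i*F, i*F + W)."""
    n_windows = (n_rounds - W) // F + 1
    finish = 0.0
    for i in range(n_windows):
        ready = (i * F + W) * t_round  # all syndromes of the window are in
        start = max(ready, finish)     # a warm start also waits for window i-1
        finish = start + t_dec
    ready_last = ((n_windows - 1) * F + W) * t_round
    latency = finish - ready_last      # time past the last needed syndrome
    compute = n_windows * t_dec        # total decoder time over all windows
    return latency, compute

# Hypothetical numbers: 1 us per round, 0.5 us per window decode, so the
# decoder keeps up (t_dec <= F * t_round) for every F used here.
lat_f1, comp_f1 = schedule(n_rounds=24, W=8, F=1, t_round=1.0, t_dec=0.5)
lat_f4, comp_f4 = schedule(n_rounds=24, W=8, F=4, t_round=1.0, t_dec=0.5)
assert lat_f1 == lat_f4 == 0.5  # latency does not depend on F
assert comp_f1 > comp_f4        # but a smaller F costs more total compute
```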

% At some later point
\content{When looking at max iterations: Callback to diminishing
returns with growing window size: More iterations more beneficial
than larger window (+1 for warm-start)}
% Conclusion of BP investigation

We conclude our investigation into the performance of warm-start
sliding-window decoding under plain \ac{bp} by summarizing our
findings.
The warm-start modification raises the number of \ac{bp} iterations
effectively spent on the \acp{vn} in an overlap region by reusing the
messages from the previous window invocation instead of restarting
from scratch.
This explains why decoding performance improved monotonically with
the size of the overlap, and consequently why both larger window
sizes $W$ and smaller step sizes $F$ yielded lower per-round \acp{ler}.
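
As a back-of-the-envelope illustration of this effect (the window counts and parameters below are hypothetical, and the iteration bookkeeping is a deliberate simplification), one can count how many windows cover a given bulk round; under warm start, the \ac{bp} messages attached to its \acp{vn} survive across all of them:

```python
def coverage(r, n_rounds, W, F):
    """Number of sliding windows [i*F, i*F + W) that contain round r."""
    n_windows = (n_rounds - W) // F + 1
    return sum(1 for i in range(n_windows) if i * F <= r < i * F + W)

# A bulk round is covered by roughly W/F windows.  With warm start, up to
# coverage * n_iter iterations contribute to its final decision, since the
# messages are carried over; with cold start, only the n_iter iterations
# of the window that commits the correction do.
n_iter = 32
for F in (8, 4, 1):
    c = coverage(12, n_rounds=24, W=8, F=F)
    print(F, c, c * n_iter)  # step size, coverage, warm-start iteration budget
```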
The warm-start gain over cold-start was most pronounced at low
per-window iteration budgets, the regime in which each additional
iteration carries proportionally more information.
We would also like to note that the warm-start modification incurs no
computational cost relative to cold-start decoding:
it changes neither the decoding latency nor the total compute, since
both schemes process the same windows for the same number of
iterations and differ only in the initialization of the \ac{bp}
messages of each new window.
Finally, we observed that plain \ac{bp} did not saturate even at
$4096$ iterations, which we attribute to the short cycles in the
underlying Tanner graph.
This motivates the next subsection, in which we replace the inner
\ac{bp} decoder with its guided-decimation variant.

%%%%%%%%%%%%%%%%
\subsection{Belief Propagation with Guided Decimation}
