Add text for second BPGD plot

This commit is contained in:
2026-05-03 03:10:21 +02:00
parent 9ca2698d38
commit 0016df0004


@@ -2253,7 +2253,7 @@ opposite of the corresponding dependence under plain \ac{bp}
rather than helps, even though smaller $F$ implies a larger overlap
in both cases.
This inversion provides a clue to what is going wrong.
Recall from
\Cref{subsec:Warm-Start Belief Propagation with Guided Decimation Decoding}
that the warm start for \ac{bpgd} carries over not only the \ac{bp}
@@ -2281,6 +2281,19 @@ Decreasing $F$ at fixed $W$, by contrast, enlarges only the overlap
without enlarging the window, so the freezing effect is no longer
offset and warm-start performance worsens with smaller $F$.
% [Thread] Test hypothesis by varying number of iterations
The hypothesis from the previous paragraph is straightforward to test.
If the warm-start regression in \Cref{fig:bpgd_wf} is indeed caused by
the decimation state being carried across the window boundary, then
lowering the maximum number of inner \ac{bp} iterations
$n_\text{iter}$ limits how many \acp{vn} can be decimated before
window $\ell$ commits, so the warm-start performance should approach
that of warm start under plain \ac{bp} as $n_\text{iter}$ is lowered.
We therefore vary $n_\text{iter}$ at fixed window parameters and
fixed physical error rate.
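The sweep itself is simple enough to state as a short sketch. The following fragment is purely illustrative: the \texttt{count\_failures} callback and the function name are assumptions standing in for the actual \ac{bpgd} sliding-window decoder, which is not shown here.

```python
# Illustrative sketch of the iteration-budget sweep (hypothetical
# interface; `count_failures` stands in for the BPGD sliding-window
# decoder, which is not part of this text).

# Iteration budgets sampled in the experiment.
N_ITER_GRID = [32, 128, 256, 512, 1024, 1536, 2048, 2560, 3072, 3584, 4096]

def sweep_iteration_budget(count_failures, num_rounds, p=0.0025, W=5, F=1):
    """Estimate the per-round logical error rate for every inner-BP
    iteration budget, at fixed window parameters and physical error rate.

    count_failures(p, W, F, n_iter, rounds) is assumed to return the
    number of rounds that ended in a logical failure.
    """
    ler = {}
    for n_iter in N_ITER_GRID:
        failures = count_failures(p=p, W=W, F=F, n_iter=n_iter,
                                  rounds=num_rounds)
        ler[n_iter] = failures / num_rounds  # per-round LER estimate
    return ler
```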
\begin{figure}[t]
\centering
\hspace*{-6mm}
@@ -2353,6 +2366,7 @@ offset and warm-start performance worsens with smaller $F$.
\caption{\red{Lorem ipsum dolor sit amet, consectetur adipiscing
elit, sed do eiusmod tempor incididunt}}
\label{fig:bpgd_iter_W}
\end{subfigure}%
\hfill%
\begin{subfigure}{0.48\textwidth}
@@ -2425,13 +2439,107 @@ offset and warm-start performance worsens with smaller $F$.
\caption{\red{Lorem ipsum dolor sit amet, consectetur adipiscing
elit, sed do eiusmod tempor incididunt}}
\label{fig:bpgd_iter_F}
\end{subfigure}
\caption{
\red{\lipsum[2]}
}
\label{fig:bpgd_iter}
\end{figure}
% [Experimental parameters] Figure 4.11
\Cref{fig:bpgd_iter} shows the per-round \ac{ler} of \ac{bpgd}
sliding-window decoding as a function of the maximum number of inner
\ac{bp} iterations $n_\text{iter}$.
The dashed colored curves correspond to cold-start sliding-window
decoding and the solid colored curves to warm-start, again carrying
over both the \ac{bp} messages and the channel \acp{llr} on the
overlap region.
The physical error rate is fixed at $p = 0.0025$ and the iteration
budget is swept over $n_\text{iter} \in \{32, 128, 256, 512, 1024,
1536, 2048, 2560, 3072, 3584, 4096\}$.
\Cref{fig:bpgd_iter_W} sweeps over the window size with
$W \in \{3, 4, 5\}$ at fixed step size $F = 1$, and
\Cref{fig:bpgd_iter_F} sweeps over the step size with
$F \in \{1, 2, 3\}$ at fixed window size $W = 5$.
% [Description] Figure 4.11
For low iteration budgets, all curves in both panels behave similarly
to the plain-\ac{bp} curves in
\Cref{fig:bp_w_over_iter,fig:bp_f_over_iter}.
The per-round \ac{ler} decreases gradually with $n_\text{iter}$, and
the warm-start curves lie below their cold-start counterparts at
matching window parameters.
As $n_\text{iter}$ continues to grow, however, the cold-start curves
undergo a sharp drop, after which they lie roughly an order of
magnitude below the warm-start curves, and eventually settle into a
flat plateau.
The warm-start curves first reach a minimum at an intermediate
iteration count, then turn upwards, and finally also approach a
plateau, albeit at a substantially higher per-round \ac{ler}.
In places, the warm-start curves are also less smooth than the
cold-start ones.
In \Cref{fig:bpgd_iter_W}, the iteration count at which the
cold-start curves drop sharply increases with the window size, from
roughly $n_\text{iter} \approx 2000$ for $W = 3$, to
$\approx 2500$ for $W = 4$, to $\approx 3000$ for $W = 5$.
The corresponding warm-start curves reach their minima at
approximately the same iteration counts, and from there onwards begin
to worsen.
At the largest sampled iteration budget, the cold-start curves have
plateaued at per-round \acp{ler} of order $10^{-3}$ while the
warm-start curves have grown to per-round \acp{ler} above
$4 \times 10^{-2}$.
In \Cref{fig:bpgd_iter_F}, the cold-start curves drop sharply at
roughly the same iteration count for all three step sizes, while
the warm-start curves now show a clear reordering as $n_\text{iter}$
grows.
At low iteration budgets the warm-start ordering matches the
cold-start ordering, with $F = 1$ best and $F = 3$ worst, but at the
largest iteration budget this ordering is fully inverted: warm-start
$F = 1$ is now the worst and $F = 3$ the best.
% [Interpretation] Figure 4.11
The cold-start behavior matches the preliminary investigation that
motivated our choice of $n_\text{iter} = 5000$ in \Cref{fig:bpgd_wf}.
At low iteration budgets the inner decoder has not yet had time to
decimate a substantial fraction of the \acp{vn}, so its behavior
remains close to that of plain \ac{bp}.
Once the iteration budget is large enough for the decimation effects
to become pronounced, the per-round \ac{ler} drops sharply and
\ac{bpgd} delivers its intended performance gain.
Once every \ac{vn} in a window has been decimated, no further
iterations can change the outcome, which is why each cold-start curve
reaches a flat plateau.
The warm-start curves exhibit the same two regimes, but with the
opposite outcome in the second one, which is exactly what the
hypothesis from the previous paragraph predicts.
At low $n_\text{iter}$, decimation has not yet taken hold, so in
effect the warm-start initialization carries forward only the \ac{bp}
messages, and the warm-start variant outperforms its cold-start
counterpart for the same reason as in the plain-\ac{bp}
investigation.
As $n_\text{iter}$ grows past the point where decimation begins to
matter, the decimation information carried over starts to impede the
decoding performance.
The same mechanism explains the inversion of the step-size ordering
in \Cref{fig:bpgd_iter_F}.
At low iteration budgets, the ordering is set by the same overlap
argument as for plain \ac{bp}: smaller $F$ implies a larger overlap
between consecutive windows, more shared messages, and therefore
better warm-start performance.
At large iteration budgets, the ordering is instead set by the
premature hard decisions carried across the window boundary: smaller
$F$ implies a larger overlap of already-decimated \acp{vn}, and
therefore worse warm-start performance.
We do not have a definitive explanation for the roughness visible in some
of the warm-start curves and limit ourselves to noting it.
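To make the mechanism concrete, the following sketch contrasts the two window initializations. All names and data structures are hypothetical placeholders rather than the actual decoder state; the only point is which pieces of state cross the window boundary under warm start.

```python
# Minimal sketch (hypothetical data structures) contrasting cold-start
# and warm-start window initialization for BPGD.

def init_next_window(prev, overlap, warm_start):
    """Build the decoder state for the next window.

    prev     -- state committed by the previous window (dict or None)
    overlap  -- variable nodes shared by consecutive windows
    """
    state = {"messages": {}, "llrs": {}, "decimated": {}}
    if warm_start and prev is not None:
        for vn in overlap:
            # Carrying BP messages and channel LLRs over the overlap is
            # what helps under plain BP ...
            if vn in prev["llrs"]:
                state["llrs"][vn] = prev["llrs"][vn]
            if vn in prev["messages"]:
                state["messages"][vn] = prev["messages"][vn]
            # ... but carrying the decimation state as well freezes these
            # VNs in the new window -- the mechanism blamed above for the
            # warm-start regression at large iteration budgets.
            if vn in prev["decimated"]:
                state["decimated"][vn] = prev["decimated"][vn]
    return state
```

A variant that keeps the messages and \acp{llr} but resets the decimation entries would isolate the two effects; whether that recovers the plain-\ac{bp} warm-start benefit is not tested here.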
\begin{figure}[t]
\centering
\hspace*{-6mm}