From 6e53ed5d1bf40a17e7c9257f38169c8d15283e72 Mon Sep 17 00:00:00 2001 From: Andreas Tsouchlos Date: Sun, 3 May 2026 04:00:05 +0200 Subject: [PATCH] Complete results chapter text --- src/thesis/chapters/4_decoding_under_dems.tex | 142 +++++++++++++++++- 1 file changed, 140 insertions(+), 2 deletions(-) diff --git a/src/thesis/chapters/4_decoding_under_dems.tex b/src/thesis/chapters/4_decoding_under_dems.tex index a7fdcf9..58eb116 100644 --- a/src/thesis/chapters/4_decoding_under_dems.tex +++ b/src/thesis/chapters/4_decoding_under_dems.tex @@ -2035,6 +2035,9 @@ For the underlying \ac{bp} step we use the \ac{spa} variant rather than the min-sum approximation employed in \Cref{subsec:Belief Propagation}, since this made the implementation of the guided decimation more straightforward. +Furthermore, we set $T=1$, as this eases the +computational requirements and \cite{yao_belief_2024} showed that most of +the gain can be achieved even for low values of $T$. \begin{figure}[t] \centering @@ -2518,8 +2521,8 @@ iterations can change the outcome, which is why each cold-start curve reaches a flat plateau. The warm-start curves exhibit the same two regimes, but with the -opposite outcome in the second one, which is exactly what the -hypothesis from the previous paragraph predicts. +opposite outcome in the second one, which is exactly what our earlier +hypothesis predicts. At low $n_\text{iter}$, decimation has not yet taken hold and the warm-start initialization carries forward only the \ac{bp} messages in any meaningful sense, so the warm-start variant outperforms its @@ -2540,6 +2543,17 @@ decisions of the \acp{vn}. We do not have a definitive explanation for the roughness visible in some of the warm-start curves and limit ourselves to noting it. 
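To make the diagnosed mechanism concrete, the following is a minimal, generic sketch of a single guided-decimation step. The saturation constant `INF_LLR`, the function name, and the list-based data layout are illustrative assumptions, not the implementation used for our experiments.

```python
# Generic sketch of one guided-decimation step in BPGD (illustrative
# names and data layout; not the implementation used in this work).
# Decimation saturates the channel LLR of the most reliable undecided
# VN, effectively freezing its hard decision for all subsequent BP
# iterations -- and, if the channel LLRs are carried over in a warm
# start, for the next window as well.

INF_LLR = 1e6  # stands in for an "infinite" decimation LLR

def decimate_most_reliable(channel_llrs, posterior_llrs, frozen):
    """Freeze the undecided VN with the largest posterior magnitude."""
    best, best_mag = None, -1.0
    for v, llr in enumerate(posterior_llrs):
        if v not in frozen and abs(llr) > best_mag:
            best, best_mag = v, abs(llr)
    if best is not None:
        # Saturate toward the sign of the posterior decision.
        channel_llrs[best] = INF_LLR if posterior_llrs[best] > 0 else -INF_LLR
        frozen.add(best)
    return best
```

A warm start that copies `channel_llrs` into the next window imports these saturated values, which is exactly the premature-hard-decision effect discussed above.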
+% [Thread] Turn to previous way of warm-start + +The natural consequence of the previous diagnosis is to drop the +problematic part of the warm-start initialization for \ac{bpgd} and +to carry over only the \ac{bp} messages on the edges of the overlap +region, as in \Cref{fig:messages_tanner}, while leaving the channel +\acp{llr} of the next window in their original cold-start state. +Note that some information about the previous window's decimation +state is still implicitly carried over through the \ac{bp} messages, +since the decimation decisions were made based on the messages themselves. + \begin{figure}[t] \centering \hspace*{-6mm} @@ -2610,6 +2624,7 @@ of the warm-start curves and limit ourselves to noting it. \caption{\red{Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt}} + \label{fig:bpgd_msg_W} \end{subfigure}% \hfill% \begin{subfigure}{0.5\textwidth} @@ -2680,13 +2695,71 @@ of the warm-start curves and limit ourselves to noting it. \caption{\red{Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt}} + \label{fig:bpgd_msg_F} \end{subfigure} \caption{ \red{\lipsum[2]} } + \label{fig:bpgd_msg} \end{figure} +% [Experimental parameters] Figure 4.12 + +\Cref{fig:bpgd_msg} repeats the experiment of \Cref{fig:bpgd_wf} +with the modified warm-start procedure that carries over only the +\ac{bp} messages. +All other experimental parameters are unchanged: the maximum number +of inner \ac{bp} iterations is $n_\text{iter} = 5000$, and the +physical error rate is swept from $p = 0.001$ to $p = 0.004$ in steps +of $0.0005$. +The cold-start curves (dashed) are identical to those in +\Cref{fig:bpgd_wf}. +The warm-start curves are shown with solid lines. +\Cref{fig:bpgd_msg_W} sweeps over the window size with +$W \in \{3, 4, 5\}$ at fixed step size $F = 1$, and +\Cref{fig:bpgd_msg_F} sweeps over the step size with +$F \in \{1, 2, 3\}$ at fixed window size $W = 5$. 
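The message-only carry-over used here can be sketched as follows; the names (`warm_start_messages`, `cold_start_llrs`) and the dict mapping new-window edge indices to old-window edge indices are hypothetical conveniences, not the thesis implementation.

```python
import math

# Sketch of the message-only warm start between consecutive windows
# (hypothetical names and data layout, not the thesis implementation).

def warm_start_messages(prev_msgs, overlap_edges, n_edges, init=0.0):
    """Carry over BP messages only on edges of the overlap region;
    all other edges of the new window start from the neutral value."""
    msgs = [init] * n_edges
    for new_edge, old_edge in overlap_edges.items():
        msgs[new_edge] = prev_msgs[old_edge]
    return msgs

def cold_start_llrs(p, n_vns):
    """Channel LLRs are reset to the cold-start prior, so no VN
    enters the new window with a decimation-saturated (frozen) value."""
    return [math.log((1 - p) / p)] * n_vns
```

Note that the carried-over messages still encode the previous window's decimation decisions implicitly, as remarked above; only the explicit saturation of the channel LLRs is dropped.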
+
+% [Description] Figure 4.12
+
+The warm-start curves now lie below their cold-start counterparts
+across both panels and across the entire physical error rate range,
+in contrast to \Cref{fig:bpgd_wf}.
+In \Cref{fig:bpgd_msg_W}, larger window sizes again yield lower
+per-round \acp{ler} for both warm- and cold-start, and the warm-start
+advantage over cold-start is more pronounced for $W \in \{4, 5\}$
+than for $W = 3$, where the warm- and cold-start curves nearly coincide.
+In \Cref{fig:bpgd_msg_F}, smaller step sizes again yield lower
+per-round \acp{ler} for both warm- and cold-start, and the warm-start
+advantage over cold-start is most pronounced for $F = 1$ and shrinks
+as $F$ grows.
+
+% [Interpretation] Figure 4.12
+
+Removing the channel \acp{llr} from the warm-start initialization
+eliminates the warm-start regression observed in \Cref{fig:bpgd_wf},
+and warm-start now consistently outperforms cold-start.
+The dependence on the window size and the step size also recovers
+the qualitative behavior we observed for plain \ac{bp} in
+\Cref{fig:whole_vs_cold_vs_warm,fig:bp_f_over_p}: a larger overlap
+between consecutive windows, achieved either by enlarging $W$ or by
+decreasing $F$, both improves the absolute decoding performance and
+increases the warm-start advantage over cold-start.
+This is consistent with the original effective-iterations picture.
+Without the premature hard decisions from carried-over decimation
+information, the warm-start initialization once again amounts to
+additional \ac{bp} iterations on the \acp{vn} of the overlap region,
+and the larger the overlap, the more such effective iterations are gained.
+
+% [Thread] As before, view max iter behavior
+
+Finally, we repeat the iteration-budget sweep of \Cref{fig:bpgd_iter}
+with the message-only warm-start procedure.
+This serves both to verify that the premature hard decision effect +does not reappear at any iteration count and to compare the warm- and +cold-start curves across the entire range of $n_\text{iter}$ available to us. + \begin{figure}[t] \centering \hspace*{-6mm} @@ -2759,6 +2832,7 @@ of the warm-start curves and limit ourselves to noting it. \caption{\red{Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt}} + \label{fig:bpgd_msg_iter_W} \end{subfigure}% \hfill% \begin{subfigure}{0.48\textwidth} @@ -2831,10 +2905,74 @@ of the warm-start curves and limit ourselves to noting it. \caption{\red{Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt}} + \label{fig:bpgd_msg_iter_F} \end{subfigure} \caption{ \red{\lipsum[2]} } + \label{fig:bpgd_msg_iter} \end{figure} +% [Experimental parameters] Figure 4.13 + +\Cref{fig:bpgd_msg_iter} repeats the experiment of +\Cref{fig:bpgd_iter} with the modified warm-start procedure that +carries over only the \ac{bp} messages. +All other experimental parameters are unchanged: the physical error +rate is fixed at $p = 0.0025$ and the iteration budget is swept over +$n_\text{iter} \in \{32, 128, 256, 512, 1024, 1536, 2048, 2560, +3072, 3584, 4096\}$. +The cold-start curves (dashed) are identical to those in +\Cref{fig:bpgd_iter}. +\Cref{fig:bpgd_msg_iter_W} sweeps over the window size with +$W \in \{3, 4, 5\}$ at fixed step size $F = 1$, and +\Cref{fig:bpgd_msg_iter_F} sweeps over the step size with +$F \in \{1, 2, 3\}$ at fixed window size $W = 5$. + +% [Description] Figure 4.13 + +The warm-start curves now again lie consistently below their cold-start +counterparts across both panels and across the entire range of +$n_\text{iter}$, contrary to \Cref{fig:bpgd_iter}. 
+The warm-start curves furthermore track the overall shape of the
+corresponding cold-start curves closely, including the iteration
+count at which they drop sharply and the level at which they plateau.
+The warm-start improvement over cold-start grows with the window size
+in \Cref{fig:bpgd_msg_iter_W} and shrinks with the step size in
+\Cref{fig:bpgd_msg_iter_F}, with the largest gap visible at $W = 5$
+and at $F = 1$, respectively.
+
+% [Interpretation] Figure 4.13
+
+These observations match our expectations.
+With only the \ac{bp} messages carried over, the warm-start
+initialization no longer freezes any \acp{vn} in the next window,
+so the premature hard decisions that degraded performance in
+\Cref{fig:bpgd_iter} cannot reappear at any iteration budget.
+The dependence of this benefit on $W$ and $F$ also recovers the
+pattern observed for plain \ac{bp} in
+\Cref{fig:whole_vs_cold_vs_warm,fig:bp_f_over_p}:
+larger overlap, achieved by larger $W$ or smaller $F$, yields more
+effective extra iterations and therefore a larger warm-start gain.
+
+% BPGD conclusion
+
+We conclude our investigation into the performance of warm-start
+sliding-window decoding under \ac{bpgd} by summarizing our findings.
+Warm-starting the inner decoder still provides a consistent
+performance gain when that decoder is upgraded from plain \ac{bp}
+to its guided-decimation variant, but only if some care is taken
+in choosing what to carry over.
+Passing the channel \acp{llr} along with the \ac{bp} messages,
+as suggested by naively carrying over the warm-start idea to \ac{bpgd},
+leads to premature hard decisions on \acp{vn} in the overlap region.
+As a result, the warm-start initialization actually worsens
+performance compared to cold-start initialization.
+Restricting the warm start to the \ac{bp} messages alone removes
+this effect and recovers a consistent warm-start improvement over
+cold-start that follows the same behavior as for plain \ac{bp} with
+regard to overlap.
+A second observation specific to \ac{bpgd} is that its iteration +requirements are substantially larger than those of plain \ac{bp}: +the per-round \ac{ler} drops sharply only once the iteration budget +is on the order of the number of \acp{vn} in each window. +