Switch figures
@@ -72,7 +72,7 @@ problem into many smaller ones that can be solved more efficiently.
 % warm-start decoding. Or just go into warm-start decoding)
 We will start by briefly reviewing the existing work related to
 sliding-window decoding,
-before focusing on one specific incarnation.
+before focusing on one specific realization.
 We will then introduce a modification to the existing algorithm and
 perform numerical simulations to evaluate it.
 
@@ -110,100 +110,6 @@ Each of these windows is then decoded separately.
 \subsection{Review of Existing Literature}
 \label{subsec:Review of Existing Literature}
 
-% Description of the figure
-
-\Cref{fig:literature} gives an overview of the existing body of work
-related to sliding-window decoding.
-The papers \cite{huang_improved_2023} and \cite{huang_increasing_2024} are
-lumped together, as they share the same content;
-one is simply a preprint published earlier.
-We will only refer to \cite{huang_increasing_2024} in the following.
-\cite{kang_quits_2025} is somewhat special in that the authors focus
-more on the introduction of a new simulator framework they call
-QUITS, rather than the performance of sliding-window decoding itself.
-\cite{gong_toward_2024} and \cite{kang_quits_2025} have made their
-software freely available online%
-\footnote{
-https://github.com/gongaa/SlidingWindowDecoder
-}%
-\footnote{
-https://github.com/mkangquantum/quits
-}.
-A final thing to note is that \cite{dennis_topological_2002} never
-explicitly mentions sliding windows; the authors call their scheme
-``overlapping recovery''.
-
-% Topological vs QLDPC
-
-Research has focused on two categories of \ac{qec} codes, topological
-and \ac{qldpc} codes.
-Most of the work on topological codes has treated surface codes,
-with the exception of \cite{kuo_fault-tolerant_2024}, where toric
-codes were considered.
-With regard to \ac{qldpc} codes, \cite{huang_increasing_2024}
-examine \emph{hypergraph product} (\acs{hgp}) and
-\emph{lifted-product} (\acs{lp}) codes.
-HGP codes are constructed from the product of two classical codes,
-while LP codes generalize this construction by additionally applying
-a lift to reduce the qubit overhead.
-In \cite{kang_quits_2025}, \emph{balanced product codes} (\acs{bpc})
-are additionally considered.
-Finally, in \cite{gong_toward_2024} the authors explore \ac{bb} codes.
-
-% Sequential vs parallel
-
-After dividing the whole circuit into separate windows, the question
-arises of how exactly to realize the decoding.
-There are two main approaches, with differing mechanisms for reducing
-the latency.
-Some papers decode the sliding windows in a parallel fashion.
-The benefit in this case is the option to more effectively utilize
-classical hardware for decoding.
-Others choose a sequential approach.
-Here, decoding can start earlier, as there is no need to wait for the
-syndrome measurements of all windows before beginning the decoding.
-With the exception of \cite{dennis_topological_2002}, literature
-treating topological codes has mostly focused on parallel decoding,
-while literature treating \ac{qldpc} codes has exclusively considered
-sequential decoding.
-
-% Deep-dive into QLDPC methods
-
-\renewcommand{\arraystretch}{1.1}
-\setlength{\tabcolsep}{12pt}
-\begin{table}[t]
-\centering
-\caption{Experimental conditions for papers related to \ac{qldpc} codes.}
-\vspace*{3mm}
-\label{table:experimental_conditions}
-\begin{tabular}{l|ccc}
-% tex-fmt: off
-Publication & Code & Noise Model & Decoder \\ \hline
-\hspace{-2.5mm}\cite{huang_improved_2023,huang_increasing_2024} & \acs{hgp}, \acs{lp} & Phenomenological noise & \acs{bp} + \acs{osd} \\
-\hspace{-2.5mm}\cite{gong_toward_2024} & \acs{bb} & Circuit-level noise & \acs{bp} + \acs{gdg} \\
-\hspace{-2.5mm}\cite{kang_quits_2025} & \acs{hgp}, \acs{lp}, \acs{bpc} & Circuit-level noise & \acs{bp} + \acs{osd}
-% tex-fmt: on
-\end{tabular}
-\end{table}
-
-For this work, the publications treating \ac{qldpc} codes are
-especially interesting.
-The experimental conditions for these are summarized in
-\Cref{table:experimental_conditions}.
-As we noted above, \ac{hgp} and \ac{lp} codes are considered in
-\cite{huang_increasing_2024},
-\ac{hgp}, \ac{lp} and \ac{bpc} codes are considered in \cite{kang_quits_2025},
-and \ac{bb} codes are considered in \cite{gong_toward_2024}.
-The employed noise models also differ;
-\cite{huang_increasing_2024} use phenomenological noise, while
-\cite{gong_toward_2024} and \cite{kang_quits_2025} use circuit-level noise.
-Finally, \cite{gong_toward_2024} introduce their own variation of
-\ac{bpgd}, \ac{bp} with \ac{gdg}, while \cite{huang_increasing_2024}
-and \cite{kang_quits_2025} use \ac{bp} + \ac{osd}.
-We would additionally like to note that only in
-\cite{gong_toward_2024} and \cite{kang_quits_2025} do the authors
-explicitly work with the \ac{dem} formalism.
-
 \begin{figure}[t]
 \centering
 
@@ -290,6 +196,103 @@ explicitly work with the \ac{dem} formalism.
 \label{fig:literature}
 \end{figure}
 
+% Some general notes
+
+\Cref{fig:literature} gives an overview of the existing body of work
+related to sliding-window decoding.
+The papers \cite{huang_improved_2023} and \cite{huang_increasing_2024} are
+lumped together, as they share the same content;
+one is simply a preprint published earlier.
+We will only refer to \cite{huang_increasing_2024} in the following.
+\cite{kang_quits_2025} is somewhat special in that the authors focus
+more on the introduction of a new simulator framework they call
+QUITS, rather than the performance of sliding-window decoding itself.
+\cite{gong_toward_2024} and \cite{kang_quits_2025} have made their
+software freely available online%
+\footnote{
+https://github.com/gongaa/SlidingWindowDecoder
+}%
+\footnote{
+https://github.com/mkangquantum/quits
+}.
+A final thing to note is that \cite{dennis_topological_2002} never
+explicitly mentions sliding windows; the authors call their scheme
+``overlapping recovery''.
+
+% Topological vs QLDPC
+
+Research has focused on two categories of \ac{qec} codes, topological
+and \ac{qldpc} codes.
+Most of the work on topological codes has treated surface codes,
+with the exception of \cite{kuo_fault-tolerant_2024}, where toric
+codes were considered.
+With regard to \ac{qldpc} codes, \cite{huang_increasing_2024}
+examine \emph{hypergraph product} (\acs{hgp}) and
+\emph{lifted-product} (\acs{lp}) codes.
+HGP codes are constructed from the product of two classical codes,
+while LP codes generalize this construction by additionally applying
+a lift to reduce the qubit overhead.
+In \cite{kang_quits_2025}, \emph{balanced product codes} (\acs{bpc})
+are additionally considered.
+Like HGP codes, BPC codes are derived from a product construction,
+but exploit an additional symmetry to yield fewer physical qubits for
+the same code parameters.
+Finally, in \cite{gong_toward_2024} the authors explore \ac{bb} codes.
+
+% Sequential vs parallel
+
+After dividing the whole circuit into separate windows, the question
+arises of how exactly to realize the decoding.
+There are two main approaches, with differing mechanisms for reducing
+the latency.
+Some papers decode the sliding windows in a parallel fashion.
+The benefit in this case is the option to more effectively utilize
+classical hardware for decoding.
+Others choose a sequential approach.
+Here, decoding can start earlier, as there is no need to wait for the
+syndrome measurements of all windows before beginning the decoding.
+With the exception of \cite{dennis_topological_2002}, literature
+treating topological codes has mostly focused on parallel decoding,
+while literature treating \ac{qldpc} codes has exclusively considered
+sequential decoding.
+
+% Deep-dive into QLDPC methods
+
+For this work, the publications treating \ac{qldpc} codes are
+especially interesting.
+The experimental conditions for these are summarized in
+\Cref{table:experimental_conditions}.
+As we noted above, \ac{hgp} and \ac{lp} codes are considered in
+\cite{huang_increasing_2024},
+\ac{hgp}, \ac{lp} and \ac{bpc} codes are considered in \cite{kang_quits_2025},
+and \ac{bb} codes are considered in \cite{gong_toward_2024}.
+The employed noise models also differ;
+\cite{huang_increasing_2024} use phenomenological noise, while
+\cite{gong_toward_2024} and \cite{kang_quits_2025} use circuit-level noise.
+Finally, \cite{gong_toward_2024} introduce their own variation of
+\ac{bpgd}, \ac{bp} with \ac{gdg}, while \cite{huang_increasing_2024}
+and \cite{kang_quits_2025} use \ac{bp} + \ac{osd}.
+We would additionally like to note that only in
+\cite{gong_toward_2024} and \cite{kang_quits_2025} do the authors
+explicitly work with the \ac{dem} formalism.
+
+\renewcommand{\arraystretch}{1.1}
+\setlength{\tabcolsep}{12pt}
+\begin{table}[t]
+\centering
+\caption{Experimental conditions for papers related to \ac{qldpc} codes.}
+\vspace*{3mm}
+\label{table:experimental_conditions}
+\begin{tabular}{l|ccc}
+% tex-fmt: off
+Publication & Code & Noise Model & Decoder \\ \hline
+\hspace{-2.5mm}\cite{huang_improved_2023,huang_increasing_2024} & \acs{hgp}, \acs{lp} & Phenomenological noise & \acs{bp} + \acs{osd} \\
+\hspace{-2.5mm}\cite{gong_toward_2024} & \acs{bb} & Circuit-level noise & \acs{bp} + \acs{gdg} \\
+\hspace{-2.5mm}\cite{kang_quits_2025} & \acs{hgp}, \acs{lp}, \acs{bpc} & Circuit-level noise & \acs{bp} + \acs{osd}
+% tex-fmt: on
+\end{tabular}
+\end{table}
+
 % \red{
 % Existing work
 % \begin{itemize}
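The hypergraph-product construction mentioned in the moved text ("HGP codes are constructed from the product of two classical codes") can be made concrete. The following is the standard formulation, included as background; it is not taken from this diff or from the cited papers' notation:

```latex
% Standard HGP construction (background sketch).
% Given classical parity-check matrices $H_1$ ($r_1 \times n_1$) and
% $H_2$ ($r_2 \times n_2$), the HGP code acts on $n_1 n_2 + r_1 r_2$
% physical qubits with check matrices
\begin{align}
  H_X &= \left( H_1 \otimes I_{n_2} \;\middle|\; I_{r_1} \otimes H_2^{T} \right), \\
  H_Z &= \left( I_{n_1} \otimes H_2 \;\middle|\; H_1^{T} \otimes I_{r_2} \right),
\end{align}
% which commute since
% $H_X H_Z^{T} = H_1 \otimes H_2^{T} + H_1 \otimes H_2^{T} = 0$ over $\mathrm{GF}(2)$.
```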
@@ -381,12 +384,6 @@ error matrix is divided into overlapping windows.
 The algorithm detailed here follows \cite{kang_quits_2025}, whose
 work is in turn based on \cite{huang_increasing_2024}.
 
-\red{
-\begin{itemize}
-\item QUITS views sliding-window decoding more separately
-\end{itemize}
-}
-
 \content{Possibly go into the fact that current sliding-window
 approaches don't differentiate clearly between the sliding-window
 part and the decoder part. This work aims to extend the
@@ -394,9 +391,6 @@ work is in turn based on \cite{huang_increasing_2024}.
 different decoder parts. Combine this with QUITS modular structure
 for sliding window decoding}
 
-We build on the approach taken by \cite{huang_increasing_2024} and
-\cite{gong_toward_2024}.
-
 % High-level overview of Sliding-Window decoding
 
 \content{Benefits of sliding-window decoding (lower latency due to
@@ -427,7 +421,7 @@ with processing'' some VNs)}
 \content{4. Decode next window}
 \content{(?) Explicitly mention we don't reuse existing messages?}
 
-\begin{figure}[H]
+\begin{figure}[t]
 \centering
 
 \hspace*{-114mm}%
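The sequential sliding-window scheme discussed in the moved review text (divide the syndrome history into overlapping windows, decode them in order, commit only the early part of each window) can be sketched as follows. This is a minimal illustrative sketch with hypothetical names; it is not the implementation from QUITS or either of the cited repositories, and the inner decoder (e.g. BP+OSD) is replaced by a placeholder callable:

```python
def sliding_windows(n_rounds, window, commit):
    """Overlapping windows over syndrome rounds 0..n_rounds-1.

    Consecutive windows start `commit` rounds apart and span up to
    `window` rounds, so neighbours overlap by `window - commit` rounds.
    """
    return [(s, min(s + window, n_rounds))
            for s in range(0, n_rounds, commit)]

def decode_sequentially(syndrome_rounds, window, commit, decode_window):
    """Decode windows one after another, keeping only each commit region.

    `decode_window` is a placeholder for an inner decoder; here it maps
    a list of per-round syndromes to per-round corrections.
    """
    corrections = []
    for start, end in sliding_windows(len(syndrome_rounds), window, commit):
        local = decode_window(syndrome_rounds[start:end])
        # Only corrections in the first `commit` rounds are final; the
        # rest of the window is re-decoded by the next, overlapping window.
        corrections.extend(local[:commit])
    return corrections

# Toy usage: identity "decoder", 10 rounds, windows of 4, commit step of 2.
out = decode_sequentially(list(range(10)), 4, 2, lambda w: w)
```

Because decoding of a window can begin as soon as its last syndrome round is measured, this sequential variant trades the hardware parallelism of the parallel approach for an earlier decoding start, matching the latency discussion above.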