Incorporate Jonathan's corrections to the introduction

2026-05-04 16:31:31 +02:00
parent f1a5aaf3f8
commit 72acea0321
2 changed files with 22 additions and 17 deletions


@@ -17,7 +17,7 @@ factorization \cite{shor_algorithms_1994}.
 Similar to the way classical computers are built from bits and gates,
 quantum computers are built from \emph{qubits} and \emph{quantum gates}.
-Because of quantum entanglement, it is not enough to consider the
+Because of quantum entanglement, it does not suffice to consider the
 qubits individually, we also have to consider correlations between them.
 For a system of $n$ qubits, this makes the state space grow with
 $2^n$ instead of linearly with $n$, as would be the case for a classical system
@@ -30,12 +30,11 @@ what provides them with their power \cite[Sec.~2.1]{roffe_decoding_2020}.
 Realizing algorithms that leverage these quantum-mechanical effects
 requires hardware that can execute long quantum computations reliably.
 This poses a problem, because the qubits making up current devices
-are difficult to sufficiently isolate from their environment
-\cite[Sec.~1]{roffe_quantum_2019}.
-Their interaction with the environment acts as a continuous small-scale
-measurement, an effect we call \emph{decoherence} of the stored quantum
-state.
-Decoherence is the reason large systems don't exhibit visible quantum
+consistently interact with their environment \cite[Sec.~1]{roffe_quantum_2019}.
+This interaction acts as a continuous small-scale measurement, an
+effect we call \emph{decoherence} of the stored quantum state, which
+results in errors on the qubits.
+Decoherence is the reason large systems do not exhibit visible quantum
 properties at human scales \cite[Sec.~1]{gottesman_stabilizer_1997}.
 % Intro to QEC
@@ -45,8 +44,8 @@ It addresses the issue by encoding the information of $k$
 \emph{logical qubits} into a larger number $n>k$ of \emph{physical
 qubits}, in close analogy to classical channel coding
 \cite[Sec.~1]{roffe_quantum_2019}.
-The redundancy introduced this way can then be used to restore
-the quantum state, should it be disturbed.
+The redundancy introduced this way can then be used to detect and
+correct a corrupted quantum state.
 The quantum setting imposes some important constraints that do not exist in the
 classical case, however \cite[Sec.~2.4]{roffe_quantum_2019}:
 \begin{itemize}
@@ -54,7 +53,7 @@ classical case, however \cite[Sec.~2.4]{roffe_quantum_2019}:
 \item In addition to the bit-flip errors we know from the
 classical setting, qubits are subject to \emph{phase-flips}.
 \item We are not allowed to directly measure the encoded qubits,
-as that would disturb their quantum states.
+as that would collapse their quantum states.
 \end{itemize}
 We can deal with the first constraint by not duplicating information, instead
 spreading the quantum state across the physical qubits
@@ -74,8 +73,8 @@ subsequent decoding process on the measured syndrome.
 Another difference between \ac{qec} and classical channel coding is
 the resource constraints.
-For \ac{qec}, low latency matters more than low overall computational
-complexity, due to the backlog problem
+For \ac{qec}, achieving low latency matters more than having a low
+overall computational complexity, due to the backlog problem
 \cite[Sec.~II.G.3.]{terhal_quantum_2015}: Certain gates turn
 single-qubit errors into multi-qubit ones, so errors must be
 corrected beforehand.
@@ -83,7 +82,7 @@ A \ac{qec} system that is too slow accumulates a backlog at these points,
 causing exponential slowdown.
 Several code constructions have been proposed for \ac{qec} codes over the years.
-Topological codes such as surface codes have been the industry
+Topological codes, such as surface codes, have been the industry
 standard for experimental applications for a long time
 \cite[Sec.~I]{koutsioumpas_colour_2025}, due to their
 reliance on only local connections between qubits
@@ -116,15 +115,15 @@ focusing only on the relationship between possible errors
 and their effects on the syndrome \cite[Sec.~1.4.3]{higgott_practical_2024}.
 A \emph{detector error matrix} is generated from the circuit, which is
 used for decoding instead of the original check matrix.
-Decoding under a \ac{dem} poses a challenge with respect to the
-latency constraint.
-This is because the detector error matrix is much larger than the
+The detector error matrix is much larger than the
 check matrix of the underlying code, since it needs to represent many
 more error locations.
 For example, in our experiments using the $\llbracket 144,12,12
 \rrbracket$ \ac{bb} code with $12$ syndrome measurement rounds, the
 number of \acp{vn} grew from $144$ to $9504$ and the number of
 \acp{cn} grew from $72$ to $1008$.
+Therefore, decoding under a \ac{dem} poses a challenge with respect to the
+latency constraint.
 To keep the latency of \ac{dem} decoding manageable, one approach is
 \emph{sliding-window decoding}.
@@ -154,7 +153,7 @@ We propose \emph{warm-start sliding-window decoding}, in which the
 \ac{bp} messages from the overlap region of the previous window are
 reused to initialize \ac{bp} in the current window in place of the
 standard cold-start initialization.
-We formulate the warm start first for plain \ac{bp} and then for
+We formulate the warm start for standard \ac{bp} and for
 \ac{bpgd}, a variant of \ac{bp} with better convergence properties
 for \ac{qec} codes.
 The decoders are evaluated by Monte Carlo simulation on the
@@ -166,6 +165,7 @@ low-latency operation.
 % Outline of the Thesis
+This thesis is structured as follows:
 \Cref{ch:Fundamentals} reviews the fundamentals of classical and
 quantum error correction.
 On the classical side, it covers binary linear block codes,
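The warm-start initialization described in the diff above — reusing \ac{bp} messages from the overlap region of the previous window instead of cold-starting — can be sketched as follows. This is a minimal illustration only: the names (`init_messages`, per-edge message dictionaries, a uniform channel LLR) are hypothetical and not taken from the thesis code.

```python
# Illustrative sketch of warm-start sliding-window initialization.
# Assumption: BP messages are stored per edge of the Tanner graph,
# keyed by an edge identifier; this is NOT the thesis implementation.

def init_messages(window_edges, prev_messages, overlap_edges, channel_llr):
    """Cold-start every edge in the window, then overwrite edges in the
    overlap region with messages carried over from the previous window."""
    # Cold start: initialize all edges from the channel prior (e.g. an LLR).
    msgs = {edge: channel_llr for edge in window_edges}
    # Warm start: reuse converged messages on the overlap with the last window.
    for edge in overlap_edges:
        if edge in prev_messages:
            msgs[edge] = prev_messages[edge]
    return msgs

# Cold start: no previous window, every edge gets the channel LLR.
cold = init_messages({"e1", "e2", "e3"}, {}, set(), 0.5)
# Warm start: edge "e2" lies in the overlap and inherits the old message.
warm = init_messages({"e1", "e2", "e3"}, {"e2": 1.7}, {"e2"}, 0.5)
```

The point of the sketch is only the initialization rule: everything outside the overlap behaves exactly as in standard cold-start \ac{bp}.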


@@ -123,6 +123,11 @@
 % \listoftables
 % \include{abbreviations}
+% \cleardoublepage
+% \phantomsection
+% \addcontentsline{toc}{chapter}{List of Abbreviations}
+% \printacronyms
 \bibliography{lib/cel-thesis/IEEEabrv,src/thesis/bibliography}
 \end{document}