LLM review

2026-04-10 09:05:24 +02:00
parent fc9dcbe11e
commit 9edd80cf28
2 changed files with 15 additions and 16 deletions

@@ -55,7 +55,7 @@
\DeclareAcronym{cn}{
short=CN,
-  long=chek node
+  long=check node
}
\DeclareAcronym{ber}{

@@ -4,7 +4,7 @@
\Ac{qec} is a field of research combining ``classical''
communications engineering and quantum information science.
This chapter provides the relevant theoretical background on both of
-these topics and subsequently introduces the the fundamentals of \ac{qec}.
+these topics and subsequently introduces the fundamentals of \ac{qec}.
% TODO: Is an explanation of BP with guided decimation needed in this chapter?
% TODO: Is an explanation of OSD needed in this chapter?
@@ -15,9 +15,8 @@ these topics and subsequently introduces the the fundamentals of \ac{qec}.
% TODO: Maybe rephrase: The core concept is not the realization, it's the
% thing itself
The core concept underpinning error correcting codes is the
-realization that the introduction of a finite amount of redundancy
-to information before its transmission can leed to a considerably
-reduced error rate.
+realization that introducing a finite amount of redundancy to
+information before transmission can considerably reduce the error rate.
Specifically, Shannon proved in 1948 that for any channel, a block
code can be found that achieves arbitrarily small probability of
error at any communication rate up to the capacity of the channel
@@ -42,7 +41,7 @@ algorithm.
% TODO: Do I need a specific reference for the expanded Hilbert space thing?
One particularly important class of coding schemes is that of binary
linear block codes.
-The information to be protected takes the form of a sequence of of
+The information to be protected takes the form of a sequence of
binary symbols, which is split into separate blocks.
Each block is encoded, transmitted, and decoded separately.
The encoding step introduces redundancy by mapping input messages
@@ -62,7 +61,7 @@ We call the set of all codewords $\mathcal{C}$ the \textit{code}
During the encoding process, a mapping from $\mathbb{F}_2^k$
onto $\mathcal{C} \subset \mathbb{F}_2^n$ takes place.
The input messages are mapped onto an expanded vector space, where
-they are ``further appart'', giving rise to the error correcting
+they are ``further apart'', giving rise to the error correcting
properties of the code.
This notion of the distance between two codewords $\bm{x}_1$ and
$\bm{x}_2$ can be expressed using the \textit{Hamming distance} $d(\bm{x}_1,
@@ -77,7 +76,7 @@ We define the \textit{minimum distance} of a code $\mathcal{C}$ as
%
We can signify that a binary linear block code has information length
$k$, block length $n$ and minimum distance $d_\text{min}$ using the
-notation $[n,k,d_\text{dmin}]$ \cite[Sec.~1.3]{macwilliams_theory_1977}.
+notation $[n,k,d_\text{min}]$ \cite[Sec.~1.3]{macwilliams_theory_1977}.
%
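The Hamming distance and the $[n,k,d_\text{min}]$ notation discussed in this hunk can be illustrated with a short Python sketch. The $[7,4,3]$ Hamming code used here is a standard textbook example, not a code from the thesis itself:

```python
import itertools

import numpy as np

# Generator matrix of the [7,4,3] Hamming code (standard textbook
# example; systematic form [I_4 | P]).
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

def hamming_distance(x1, x2):
    """Number of positions in which two vectors differ."""
    return int(np.sum(x1 != x2))

# Enumerate all 2^k codewords and take the minimum pairwise distance.
# For a linear code this equals the minimum nonzero codeword weight.
codewords = [np.mod(np.array(msg) @ G, 2)
             for msg in itertools.product([0, 1], repeat=4)]
d_min = min(hamming_distance(a, b)
            for a, b in itertools.combinations(codewords, 2))
print(d_min)  # 3 -> the code is a [7,4,3] code
```

Brute-force enumeration is of course only feasible for tiny codes; it serves here purely to connect the distance definitions to concrete vectors.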
% Parity checks, H, and the syndrome
@@ -201,7 +200,7 @@ whereas modern codes are suitable for iterative soft-decision
decoding \cite[Preface]{ryan_channel_2009}. The iterative decoding algorithms
in question are generally defined in terms of message passing on the
\textit{Tanner graph} of the code. The Tanner graph is a bipartite
-graph that constitues an alternative representation of the \ac{pcm}.
+graph that constitutes an alternative representation of the \ac{pcm}.
We define two types of nodes: \acp{vn}, corresponding to codeword
bits, and \acp{cn}, corresponding to individual parity checks.
We then construct the Tanner graph by connecting each \ac{cn} to
@@ -282,11 +281,11 @@ Mathematically, we represent a \ac{vn} using the index $i \in
1 : n \right]$ and a \ac{cn} using the index $j \in \mathcal{J}
:= \left[ 1 : m \right]$.
We can then encode the information contained in the graph by defining
-the neighborhood of a varialbe node $i$ as
-$\mathcal{N}_\text{V} (i) = \left\{ i \in \mathcal{I} : \bm{H}_{j,i}
+the neighborhood of a variable node $i$ as
+$\mathcal{N}_\text{V} (i) = \left\{ j \in \mathcal{J} : \bm{H}_{j,i}
= 1 \right\}$
and that of a check node $j$ as
-$\mathcal{N}_\text{C} (j) = \left\{ j \in \mathcal{J} : \bm{H}_{j,i}
+$\mathcal{N}_\text{C} (j) = \left\{ i \in \mathcal{I} : \bm{H}_{j,i}
= 1 \right\}$.
%
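The neighborhood definitions in this hunk translate directly into code. A minimal Python sketch with an illustrative toy parity-check matrix (note the indices here are 0-based, whereas the text uses the 1-based index sets $\mathcal{I}$ and $\mathcal{J}$):

```python
import numpy as np

# Toy parity-check matrix (illustrative, not from the thesis).
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
])
m, n = H.shape

def vn_neighborhood(i):
    """Neighborhood of VN i: the check nodes j with H[j, i] = 1."""
    return {j for j in range(m) if H[j, i] == 1}

def cn_neighborhood(j):
    """Neighborhood of CN j: the variable nodes i with H[j, i] = 1."""
    return {i for i in range(n) if H[j, i] == 1}

print(vn_neighborhood(1))  # {0, 1}: bit 1 participates in checks 0 and 1
print(cn_neighborhood(0))  # {0, 1, 3}: check 0 involves bits 0, 1, 3
```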
@@ -385,12 +384,12 @@ Broadly, there are two kinds of \ac{ldpc} codes, \textit{regular} and
Regular codes are characterized by the fact that the weights, i.e.,
the numbers of ones, of their rows and columns are constant
\cite[Sec.~5.1.1]{ryan_channel_2009}.
-Already during their introduction, regular \ac{ldpc} codes where shown to have
+Already during their introduction, regular \ac{ldpc} codes were shown to have
a minimum distance scaling linearly with the block length $n$ for
large values \cite[Ch.~2,~Theorem~1]{gallager_low_1960},
which leads to them not exhibiting an error floor under \ac{ml} decoding.
Irregular codes, on the other hand, generally do exhibit an error floor,
-their redeming quality being the ability to reach near-capacity
+their redeeming quality being the ability to reach near-capacity
performance in the waterfall region \cite[Intro.]{costello_spatially_2014}.
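The row/column-weight characterization of regular codes is easy to check mechanically. A small Python sketch, with an assumed toy $(2,4)$-regular matrix (column weight 2, row weight 4) rather than a code from the thesis:

```python
import numpy as np

def is_regular(H):
    """A parity-check matrix is (j, k)-regular when every column has
    the same weight j and every row has the same weight k."""
    col_weights = H.sum(axis=0).tolist()
    row_weights = H.sum(axis=1).tolist()
    return len(set(col_weights)) == 1 and len(set(row_weights)) == 1

# Toy (2,4)-regular matrix: every column has weight 2, every row weight 4.
H = np.array([
    [1, 1, 1, 0, 0, 0, 1, 0],
    [1, 0, 0, 1, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
])
print(is_regular(H))  # True
```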
\subsection{Spatially-Coupled LDPC Codes}
@@ -532,7 +531,7 @@ This is precisely the effect that leads to the good performance of
% Introduction
\ac{ldpc} codes are generally decoded using efficient iterative
-algorithms, something that is possilbe due to their sparsity
+algorithms, something that is possible due to their sparsity
\cite[Sec.~5.3]{ryan_channel_2009}.
The algorithm originally proposed alongside LDPC codes for this
purpose by Gallager in 1960 is now known as the \ac{spa}
@@ -544,7 +543,7 @@ The core idea of the resulting algorithm is to view \acp{cn} as
representing single-parity check codes and \acp{vn} as representing
repetition codes.
The algorithm alternates between consolidating soft information about
-the \acp{vn} in the \acp{cn}, and consolidating soft information abou
+the \acp{vn} in the \acp{cn}, and consolidating soft information about
the \acp{cn} in the \acp{vn}.
To this end, messages are passed back and forth along the edges of
the Tanner graph.
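The alternating CN/VN message passing described in this hunk can be sketched as a minimal LLR-domain sum-product decoder in Python. The parity-check matrix and channel LLRs below are illustrative assumptions, and a real decoder would also add a syndrome-based stopping criterion:

```python
import numpy as np

# Toy parity-check matrix (illustrative, not from the thesis).
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
])
m, n = H.shape

def spa_decode(llr_channel, iterations=10):
    """LLR-domain sum-product decoding on the Tanner graph of H."""
    msg_v2c = H * llr_channel       # VN -> CN messages, init to channel LLRs
    msg_c2v = np.zeros((m, n))
    for _ in range(iterations):
        # CN update (tanh rule): each outgoing message excludes the
        # message that arrived on the same edge (extrinsic principle).
        for j in range(m):
            idx = np.flatnonzero(H[j])
            t = np.tanh(msg_v2c[j, idx] / 2)
            for k, i in enumerate(idx):
                msg_c2v[j, i] = 2 * np.arctanh(np.prod(np.delete(t, k)))
        # VN update: channel LLR plus all incoming CN messages,
        # again excluding the message on the outgoing edge.
        totals = llr_channel + msg_c2v.sum(axis=0)
        msg_v2c = H * (totals - msg_c2v)
    # Hard decision on the consolidated per-bit LLRs.
    totals = llr_channel + msg_c2v.sum(axis=0)
    return (totals < 0).astype(int)

llrs = np.array([2.0, 2.0, -0.5, 2.0, 2.0, 2.0])  # bit 2 weakly flipped
print(spa_decode(llrs))  # [0 0 0 0 0 0]: the weak bit is corrected
```

The two inner loops are exactly the two consolidation steps described above: the CN update treats each check as a single-parity-check code, and the VN update treats each bit as a repetition code.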