Finish first draft of binary linear block codes

2026-03-22 22:11:35 +01:00
parent e5dc0bc074
commit 29968c8c4d
3 changed files with 156 additions and 73 deletions


@@ -1434,3 +1434,28 @@ We study the performance of medium-length quantum LDPC (QLDPC) codes in the depo
pages = {379--423},
}
@book{ryan_channel_2009,
title = {Channel {Codes}: {Classical} and {Modern}},
shorttitle = {Channel {Codes}},
isbn = {978-1-139-48301-8},
language = {en},
publisher = {Cambridge University Press},
author = {Ryan, William and Lin, Shu},
month = sep,
year = {2009},
}
@book{macwilliams_theory_1977,
title = {The {Theory} of {Error}-correcting {Codes}},
isbn = {978-0-444-85010-2},
language = {en},
publisher = {Elsevier},
author = {MacWilliams, Florence Jessie and Sloane, Neil James Alexander},
year = {1977},
}


@@ -17,3 +17,13 @@
short=LDPC,
long=low-density parity-check
}
\DeclareAcronym{ml}{
short=ML,
long=maximum likelihood
}
\DeclareAcronym{pcm}{
short=PCM,
long=parity-check matrix
}


@@ -6,6 +6,8 @@ communications engineering and quantum information science.
This chapter provides the relevant theoretical background on both of
these topics and subsequently introduces the fundamentals of \ac{qec}.
% TODO: Is an explanation of BP with guided decimation needed in this chapter?
% TODO: Is an explanation of OSD needed in this chapter?
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Classical Error Correction}
\label{sec:Classical Error Correction}
@@ -28,98 +30,144 @@ first considering binary linear block codes in general and then \ac{ldpc} and
Finally, we pivot to the decoding process, specifically the \ac{bp}
algorithm.
% TODO: Is an explanation of BP with guided decimation needed here?
% TODO: Is an explanation of OSD needed here?
% TODO: Use subsubsections?
\subsection{Binary Linear Block Codes}
\red{
\begin{itemize}
\item Note that binary linear codes are not the only coding
scheme out there
\end{itemize}
}
%
% Codewords, n, k, rate
%
% TODO: Do I need a specific reference for the expanded vector space idea?
One particularly important class of coding schemes is that of binary
linear block codes.
The information to be protected takes the form of a sequence of
binary symbols, which is split into separate blocks.
Each block is encoded, transmitted, and decoded separately.
The encoding step introduces redundancy by mapping input messages
$\bm{u} \in \mathbb{F}_2^k$ of length $k \in \mathbb{N}$ (called the
\textit{information length}) onto \textit{codewords} $\bm{x} \in
\mathbb{F}_2^n$ of length $n \in \mathbb{N}$ (called the
\textit{block length}) with $n > k$.
A measure of the amount of introduced redundancy is the \textit{code
rate} $R = k/n$.
We call the set of all codewords $\mathcal{C}$ the \textit{code}
\cite[Section 3.1]{ryan_channel_2009}.
%
% d_min and the [] Notation
%
During the encoding process, a mapping from $\mathbb{F}_2^k$
onto $\mathcal{C} \subset \mathbb{F}_2^n$ takes place.
The input messages are mapped onto an expanded vector space, where
they are ``further apart'', giving rise to the error-correcting
properties of the code.
This notion of the distance between two codewords $\bm{x}_1$ and
$\bm{x}_2$ can be expressed using the \textit{Hamming distance} $d(\bm{x}_1,
\bm{x}_2)$, which is defined as the number of positions in which they differ.
We define the \textit{minimum distance} of a code $\mathcal{C}$ as
\begin{align*}
d_\text{min} = \min \left\{ d(\bm{x}_1, \bm{x}_2) : \bm{x}_1,
\bm{x}_2 \in \mathcal{C}, \bm{x}_1 \neq \bm{x}_2 \right\}
.
\end{align*}
We can signify that a binary linear block code has information length
$k$, block length $n$ and minimum distance $d_\text{min}$ using the
notation $[n,k,d_\text{min}]$ \cite[Section 1.3]{macwilliams_theory_1977}.
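To make these definitions concrete, the minimum distance of a small code can be found by exhaustive enumeration. The following sketch (Python, illustrative only and not part of the thesis text) uses the well-known $[7,4]$ Hamming code; the generator matrix below is one standard choice and is assumed here as an example.

```python
from itertools import product

# Generator matrix G = [I_4 | P] of the [7,4] Hamming code (assumed example).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
n, k = 7, 4

def encode(u):
    # x = uG over F_2
    return tuple(sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n))

def hamming_distance(x1, x2):
    # Number of positions in which x1 and x2 differ.
    return sum(a != b for a, b in zip(x1, x2))

# Enumerate all 2^k codewords; feasible only for small k.
codewords = [encode(u) for u in product([0, 1], repeat=k)]

# Exhaustive minimum distance over all distinct codeword pairs.
d_min = min(hamming_distance(x1, x2)
            for x1 in codewords for x2 in codewords if x1 != x2)
print(d_min)  # 3, so this is a [7,4,3] code with rate R = 4/7
```

Note that, by linearity, the same value is obtained as the minimum Hamming weight of the nonzero codewords, which halves the work in practice.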
%
% Parity checks, H, and the syndrome
%
A particularly elegant way of describing the subspace $\mathcal{C}$ of
$\mathbb{F}_2^n$ that the codewords make up is the notion of
\textit{parity checks}.
Since $\lvert \mathcal{C} \rvert = 2^k$ and $\lvert \mathbb{F}_2^n
\rvert = 2^n$, we can introduce $n-k$ linearly independent conditions
to constrain the additional degrees of freedom.
These conditions, called parity checks, take the form of equations
over $\mathbb{F}_2^n$, linking the individual positions of each codeword.
We can arrange the coefficients of these equations in the
\textit{parity-check matrix} (\acs{pcm}) $\bm{H} \in
\mathbb{F}_2^{(n-k) \times n}$ and equivalently define the code as
\cite[Section 3.1]{ryan_channel_2009}
\begin{align*}
\mathcal{C} = \left\{ \bm{x} \in \mathbb{F}_2^n :
\bm{H}\bm{x}^\text{T} = \bm{0} \right\}
.%
\end{align*}
The \textit{syndrome} $\bm{s} = \bm{H} \bm{v}^\text{T}$ describes
which parity checks a candidate codeword $\bm{v} \in \mathbb{F}_2^n$ violates.
The representation using the \ac{pcm} has the benefit of providing a
description of the code whose memory requirements do not grow
exponentially with $n$, in contrast to keeping track of all codewords directly.
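As a concrete illustration, the syndrome can be computed directly from $\bm{H}$. The sketch below (Python, illustrative only) assumes a parity-check matrix $\bm{H} = [\bm{P}^\text{T} \,\vert\, \bm{I}_3]$ of the $[7,4]$ Hamming code as an example: a valid codeword yields the all-zero syndrome, while a single bit flip yields the corresponding column of $\bm{H}$.

```python
# Parity-check matrix H = [P^T | I_3] of the [7,4] Hamming code (assumed example).
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def syndrome(v):
    # s = H v^T over F_2; the all-zero syndrome means v satisfies every parity check.
    return tuple(sum(h[j] * v[j] for j in range(len(v))) % 2 for h in H)

x = (1, 0, 0, 0, 1, 1, 0)        # a valid codeword (first row of G = [I_4 | P])
print(syndrome(x))               # (0, 0, 0)

e = (0, 0, 1, 0, 0, 0, 0)        # a single bit flip at position 2
v = tuple(xi ^ ei for xi, ei in zip(x, e))
print(syndrome(v))               # (0, 1, 1), i.e., column 2 of H
```

Because the syndrome of $\bm{x} + \bm{e}$ equals $\bm{H}\bm{e}^\text{T}$, single-bit errors are identified by matching the syndrome against the columns of $\bm{H}$.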
%
% The decoding problem
%
Figure \ref{fig:Diagram of a transmission system} visualizes the
entire communication process \cite[Section 1.1]{ryan_channel_2009}.
An input message $\bm{u}\in \mathbb{F}_2^k$ is mapped onto a codeword $\bm{x}
\in \mathbb{F}_2^n$. This is passed on to a modulator, which
interacts with the physical channel.
A demodulator processes the received message and forwards the result
$\bm{y} \in \mathbb{R}^n$ to a decoder.
Finally, the decoder is responsible for obtaining an estimate
$\hat{\bm{u}} \in \mathbb{F}_2^k$ of the original input message from the
received message.
This is done by first finding an estimate $\hat{\bm{x}}$ of the sent
codeword and undoing the encoding.
The decoding problem that we generally attempt to solve thus consists
in finding the best estimate $\hat{\bm{x}}$ given $\bm{y}$.
One approach is to use the \ac{ml} criterion \cite[Section
1.4]{ryan_channel_2009}
\begin{align*}
\hat{\bm{x}}_\text{ML} = \arg\max_{\bm{x} \in \mathcal{C}}
P(\bm{Y} = \bm{y} \vert \bm{X} = \bm{x})
.
\end{align*}
Finally, we differentiate between \textit{soft-decision} decoding, where
$\bm{y} \in \mathbb{R}^n$, and \textit{hard-decision} decoding, where
$\bm{y} \in \mathbb{F}_2^n$ \cite[Section 1.5.1.3]{ryan_channel_2009}.
%
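For hard-decision decoding over a binary symmetric channel with crossover probability below $1/2$, the \ac{ml} criterion reduces to choosing the codeword at minimum Hamming distance from $\bm{y}$. The brute-force sketch below (Python, illustrative only; the $[7,4]$ Hamming code and its generator matrix are assumed as an example) makes this concrete.

```python
from itertools import product

# Generator matrix G = [I_4 | P] of the [7,4] Hamming code (assumed example).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
n, k = 7, 4

def encode(u):
    # x = uG over F_2
    return tuple(sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n))

def dist(a, b):
    # Hamming distance: number of differing positions.
    return sum(x != y for x, y in zip(a, b))

# Precompute the full codebook; feasible only for small k.
codebook = {u: encode(u) for u in product([0, 1], repeat=k)}

def decode_ml_hard(y):
    # On a BSC with crossover probability < 1/2, ML decoding is equivalent
    # to picking the codeword at minimum Hamming distance from y.
    u_hat, x_hat = min(codebook.items(), key=lambda item: dist(item[1], y))
    return u_hat, x_hat

u = (1, 0, 1, 1)
x = encode(u)
y = list(x)
y[5] ^= 1                        # a single transmission error
u_hat, x_hat = decode_ml_hard(tuple(y))
print(u_hat == u)                # True: one flipped bit is corrected, as d_min = 3
```

This exhaustive search scales as $2^k$ and is only meant to illustrate the criterion; practical decoders such as \ac{bp} avoid enumerating the codebook.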
\begin{figure}[h]
\centering
\tikzset{
box/.style={
rectangle, draw=black, minimum width=17mm, minimum height=8mm,
},
}
\begin{tikzpicture}
[
node distance = 2mm and 7mm,
]
\node (in) {};
\node[box, right=of in] (enc) {Encoder};
\node[box, minimum width=23mm, right=of enc] (mod) {Modulator};
\node[box, below right=of mod] (cha) {Channel};
\node[box, minimum width=23mm, below left=of cha] (dem) {Demodulator};
\node[box, left=of dem] (dec) {Decoder};
\node[left=of dec] (out) {};
\draw[-{latex}] (in) -- (enc) node[midway, above] {$\bm{u}$};
\draw[-{latex}] (enc) -- (mod) node[midway, above] {$\bm{x}$};
\draw[-{latex}] (mod) -| (cha);
\draw[-{latex}] (cha) |- (dem);
\draw[-{latex}] (dem) -- (dec) node[midway, above] {$\bm{y}$};
\draw[-{latex}] (dec) -- (out) node[midway, above] {$\hat{\bm{u}}$};
\end{tikzpicture}
\caption{Overview of a transmission system.}
\label{fig:Diagram of a transmission system}
\end{figure}
%
\red{
\textbf{General Notes:}
\begin{itemize}
\item Make sure all coding concepts used later on have been
introduced (e.g., code rate?)
\end{itemize}
}
\subsection{Low-Density Parity-Check Codes}