This chapter provides the relevant theoretical background on both of
these topics and subsequently introduces the fundamentals of \ac{qec}.

% TODO: Is an explanation of BP with guided decimation needed in this chapter?
% TODO: Is an explanation of OSD needed in this chapter?

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Classical Error Correction}
\label{sec:Classical Error Correction}

We approach this topic by first considering binary linear block codes
in general and then \ac{ldpc} codes in particular.
Finally, we turn to the decoding process, specifically the \ac{bp}
algorithm.

% TODO: Is an explanation of BP with guided decimation needed here?
% TODO: Is an explanation of OSD needed here?

% TODO: Use subsubsections?
\subsection{Binary Linear Block Codes}

\red{
	\begin{itemize}
		\item Note that binary linear codes are not the only coding
			scheme out there
	\end{itemize}
}
%
% Codewords, n, k, rate
%

One particularly important class of coding schemes is that of binary
linear block codes.
The information to be protected takes the form of a sequence of
binary symbols, which is split into separate blocks.
Each block is encoded, transmitted, and decoded separately.
The encoding step introduces redundancy by mapping input messages
$\bm{u} \in \mathbb{F}_2^k$ of length $k \in \mathbb{N}$ (called the
\textit{information length}) onto \textit{codewords} $\bm{x} \in
\mathbb{F}_2^n$ of length $n \in \mathbb{N}$ (called the
\textit{block length}) with $n > k$.
A measure of the amount of introduced redundancy is the \textit{code
rate} $R = k/n$.
We call the set of all codewords $\mathcal{C}$ the \textit{code}
\cite[Section 3.1]{ryan_channel_2009}.
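As a minimal running example (ours, not taken from the cited text),
consider the threefold repetition code with $k = 1$ and $n = 3$, which
encodes $0 \mapsto 000$ and $1 \mapsto 111$:
\begin{align*}
	\mathcal{C} = \left\{ 000, 111 \right\},
	\qquad
	R = \frac{k}{n} = \frac{1}{3}
	.
\end{align*}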

%
% d_min and the [] Notation
%

During the encoding process, a mapping from $\mathbb{F}_2^k$
onto $\mathcal{C} \subset \mathbb{F}_2^n$ takes place.
% TODO: Do I need a specific reference for the expanded vector space claim?
The input messages are mapped onto an expanded vector space, where
they are ``further apart'', giving rise to the error-correcting
properties of the code.
This notion of the distance between two codewords $\bm{x}_1$ and
$\bm{x}_2$ can be expressed using the \textit{Hamming distance} $d(\bm{x}_1,
\bm{x}_2)$, which is defined as the number of positions in which they differ.
We define the \textit{minimum distance} of a code $\mathcal{C}$ as
\begin{align*}
	d_\text{min} = \min \left\{ d(\bm{x}_1, \bm{x}_2) : \bm{x}_1,
	\bm{x}_2 \in \mathcal{C}, \bm{x}_1 \neq \bm{x}_2 \right\}
	.
\end{align*}
We can signify that a binary linear block code has information length
$k$, block length $n$, and minimum distance $d_\text{min}$ using the
notation $[n,k,d_\text{min}]$ \cite[Section 1.3]{macwilliams_theory_1977}.
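Continuing the repetition code example, the only pair of distinct
codewords is $000$ and $111$, which differ in all three positions, so
\begin{align*}
	d_\text{min} = d(000, 111) = 3
	,
\end{align*}
and the code is a $[3,1,3]$ code in this notation.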

%
% Parity checks, H, and the syndrome
%

A particularly elegant way of describing the subspace $\mathcal{C}$ of
$\mathbb{F}_2^n$ that the codewords make up is the notion of
\textit{parity checks}.
Since $\lvert \mathcal{C} \rvert = 2^k$ and $\lvert \mathbb{F}_2^n
\rvert = 2^n$, we can introduce $n-k$ conditions to constrain the
additional degrees of freedom.
These conditions, called parity checks, take the form of linear equations
over $\mathbb{F}_2$, linking the individual positions of each codeword.
We can arrange the coefficients of these equations in the
\textit{parity check matrix} (\acs{pcm}) $\bm{H} \in
\mathbb{F}_2^{(n-k) \times n}$ and equivalently define the code as
\cite[Section 3.1]{ryan_channel_2009}
\begin{align*}
	\mathcal{C} = \left\{ \bm{x} \in \mathbb{F}_2^n :
	\bm{H}\bm{x}^\text{T} = \bm{0} \right\}
	.%
\end{align*}
The \textit{syndrome} $\bm{s} = \bm{H} \bm{v}^\text{T}$ describes
which parity checks a candidate codeword $\bm{v} \in \mathbb{F}_2^n$ violates.
The representation using the \ac{pcm} has the benefit of providing a
description of the code whose memory complexity does not grow
exponentially with $n$, in contrast to keeping track of all codewords directly.
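To illustrate, one possible \ac{pcm} for the $[3,1,3]$ repetition code is
\begin{align*}
	\bm{H} =
	\begin{pmatrix}
		1 & 1 & 0 \\
		0 & 1 & 1
	\end{pmatrix}
	,
\end{align*}
whose $n - k = 2$ rows correspond to the parity checks $x_1 \oplus x_2 = 0$
and $x_2 \oplus x_3 = 0$.
The corrupted word $\bm{v} = (1, 0, 1)$ yields the syndrome
$\bm{s} = \bm{H}\bm{v}^\text{T} = (1, 1)^\text{T}$, signaling that both
checks are violated, whereas both codewords yield $\bm{s} = \bm{0}$.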

%
% The decoding problem
%

Figure \ref{fig:Diagram of a transmission system} visualizes the
entire communication process \cite[Section 1.1]{ryan_channel_2009}.
An input message $\bm{u}\in \mathbb{F}_2^k$ is mapped onto a codeword $\bm{x}
\in \mathbb{F}_2^n$. This is passed on to a modulator, which
interacts with the physical channel.
A demodulator processes the received message and forwards the result
$\bm{y} \in \mathbb{R}^n$ to a decoder.
Finally, the decoder is responsible for obtaining an estimate
$\hat{\bm{u}} \in \mathbb{F}_2^k$ of the original input message from the
received message.

For linear codes, the encoding step can be described as%
%
\begin{align*}
	\bm{x} = \bm{u}\bm{G}
	,
\end{align*}%
%
using a \textit{generator matrix} $\bm{G} \in \mathbb{F}_2^{k\times n}$.
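For the repetition code example, one valid choice is
$\bm{G} = \begin{pmatrix} 1 & 1 & 1 \end{pmatrix}$, so that, e.g., the
message $\bm{u} = (1)$ is encoded as $\bm{x} = \bm{u}\bm{G} = (1, 1, 1)$.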

To obtain $\hat{\bm{u}}$, the decoder first finds an estimate $\hat{\bm{x}}$
of the sent codeword and then undoes the encoding.
The decoding problem that we generally attempt to solve thus consists
in finding the best estimate $\hat{\bm{x}}$ given $\bm{y}$.
One approach is to use the \ac{ml} criterion \cite[Section
1.4]{ryan_channel_2009}
\begin{align*}
	\hat{\bm{x}}_\text{ML} = \arg\max_{\bm{x} \in \mathcal{C}}
	P(\bm{Y} = \bm{y} \vert \bm{X} = \bm{x})
	.
\end{align*}
Finally, we differentiate between \textit{soft decision} decoding, where
$\bm{y} \in \mathbb{R}^n$, and \textit{hard decision} decoding, where
$\bm{y} \in \mathbb{F}_2^n$ \cite[Section 1.5.1.3]{ryan_channel_2009}.
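
As a concrete special case (stated here for illustration), consider hard
decision decoding over a binary symmetric channel that flips each bit
independently with probability $p < 1/2$.
Then
\begin{align*}
	P(\bm{Y} = \bm{y} \vert \bm{X} = \bm{x})
	= p^{d(\bm{x}, \bm{y})} \left( 1 - p \right)^{n - d(\bm{x}, \bm{y})}
	,
\end{align*}
which is maximized by the codeword with the smallest Hamming distance to
$\bm{y}$, so the \ac{ml} criterion reduces to minimum distance decoding.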

\begin{figure}[h]
	\centering

	\tikzset{
		box/.style={
			rectangle, draw=black, minimum width=17mm, minimum height=8mm,
		},
	}

	\begin{tikzpicture}
		[
			node distance = 2mm and 7mm,
		]
		\node (in) {};
		\node[box, right=of in] (enc) {Encoder};
		\node[box, minimum width=23mm, right=of enc] (mod) {Modulator};
		\node[box, below right=of mod] (cha) {Channel};
		\node[box, minimum width=23mm, below left=of cha] (dem) {Demodulator};
		\node[box, left=of dem] (dec) {Decoder};
		\node[left=of dec] (out) {};

		\draw[-{latex}] (in) -- (enc) node[midway, above] {$\bm{u}$};
		\draw[-{latex}] (enc) -- (mod) node[midway, above] {$\bm{x}$};
		\draw[-{latex}] (mod) -| (cha);
		\draw[-{latex}] (cha) |- (dem);
		\draw[-{latex}] (dem) -- (dec) node[midway, above] {$\bm{y}$};
		\draw[-{latex}] (dec) -- (out) node[midway, above] {$\hat{\bm{u}}$};
	\end{tikzpicture}

	\caption{Overview of a transmission system.}
	\label{fig:Diagram of a transmission system}
\end{figure}
%

\red{
	\textbf{Topics to cover}
	\begin{itemize}
		\item Parity checks and describing a code by $H$ rather than
			$\mathcal{C}$
		\item G, H, notation
		\item Minimum distance, [] - notation
		\item The syndrome
		\item The decoding problem
		\item Soft vs. Hard information
	\end{itemize}
}

\red{
	\textbf{General Notes:}
	\begin{itemize}
		\item Make sure all coding concepts used later on have been
			introduced (e.g., code rate?)
	\end{itemize}
}
%
% Hard vs. soft information
%

\subsection{Low-Density Parity-Check Codes}