these topics and subsequently introduces the fundamentals of \ac{qec}.
% TODO: Maybe rephrase: The core concept is not the realization, it's the
% thing itself
The core concept underpinning error correcting codes is the
realization that the introduction of a finite amount of redundancy
to information before its transmission can lead to a considerably
reduced error rate.
Specifically, Shannon proved in 1948 that for any channel, a block
code can be found that achieves arbitrarily small probability of
error at any communication rate up to the capacity of the channel
when the block length approaches infinity \cite{shannon_mathematical_1948}.

In this section, we explore the concepts of ``classical'' (as in non-quantum)
error correction that are central to this work.
This notion of the distance between two codewords $\bm{x}_1$ and
$\bm{x}_2$ can be expressed using the \textit{Hamming distance} $d(\bm{x}_1,
\bm{x}_2)$, which is defined as the number of positions in which they differ.
We define the \textit{minimum distance} of a code $\mathcal{C}$ as
%
\begin{align*}
d_\text{min} = \min \left\{ d(\bm{x}_1, \bm{x}_2) : \bm{x}_1,
\bm{x}_2 \in \mathcal{C}, \bm{x}_1 \neq \bm{x}_2 \right\}
.
\end{align*}
%
We can signify that a binary linear block code has information length
$k$, block length $n$ and minimum distance $d_\text{min}$ using the
notation $[n,k,d_\text{min}]$ \cite[Sec. 1.3]{macwilliams_theory_1977}.
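As a concrete illustration (a sketch added here, not drawn from the source), the minimum distance of a small code can be computed by brute force over all pairs of codewords; for the $[3,1,3]$ repetition code, whose only codewords are $000$ and $111$, this yields $d_\text{min} = 3$:

```python
from itertools import combinations

def hamming_distance(x1, x2):
    """Number of positions in which two words of equal length differ."""
    return sum(a != b for a, b in zip(x1, x2))

def minimum_distance(code):
    """Brute-force d_min over all distinct pairs of codewords."""
    return min(hamming_distance(x1, x2) for x1, x2 in combinations(code, 2))

# The [3,1,3] repetition code has exactly two codewords.
repetition_code = [(0, 0, 0), (1, 1, 1)]
print(minimum_distance(repetition_code))  # -> 3
```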
We can arrange the coefficients of these equations in the
\textit{parity-check matrix} (\acs{pcm}) $\bm{H} \in
\mathbb{F}_2^{(n-k) \times n}$ and equivalently define the code as
\cite[Sec. 3.1]{ryan_channel_2009}
%
\begin{align*}
\mathcal{C} = \left\{ \bm{x} \in \mathbb{F}_2^n :
\bm{H}\bm{x}^\text{T} = \bm{0} \right\}
.%
\end{align*}
%
We denote the number of parity checks, i.e., the number of rows of
$\bm{H}$, by $m = n - k$.
The \textit{syndrome} $\bm{s} = \bm{H} \bm{v}^\text{T} \in \mathbb{F}_2^m$ describes
which parity checks a candidate codeword $\bm{v} \in \mathbb{F}_2^n$ violates.
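To illustrate the syndrome computation (an added sketch, not part of the original text), we can evaluate $\bm{s} = \bm{H}\bm{v}^\text{T}$ over $\mathbb{F}_2$ in plain Python, using the \ac{pcm} of the [7,4,3]-Hamming code that appears later in this section. A valid codeword yields the all-zero syndrome; flipping a single bit reproduces the corresponding column of $\bm{H}$:

```python
def syndrome(H, v):
    """Compute s = H v^T over F_2 (all arithmetic modulo 2)."""
    return [sum(h * x for h, x in zip(row, v)) % 2 for row in H]

# PCM of the [7,4,3]-Hamming code (same H as in the Tanner graph figure).
H = [
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
]

codeword = [1, 1, 1, 0, 0, 0, 0]    # satisfies all three parity checks
print(syndrome(H, codeword))         # -> [0, 0, 0]

corrupted = [1, 1, 0, 0, 0, 0, 0]    # bit x_3 flipped
print(syndrome(H, corrupted))        # -> [1, 1, 0], the third column of H
```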
The representation using the \ac{pcm} has the benefit of providing a
$\bm{y} \in \mathbb{F}_2^n$ \cite[Sec. 1.5.1.3]{ryan_channel_2009}.

\subsection{Low-Density Parity-Check Codes}

%
% Core concept
%

Shannon's noisy-channel coding theorem is stated for codes whose block
length approaches infinity. This suggests that as the block length
becomes larger, the performance of the considered codes should
generally improve.
However, the size of the \ac{pcm}, and thus in general the decoding complexity,
of a linear block code grows quadratically with $n$.
This would quickly render decoding intractable as we increase the block length.
We can get around this problem by constructing $\bm{H}$ in such a
manner that the number of nonzero entries grows less than quadratically, e.g.,
only linearly.
This is exactly the motivation behind \ac{ldpc} codes \cite[Ch.
1]{gallager_low_1960}.
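To make the scaling argument concrete (an illustrative sketch with assumed parameters; the $(3,6)$-regular construction at rate $1/2$ is a common choice, not one prescribed by this text): if every column of $\bm{H}$ contains a fixed number $d_v$ of ones, the number of nonzero entries is $d_v n$ and grows only linearly, while the total number of entries $(n-k)n$ grows quadratically at a fixed code rate:

```python
def pcm_entry_counts(n, rate=0.5, d_v=3):
    """Total vs. nonzero entries of a column-regular LDPC PCM.

    Assumes m = (1 - rate) * n parity checks and exactly d_v ones
    per column (e.g., a (3, 6)-regular code at rate 1/2).
    """
    m = round((1 - rate) * n)
    total = m * n        # grows quadratically with n
    nonzero = d_v * n    # grows only linearly with n
    return total, nonzero

for n in (100, 1000, 10000):
    total, nonzero = pcm_entry_counts(n)
    print(f"n = {n:>5}: {total:>9} entries, {nonzero:>6} nonzero")
```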

%
% Tanner Graph, VNs and CNs
%

\ac{ldpc} codes belong to a class sometimes referred to as ``modern codes''.
These differ from ``classical codes'' in their decoding algorithm:
Classical codes are usually decoded using one-step hard-decision decoding,
whereas modern codes are suitable for iterative soft-decision
decoding \cite[Preface]{ryan_channel_2009}. The iterative decoding algorithms
in question are generally defined in terms of message passing on the
\textit{Tanner graph} of the code. The Tanner graph is a bipartite
graph that constitutes an alternative representation of the \ac{pcm}.
We define two types of nodes: \acp{vn}, corresponding to codeword
bits, and \acp{cn}, corresponding to individual parity checks.
We then construct the Tanner graph by connecting each \ac{cn} to
the \acp{vn} that make up the corresponding parity check \cite[Ch.
5]{ryan_channel_2009}.
Figure \ref{PCM and Tanner graph of the Hamming code} shows this
construction for the [7,4,3]-Hamming code.
%
\begin{figure}[H]
\centering

\begin{align*}
\bm{H} =
\begin{pmatrix}
0 & 1 & 1 & 1 & 1 & 0 & 0 \\
1 & 0 & 1 & 1 & 0 & 1 & 0 \\
1 & 1 & 0 & 1 & 0 & 0 & 1 \\
\end{pmatrix}
\end{align*}

\vspace*{2mm}

\tikzset{
VN/.style={
circle, fill=KITgreen, minimum width=1mm, minimum height=1mm,
},
CN/.style={
rectangle, fill=KITblue, minimum width=1mm, minimum height=1mm,
},
}

\begin{tikzpicture}
\node[VN, label=above:$x_1$] (vn1) {};
\node[VN, right=12mm of vn1, label=above:$x_2$] (vn2) {};
\node[VN, right=12mm of vn2, label=above:$x_3$] (vn3) {};
\node[VN, right=12mm of vn3, label=above:$x_4$] (vn4) {};
\node[VN, right=12mm of vn4, label=above:$x_5$] (vn5) {};
\node[VN, right=12mm of vn5, label=above:$x_6$] (vn6) {};
\node[VN, right=12mm of vn6, label=above:$x_7$] (vn7) {};

\node[
CN, below=25mm of vn4,
label={below:$x_1 + x_3 + x_4 + x_6 = 0$}
] (cn2) {};
\node[
CN, left=40mm of cn2,
label={below:$x_2 + x_3 + x_4 + x_5 = 0$}
] (cn1) {};
\node[
CN, right=40mm of cn2,
label={below:$x_1 + x_2 + x_4 + x_7 = 0$}
] (cn3) {};

\foreach \n in {2,3,4,5} {
\draw (cn1) -- (vn\n);
}

\foreach \n in {1,3,4,6} {
\draw (cn2) -- (vn\n);
}

\foreach \n in {1,2,4,7} {
\draw (cn3) -- (vn\n);
}
\end{tikzpicture}

\caption{The \ac{pcm} and corresponding Tanner graph of the
[7,4,3]-Hamming code.}
\label{PCM and Tanner graph of the Hamming code}
\end{figure}

%
% N_V(i), N_C(j)
%

Mathematically, we represent a \ac{vn} using the index $i \in
\mathcal{I} := \left[ 1 : n \right]$ and a \ac{cn} using the index
$j \in \mathcal{J} := \left[ 1 : m \right]$.
We can then encode the information contained in the graph by defining
the neighborhood of a variable node $i$ as
$\mathcal{N}_\text{V} (i) = \left\{ j \in \mathcal{J} : \bm{H}_{j,i}
= 1 \right\}$
and that of a check node $j$ as
$\mathcal{N}_\text{C} (j) = \left\{ i \in \mathcal{I} : \bm{H}_{j,i} = 1 \right\}$.
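The neighborhood sets can be read off directly from the \ac{pcm}; the following Python sketch (added for illustration, using 1-based indices as above) computes them for the Hamming-code $\bm{H}$ of Figure \ref{PCM and Tanner graph of the Hamming code}:

```python
def neighborhoods(H):
    """Return N_V(i) for every VN i and N_C(j) for every CN j (1-based)."""
    m, n = len(H), len(H[0])
    N_V = {i: {j for j in range(1, m + 1) if H[j - 1][i - 1] == 1}
           for i in range(1, n + 1)}
    N_C = {j: {i for i in range(1, n + 1) if H[j - 1][i - 1] == 1}
           for j in range(1, m + 1)}
    return N_V, N_C

# PCM of the [7,4,3]-Hamming code from the figure above.
H = [
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
]

N_V, N_C = neighborhoods(H)
print(N_C[1])  # -> {2, 3, 4, 5}: the VNs in the first parity check
print(N_V[4])  # -> {1, 2, 3}: x_4 takes part in every parity check
```

Note that the edges produced this way match the figure: check node 1 connects exactly to $x_2, x_3, x_4, x_5$.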

\red{
\begin{itemize}
\item Use \cite[Ch. 5]{ryan_channel_2009} as a reference
\item Core concept (Large $n$ with manageable complexity)
\item Tanner graphs, VNs and CNs
\item Cycles (? - Only if needed later)
\item Regular vs irregular (? - only if needed later)
\end{itemize}
}