Add SC-LDPC Tanner Graph, fix qualitative LDPC plot, add decoding paragraph
@@ -1,4 +1,23 @@
 
+@article{dirac_new_1939,
+	title = {A new notation for quantum mechanics},
+	volume = {35},
+	issn = {1469-8064, 0305-0041},
+	url = {https://www.cambridge.org/core/journals/mathematical-proceedings-of-the-cambridge-philosophical-society/article/abs/new-notation-for-quantum-mechanics/4631DB9213D680D6332BA11799D76AFB},
+	doi = {10.1017/S0305004100021162},
+	abstract = {In mathematical theories the question of notation, while not of primary importance, is yet worthy of careful consideration, since a good notation can be of great value in helping the development of a theory, by making it easy to write down those quantities or combinations of quantities that are important, and difficult or impossible to write down those that are unimportant. The summation convention in tensor analysis is an example, illustrating how specially appropriate a notation can be.},
+	language = {en},
+	number = {3},
+	urldate = {2025-11-28},
+	journal = {Mathematical Proceedings of the Cambridge Philosophical Society},
+	author = {Dirac, P. A. M.},
+	month = jul,
+	year = {1939},
+	note = {TLDR: In mathematical theories the question of notation is yet worthy of careful consideration, since a good notation can be of great value in helping the development of a theory, by making it easy to write down those quantities or combinations of quantities that are important, and difficult or impossible to write down those that are unimportant.},
+	keywords = {/unread},
+	pages = {416--418},
+}
+
 @article{huang_improved_2023,
 	title = {Improved {Noisy} {Syndrome} {Decoding} of {Quantum} {LDPC} {Codes} with {Sliding} {Window}},
 	url = {http://arxiv.org/abs/2311.03307},
@@ -1505,7 +1524,7 @@ We study the performance of medium-length quantum LDPC (QLDPC) codes in the depo
 	month = may,
 	year = {2016},
 	note = {ISSN: 1938-1883},
-	keywords = {/unread, Block codes, Complexity theory, Decoding, Iterative decoding, Sparse matrices, Throughput},
+	keywords = {/unread, Decoding, Complexity theory, Iterative decoding, Block codes, Sparse matrices, Throughput},
 	pages = {1--6},
 	file = {Full Text PDF:/home/andreas/workspace/work/hiwi/Zotero/storage/TRN7GLTA/Hassan et al. - 2016 - Fully parallel window decoder architecture for spatially-coupled LDPC codes.pdf:application/pdf},
 }
@@ -1525,7 +1544,7 @@ We study the performance of medium-length quantum LDPC (QLDPC) codes in the depo
 	month = jul,
 	year = {2014},
 	note = {TLDR: This article reviews a particularly exciting new class of low-density parity check codes called spatially coupled codes, which promise excellent performance over a broad range of channel conditions and decoded error rate requirements.},
-	keywords = {/unread, Block codes, Convolutional codes, Decoding, Iterative decoding, Sparse matrices},
+	keywords = {/unread, Decoding, Iterative decoding, Block codes, Sparse matrices, Convolutional codes},
 	pages = {168--176},
 	file = {Full Text PDF:/home/andreas/workspace/work/hiwi/Zotero/storage/WH3R5BMN/Costello et al. - 2014 - Spatially coupled sparse codes on graphs theory and practice.pdf:application/pdf},
 }
 
@@ -122,24 +122,17 @@ An input message $\bm{u}\in \mathbb{F}_2^k$ is mapped onto a codeword $\bm{x}
 \in \mathbb{F}_2^n$. This is passed on to a modulator, which
 interacts with the physical channel.
 A demodulator processes the channel output and forwards the result
-$\bm{y} \in \mathbb{R}^n$ to a decoder.
+$\bm{y}$ to a decoder.
+We differentiate between \textit{soft-decision} decoding, where
+$\bm{y} \in \mathbb{R}^n$, and \textit{hard-decision} decoding, where
+$\bm{y} \in \mathbb{F}_2^n$ \cite[Sec.~1.5.1.3]{ryan_channel_2009}.
 Finally, the decoder is responsible for obtaining an estimate
 $\hat{\bm{u}} \in \mathbb{F}_2^k$ of the original input message.
 This is done by first finding an estimate $\hat{\bm{x}}$ of the sent
 codeword and undoing the encoding.
 The decoding problem that we generally attempt to solve thus consists
 in finding the best estimate $\hat{\bm{x}}$ given $\bm{y}$.
-One approach is to use the \ac{ml} criterion \cite[Sec.
-1.4]{ryan_channel_2009}
-\begin{align*}
-	\hat{\bm{u}}_\text{ML} = \arg\max_{\bm{x} \in \mathcal{C}}
-	P(\bm{Y} = \bm{y} \vert \bm{X} = \bm{x})
-	.
-\end{align*}
-Finally, we differentiate between \textit{soft-decision} decoding, where
-$\bm{y} \in \mathbb{R}^n$, and \textit{hard-decision} decoding, where
-$\bm{y} \in \mathbb{F}_2^n$ \cite[Sec.~1.5.1.3]{ryan_channel_2009}.
 %
 
 \begin{figure}[t]
 	\centering
 
@@ -186,7 +179,7 @@ $\bm{y} \in \mathbb{F}_2^n$ \cite[Sec.~1.5.1.3]{ryan_channel_2009}.
 
 Shannon's noisy-channel coding theorem is stated for codes whose block
 length approaches infinity. This suggests that as the block length
-becomes larger, the performance of the considered condes should
+becomes larger, the performance of the considered codes should
 generally improve.
 However, the size of the \ac{pcm}, and thus in general the decoding complexity,
 of a linear block code grows quadratically with $n$.
@@ -316,9 +309,10 @@ qualitative performance characteristic of an \ac{ldpc} code
 	\begin{axis}[
 		width=12cm,
 		height=9cm,
-		xlabel={$E_b/N_0$ (dB)},
-		ylabel={\ac{ber}},
-		xmin=0, xmax=6,
+		xlabel={Signal-to-noise ratio},
+		ylabel={Error rate},
+		% xmin=0, xmax=6,
 		enlarge x limits=false,
 		ymin=1e-9, ymax=1,
+		ticks=none,
 		% y tick label={},
@@ -330,57 +324,51 @@ qualitative performance characteristic of an \ac{ldpc} code
 		legend cell align={left},
 	]
 
-	\addplot+[mark=none, solid, thick, smooth] coordinates {
-		(0.0, 1.2e-1)
-		(0.3, 1.1e-1)
-		(0.5, 9e-2)
-		(0.7, 5e-2)
-		(0.8, 2e-2)
-		(0.9, 5e-3)
-		(1.0, 8e-4)
-		(1.1, 1e-4)
-		(1.2, 1.5e-5)
-		(1.3, 3e-6)
-		(1.4, 5e-7)
-		(1.5, 8e-8)
-		(1.6, 2e-8)
-		(1.8, 8e-9)
-		(2.0, 5e-9)
-		(2.5, 3e-9)
-		(3.0, 2e-9)
+	\addplot+[mark=none, solid, smooth, KITblue] coordinates {
+		(4.5789E-01, 1.1821E-01)
+		(6.6842E-01, 9.4575E-02)
+		(8.6316E-01, 5.2657E-02)
+		(1.0421E+00, 2.2183E-02)
+		(1.1789E+00, 8.3588E-03)
+		(1.3368E+00, 1.4835E-03)
+		(1.4895E+00, 1.6852E-04)
+		(1.5842E+00, 2.8285E-05)
+		(1.6737E+00, 4.2465E-06)
+		(1.7684E+00, 3.4519E-07)
+		(1.8316E+00, 3.9213E-08)
+		(1.8684E+00, 6.2247E-09)
+		(1.9053E+00, 1E-09)
 	};
-	\addlegendentry{Regular LDPC-BC}
+	\addlegendentry{Regular}
 
-	\addplot+[mark=none, solid, thick, smooth] coordinates {
-		(0.0, 1.5e-1)
-		(0.3, 1.4e-1)
-		(0.5, 1.2e-1)
-		(0.6, 1.0e-1)
-		(0.7, 6e-2)
-		(0.8, 1e-2)
-		(0.85, 2e-3)
-		(0.9, 2e-4)
-		(0.95, 2e-5)
-		(1.0, 1.5e-6)
-		(1.05, 1e-7)
-		(1.1, 1e-8)
-		(1.2, 3e-9)
-		(1.5, 1.5e-9)
-		(2.0, 1e-9)
-		(2.5, 8e-10)
-		(3.0, 6e-10)
+	\addplot+[mark=none, solid, smooth, KITorange] coordinates {
+		(4.5789E-01, 1.1821E-01)
+		(6.4211E-01, 4.9800E-02)
+		(7.5263E-01, 1.2700E-02)
+		(8.1579E-01, 2.3177E-03)
+		(8.6842E-01, 3.5779E-04)
+		(9.1053E-01, 5.3716E-05)
+		(9.4737E-01, 4.8818E-06)
+		(9.8947E-01, 6.5555E-07)
+		(1.0421E+00, 9.5713E-08)
+		% (1.0684E+00, 2.9670E-08)
+		(1.1474E+00, 1.2499E-08)
+		(1.3000E+00, 7.1560E-09)
+		(1.4579E+00, 6.0535E-09)
+		% (1.6105E+00, 5E-09)
+		(1.9579E+00, 4E-09)
+		(2.2947E+00, 3.1876E-09)
+		% (2.8842E+00, 2.0403E-09)
 	};
-	\addlegendentry{Irregular LDPC-BC}
+	\addlegendentry{Irregular}
 
-	\draw[red, thick, rounded corners=12pt]
-		(axis cs:0.55, 2e-3) rectangle (axis cs:1.55, 5e-5);
-	\node[red, font=\small\bfseries, anchor=west] at (axis
-		cs:1.6, 4e-4) {Waterfall};
+	\draw[gray, densely dashed]
+		(axis cs:0.65, 2e-3) rectangle (axis cs:1.65, 5e-5);
+	\node[below] at (axis cs:1.15, 6e-5) {Waterfall};
 
-	\draw[red, thick, rounded corners=12pt]
-		(axis cs:1.6, 8e-9) rectangle (axis cs:3.2, 4e-10);
-	\node[red, font=\small\bfseries, anchor=west] at (axis
-		cs:3.3, 2e-9) {Error floor};
+	\draw[gray, densely dashed]
+		(axis cs:1, 6e-8) rectangle (axis cs:2, 2e-9);
+	\node[above] at (axis cs:1.5, 7e-8) {Error floor};
 	\end{axis}
 \end{tikzpicture}
 
@@ -427,16 +415,17 @@ This is achieved by connecting some \acp{vn} of one spatial position to
 \begin{align*}
 	\bm{H} =
 	\begin{pmatrix}
-		\bm{H}_0(1) & & & & \\
-		\vdots & \ddots & & & \\
-		\bm{H}_W(1) & & \bm{H}_0(L) & & \\
-		& \ddots & & & \\
-		& & \bm{H}_W(L) & & \\
+		\bm{H}_0(1) & & \\
+		\vdots & \ddots & \\
+		\bm{H}_W(1) & & \bm{H}_0(L) \\
+		& \ddots & \\
+		& & \bm{H}_W(L) \\
 	\end{pmatrix}
 	,
 \end{align*}
 %
-where $W \in \mathbb{N}$ is the \textit{coupling width}.
+where $W \in \mathbb{N}$ is the \textit{coupling width} and $L \in
+\mathbb{N}$ is the number of spatial positions.
+This construction results in a Tanner graph as depicted in
+\autoref{fig:sc-ldpc-tanner}.
 
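The staircase block structure of $\bm{H}$ above can be made concrete with a short sketch. The following Python helper (its name, the nested-list matrix representation, and the toy component matrices are illustrative assumptions, not code from this repository) places each component matrix $\bm{H}_w(t)$ at row block $t+w$ and column block $t$:

```python
def coupled_pcm(H_components):
    """Build the coupled parity-check matrix of an SC-LDPC code.

    H_components[t] = [H_0(t+1), ..., H_W(t+1)], each a c x v binary
    matrix as nested lists. The result has (L + W) * c rows and
    L * v columns, with H_w(t) placed at row block t + w of column
    block t, which yields the staircase shape shown above.
    """
    L = len(H_components)
    W = len(H_components[0]) - 1
    c = len(H_components[0][0])
    v = len(H_components[0][0][0])
    H = [[0] * (L * v) for _ in range((L + W) * c)]
    for t in range(L):                            # column block t
        for w, Hw in enumerate(H_components[t]):  # row block t + w
            for i in range(c):
                for j in range(v):
                    H[(t + w) * c + i][t * v + j] = Hw[i][j]
    return H

# Toy case: L = 2 positions, W = 1, components H_0(t) = H_1(t) = (1 1):
H = coupled_pcm([[[[1, 1]], [[1, 1]]],
                 [[[1, 1]], [[1, 1]]]])
# -> [[1, 1, 0, 0],
#     [1, 1, 1, 1],
#     [0, 0, 1, 1]]
```

Note how adjacent column blocks share the middle row block: this shared check is exactly the coupling between neighboring spatial positions.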
@@ -463,7 +452,7 @@ This construction results in a Tanner graph as depicted in
 
 	\coordinate (temp) at ($(vn01)!0.5!(vn02)$);
 
-	\node[CN, left = of temp] (cn00) {};
+	\node[CN, right = of temp] (cn00) {};
 	\node[CN, below = of cn00] (cn01) {};
 
 	\draw (vn00) -- (cn00);
@@ -473,19 +462,19 @@ This construction results in a Tanner graph as depicted in
 	\draw (vn02) -- (cn01);
 	\draw (vn04) -- (cn01);
 
-	\foreach \i in {1,2,3,4} {
-		\pgfmathtruncatemacro{\prev}{\i-1}
+	\foreach \i in {1,2,3} {
+		\pgfmathtruncatemacro{\previ}{\i-1}
+		\node[VN, right = 25mm of vn\previ 0] (vn\i0) {};
 
-		\node[VN, right = 25mm of vn\prev 0] (vn\i0) {};
-		\node[VN, below = of vn\i0] (vn\i1) {};
-		\node[VN, below = of vn\i1] (vn\i2) {};
-		\node[VN, below = of vn\i2] (vn\i3) {};
-		\node[VN, below = of vn\i3] (vn\i4) {};
+		\foreach \j in {1,...,4} {
+			\pgfmathtruncatemacro{\prevj}{\j-1}
+			\node[VN, below = of vn\i\prevj] (vn\i\j) {};
+		}
 
 		\coordinate (temp) at ($(vn\i1)!0.5!(vn\i2)$);
 
-		\node[CN, left = of temp] (cn\i0) {};
+		\node[CN, right = of temp] (cn\i0) {};
 		\node[CN, below = of cn\i0] (cn\i1) {};
 
 		\draw (vn\i0) -- (cn\i0);
 		\draw (vn\i1) -- (cn\i0);
@@ -495,12 +484,33 @@ This construction results in a Tanner graph as depicted in
 		\draw (vn\i4) -- (cn\i1);
 	}
 
-	\foreach \i in {1,2,3,4} {
-		\pgfmathtruncatemacro{\prev}{\i-1}
-
-		\draw (vn\prev 3) -- (cn\i 0);
-		\draw (vn\prev 4) -- (cn\i 1);
+	\node[right = 25mm of vn30] (vn40) {};
+	\node[below = of vn40] (vn41) {};
+	\node[below = of vn41] (vn42) {};
+	\node[below = of vn42] (vn43) {};
+	\node[below = of vn43] (vn44) {};
+
+	\coordinate (temp) at ($(vn41)!0.5!(vn42)$);
+
+	\node[right = of temp] (cn40) {};
+	\node[below = of cn40] (cn41) {};
+
+	\foreach \i in {0,1,2} {
+		\pgfmathtruncatemacro{\next}{\i+1}
+		\pgfmathtruncatemacro{\nextnext}{\i+2}
+
+		\draw (vn\i 3) to[bend right] (cn\next 1);
+		\draw (vn\i 1) to[bend left] (cn\nextnext 0);
 	}
+
+	\draw (vn33) to[bend right] (cn41);
+
+	\node at ($(cn40)!0.5!(cn41)$) {\dots};
+
+	\draw[decorate, decoration={brace, amplitude=10pt}]
+		([xshift=-5mm,yshift=2mm]vn00.north) --
+		([xshift=5mm,yshift=2mm]vn00.north -| cn20.north)
+		node[midway, above=4mm] {W};
 \end{tikzpicture}
 
 \caption{
@@ -518,71 +528,36 @@ later passed to subsequent spatial positions during decoding.
 This is precisely the effect that leads to the good performance of
 \ac{sc}-\ac{ldpc} codes in the waterfall region \cite{costello_spatially_2014}.
 
-\subsection{Belief Propagation}
+\subsection{Iterative Decoding}
 
-% TODO: Add exact reference
-As mentioned above, \ac{ldpc} codes are generally decoded using
-efficient iterative algorithms, something that is possilbe due to
-their sparsity \cite[\red{WHERE?}]{ryan_channel_2009}.
-Specifically, the \ac{spa} is a general decoder that provides
-near-optimal performance across many different scenarios.
-Often, the term \ac{bp} is used to denote a whole class of variants
-of the \ac{spa}, e.g., the \ac{nms} algorithm.
+\ac{ldpc} codes are generally decoded using efficient iterative
+algorithms, something that is possible due to their sparsity
+\cite[Sec.~5.3]{ryan_channel_2009}.
+The algorithm originally proposed for this purpose by Gallager in
+1960 is now known as the \ac{spa} \cite[Sec.~5.4.1]{ryan_channel_2009},
+also called \ac{bp}.
 
 %
 % Preliminaries (LLRs, etc.)
 %
 % - SPA uses symbol-wise MAP as decision criterion for each symbol
 % - Optimal when Tanner graph is a tree, suboptimal with cycles
 % - Use of LLRs instead of probabilities directly
 % - Actual algorithm
 % - CNs: single parity-check codes; VNs: repetition codes
 % - Algorithm
 
 % TODO: Make this about the SPA or message passing decoders in general?
 The \ac{spa} approximates the marginals of the
 probability distributions of the \acp{vn} by passing
 \textit{messages} along the edges of the Tanner graph \red{[CITATION]}.
 The messages take the form of \acp{llr}
 %
 % TODO: Proper LLR equation
 \begin{align*}
 	L(X) = \log\left( \frac{P(X=0)}{P(X=1)} \right)
 	.%
 \end{align*}
 %
 \noindent\red{[LLRs]}
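As a quick numerical illustration of the LLR defined above (the probabilities below are example values of mine, not numbers from the text), large magnitudes indicate high reliability and the sign carries the hard decision:

```python
from math import log

def llr(p0, p1):
    """L(X) = log(P(X=0) / P(X=1)); positive values favor X = 0,
    and the magnitude measures how reliable that belief is."""
    return log(p0 / p1)

# Illustrative: a BSC with crossover probability 0.1 whose output is 0
# gives P(X=0 | y) = 0.9 and P(X=1 | y) = 0.1.
print(round(llr(0.9, 0.1), 3))   # -> 2.197
print(llr(0.5, 0.5))             # -> 0.0 (no information)
```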
 % Min-sum algorithm
 % Approximation of CN update by min sum operation
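The comment above refers to the min-sum approximation of the \ac{spa} check-node update; a minimal sketch (function name and input values are hypothetical, not from the text):

```python
def min_sum_cn_update(incoming):
    """Min-sum approximation of the check-node update: each outgoing
    message takes the product of the signs and the minimum magnitude
    of all *other* incoming LLRs, avoiding the SPA's tanh products."""
    out = []
    for k in range(len(incoming)):
        others = incoming[:k] + incoming[k + 1:]
        sign = 1.0
        for m in others:
            sign *= 1.0 if m >= 0 else -1.0
        out.append(sign * min(abs(m) for m in others))
    return out

print(min_sum_cn_update([2.0, -0.5, 1.5]))  # -> [-0.5, 1.5, -0.5]
```

The NMS variant mentioned earlier would additionally scale each outgoing message by a normalization factor.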
 
 %
 % SPA equations
 %
+For \ac{sc}-\ac{ldpc} codes, the iterative decoding procedure is wrapped by a
+windowing step. This is done to reduce latency and memory requirements, as
+well as the overall computational complexity \cite{costello_spatially_2014}.
+To this end, the \ac{pcm} is split into several overlapping windows.
+During decoding, the messages that are passed along the edges of the
+graph in the overlapping regions are kept in memory and used for the
+decoding of subsequent blocks \cite[Sec.~III~C.]{costello_spatially_2014}.
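The windowing described in the added paragraph can be sketched as a simple loop. This is a structural sketch only: the `bp_decode` callback and the per-position LLR bookkeeping are placeholder assumptions, not the decoder from the cited papers, where each window typically decides only its oldest (target) positions before sliding on:

```python
def sliding_window_decode(channel_llrs, window_size, bp_decode):
    """channel_llrs[t]: list of channel LLRs for spatial position t.
    The window slides one position at a time; LLRs updated inside a
    window are kept in memory and reused when later windows overlap
    them, so reliable information propagates to subsequent positions."""
    state = [list(l) for l in channel_llrs]   # messages kept in memory
    decisions = []
    for start in range(len(state)):
        end = min(start + window_size, len(state))
        # run iterative decoding on the current window only
        state[start:end] = bp_decode(state[start:end])
        # decide only the oldest position, then slide the window
        decisions.append([0 if l >= 0 else 1 for l in state[start]])
    return decisions

# With an identity "decoder", the decisions are plain hard decisions:
print(sliding_window_decode([[1.0, -2.0], [0.5, -0.1]], 2,
                            lambda w: w))  # -> [[0, 1], [0, 1]]
```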
 
 \noindent\red{[SPA]}
 
 %
 % NMS equations
 %
 
 \noindent\red{[NMS]}
 
 %
 % SC-LDPC decoding
 %
 
 \noindent\red{[BP for SC-LDPC codes]}
 
-\red{
-	\begin{itemize}
-		\item SPA and NMS algorithms
-		% TODO: Would it be better to split this into a separate section?
-		\item Sliding-window decoding of SC-LDPC codes
-			\cite{costello_spatially_2014} \cite{hassan_fully_2016}
-			\begin{itemize}
-				\item Windowed decoding
-				\item The core property of SC-LDPC decoders is the
-					passing of reliability information (in the form
-					of LLRs, i.e., soft information) from one window
-					to the next.
-					This way, the highly reliable information from
-					the initial windows is passed on to subsequent
-					windows \cite{costello_spatially_2014}.
-			\end{itemize}
-	\end{itemize}
-}
+% BP for SC-LDPC codes
+% Windowed decoding
 
 \section{Quantum Mechanics and Quantum Information Science}
 \label{sec:Quantum Mechanics and Quantum Information Science}