% letter.tex: Add first review responses
\usepackage{algorithmic}
\usepackage{algorithm}
\usepackage{siunitx}
\usepackage[normalem]{ulem}
\usepackage{dsfont}
\usepackage{mleftright}
\usepackage{bbm}
\hyphenation{op-tical net-works semi-conduc-tor IEEE-Xplore}


%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Custom commands
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%

\newcommand{\reviewone}[1]{{\textcolor{KITblue}{#1}}}
\newcommand{\reviewtwo}[1]{{\textcolor{KITpalegreen}{#1}}}
\newcommand{\reviewthree}[1]{{\textcolor{KITred}{#1}}}


%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Inputs & Global Options
%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}

\reviewone{Test1}
\reviewtwo{Test2}
\reviewthree{Test3}

\IEEEPARstart{C}{hannel} coding using binary linear codes is a way of enhancing
the reliability of data by detecting and correcting any errors that may occur
during its transmission or storage.

\printbibliography

\end{document}


%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Response to the reviews
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%

\newpage
\onecolumn

\section{Authors' Response to the Editor and the Reviewers}

\subsection{Review 1}

\begin{itemize}
\item \textbf{Comment 1:} This paper proposes a combination of proximal decoding and ML-in-the-list decoding. There are several issues with the paper in its current form that need to be addressed.

\vspace{0.25cm}
\textbf{Authors:}
Thank you for your feedback. \reviewone{The corresponding changes are highlighted in this color in the paper and are listed below.}
\vspace{0.75cm}

\item \textbf{Comment 2:} The definition of code-constraint polynomial is baseless. The authors should explain why we use the code-constraint polynomial. Also, I think the code-constraint polynomial cannot be used to replace the prior PDF of $\boldsymbol{x}$, since $h(\boldsymbol{0})$ is the minimum value of $h(\boldsymbol{x})$.

\vspace{0.25cm}
\textbf{Authors:}
Thank you for your feedback. The definition of the code-constraint polynomial follows \cite{proximal_paper} directly. There, the authors state:

\vspace{.1cm}
"[...] The first term on the right-hand side of this equation represents the bipolar constraint [...] and the second term corresponds to the parity constraint induced by $\boldsymbol{H}$ [...]. Since the polynomial $h(x)$ has a sum-of-squares (SOS) form, it can be regarded
as a penalty function that gives positive penalty values for non-codeword vectors in $\mathbb{R}^n$. The code-constraint polynomial $h(x)$ is inspired by the non-convex parity constraint function used in the GDBF objective function [4]. [...]"
\vspace{.1cm}

Please note that $\boldsymbol{0}$ is not a global minimum of the code-constraint polynomial; rather, every codeword constitutes a local minimum. Therefore, an iterative algorithm can converge to one of these local minima and, thus, approximate the nearest-neighbor decision.

\vspace{0.75cm}
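For illustration only (not part of the manuscript), the SOS structure described in the quote can be sketched as a bipolar term plus one parity term per check; the weights `alpha`, `beta` and the exact term shapes are assumptions, not the paper's exact polynomial:

```python
import numpy as np

def code_constraint_polynomial(x, H, alpha=1.0, beta=1.0):
    # Bipolar term: zero iff every x_i lies in {-1, +1}.
    bipolar = np.sum((x ** 2 - 1.0) ** 2)
    # Parity term: for bipolar x, the product over a check's support
    # equals +1 iff that parity check is satisfied.
    parity = sum((1.0 - np.prod(x[np.flatnonzero(row)])) ** 2 for row in H)
    return alpha * bipolar + beta * parity

H = np.array([[1, 1, 0], [0, 1, 1]])  # toy parity-check matrix
print(code_constraint_polynomial(np.array([1.0, 1.0, 1.0]), H))     # 0.0
print(code_constraint_polynomial(np.array([-1.0, -1.0, -1.0]), H))  # 0.0
print(code_constraint_polynomial(np.array([1.0, -1.0, 1.0]), H))    # 8.0 (> 0, non-codeword)
```

Both bipolar codewords evaluate to zero while the non-codeword is penalized, matching the statement that every codeword is a local minimum.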

\item \textbf{Comment 3:} The definition of the projection $\prod_\eta$ should be provided.

\vspace{0.25cm}
\textbf{Authors:}
Thank you for your feedback. We added the following description:

\vspace{.1cm}
"[...] every estimate $\boldsymbol{s}$ is projected onto $\left[-\eta, \eta\right]^n$ by a projection $\Pi_\eta : \mathbb{R}^n \rightarrow \left[-\eta, \eta\right]^n$
\reviewone{
defined as component-wise clipping, i.e., $\Pi_\eta(x_i)=x_i$ if $-\eta\leq x_i\leq \eta$, $\Pi_\eta(x_i)=\eta$ if $x_i>\eta$, and $\Pi_\eta(x_i)=-\eta$ if $x_i<-\eta$,
}
where $\eta$ is a positive constant larger than one, e.g., $\eta = 1.5$. [...]"
\vspace{0.75cm}
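As a minimal sketch (for our own reference, not part of the manuscript), the component-wise clipping defined above is exactly a box projection:

```python
import numpy as np

def project_box(s, eta=1.5):
    # Component-wise clipping of s onto [-eta, eta]^n, as in the added definition:
    # values inside the box pass through, values outside are set to +/- eta.
    return np.clip(s, -eta, eta)

print(project_box(np.array([-3.0, 0.4, 2.2])))  # [-1.5  0.4  1.5]
```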

\item \textbf{Comment 4:} The proposed improved proximal decoding algorithm is just a combination of proximal decoding and ML-in-the-list decoding. Then, the process of the ML-in-the-list decoding used in this paper is similar to that of chase decoding, which is commonly used in decoding. ML-in-the-list decoding.

\vspace{0.25cm}
\textbf{Authors:}
Thank you for your feedback. Yes, this is correct: the idea is very similar to Chase decoding. The paper at hand does not claim to introduce ML-in-the-list or Chase decoding, but provides a way in which the list can be generated within proximal decoding. We tried to clarify this by adding the following statement:

\vspace{.1cm}
\reviewone{asdf}
\vspace{0.75cm}

\item \textbf{Comment 5:}
The criterion to construct the index set $\mathcal{I}'$ with $N$ elements should be explained clearly.

\vspace{0.25cm}
\textbf{Authors:}
Thank you for your feedback. We added the following parts to the corresponding paragraph for more clarity:

\vspace{.1cm}
"[...] \reviewone{Tagging the $N\in\mathbb{N}$ most likely erroneous bits can be based on} considering the \reviewone{oscillation of the gradient magnitudes $|\left(\nabla h\right)_{i}|$, $i=1,\ldots, n$ \sout{of the magnitude of the gradient oscillation}} of the code-constraint polynomial \reviewone{by determining the empirical variances along the iterations $\text{Var}_\text{iter}(|\left(\nabla h\right)_{i}|)$, $i=1,\ldots, n$}.
\reviewone{\sout{some interesting behavior may be observed}}
\reviewone{Now,} let \reviewone{$\boldsymbol{i}'=(i'_1, \ldots, i'_n)=(\tau(1),\ldots, \tau(n))$ with $\tau: \{1,\ldots, n\}\to\{1,\ldots,n\}$} be a permutation of $\{1,\ldots, n\}$ such that $|\left(\nabla h\right)_{i'}|$ is arranged according to increasing \reviewone{empirical} variances \reviewone{\sout{gradient's magnitude oscillation of its magnitude}}, i.e.,
\begin{equation}\label{eq:def:i_prime}
\text{Var}_\text{iter}(|\left(\nabla h\right)_{i'_1}|)\leq \cdots \leq \text{Var}_\text{iter}(|\left(\nabla h\right)_{i'_n}|).
\end{equation}
\reviewone{\sout{with $\text{Var}_\text{iter}(\cdot)$ denoting the empirical variance along the iterations.}}

\reviewone{To justify the approach in eq. (\ref{eq:def:i_prime}) \sout{Hereafter}}, Fig. \ref{fig:p_error} shows Monte Carlo simulations of the probability that the decoded bit $\hat{c}_{i'}$ at position $i'$ of the estimated codeword
is wrong. %, when the components of
%$\boldsymbol{c}$ are ordered from smallest to largest oscillation of
%$\left(\nabla h\right)_i$.
It can be observed that lower magnitudes of oscillation correlate with a higher probability that the corresponding bit was not decoded correctly.
Thus, this magnitude might be used as a feasible indicator
%for determining the probability that a given component was decoded incorrectly and, thus,
for identifying \reviewone{the $N$ most likely} erroneously decoded bit positions as \reviewone{the first $N$ indices of $\boldsymbol{i}'$}:
\[
\mathcal{I}'=\{i_1', \ldots, i_N': \boldsymbol{i}' \text{ as defined in (\ref{eq:def:i_prime})} \}.%[...]"
\]
\vspace{0.75cm}
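For our own reference (not part of the manuscript), the construction of $\mathcal{I}'$ described above can be sketched as follows; the array shapes and the function name are illustrative assumptions:

```python
import numpy as np

def index_set(grad_history, N):
    # grad_history: (iterations, n) array holding (grad h)_i per iteration.
    # Empirical variance of |(grad h)_i| along the iterations, per position.
    variances = np.var(np.abs(grad_history), axis=0)
    # I' = the N positions with the smallest variance (least oscillation),
    # which correlate with the highest probability of a decoding error.
    return np.argsort(variances, kind="stable")[:N]

rng = np.random.default_rng(0)
history = rng.normal(size=(200, 8))
history[:, 3] *= 1e-3  # position 3 barely oscillates -> most suspect
print(index_set(history, 2))  # first entry is 3
```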

\item \textbf{Comment 6:}
The performance of BP decoding should be provided as the baseline.

\vspace{0.25cm}
\textbf{Authors:}
Thank you for your feedback. We added the corresponding BP behavior to the figure and added the following comment:

\vspace{.1cm}
\reviewone{As shown in Fig., it can be seen that BP decoding performs...}
\vspace{0.75cm}

\end{itemize}


\subsection{Review 2}

\begin{itemize}
\item \textbf{Comment 1:} I believe that the paper makes a nice contribution to the topic of optimization-based decoding of LDPC codes. The topic is especially relevant, nowadays, for the applicability of this kind of decoders to quantum error correction - where classical BP decoding may yield limited coding gains, due to the loopy nature of the graphs.

The work is nicely-presented, solid, and the results are convincing. My only comment would be to try to put the use of this decoder in some perspective:

\vspace{0.25cm}
\textbf{Authors:}
Thank you for your positive feedback. \reviewtwo{The corresponding changes are highlighted in this color in the paper and are listed below.}

\vspace{0.75cm}

\item \textbf{Comment 2:} [...] adding, on the performance charts, the performance of a standard BP decoder (it will beat your decoding algorithm, but this is not the point)

\vspace{0.25cm}
\textbf{Authors:}
Thank you for your feedback. We added the corresponding BP behavior to the figure and added the following comment:

\vspace{.1cm}
\reviewtwo{As shown in Fig., it can be seen that BP decoding performs...}
\vspace{0.75cm}

\item \textbf{Comment 3:} [...] explaining when this class of algorithms should be preferred to BP decoding

\vspace{0.25cm}
\textbf{Authors:}
Thank you for your feedback. We added the following statement:

\vspace{.1cm}
\reviewtwo{something concerning effort?!?}
\vspace{0.75cm}

\end{itemize}


\subsection{Review 3}

\begin{itemize}
\item \textbf{Comment 1:} The paper describes an enhancement and mitigate essential flaws found in the recently reported proximal decoding algorithm for LDPC codes, mentioned in the references section. At first the algorithm subject to the paper is interesting because the published material a few years back seem to have no substantial performance improvement, and did not seem to make any influence. It is therefore interesting to see that this paper addresses this fact and fixes the issues around the originally proposed algorithm and demonstrating up to a 1 dB coding gain as a result of these corrections and enhancements.

While I find the paper is interesting and relevant, here are my essential comments that would prevent me in favor of publication.

\vspace{0.25cm}
\textbf{Authors:}
Thank you for your positive feedback. \reviewthree{The corresponding changes are highlighted in this color in the paper and are listed below.}
\vspace{0.75cm}

\item \textbf{Comment 2:} The work is titled after linar block codes, however both the original proximal decoding paper and this work go after LDPC codes only. Clarification required as in whether the proposed method would work for any linear block code, and if so, elaboration and proof is needed as well. Currently, linear codes are only mentioned in the first two sentences of the Introduction section other than the title.

\vspace{0.25cm}
\textbf{Authors:}
Thank you for your feedback. We also analyzed the proposed scheme for BCH codes. There it turned out that...

\vspace{.1cm}
\reviewthree{Some comment regarding the applicability of the proposed scheme to BCH...}
\vspace{0.75cm}

\item \textbf{Comment 3:} Does this work (and the original work) based on BPSK modulation only? How would the code constraint polynomial change with higher order modulations? It would be interesting to see how this would change given that the polynomial is based on a nearest neighbor decision.

\vspace{0.25cm}
\textbf{Authors:}
Thank you for your feedback.

\vspace{.1cm}
\reviewthree{one sentence regarding a bit-metric decoder mapping higher-order symbols to elementwise bit-LLRs.}
\vspace{0.75cm}

\item \textbf{Comment 4:} The decoding failure rate stands out as a good analysis as in explaining the FER behavior. But if the codeword is not really converging at all, wouldn't there be simpler approaches than ML decoding to find out which one of $2^N$ codewords is the valid one?

\vspace{0.25cm}
\textbf{Authors:}
Thank you for your feedback.

\vspace{.1cm}
\reviewthree{one sentence concerning the feasibility of using $N$ bit candidates and choosing $N$ according to the complexity; comment on the trade-off w.r.t. $N$}
\vspace{0.75cm}

\item \textbf{Comment 5:} If you can, please have a more comprehensive simulation to smooth out the curve in Fig.4. Otherwise, please explain the odd behavior in the middle of the figure. I would also recommend a bar graph over a line graph for a better representation of the data.

\vspace{0.25cm}
\textbf{Authors:}
Thank you for your feedback. The behavior is due to only few errors occurring in this setting (please mind the $y$-axis). Since the relevant information is contained in only the lower values of $i'$, which will finally be chosen to constitute $\mathcal{I}'$, only indices up to, e.g., $N=12$ are relevant. The figure has been complemented by focusing on the relevant region.
\vspace{0.75cm}

\item \textbf{Comment 6:} How does your algorithm handle the case when there is more than one ML in your final list? It is not shown in the algorithm.

\vspace{0.25cm}
\textbf{Authors:}
Thank you for your feedback. Since Algorithm 3 (ML-in-the-List) operates on real-valued numbers, two correlations coincide with probability zero. \textcolor{red}{@AT: Do we need to check this?} Even if a draw were to happen, choosing either of the candidates is equivalent with respect to the ML decision rule.
\vspace{0.75cm}
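For our own reference (not the manuscript's Algorithm 3, and ignoring the codeword-membership check for brevity), the tie behavior can be sketched as follows; all names are illustrative:

```python
import itertools
import numpy as np

def ml_in_the_list(y, base, suspect_idx):
    # Enumerate all 2^N sign patterns on the N suspect positions and keep
    # the bipolar candidate with the largest correlation <y, x>.
    best, best_corr = None, -np.inf
    for flips in itertools.product([1.0, -1.0], repeat=len(suspect_idx)):
        cand = base.copy()
        cand[list(suspect_idx)] *= np.array(flips)
        corr = float(np.dot(y, cand))
        # Strict '>' keeps the first maximizer on an exact draw, which is
        # equivalent with respect to the ML decision rule.
        if corr > best_corr:
            best, best_corr = cand, corr
    return best

y = np.array([0.9, -1.1, 0.2])      # received channel values
base = np.array([1.0, -1.0, -1.0])  # bipolar proximal estimate
print(ml_in_the_list(y, base, [2])) # flips position 2: [ 1. -1.  1.]
```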

\end{itemize}

\end{document}