Add more doc
parent 97fc0695bd
commit d5cf6dacf7

docs/implementation.md (new file, 92 lines added)

# Homotopy Continuation Decoder Implementation

## Path Tracing

As explained in the introduction, we use a predictor-corrector scheme, with an
Euler predictor and a Newton corrector.

We perform one prediction step, followed by corrector steps until some
convergence criterion is satisfied (or we fail). If convergence fails, we
adjust the predictor step size and try again, as described in [1, p. 130]. Our
implementation of this process can be described in pseudocode as follows:

```
// step_size is shared between these functions and adjusted on retries
func perform_prediction_step(y) {...}
func perform_correction_step(y, step_size) {...}

func perform_step(y0) {
    for i in range(max_retries) {
        // Euler predictor: one tangent step along the solution curve
        y = perform_prediction_step(y0);

        // Newton corrector: pull the prediction back onto the curve
        for k in range(max_corrector_iterations) {
            y = perform_correction_step(y, step_size);
            if (corrector converged) break;
        }
        // Accept the step once the corrector has converged
        if (corrector converged) break;

        // Convergence failed: halve the step size and retry from y0
        step_size = step_size / 2;
    }

    return y;
}
```
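
For concreteness, here is a minimal NumPy sketch of what the two helper
functions could look like for a square homotopy system, assuming callables
`H(y, t)`, `dH_dx(y, t)` (the Jacobian with respect to the variables) and
`dH_dt(y, t)` are available. The names, signatures and the explicit `t`
argument are illustrative only, not the actual implementation.

```
# Illustrative sketch only: H(y, t), dH_dx(y, t) and dH_dt(y, t) are assumed
# callables; signatures here differ from the pseudocode above for clarity.
import numpy as np

def perform_prediction_step(y, t, step_size, dH_dx, dH_dt):
    """Euler predictor: one step along the tangent of the solution curve."""
    # Implicit function theorem: dH/dx * dx/dt = -dH/dt along H(x(t), t) = 0.
    dx_dt = np.linalg.solve(dH_dx(y, t), -dH_dt(y, t))
    return y + step_size * dx_dt, t + step_size

def perform_correction_step(y, t, H, dH_dx):
    """One Newton iteration pulling y back onto the curve H(., t) = 0."""
    delta = np.linalg.solve(dH_dx(y, t), -H(y, t))
    return y + delta
```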

## Channel decoding

### Homotopy definition

We previously introduced the following set of polynomial equations:

$$
F(\bm{x}) = \left[\begin{array}{c}1 - x_1^2 \\ \vdots\\ 1 - x_n^2 \\ 1 - \prod_{i \in A(1)}x_i \\ \vdots\\ 1 - \prod_{i \in A(m)}x_i\end{array}\right] \overset{!}{=} \bm{0}.
$$

A naive approach to defining a homotopy from this system is to take

$$
G(\bm{x}) = \left[\begin{array}{c}x_1\\ \vdots\\ x_n \\ x_1\\ \vdots\\ x_m\end{array}\right],
$$

whose zeros are trivial to find, and then use
$H(\bm{x}, t) = (1-t)G(\bm{x}) + tF(\bm{x})$. However, with this approach the
system is overdetermined, which leads to problems. Intuitively, the Euler
predictor locally linearizes the solution curve of $H(\bm{x}, t) = \bm{0}$; if
the system is overdetermined, the linearized system may have no solution.

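Concretely, each Euler prediction step obtains the tangent $\dot{\bm{x}}$ of
the solution curve by solving the linearized system

$$
\frac{\partial H}{\partial \bm{x}}(\bm{x}, t)\,\dot{\bm{x}} = -\frac{\partial H}{\partial t}(\bm{x}, t),
$$

and with the naive choice above $\partial H / \partial \bm{x}$ is an
$(n+m) \times n$ matrix, so this linear system generally has no exact solution.
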
We instead need a smaller set of polynomials with the same set of zeros. One
way to obtain it is to compute a Gröbner basis of our polynomial system: a
Gröbner basis generates the same ideal and therefore has exactly the same zero
set. If the number of basis polynomials equals the number of variable nodes, we
can then use this new polynomial system $\tilde{F}(\bm{x})$ to define our
homotopy as

$$
\tilde{G}(\bm{x}) = \left[\begin{array}{c}x_1\\ \vdots\\ x_n \end{array}\right], \hspace{3mm} H(\bm{x}, t) = (1-t)\tilde{G}(\bm{x}) + t\tilde{F}(\bm{x})
$$
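
As an illustration of this reduction (not the implementation itself), the
Gröbner basis can be computed with a computer algebra system such as SymPy;
the tiny code below, with three variable nodes and a single check node
$A(1) = \{1, 2, 3\}$, is made up for the example.

```
# Illustrative SymPy sketch; the 3-variable, 1-check code is made up.
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3")

# F(x): one sign constraint per variable node, one product constraint per check.
F = [1 - x1**2, 1 - x2**2, 1 - x3**2, 1 - x1 * x2 * x3]

# A Groebner basis generates the same ideal, hence has the same zero set.
gb = sp.groebner(F, x1, x2, x3, order="lex")
F_tilde = list(gb.exprs)  # here: x1 - x2*x3, x2**2 - 1, x3**2 - 1 (up to ordering)

if len(F_tilde) == 3:  # number of variable nodes n
    t = sp.symbols("t")
    G_tilde = [x1, x2, x3]
    H = [(1 - t) * g + t * f for g, f in zip(G_tilde, F_tilde)]
```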

### Decoding algorithm

```
func decode(y) {
    for i in range(max_iterations) {
        y = perform_step(y);

        x_hat = hard_decision(y);
        // H is the parity-check matrix of the code
        if (H @ x_hat == 0) return x_hat;
    }
    // If we don't converge, we still return the last estimate
    // in the hope that it limits the BER.
    return x_hat;
}
```
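
How `hard_decision` and the parity check are realized depends on conventions
not spelled out here; the sketch below assumes the tracked values lie near
$\pm 1$, maps $+1 \mapsto 0$ and $-1 \mapsto 1$, and checks the syndrome modulo
2 against the parity-check matrix (called `H_pc` below to avoid clashing with
the homotopy $H$). These choices are assumptions, not taken from the
implementation.

```
# Sketch under assumed conventions: +1 -> bit 0, -1 -> bit 1, syndrome mod 2.
import numpy as np

def hard_decision(y):
    """Map the continuous estimate y (values near +/-1) to bits {0, 1}."""
    signs = np.where(y >= 0, 1, -1)
    return ((1 - signs) // 2).astype(int)

def parity_checks_satisfied(H_pc, x_hat):
    """True if the syndrome of x_hat with respect to H_pc is zero mod 2."""
    return not np.any((H_pc @ x_hat) % 2)
```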

______________________________________________________________________

## References

\[1\]: T. Chen and T.-Y. Li, “Homotopy continuation method for solving systems
of nonlinear and polynomial equations,” Communications in Information and
Systems, vol. 15, no. 2, pp. 119–307, 2015, doi: 10.4310/CIS.2015.v15.n2.a1.