Added dual ascent; Minor changes

parent 355d789cef
commit 81837b34f3
@@ -101,11 +101,15 @@ Lastly, the optimization methods utilized are described.
 \section{Optimization Methods}
 \label{sec:theo:Optimization Methods}
 
+\begin{itemize}
+	\item \ac{ADMM}
+	\item proximal decoding
+\end{itemize}
 
 Generally, any linear program \todo{Acronym} can be expressed in \textit{standard form}%
+\todo{Citation needed}%
 \footnote{The inequality $\boldsymbol{x} \ge \boldsymbol{0}$ is to be
-interpreted componentwise.}%
-:%
+interpreted componentwise.}
+\cite[Sec. 1.1]{intro_to_lin_opt_book}:%
 %
 \begin{alignat}{3}
 	\begin{alignedat}{3}
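As a concrete illustration of the standard form referenced above, the following sketch solves a toy LP numerically, assuming the usual formulation (minimize c^T x subject to A x = b, x >= 0, with the inequality componentwise). The data and names are illustrative only, not taken from the thesis:

import numpy as np
from scipy.optimize import linprog

# Toy LP in standard form: minimize c^T x  subject to  A x = b, x >= 0.
c = np.array([1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])  # a single equality constraint
b = np.array([2.0])

# bounds=(0, None) encodes the componentwise constraint x >= 0.
res = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")
print(res.x, res.fun)  # minimizer and optimal objective value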
@@ -116,7 +120,9 @@ interpreted componentwise.}%
 \label{eq:theo:admm_standard}
 \end{alignat}%
 %
-A technique called \textit{lagrangian relaxation} can then be applied - some of the
+A technique called \textit{Lagrangian relaxation}%
+\todo{Citation needed}
+can then be applied: some of the
 constraints are moved into the objective function itself and the weights
 $\boldsymbol{\lambda}$ are introduced. A new, relaxed problem is formulated:
 %
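The reason the relaxation yields a lower bound: for any x feasible for the standard form, the weighted term lambda^T (A x - b) vanishes, so the relaxed objective agrees with c^T x there, and minimizing over the larger set (only x >= 0 retained) can only give a smaller value. A minimal numeric check, with illustrative toy data and a hypothetical helper name:

import numpy as np

def relaxed_objective(x, lam, c, A, b):
    # Lagrangian of the standard-form LP: L(x, lam) = c^T x + lam^T (A x - b)
    return c @ x + lam @ (A @ x - b)

c = np.array([1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([2.0])

x_feasible = np.array([0.0, 0.0, 2.0])  # satisfies A x = b and x >= 0
lam = np.array([0.7])                   # an arbitrary choice of weights

# On feasible points the penalty term is zero, so the relaxed objective
# coincides with the true one; its minimum over all x >= 0 is therefore
# a lower bound on the constrained optimum.
assert np.isclose(relaxed_objective(x_feasible, lam, c, A, b), c @ x_feasible)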
@@ -172,8 +178,9 @@ bound actually reaches the value itself:
 .\end{align*}
 %
 In other words, with the optimal choice of $\boldsymbol{\lambda}$,
-the optimal objectives of the problems (\ref{eq:theo:admm_standard})
-and (\ref{eq:theo:admm_relaxed}) have the same value.
+the optimal objectives of the problems (\ref{eq:theo:admm_relaxed})
+and (\ref{eq:theo:admm_standard}) have the same value.
+
 Thus, we can define the \textit{dual problem} as the search for the tightest lower bound:%
 %
 \begin{align}
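The claim that the tightest lower bound reaches the primal optimum (strong duality) can be checked numerically on a toy instance. Assuming the convention L(x, lam) = c^T x + lam^T (A x - b), the dual of min c^T x s.t. A x = b, x >= 0 works out to max -b^T lam s.t. A^T lam + c >= 0; both problems are solved below with illustrative data:

import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([2.0])

# Primal: min c^T x  s.t.  A x = b, x >= 0
primal = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")

# Dual: max -b^T lam  s.t.  A^T lam + c >= 0, rewritten for linprog as
#       min  b^T lam  s.t.  -A^T lam <= c   (lam free in sign)
dual = linprog(b, A_ub=-A.T, b_ub=c, bounds=(None, None), method="highs")

# Strong duality: the optimal values coincide (note the sign flip on the dual).
assert np.isclose(primal.fun, -dual.fun)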
@@ -194,4 +201,17 @@ by computing \cite[Sec. 2.1]{admm_distr_stats}%
 \boldsymbol{\lambda}_{\text{opt}} \right)
 \label{eq:theo:admm_obtain_primal}
 .\end{align}
+%
+The dual problem can then be solved using \textit{dual ascent}: starting with an
+initial estimate of $\boldsymbol{\lambda}$, calculate an estimate for $\boldsymbol{x}$
+using equation (\ref{eq:theo:admm_obtain_primal}); then update $\boldsymbol{\lambda}$
+using gradient ascent \cite[Sec. 2.1]{admm_distr_stats}:%
+%
+\begin{align*}
+	\boldsymbol{x} &\leftarrow \argmin_{\boldsymbol{x}} \mathcal{L}\left(
+		\boldsymbol{x}, \boldsymbol{b}, \boldsymbol{\lambda} \right) \\
+	\boldsymbol{\lambda} &\leftarrow \boldsymbol{\lambda}
+		+ \alpha\left( \boldsymbol{A}\boldsymbol{x} - \boldsymbol{b} \right),
+	\hspace{5mm} \alpha > 0
+.\end{align*}
 
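A compact sketch of the dual ascent loop added in this hunk. For a pure LP the x-minimization is typically unbounded, so the sketch substitutes a strongly convex quadratic objective 0.5 x^T P x + c^T x with A x = b, for which argmin_x L(x, lam) has a closed form; the data, names, and step size are illustrative assumptions, not the thesis setup:

import numpy as np

# Equality-constrained toy problem: min 0.5 x^T P x + c^T x  s.t.  A x = b.
# P must be positive definite so the x-update below is well defined.
P = np.array([[2.0, 0.0], [0.0, 4.0]])
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

lam = np.zeros(1)  # initial estimate of the dual variable
alpha = 0.1        # step size, alpha > 0

for _ in range(200):
    # x-update: x = argmin_x L(x, lam), i.e. solve P x = -(c + A^T lam)
    x = np.linalg.solve(P, -(c + A.T @ lam))
    # lambda-update: gradient ascent on the dual along the residual A x - b
    lam = lam + alpha * (A @ x - b)

print(x, lam, A @ x - b)  # the residual should be near zero at convergence

The lambda-update steps along the constraint residual A x - b, which is the gradient of the dual function at the current lam; convergence requires a sufficiently small step size alpha.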