Added dual ascent; Minor changes
This commit is contained in:
parent
355d789cef
commit
81837b34f3
@@ -101,11 +101,15 @@ Lastly, the optimization methods utilized are described.
\section{Optimization Methods}
\label{sec:theo:Optimization Methods}

\begin{itemize}
\item \ac{ADMM}
\item proximal decoding
\end{itemize}

Generally, any linear program \todo{Acronym} can be expressed in \textit{standard form}%
\todo{Citation needed}%
\footnote{The inequality $\boldsymbol{x} \ge \boldsymbol{0}$ is to be
interpreted componentwise.}%
:%
interpreted componentwise.}
\cite[Sec. 1.1]{intro_to_lin_opt_book}:%
%
\begin{alignat}{3}
\begin{alignedat}{3}
@@ -116,7 +120,9 @@ interpreted componentwise.}%
\label{eq:theo:admm_standard}
\end{alignat}%
%
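As a brief illustration, added here for clarity (the symbols $\boldsymbol{a}$, $\beta$
and $s$ are not part of the surrounding notation): an inequality constraint such as
$\boldsymbol{a}^T \boldsymbol{x} \le \beta$ can be brought into standard form by
appending a nonnegative slack variable $s$ to $\boldsymbol{x}$ and requiring:%
%
\begin{align*}
\boldsymbol{a}^T \boldsymbol{x} + s = \beta,
\hspace{5mm} s \ge 0
.\end{align*}
%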
A technique called \textit{Lagrangian relaxation} can then be applied: some of the
A technique called \textit{Lagrangian relaxation}%
\todo{Citation needed}%
can then be applied: some of the
constraints are moved into the objective function itself and the weights
$\boldsymbol{\lambda}$ are introduced. A new, relaxed problem is formulated:
%
@@ -172,8 +178,9 @@ bound actually reaches the value itself:
.\end{align*}
%
In other words, with the optimal choice of $\boldsymbol{\lambda}$,
the optimal objectives of the problems (\ref{eq:theo:admm_standard})
and (\ref{eq:theo:admm_relaxed}) have the same value.
the optimal objectives of the problems (\ref{eq:theo:admm_relaxed})
and (\ref{eq:theo:admm_standard}) have the same value.

Thus, we can define the \textit{dual problem} as the search for the tightest lower bound:%
%
\begin{align}
@@ -194,4 +201,17 @@ by computing \cite[Sec. 2.1]{admm_distr_stats}%
\boldsymbol{\lambda}_{\text{opt}} \right)
\label{eq:theo:admm_obtain_primal}
.\end{align}
%
The dual problem can then be solved using \textit{dual ascent}: starting with an
initial estimate of $\boldsymbol{\lambda}$, calculate an estimate for $\boldsymbol{x}$
using equation (\ref{eq:theo:admm_obtain_primal}); then, update $\boldsymbol{\lambda}$
with a gradient ascent step \cite[Sec. 2.1]{admm_distr_stats}:%
%
\begin{align*}
\boldsymbol{x} &\leftarrow \argmin_{\boldsymbol{x}} \mathcal{L}\left(
\boldsymbol{x}, \boldsymbol{b}, \boldsymbol{\lambda} \right) \\
\boldsymbol{\lambda} &\leftarrow \boldsymbol{\lambda}
+ \alpha\left( \boldsymbol{A}\boldsymbol{x} - \boldsymbol{b} \right),
\hspace{5mm} \alpha > 0
.\end{align*}
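To motivate the second update (a remark added here; the symbol $g$ for the dual
objective is illustrative and may differ from the notation used above): writing
$g(\boldsymbol{\lambda}) = \min_{\boldsymbol{x}} \mathcal{L}\left( \boldsymbol{x},
\boldsymbol{b}, \boldsymbol{\lambda} \right)$ for the lower bound and assuming the
minimizer $\boldsymbol{x}$ is unique, so that $g$ is differentiable, its gradient
is exactly the constraint residual:%
%
\begin{align*}
\nabla_{\boldsymbol{\lambda}} g(\boldsymbol{\lambda})
= \boldsymbol{A}\boldsymbol{x} - \boldsymbol{b}
.\end{align*}
%
The $\boldsymbol{\lambda}$-update is therefore a gradient ascent step on the dual
problem with step size $\alpha$; the gradient vanishes exactly when the current
estimate satisfies $\boldsymbol{A}\boldsymbol{x} = \boldsymbol{b}$, i.e. when it
becomes primal feasible.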