Added proximal gradient method to theoretical background

This commit is contained in:
Andreas Tsouchlos 2023-03-29 10:53:01 +02:00
parent fdc1ad5df8
commit aa57b252bb


@@ -272,14 +272,43 @@ desired \cite[Sec. 15.3]{ryan_lin_2009}.
\section{Optimization Methods}
\label{sec:theo:Optimization Methods}
\textit{Proximal algorithms} are algorithms for solving convex optimization
problems that rely on the use of \textit{proximal operators}.
The proximal operator $\text{prox}_{\lambda f} : \mathbb{R}^n \rightarrow \mathbb{R}^n$
of a function $f:\mathbb{R}^n \rightarrow \mathbb{R}$ is defined by
\cite[Sec. 1.1]{proximal_algorithms}%
%
\begin{align*}
\text{prox}_{\lambda f}\left( \boldsymbol{v} \right) = \argmin_{\boldsymbol{x}} \left(
f\left( \boldsymbol{x} \right) + \frac{1}{2\lambda}\lVert \boldsymbol{x}
- \boldsymbol{v} \rVert_2^2 \right)
.\end{align*}
%
This operator computes a point that is a compromise between minimizing $f$
and staying in the proximity of $\boldsymbol{v}$.
The parameter $\lambda$ determines how heavily each term is weighted.
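For example, for $f\left( \boldsymbol{x} \right) = \lVert \boldsymbol{x} \rVert_1$
the minimization decouples across the components of $\boldsymbol{x}$, and the
proximal operator reduces to componentwise soft thresholding:%
%
\begin{align*}
\text{prox}_{\lambda f}\left( \boldsymbol{v} \right)_i = \text{sign}\left( v_i \right)
\max\left( \lvert v_i \rvert - \lambda, 0 \right)
.\end{align*}
%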
The \textit{proximal gradient method} is an iterative optimization method used to
solve problems of the form%
%
\begin{align*}
\text{minimize}\hspace{5mm}f\left( \boldsymbol{x} \right) + g\left( \boldsymbol{x} \right)
\end{align*}
%
that alternates two steps in each iteration: a gradient descent step on the
differentiable term $f$, followed by an application of the proximal operator of $g$
\cite[Sec. 4.2]{proximal_algorithms}:%
%
\begin{align*}
\boldsymbol{x} \leftarrow \boldsymbol{x} - \lambda \nabla f\left( \boldsymbol{x} \right) \\
\boldsymbol{x} \leftarrow \text{prox}_{\lambda g} \left( \boldsymbol{x} \right)
.\end{align*}
%
Since $g$ is minimized with the proximal operator and is thus not required
to be differentiable, it can be used to encode the constraints of the problem.
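The two steps above can be sketched in code for the lasso problem, taking
$f\left( \boldsymbol{x} \right) = \frac{1}{2}\lVert A\boldsymbol{x} - \boldsymbol{b} \rVert_2^2$
as the differentiable term and $g\left( \boldsymbol{x} \right) = \lambda \lVert \boldsymbol{x} \rVert_1$
as the non-differentiable term, whose proximal operator is componentwise soft
thresholding. The function names below are illustrative, not part of any library:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrink each component toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_lasso(A, b, reg, n_iter=500):
    # Minimize 0.5*||Ax - b||^2 + reg*||x||_1 with the proximal gradient method.
    # Step size 1/||A||_2^2 bounds the Lipschitz constant of the gradient of f.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * A.T @ (A @ x - b)   # gradient descent step on f
        x = soft_threshold(x, step * reg)  # proximal step on g
    return x
```

With $A = I$, the solution is simply the soft-thresholded data
$\text{prox}_{\lambda \lVert \cdot \rVert_1}\left( \boldsymbol{b} \right)$, which
makes the sketch easy to check by hand.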
A special case of convex optimization problems are \textit{linear programs}.
These are problems where the objective function is linear and the constraints
consist of linear equalities and inequalities.
Generally, any linear program can be expressed in \textit{standard form}%
\footnote{The inequality $\boldsymbol{x} \ge \boldsymbol{0}$ is to be
interpreted componentwise.}