Convex optimization

Convex optimization, a subfield of mathematical optimization, studies the problem of minimizing convex functions over convex sets. Given a real vector space X together with a convex, real-valued function

f:\mathcal{X}\to \mathbb{R}

defined on a convex subset \mathcal{X} of X, the problem is to find a point x^* in \mathcal{X} at which f attains its smallest value, i.e., a point x^* such that

f(x^*) \le f(x) for all x \in \mathcal{X}.
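
To make the definition concrete, the following minimal sketch (not from the original article; the objective, the feasible set, the step size, and the iteration count are illustrative assumptions) minimizes the convex function f(x) = \|x - a\|^2 over the unit ball, a convex set, using projected gradient descent:

    import numpy as np

    # Minimal sketch (not from the article): minimize the convex function
    # f(x) = ||x - a||^2 over the convex set {x : ||x|| <= 1} (the unit ball)
    # using projected gradient descent.

    a = np.array([2.0, 0.0])                 # unconstrained minimizer of f

    def f(x):
        return float(np.sum((x - a) ** 2))   # convex objective

    def grad_f(x):
        return 2.0 * (x - a)                 # gradient of f

    def project_unit_ball(x):
        # Euclidean projection onto the unit ball, which is a convex set
        norm = np.linalg.norm(x)
        return x if norm <= 1.0 else x / norm

    x = np.zeros(2)                          # feasible starting point
    for _ in range(200):
        x = project_unit_ball(x - 0.1 * grad_f(x))

    print(x, f(x))   # converges to x* = (1, 0), the feasible point closest to a

Because both the objective and the feasible set are convex, the limit point is the global minimizer, here the point of the ball closest to a.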

The convexity of \mathcal{X} and f makes the powerful tools of convex analysis applicable: the Hahn–Banach theorem and the theory of subgradients lead to a particularly satisfying theory of necessary and sufficient conditions for optimality, a duality theory generalizing that for linear programming, and effective computational methods.

Convex minimization has applications in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design, data analysis and modeling, statistics (optimal design), and finance. With recent improvements in computing and in optimization theory, convex minimization is nearly as straightforward as linear programming.

Convex optimization has applications beyond minimizing convex functions: it is also useful for maximizing concave functions over convex sets and in the theory of maximizing convex functions.[1]

Theory

The following statements are true about the convex minimization problem:

  • if a local minimum exists, then it is a global minimum.
  • the set of all (global) minima is convex.
  • for each strictly convex function, if the function has a minimum, then the minimum is unique.

These results are used by the theory of convex minimization along with geometric notions from functional analysis such as the Hilbert projection theorem, the separating hyperplane theorem, and Farkas' lemma.
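
As an illustrative check of the first and third statements (an assumption-laden sketch, not part of the original article; the test function, step size, and starting points are arbitrary choices), gradient descent on a strictly convex quadratic reaches the same unique global minimizer from very different starting points:

    import numpy as np

    # Illustrative check: gradient descent on the strictly convex function
    # f(x, y) = (x - 1)^2 + 2*(y + 3)^2 reaches the same unique global
    # minimizer (1, -3) from very different starting points.

    def grad(p):
        x, y = p
        return np.array([2.0 * (x - 1.0), 4.0 * (y + 3.0)])

    def descend(p, step=0.1, iters=500):
        for _ in range(iters):
            p = p - step * grad(p)
        return p

    for start in ([10.0, 10.0], [-50.0, 7.0], [0.0, 0.0]):
        print(descend(np.array(start)))   # every run ends near (1, -3)

All three runs agree to within numerical tolerance, consistent with the uniqueness of the minimum for strictly convex functions.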

Standard form

Standard form is the usual and most intuitive form of describing a convex minimization problem. It consists of the following three parts:

  • A convex function f: \mathbb{R}^n \to \mathbb{R} to be minimized over the variable x
  • Inequality constraints of the form g_i(x) \leq 0, where the functions g_i are convex
  • Equality constraints of the form h_i(x) = 0, where the functions h_i are affine. In practice, the terms "linear" and "affine" are often used interchangeably. Such constraints can be expressed in the form h_i(x) = a_i^T x + b_i, where a_i is a column vector and b_i is a scalar.

A convex minimization problem is thus written as

\begin{align}&\underset{x}{\operatorname{minimize}}& & f(x) \\&\operatorname{subject\;to}& &g_i(x) \leq 0, \quad i = 1,\dots,m \\&&&h_i(x) = 0, \quad i = 1, \dots,p\end{align}
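
As a small standard-form instance, the sketch below uses the cvxpy modeling package (assumed to be installed; the data A and b and the particular constraints are illustrative, not from the original article): a convex quadratic objective, inequality constraints x \geq 0, and one affine equality constraint.

    import numpy as np
    import cvxpy as cp

    # Illustrative standard-form convex problem; the data are arbitrary.
    np.random.seed(0)
    A = np.random.randn(5, 3)
    b = np.random.randn(5)

    x = cp.Variable(3)

    objective = cp.Minimize(cp.sum_squares(A @ x - b))   # convex objective f(x)
    constraints = [
        x >= 0,            # inequality constraints g_i(x) = -x_i <= 0
        cp.sum(x) == 1,    # affine equality constraint h(x) = 1^T x - 1 = 0
    ]

    problem = cp.Problem(objective, constraints)
    problem.solve()
    print(problem.status, problem.value, x.value)

Because the problem is convex, the value the solver reports is, up to solver tolerance, the global minimum rather than merely a local one.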

Note that every equality constraint h(x) = 0 can be equivalently replaced by a pair of inequality constraints h(x)\leq 0 and -h(x)\leq 0. Therefore, for theoretical purposes, equality constraints are redundant; however, it can be beneficial to treat them specially in practice.

Following from this fact, it is easy to see why h_i(x) = 0 has to be affine as opposed to merely convex. If h_i is convex, the constraint h_i(x) \leq 0 defines a convex set, but -h_i is concave, so the constraint -h_i(x) \leq 0 generally does not. For example, if h_i(x) = x^2 - 1 (convex but not affine), the solution set \{x : h_i(x) = 0\} = \{-1, 1\} is not convex. Therefore, the only way for h_i(x) = 0 to describe a convex set is for h_i to be affine.

Examples

The following problems are all convex minimization problems, or can be transformed into convex minimization problems via a change of variables: least squares, linear programming, convex quadratic minimization with linear constraints, second-order cone programming, semidefinite programming, and geometric programming (after a logarithmic change of variables).

Lagrange multipliers

Consider a convex minimization problem given in standard form by a cost function f(x) and inequality constraints g_i(x)\leq 0 for i = 1, \ldots, m. Then the domain \mathcal{X} is:

\mathcal{X} = \left\lbrace{x\in X \vert g_1(x)\le0, \ldots, g_m(x)\le0}\right\rbrace.

The Lagrangian function for the problem is

L(x,\lambda_0,\ldots,\lambda_m) = \lambda_0 f(x) + \lambda_1 g_1(x) + \cdots + \lambda_m g_m(x).

For each point x in X that minimizes f over X, there exist real numbers \lambda_0, \ldots, \lambda_m, called Lagrange multipliers, that satisfy these conditions simultaneously:

  1. x minimizes L(y, \lambda_0, \lambda_1, \ldots, \lambda_m) over all y in X,
  2. \lambda_0 \ge 0, \lambda_1 \ge 0, \ldots, \lambda_m \ge 0, with at least one \lambda_k > 0,
  3. \lambda_1 g_1(x) = 0, \ldots, \lambda_m g_m(x) = 0 (complementary slackness).

If there exists a "strictly feasible point", i.e., a point z satisfying

g_1(z) < 0, \ldots, g_m(z) < 0,

then the statement above can be strengthened to assert that \lambda_0 = 1.

Conversely, if some x in X satisfies conditions 1-3 for scalars \lambda_0, \ldots, \lambda_m with \lambda_0 = 1, then x is certain to minimize f over X.
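
As a small worked instance (illustrative, not from the original article), take X = \mathbb{R}, f(x) = x^2, and the single constraint g_1(x) = 1 - x \le 0. The point z = 2 is strictly feasible, so we may take \lambda_0 = 1, and the Lagrangian becomes

L(y, 1, \lambda_1) = y^2 + \lambda_1 (1 - y),

which is minimized over y where 2y - \lambda_1 = 0. Choosing x = 1 and \lambda_1 = 2 satisfies all three conditions: x = 1 minimizes L(y, 1, 2), \lambda_1 = 2 > 0, and \lambda_1 g_1(1) = 2 \cdot 0 = 0. Hence x^* = 1 minimizes f over the feasible set \{x : x \ge 1\}, with f(x^*) = 1.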

Methods

Convex minimization problems can be solved by the following contemporary methods:[2]

  • Bundle methods (Wolfe, Lemaréchal),
  • Subgradient projection methods (Polyak), and
  • Interior-point methods (Nemirovskii and Nesterov).

Other methods of interest include cutting-plane methods, the ellipsoid method, and subgradient methods.

Subgradient methods can be implemented simply and so are widely used.[3]
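
As a minimal sketch of that simplicity (the test function and the diminishing step-size rule 1/k below are assumptions, not from the original article), the following applies subgradient steps to a convex but nondifferentiable function:

    import numpy as np

    # Minimal subgradient-method sketch: minimize the convex, nondifferentiable
    # f(x) = |x - 3| + 0.5*|x + 1|, whose minimizer is x = 3.

    def f(x):
        return abs(x - 3.0) + 0.5 * abs(x + 1.0)

    def subgradient(x):
        # one valid subgradient of f at x (sign(0) = 0 is an admissible choice)
        return np.sign(x - 3.0) + 0.5 * np.sign(x + 1.0)

    x = 10.0
    best_x, best_f = x, f(x)
    for k in range(1, 1001):
        x = x - (1.0 / k) * subgradient(x)   # subgradient step, step size 1/k
        if f(x) < best_f:                    # steps are not monotone in f,
            best_x, best_f = x, f(x)         # so track the best iterate seen
    print(best_x, best_f)                    # approaches x = 3, f(3) = 2

Because subgradient steps need not decrease the objective at every iteration, the best iterate seen so far is tracked; with diminishing step sizes its objective value converges to the minimum.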

Software

Although most general-purpose nonlinear optimization solvers such as LSSOL, LOQO, MINOS, and Lancelot work well, many software packages dealing exclusively with convex minimization problems are also available:

Convex programming languages

Convex minimization solvers

  • MOSEK (commercial, stand-alone software and Matlab interface)
  • solver.com (commercial)
  • SeDuMi (GPLv2, Matlab package)
  • SDPT3 (GPLv2, Matlab package)
  • OBOE

See also

References

  1. ^ Theorem 32.1 in Rockafellar's Convex Analysis states this maximum principle for extended real-valued functions.
  2. ^ For methods for convex minimization, see the volumes by Hiriart-Urruty and Lemaréchal (bundle) and the textbooks by Ruszczynski and Boyd and Vandenberghe (interior point).
  3. ^ Bertsekas
  • Bertsekas, Dimitri (2003). Convex Analysis and Optimization. Athena Scientific.
  • Borwein, Jonathan; Lewis, Adrian (2000). Convex Analysis and Nonlinear Optimization. Springer.
  • Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (2004). Fundamentals of Convex Analysis. Berlin: Springer.
  • Ruszczyński, Andrzej (2006). Nonlinear Optimization. Princeton University Press.

External links
