Randomized Algorithms (Fall 2011)/Linear Programming

From TCS Wiki
Revision as of 09:37, 6 November 2011 by Etone

Linear Programming

Given a function [math]\displaystyle{ f:\mathbb{R}^n\rightarrow\mathbb{R} }[/math] and a set [math]\displaystyle{ \mathcal{F}\subseteq\mathbb{R}^n }[/math], an optimization problem is the problem of finding an [math]\displaystyle{ x\in\mathcal{F} }[/math] with the optimal (minimum) [math]\displaystyle{ f(x) }[/math]. Formally, the problem can be expressed as:

[math]\displaystyle{ \begin{align} \mbox{minimize} & \quad f(x)\\ \mbox{subject to} &\quad x\in\mathcal{F} \end{align} }[/math]

where [math]\displaystyle{ x\in\mathbb{R}^n }[/math] is a vector of [math]\displaystyle{ n }[/math] variables. For the problem of maximizing a function [math]\displaystyle{ f }[/math], we simply minimize [math]\displaystyle{ -f }[/math] instead.

We call the function [math]\displaystyle{ f }[/math] the objective function and call any [math]\displaystyle{ x\in\mathcal{F} }[/math] a feasible solution of the problem. A feasible solution that minimizes [math]\displaystyle{ f(x) }[/math] is called an optimal solution. Our task is to find an optimal solution.

The feasible set [math]\displaystyle{ \mathcal{F} }[/math] is usually given by a number of constraints [math]\displaystyle{ P_1,P_2,\ldots,P_m }[/math], which are predicates defined on [math]\displaystyle{ x }[/math]. An [math]\displaystyle{ x }[/math] is a feasible solution if it satisfies all the constraints.

A linear programming (LP) problem is an optimization problem with a linear objective function subject to a number of linear constraints. Formally, an LP is a problem that can be expressed in the following form:

[math]\displaystyle{ \begin{align} \mbox{minimize} & \quad c_1x_1+c_2x_2+\cdots+c_nx_n\\ \mbox{subject to} & \quad a_{11}x_1+a_{12}x_2+\cdots+a_{1n}x_n\le b_1\\ & \quad a_{21}x_1+a_{22}x_2+\cdots+a_{2n}x_n\le b_2\\ & \qquad\qquad \vdots\\ & \quad a_{m1}x_1+a_{m2}x_2+\cdots+a_{mn}x_n\le b_m\\ \end{align} }[/math]

where [math]\displaystyle{ a_{ij} }[/math], [math]\displaystyle{ b_i }[/math] and [math]\displaystyle{ c_j }[/math] are constants and [math]\displaystyle{ x_j }[/math] are variables. For a maximization problem, or for constraints given by "[math]\displaystyle{ \ge }[/math]", we can multiply the coefficients by [math]\displaystyle{ -1 }[/math] and still write the LP in the above form.
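As a small illustrative sketch of this reduction (the function name and data layout are our own, not from the notes), a maximization objective and "[math]\displaystyle{ \ge }[/math]" or "[math]\displaystyle{ = }[/math]" constraints can be rewritten mechanically into the "[math]\displaystyle{ \le }[/math]" minimization form:

```python
def to_canonical(sense, c, constraints):
    """Rewrite an LP as: minimize c.x subject to A x <= b.

    'constraints' is a list of (row, relation, rhs) with relation in
    {"<=", ">=", "="}; an equality becomes a pair of inequalities.
    """
    if sense == "max":              # maximize c.x  ==  minimize (-c).x
        c = [-cj for cj in c]
    A, b = [], []
    for row, rel, rhs in constraints:
        if rel in ("<=", "="):
            A.append(list(row))
            b.append(rhs)
        if rel in (">=", "="):      # a.x >= r  is the same as  (-a).x <= -r
            A.append([-a for a in row])
            b.append(-rhs)
    return c, A, b
```

For example, "maximize [math]\displaystyle{ 3x_1+2x_2 }[/math] subject to [math]\displaystyle{ x_1+x_2\le 4 }[/math], [math]\displaystyle{ x_1\ge 1 }[/math]" becomes "minimize [math]\displaystyle{ -3x_1-2x_2 }[/math] subject to [math]\displaystyle{ x_1+x_2\le 4 }[/math], [math]\displaystyle{ -x_1\le -1 }[/math]".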

We can describe the program in vector form. Let [math]\displaystyle{ x }[/math] be a vector of [math]\displaystyle{ n }[/math] variables. Let [math]\displaystyle{ A=(a_{ij}) }[/math] be an [math]\displaystyle{ m\times n }[/math] matrix of constant entries, [math]\displaystyle{ b=(b_1,\ldots,b_m) }[/math] be a vector of [math]\displaystyle{ m }[/math] constant entries, and [math]\displaystyle{ c=(c_1,\ldots,c_n) }[/math] be a vector of [math]\displaystyle{ n }[/math] constant entries. Then an LP is expressed as:

[math]\displaystyle{ \begin{align} \mbox{minimize} & \quad c^T x\\ \mbox{subject to} & \quad Ax\le b \end{align} }[/math]

We call it the canonical form of linear programming.

The geometry of LPs

For the [math]\displaystyle{ n }[/math]-dimensional space [math]\displaystyle{ \mathbb{R}^n }[/math], a linear constraint [math]\displaystyle{ ax\le b }[/math] specifies a halfspace. The feasible set specified by [math]\displaystyle{ m }[/math] constraints together is an intersection of [math]\displaystyle{ m }[/math] halfspaces, thus a convex polyhedron (a convex polytope when it is bounded).

The convex polytope is defined by [math]\displaystyle{ Ax\le b }[/math].
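For instance, deciding whether a given point lies in the polytope is just a membership test against every halfspace. A minimal sketch (the helper name is ours):

```python
def in_polytope(A, b, x):
    # x satisfies A x <= b iff it lies in each halfspace a_i . x <= b_i.
    return all(sum(a * xj for a, xj in zip(row, x)) <= bi
               for row, bi in zip(A, b))
```

With [math]\displaystyle{ A }[/math] encoding [math]\displaystyle{ x_1+x_2\le 4 }[/math], [math]\displaystyle{ x_1\ge 0 }[/math], [math]\displaystyle{ x_2\ge 0 }[/math] (a triangle), the point [math]\displaystyle{ (1,2) }[/math] is feasible while [math]\displaystyle{ (5,0) }[/math] is not.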

The LP can be thought of as follows: given an [math]\displaystyle{ n }[/math]-dimensional polytope and a linear (affine) function [math]\displaystyle{ f:\mathbb{R}^n\rightarrow \mathbb{R} }[/math], look for a point [math]\displaystyle{ x }[/math] in the polytope with the smallest function value [math]\displaystyle{ f(x) }[/math].

We can choose a number of constraints in the system [math]\displaystyle{ Ax\le b }[/math] and make them hold with equality. This defines an affine subspace of [math]\displaystyle{ \mathbb{R}^n }[/math]. In particular, if we choose [math]\displaystyle{ n }[/math] linearly independent constraints to form an [math]\displaystyle{ n\times n }[/math] submatrix [math]\displaystyle{ A' }[/math] and the corresponding [math]\displaystyle{ n }[/math]-dimensional vector [math]\displaystyle{ b' }[/math], solving [math]\displaystyle{ A'x=b' }[/math] gives us exactly one point [math]\displaystyle{ x }[/math]. If this [math]\displaystyle{ x }[/math] is in the polytope, i.e. [math]\displaystyle{ x }[/math] satisfies [math]\displaystyle{ Ax\le b }[/math], we call such [math]\displaystyle{ x }[/math] a vertex of the polytope [math]\displaystyle{ Ax\le b }[/math]. In the context of LP, it is also called a basic feasible solution (bfs) of the LP.

A key observation for LP is that whenever the LP has an optimal solution, there is a basic feasible solution that is optimal, i.e. the optimum is attained at a vertex of the polytope.
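This observation already yields a (very slow) exact algorithm: enumerate every choice of [math]\displaystyle{ n }[/math] constraints, solve [math]\displaystyle{ A'x=b' }[/math], and keep the feasible intersection point with the smallest objective value. A sketch for the two-dimensional case, in exact rational arithmetic (the function names are ours; the running time is [math]\displaystyle{ \binom{m}{n} }[/math] system solves, so this is exponential in general):

```python
from fractions import Fraction
from itertools import combinations

def solve_2x2(A, b):
    # Solve the 2x2 system A x = b by Cramer's rule; None if singular,
    # i.e. if the two chosen constraints are not linearly independent.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    if det == 0:
        return None
    return (Fraction(b[0] * A[1][1] - b[1] * A[0][1], det),
            Fraction(A[0][0] * b[1] - A[1][0] * b[0], det))

def brute_force_lp(c, A, b):
    # Minimize c.x over {x in R^2 : A x <= b} by enumerating all basic
    # feasible solutions: make two constraints tight, solve for their
    # intersection point, and keep it only if it satisfies every constraint.
    best = None
    for i, j in combinations(range(len(A)), 2):
        v = solve_2x2([A[i], A[j]], [b[i], b[j]])
        if v is None:
            continue
        if all(sum(a * x for a, x in zip(row, v)) <= bi
               for row, bi in zip(A, b)):
            val = sum(cj * xj for cj, xj in zip(c, v))
            if best is None or val < best[0]:
                best = (val, v)
    return best  # (optimal value, optimal vertex), or None if no bfs exists
```

On "minimize [math]\displaystyle{ -3x_1-2x_2 }[/math] subject to [math]\displaystyle{ x_1+x_2\le 4 }[/math], [math]\displaystyle{ x_1+3x_2\le 6 }[/math], [math]\displaystyle{ x_1,x_2\ge 0 }[/math]", this returns the vertex [math]\displaystyle{ (4,0) }[/math] with value [math]\displaystyle{ -12 }[/math].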

The simplex algorithms

The simplex algorithm, due to George Dantzig, solves a linear program by moving from vertex to vertex of the convex polytope, each time making some progress towards optimizing the objective function. Eventually the algorithm reaches a vertex which is a local optimum. Over a convex feasible set, a locally optimal point is also globally optimal, so the simplex algorithm returns an optimal solution in finitely many steps.

The problem with the simplex algorithm is that on some bad polytopes it takes exponentially many steps to reach the optimum. The simplex algorithm is actually a class of algorithms parameterized by pivoting rules, which describe how to move locally from one vertex to another. The original pivoting rule proposed by Dantzig has exponential worst-case time complexity (though it works very well in practice). To date, it is still unknown whether there exists any deterministic simplex algorithm with sub-exponential worst-case complexity.
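To make the vertex-walking concrete, here is a minimal tableau implementation with Dantzig's pivoting rule (choose the entering variable with the most negative reduced cost). It assumes the LP is given as "maximize [math]\displaystyle{ c^Tx }[/math] subject to [math]\displaystyle{ Ax\le b }[/math], [math]\displaystyle{ x\ge 0 }[/math]" with [math]\displaystyle{ b\ge 0 }[/math], so that the all-slack basis is an initial vertex; it uses exact rationals and has no anti-cycling safeguard, so it is a teaching sketch rather than a robust solver:

```python
from fractions import Fraction

def simplex(c, A, b):
    # Maximize c.x subject to A x <= b, x >= 0, assuming b >= 0.
    m, n = len(A), len(c)
    # Tableau: one row per constraint (with slack columns), objective row last.
    T = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1 if i == k else 0) for k in range(m)] + [Fraction(b[i])]
         for i in range(m)]
    T.append([Fraction(-c[j]) for j in range(n)] + [Fraction(0)] * (m + 1))
    basis = list(range(n, n + m))          # slack variables start in the basis
    while True:
        # Entering column: most negative reduced cost (Dantzig's rule).
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= 0:
            break                          # local, hence global, optimum
        # Leaving row: minimum ratio test keeps the next point feasible.
        rows = [i for i in range(m) if T[i][col] > 0]
        if not rows:
            raise ValueError("LP is unbounded")
        row = min(rows, key=lambda i: T[i][-1] / T[i][col])
        # Pivot: this step moves from the current vertex to an adjacent one.
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and T[i][col] != 0:
                f = T[i][col]
                T[i] = [u - f * w for u, w in zip(T[i], T[row])]
        basis[row] = col
    x = [Fraction(0)] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]                    # optimal point and optimal value
```

On "maximize [math]\displaystyle{ 3x_1+2x_2 }[/math] subject to [math]\displaystyle{ x_1+x_2\le 4 }[/math], [math]\displaystyle{ x_1+3x_2\le 6 }[/math], [math]\displaystyle{ x_1,x_2\ge 0 }[/math]", a single pivot walks from the vertex [math]\displaystyle{ (0,0) }[/math] to the optimal vertex [math]\displaystyle{ (4,0) }[/math] with value [math]\displaystyle{ 12 }[/math].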

People have tried randomized pivoting rules, and some amazing progress has been made:

  • Kalai 1992: there is a randomized simplex algorithm with sub-exponential time complexity.
  • Kelner-Spielman 2006: there is a polynomial time randomized simplex algorithm.

Although we do not know whether there exists a deterministic poly-time simplex algorithm, we do know that LP can be solved by deterministic poly-time algorithms, i.e. LP is in P. The following two algorithms use different ideas from the simplex algorithm, and both run in polynomial time:

  • The ellipsoid algorithm.
  • Interior point methods.

Although these algorithms guarantee polynomial time complexity in the worst case, their practical performance is often worse than that of the simplex algorithms. However, the ideas behind these algorithms can be used to solve more general mathematical programs, such as convex programs.