Randomized Algorithms (Fall 2011)/Linear Programming
Given a function $f:\mathbb{R}^n\rightarrow\mathbb{R}$ and a set $S\subseteq\mathbb{R}^n$, an optimization problem is the problem of finding an $x\in S$ with the optimal (minimum) $f(x)$. Formally, the problem can be expressed as:

$$
\begin{align}
\text{minimize} \quad & f(x) \\
\text{subject to} \quad & x \in S
\end{align}
$$

where $x=(x_1,x_2,\ldots,x_n)$ is a vector of $n$ variables. For the problem of maximizing a function $f$, we just minimize $-f$ instead.
We call the function $f$ the objective function and call any $x\in S$ a feasible solution of the problem. A feasible solution that minimizes $f(x)$ is called an optimal solution. Our task is to find an optimal solution.
The feasible set $S$ is usually given by a number of constraints $P_1,P_2,\ldots,P_m$, which are predicates defined on $x$. An $x$ is a feasible solution if it satisfies all the constraints.
A linear programming (LP) problem is an optimization problem with a linear objective function subject to a number of linear constraints. Formally, an LP is a problem that can be expressed in the following form:

$$
\begin{align}
\text{minimize} \quad & c_1x_1+c_2x_2+\cdots+c_nx_n \\
\text{subject to} \quad & a_{i1}x_1+a_{i2}x_2+\cdots+a_{in}x_n \ge b_i, \quad 1\le i\le m
\end{align}
$$

where the $a_{ij}$, $b_i$ and $c_j$ are constants and the $x_j$ are variables. For a maximization problem, or for constraints given by "$\le$", we can multiply the coefficients by $-1$ and still write the LP in the above form.
We can describe the program in vector form. Let $x=(x_1,x_2,\ldots,x_n)^T$ be a vector of $n$ variables. Let $A=(a_{ij})$ be an $m\times n$ matrix of constant entries, $b=(b_1,b_2,\ldots,b_m)^T$ be an $m$-dimensional vector of constant entries, and $c=(c_1,c_2,\ldots,c_n)^T$ be an $n$-dimensional vector of constant entries. Then the LP is expressed as:

$$
\begin{align}
\text{minimize} \quad & c^T x \\
\text{subject to} \quad & Ax \ge b
\end{align}
$$

We call this the canonical form of linear programming.
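As a concrete illustration of the canonical form, consider a hypothetical toy LP (not from the source): the maximization problem $\max\, 3x_1+2x_2$ subject to $x_1+x_2\le 4$, $x_1+3x_2\le 6$, $x_1,x_2\ge 0$ becomes a minimization over $Ax\ge b$ by negating the objective and the "$\le$" rows, as described above. A minimal sketch, assuming numpy is available:

```python
import numpy as np

# minimize c^T x  subject to  A x >= b
c = np.array([-3.0, -2.0])      # negated objective: minimize -(3x1 + 2x2)
A = np.array([[-1.0, -1.0],     # -(x1 +  x2) >= -4   (was x1 +  x2 <= 4)
              [-1.0, -3.0],     # -(x1 + 3x2) >= -6   (was x1 + 3x2 <= 6)
              [ 1.0,  0.0],     #   x1 >= 0
              [ 0.0,  1.0]])    #   x2 >= 0
b = np.array([-4.0, -6.0, 0.0, 0.0])

x = np.array([3.0, 1.0])        # one feasible point of the polytope
print(np.all(A @ x >= b))       # True: x satisfies A x >= b
print(c @ x)                    # -11.0, i.e. 3x1 + 2x2 = 11 at this point
```

Minimizing $c^Tx$ here is exactly maximizing the original objective, since the two differ only in sign.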
The geometry of LPs
The set of feasible solutions $\{x\in\mathbb{R}^n \mid Ax\ge b\}$ is a convex polytope, defined by the intersection of the $m$ halfspaces given by the constraints.
The LP can thus be thought of as follows: given an $n$-dimensional polytope and a linear (affine) function $f:\mathbb{R}^n\rightarrow\mathbb{R}$, look for a point $x$ in the polytope with the smallest function value $f(x)$.
We can choose a number of constraints in the system $Ax\ge b$ and make them hold with equality. This defines a subspace of $\mathbb{R}^n$. In particular, if we choose $n$ linearly independent constraints to form an $n\times n$ submatrix $A'$ and the corresponding $n$-dimensional vector $b'$, solving $A'x=b'$ gives us exactly one point $x$. If this $x$ is in the polytope, i.e. $x$ satisfies $Ax\ge b$, we call such an $x$ a vertex of the polytope $\{x\mid Ax\ge b\}$. In the context of LP, it is also called a basic feasible solution (bfs) of the LP.
A key observation for LP is that there is always a basic feasible solution which is optimal, i.e. the optimum is attained at a vertex of the polytope.
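This observation already yields a (highly inefficient) brute-force algorithm: enumerate every choice of $n$ constraints, solve for the point where they are tight, and keep the best feasible one. A sketch assuming numpy; the function name and tolerances are illustrative, not from the source:

```python
import itertools
import numpy as np

def optimal_vertex(c, A, b):
    """Minimize c.x over {x : A x >= b} by enumerating all basic
    feasible solutions; exponential in n, for illustration only."""
    m, n = A.shape
    best_x, best_val = None, np.inf
    for rows in itertools.combinations(range(m), n):
        sub = list(rows)
        if abs(np.linalg.det(A[sub])) < 1e-9:
            continue                         # constraints not linearly independent
        x = np.linalg.solve(A[sub], b[sub])  # unique point where they hold with equality
        if np.all(A @ x >= b - 1e-9):        # inside the polytope: a vertex / bfs
            if c @ x < best_val:
                best_x, best_val = x, c @ x
    return best_x, best_val

# Minimize x1 + x2 s.t. x1 >= 0, x2 >= 0, x1 + x2 >= 1.
x, val = optimal_vertex(np.array([1.0, 1.0]),
                        np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]),
                        np.array([0.0, 0.0, 1.0]))
# val is 1.0, attained at a vertex such as (0, 1) or (1, 0).
```

There are $\binom{m}{n}$ candidate vertices, so this search is exponential; the point of the algorithms below is to visit far fewer of them.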
The simplex algorithms
The simplex algorithm by George Dantzig solves a linear program by moving from vertex to vertex of the convex polytope, each time making some progress towards optimizing the objective function. Eventually the algorithm reaches a vertex which is a local optimum. Over a convex set, a locally optimal point is also globally optimal, thus the simplex algorithm returns an optimal solution in finitely many steps.
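This vertex-to-vertex walk can be sketched in code. The sketch below handles only the common special case $\max\, c^Tx$ subject to $Ax\le b$, $x\ge 0$ with $b\ge 0$ (so that the origin is an initial vertex), uses Dantzig's rule of pivoting on the most negative reduced cost, and omits anti-cycling safeguards; the function name and tolerances are illustrative assumptions, not from the source:

```python
import numpy as np

def simplex_max(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0, assuming b >= 0,
    so that the origin is an initial basic feasible solution."""
    m, n = A.shape
    # Tableau: [A | I | b] on top, objective row (-c) at the bottom.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)      # slack variables
    T[:m, -1] = b
    T[-1, :n] = -c
    basis = list(range(n, n + m))   # start at the all-slack vertex (the origin)
    while True:
        col = int(np.argmin(T[-1, :-1]))
        if T[-1, col] >= -1e-9:     # no improving direction: local = global optimum
            break
        # Min-ratio test: which constraint becomes tight first along this edge?
        ratios = [T[i, -1] / T[i, col] if T[i, col] > 1e-9 else np.inf
                  for i in range(m)]
        row = int(np.argmin(ratios))
        T[row] /= T[row, col]       # pivot: move to the adjacent vertex
        for i in range(m + 1):
            if i != row:
                T[i] -= T[i, col] * T[row]
        basis[row] = col
    x = np.zeros(n + m)
    for i, v in enumerate(basis):
        x[v] = T[i, -1]
    return x[:n], T[-1, -1]

# Maximize 3x1 + 2x2 s.t. x1 + x2 <= 4, x1 + 3x2 <= 6, x >= 0.
x, val = simplex_max(np.array([3.0, 2.0]),
                     np.array([[1.0, 1.0], [1.0, 3.0]]),
                     np.array([4.0, 6.0]))
# x is [4, 0] and val is 12: the optimum sits at a vertex of the polytope.
```

Each iteration swaps one constraint in and one out of the tight set, which is exactly a move between adjacent vertices; the choice of which column enters is the pivoting rule discussed next.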
The problem with the simplex algorithm is that on some bad polytopes, it takes exponentially many steps to reach the optimum. The simplex algorithm is actually a class of algorithms defined by various pivoting rules, which describe how to move locally from one vertex to another. The original pivoting rule proposed by Dantzig has an exponentially large worst-case time complexity (though it works very well in practice). To date, it is still unknown whether there exists any deterministic simplex algorithm with sub-exponential worst-case complexity.
People have tried randomized pivoting rules, and some amazing progress has been made:
- Kalai 1992: there is a randomized simplex algorithm with sub-exponential time complexity.
- Kelner-Spielman 2006: there is a polynomial time randomized simplex algorithm.
Although we do not know whether there exists a deterministic poly-time simplex algorithm, we do know that LP can be solved by deterministic poly-time algorithms, i.e. LP is in P. The following two algorithms use ideas different from the simplex algorithm, and both run in poly-time:
- The ellipsoid algorithm.
- Interior point methods.
Although these algorithms guarantee polynomial time complexity in the worst case, their practical performance is usually worse than that of the simplex algorithms. However, the ideas behind these algorithms can be used to solve more general mathematical programs, such as convex programs.