Combinatorics (Fall 2010)/Duality, Matroid
Latest revision as of 10:44, 4 January 2011
Duality
Consider the following LP:
- [math]\displaystyle{ \begin{align} \text{minimize} && 7x_1+x_2+5x_3\\ \text{subject to} && x_1-x_2+3x_3 &\ge 10\\ && 5x_1+2x_2-x_3 &\ge 6\\ && x_1,x_2,x_3 &\ge 0 \end{align} }[/math]
Let [math]\displaystyle{ OPT }[/math] be the value of the optimal solution. We want to estimate upper and lower bounds for [math]\displaystyle{ OPT }[/math].
Since [math]\displaystyle{ OPT }[/math] is the minimum over the feasible set, the objective value of every feasible solution is an upper bound for [math]\displaystyle{ OPT }[/math]. For example, [math]\displaystyle{ \boldsymbol{x}=(2,1,3) }[/math] is a feasible solution, thus [math]\displaystyle{ OPT\le 7\cdot 2+1+5\cdot 3=30 }[/math].
For the lower bound, the optimal solution must satisfy the two constraints:
- [math]\displaystyle{ \begin{align} x_1-x_2+3x_3 &\ge 10,\\ 5x_1+2x_2-x_3 &\ge 6.\\ \end{align} }[/math]
Since the [math]\displaystyle{ x_i }[/math]'s are restricted to be nonnegative, term-by-term comparison of coefficients shows that
- [math]\displaystyle{ 7x_1+x_2+5x_3\ge(x_1-x_2+3x_3)+(5x_1+2x_2-x_3)\ge 16. }[/math]
The idea behind this lower bound process is that we are finding suitable nonnegative multipliers (in the above case the multipliers are all 1s) for the constraints so that when we take their sum, the coefficient of each [math]\displaystyle{ x_i }[/math] in the sum is dominated by the coefficient in the objective function. It is important to ensure that the multipliers are nonnegative, so they do not reverse the direction of the constraint inequality.
To find the best lower bound, we need to choose the multipliers in such a way that the sum is as large as possible. Interestingly, the problem of finding the best lower bound can be formulated as another LP:
- [math]\displaystyle{ \begin{align} \text{maximize} && 10y_1+6y_2\\ \text{subject to} && y_1+5y_2 &\le 7\\ && -y_1+2y_2 &\le 1\\ &&3y_1-y_2 &\le 5\\ && y_1,y_2&\ge 0 \end{align} }[/math]
Here [math]\displaystyle{ y_1 }[/math] and [math]\displaystyle{ y_2 }[/math] were chosen to be nonnegative multipliers for the first and the second constraint, respectively. We call the first LP the primal program and the second LP the dual program. By definition, every feasible solution to the dual program gives a lower bound for the primal program.
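The bounding argument above can be checked numerically. The following sketch (not part of the original notes) uses the example data, with the second constraint read as [math]\displaystyle{ 5x_1+2x_2-x_3\ge 6 }[/math], the version consistent with the dual program above:

```python
# Numeric check of the two bounds for the example LP.
c = [7, 1, 5]
A = [[1, -1, 3],
     [5, 2, -1]]
b = [10, 6]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# A primal feasible point gives an upper bound on OPT.
x = [2, 1, 3]
assert all(dot(row, x) >= bi for row, bi in zip(A, b))
upper = dot(c, x)                                  # 30

# Dual feasible multipliers give a lower bound on OPT: the combined
# coefficient of each x_i is dominated by the objective coefficient.
y = [1, 1]
assert all(dot(col, y) <= cj for col, cj in zip(zip(*A), c))
lower = dot(b, y)                                  # 16

# the sandwich  y^T b <= y^T A x <= c^T x
assert lower <= dot(y, [dot(row, x) for row in A]) <= upper
print(lower, upper)
```

So any pair of feasible primal/dual points already certifies [math]\displaystyle{ 16\le OPT\le 30 }[/math].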
LP duality
Given an LP in canonical form, called the primal LP:
- [math]\displaystyle{ \begin{align} \text{minimize} && \boldsymbol{c}^T\boldsymbol{x}\\ \text{subject to} && A\boldsymbol{x} &\ge\boldsymbol{b}\\ && \boldsymbol{x} &\ge \boldsymbol{0} \end{align} }[/math]
the dual LP is defined as follows:
- [math]\displaystyle{ \begin{align} \text{maximize} && \boldsymbol{b}^T\boldsymbol{y}\\ \text{subject to} && A^T\boldsymbol{y} &\le\boldsymbol{c}\\ && \boldsymbol{y} &\ge \boldsymbol{0} \end{align} }[/math]
We then give some examples.
- Survival problem (diet problem)
Consider the diet problem. Suppose there are [math]\displaystyle{ n }[/math] types of natural food, each containing up to [math]\displaystyle{ m }[/math] types of vitamins. A unit of the [math]\displaystyle{ j }[/math]th food contains [math]\displaystyle{ a_{ij} }[/math] units of vitamin [math]\displaystyle{ i }[/math], and costs [math]\displaystyle{ c_j }[/math]. To stay healthy, we must consume at least [math]\displaystyle{ b_i }[/math] units of vitamin [math]\displaystyle{ i }[/math] for each [math]\displaystyle{ 1\le i\le m }[/math]. We want to minimize the total cost of the food while staying healthy. The problem can be formalized as the following LP:
- [math]\displaystyle{ \begin{align} \text{minimize} \quad& c_1x_1+c_2x_2+\cdots+c_nx_n\\ \begin{align} \text{subject to} \\ \\ \end{align} \quad & \begin{align} a_{i1}x_{1}+a_{i2}x_{2}+\cdots+a_{in}x_{n} &\ge b_{i} &\quad& \forall 1\le i\le m\\ x_{j}&\ge 0 &\quad& \forall 1\le j\le n \end{align} \end{align} }[/math]
The dual LP is
- [math]\displaystyle{ \begin{align} \text{maximize} \quad& b_1y_1+b_2y_2+\cdots+b_my_m\\ \begin{align} \text{subject to} \\ \\ \end{align} \quad & \begin{align} a_{1j}y_{1}+a_{2j}y_{2}+\cdots+a_{mj}y_{m} &\le c_{j} &\quad& \forall 1\le j\le n\\ y_{i}&\ge 0 &\quad& \forall 1\le i\le m \end{align} \end{align} }[/math]
The problem can be interpreted as follows: A food company produces [math]\displaystyle{ m }[/math] types of vitamin pills. The company wants to design a pricing system such that
- The vitamin [math]\displaystyle{ i }[/math] has a nonnegative price [math]\displaystyle{ y_i }[/math].
- The price system should be competitive with natural food: a customer cannot do better by replacing the vitamin pills with any natural food, that is, [math]\displaystyle{ \sum_{i=1}^m y_ia_{ij}\le c_j }[/math] for every [math]\displaystyle{ 1\le j\le n }[/math].
- The company wants to maximize its profit, assuming that the customer buys exactly the necessary amounts of vitamins ([math]\displaystyle{ b_i }[/math] units of vitamin [math]\displaystyle{ i }[/math]).
- Maximum flow problem
In the last lecture, we defined the maximum flow problem, whose LP is
- [math]\displaystyle{ \begin{align} \text{maximize} \quad& \sum_{v:(s,v)\in E}f_{sv}\\ \begin{align} \text{subject to} \\ \\ \\ \\ \end{align} \quad & \begin{align} f_{uv}&\le c_{uv} &\quad& \forall (u,v)\in E\\ \sum_{u:(u,v)\in E}f_{uv}-\sum_{w:(v,w)\in E}f_{vw} &=0 &\quad& \forall v\in V\setminus\{s,t\}\\ f_{uv}&\ge 0 &\quad& \forall (u,v)\in E \end{align} \end{align} }[/math]
where directed graph [math]\displaystyle{ G(V,E) }[/math] is the flow network, [math]\displaystyle{ s\in V }[/math] is the source, [math]\displaystyle{ t\in V }[/math] is the sink, and [math]\displaystyle{ c_{uv} }[/math] is the capacity of directed edge [math]\displaystyle{ (u,v)\in E }[/math].
We add to [math]\displaystyle{ E }[/math] a new edge from [math]\displaystyle{ t }[/math] to [math]\displaystyle{ s }[/math] with capacity [math]\displaystyle{ c_{ts}=\infty }[/math]. Let [math]\displaystyle{ E' }[/math] be the new edge set. The LP for the max-flow problem can be rewritten as:
- [math]\displaystyle{ \begin{align} \text{maximize} \quad& f_{ts}\\ \begin{align} \text{subject to} \\ \\ \\ \\ \end{align} \quad & \begin{align} f_{uv}&\le c_{uv} &\quad& \forall (u,v)\in E\\ \sum_{u:(u,v)\in E'}f_{uv}-\sum_{w:(v,w)\in E'}f_{vw} &\le0 &\quad& \forall v\in V\\ f_{uv}&\ge 0 &\quad& \forall (u,v)\in E' \end{align} \end{align} }[/math]
The second set of inequalities seems weaker than the original flow-conservation constraints. However, summing these inequalities over all [math]\displaystyle{ v\in V }[/math] gives exactly [math]\displaystyle{ 0 }[/math], since every edge of [math]\displaystyle{ E' }[/math] contributes once positively and once negatively; hence each inequality must in fact hold with equality, which implies flow conservation.
To obtain the dual program we introduce variables [math]\displaystyle{ d_{uv} }[/math] and [math]\displaystyle{ p_v }[/math] corresponding to the two types of inequalities in the primal. The dual LP is:
- [math]\displaystyle{ \begin{align} \text{minimize} \quad& \sum_{(u,v)\in E}c_{uv}d_{uv}\\ \begin{align} \text{subject to} \\ \\ \\ \\ \end{align} \quad & \begin{align} d_{uv}-p_u+p_v &\ge 0 &\quad& \forall (u,v)\in E\\ p_s-p_t &\ge1 \\ d_{uv} &\ge 0 &\quad& \forall (u,v)\in E\\ p_v&\ge 0 &\quad& \forall v\in V \end{align} \end{align} }[/math]
It is more helpful to consider its integer version:
- [math]\displaystyle{ \begin{align} \text{minimize} \quad& \sum_{(u,v)\in E}c_{uv}d_{uv}\\ \begin{align} \text{subject to} \\ \\ \\ \\ \end{align} \quad & \begin{align} d_{uv}-p_u+p_v &\ge 0 &\quad& \forall (u,v)\in E\\ p_s-p_t &\ge1 \\ d_{uv} &\in\{0,1\} &\quad& \forall (u,v)\in E\\ p_v&\in\{0,1\} &\quad& \forall v\in V \end{align} \end{align} }[/math]
From the last lecture, we know that the LP for max-flow is totally unimodular, and so is this dual LP; therefore the optimal solutions to the integer program are also optimal solutions to the LP.
The variables [math]\displaystyle{ p_v }[/math] define a bipartition of the vertex set [math]\displaystyle{ V }[/math]. Let [math]\displaystyle{ S=\{v\in V\mid p_v=1\} }[/math], with complement [math]\displaystyle{ \bar{S}=\{v\in V\mid p_v=0\} }[/math].
For 0/1-valued variables, the only way to satisfy [math]\displaystyle{ p_s-p_t\ge1 }[/math] is to have [math]\displaystyle{ p_s=1 }[/math] and [math]\displaystyle{ p_t=0 }[/math]. Therefore, [math]\displaystyle{ (S,\bar{S}) }[/math] is an [math]\displaystyle{ s }[/math]-[math]\displaystyle{ t }[/math] cut.
In an optimal solution, [math]\displaystyle{ d_{uv}=1 }[/math] if and only if [math]\displaystyle{ u\in S,v\in\bar{S} }[/math] and [math]\displaystyle{ (u,v)\in E }[/math]. Therefore, the objective value of an optimal solution, [math]\displaystyle{ \sum_{u\in S,v\not\in S\atop (u,v)\in E}c_{uv} }[/math], is the capacity of the minimum [math]\displaystyle{ s }[/math]-[math]\displaystyle{ t }[/math] cut [math]\displaystyle{ (S,\bar{S}) }[/math].
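The min-cut reading of the dual can be checked by brute force on a tiny network (the graph and capacities below are made up for illustration): every 0/1 assignment of the [math]\displaystyle{ p_v }[/math] with [math]\displaystyle{ p_s=1,p_t=0 }[/math] is an [math]\displaystyle{ s }[/math]-[math]\displaystyle{ t }[/math] cut, and we simply enumerate them all.

```python
from itertools import combinations

# capacities of the directed edges of a toy flow network
cap = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1, ('a', 't'): 2, ('b', 't'): 3}
V = {'s', 'a', 'b', 't'}

def cut_capacity(S):
    # total capacity of edges leaving S, i.e. edges with d_uv = 1
    return sum(c for (u, v), c in cap.items() if u in S and v not in S)

# p_s = 1 and p_t = 0 force s in S and t not in S; enumerate the rest
inner = sorted(V - {'s', 't'})
min_cut = min(cut_capacity({'s'} | set(extra))
              for r in range(len(inner) + 1)
              for extra in combinations(inner, r))
print(min_cut)   # equals the maximum flow value by strong duality
```

For this network the minimum cut has capacity 5, matching the flow that routes 3 units through a and 2 through b.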
Duality theorems
Let the primal LP be:
- [math]\displaystyle{ \begin{align} \text{minimize} && \boldsymbol{c}^T\boldsymbol{x}\\ \text{subject to} && A\boldsymbol{x} &\ge\boldsymbol{b}\\ && \boldsymbol{x} &\ge \boldsymbol{0} \end{align} }[/math]
Its dual LP is:
- [math]\displaystyle{ \begin{align} \text{maximize} && \boldsymbol{b}^T\boldsymbol{y}\\ \text{subject to} && A^T\boldsymbol{y} &\le\boldsymbol{c}\\ && \boldsymbol{y} &\ge \boldsymbol{0} \end{align} }[/math]
Theorem - The dual of a dual is the primal.
Proof. The dual program can be written as the following minimization in canonical form:
- [math]\displaystyle{ \begin{align} \min && -\boldsymbol{b}^T\boldsymbol{y}\\ \text{s.t.} && -A^T\boldsymbol{y} &\ge-\boldsymbol{c}\\ && \boldsymbol{y} &\ge \boldsymbol{0} \end{align} }[/math]
Its dual is:
- [math]\displaystyle{ \begin{align} \max && -\boldsymbol{c}^T\boldsymbol{x}\\ \text{s.t.} && -A\boldsymbol{x} &\le-\boldsymbol{b}\\ && \boldsymbol{x} &\ge \boldsymbol{0} \end{align} }[/math]
which is equivalent to the primal:
- [math]\displaystyle{ \begin{align} \min && \boldsymbol{c}^T\boldsymbol{x}\\ \text{s.t.} && A\boldsymbol{x} &\ge\boldsymbol{b}\\ && \boldsymbol{x} &\ge \boldsymbol{0} \end{align} }[/math]
- [math]\displaystyle{ \square }[/math]
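The proof above is mechanical enough to check in code. The sketch below (the triple encoding and function names are our own) represents a canonical-form LP [math]\displaystyle{ \min\boldsymbol{c}^T\boldsymbol{x} }[/math] s.t. [math]\displaystyle{ A\boldsymbol{x}\ge\boldsymbol{b},\boldsymbol{x}\ge\boldsymbol{0} }[/math] as the triple [math]\displaystyle{ (\boldsymbol{c},A,\boldsymbol{b}) }[/math]; its dual, rewritten back into canonical min-form, is [math]\displaystyle{ (-\boldsymbol{b},-A^T,-\boldsymbol{c}) }[/math], and applying the map twice returns the original triple.

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def negate(v):
    # negate a vector, or a matrix given as a list of rows
    return [[-x for x in row] for row in v] if isinstance(v[0], list) else [-x for x in v]

def dual_in_canonical_form(lp):
    c, A, b = lp
    # dual:  max b^T y,  A^T y <= c,  y >= 0
    #     == min (-b)^T y,  (-A^T) y >= -c,  y >= 0
    return (negate(b), negate(transpose(A)), negate(c))

lp = ([7, 1, 5], [[1, -1, 3], [5, 2, -1]], [10, 6])   # a small example LP
assert dual_in_canonical_form(dual_in_canonical_form(lp)) == lp
print("dual of dual == primal")
```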
We have shown that feasible solutions of a dual program can be used to lower bound the optimum of the primal program. This is formalized by the following important theorem.
Theorem (Weak duality theorem) - If there exists an optimal solution to the primal LP:
- [math]\displaystyle{ \begin{align} \min && \boldsymbol{c}^T\boldsymbol{x}\\ \text{s.t.} && A\boldsymbol{x} &\ge\boldsymbol{b}\\ && \boldsymbol{x} &\ge \boldsymbol{0} \end{align} }[/math]
- then,
- [math]\displaystyle{ \begin{align} \begin{align} \min && \boldsymbol{c}^T\boldsymbol{x}\\ \text{s.t.} && A\boldsymbol{x} &\ge\boldsymbol{b}\\ && \boldsymbol{x} &\ge \boldsymbol{0} \end{align} &\begin{align} \ge\\ \\ \\ \end{align}&\quad \begin{align} \max && \boldsymbol{b}^T\boldsymbol{y}\\ \text{s.t.} && A^T\boldsymbol{y} &\le\boldsymbol{c}\\ && \boldsymbol{y} &\ge \boldsymbol{0} \end{align} \end{align} }[/math]
- If there exists an optimal solution to the primal LP:
Proof. Let [math]\displaystyle{ \boldsymbol{x} }[/math] be an arbitrary feasible solution to the primal LP, and [math]\displaystyle{ \boldsymbol{y} }[/math] be an arbitrary feasible solution to the dual LP.
We estimate [math]\displaystyle{ \boldsymbol{y}^TA\boldsymbol{x} }[/math] in two ways. Recall that [math]\displaystyle{ A\boldsymbol{x} \ge\boldsymbol{b} }[/math] and [math]\displaystyle{ A^T\boldsymbol{y} \le\boldsymbol{c} }[/math], thus
- [math]\displaystyle{ \boldsymbol{y}^T\boldsymbol{b}\le\boldsymbol{y}^TA\boldsymbol{x}\le\boldsymbol{c}^T\boldsymbol{x} }[/math].
Since this holds for any feasible solutions, it must also hold for the optimal solutions.
- [math]\displaystyle{ \square }[/math]
A beautiful and deeper result is that the optima of the primal LP and its dual are in fact equal. This is the strong duality theorem of linear programming.
Theorem (Strong duality theorem) - If there exists an optimal solution to the primal LP:
- [math]\displaystyle{ \begin{align} \min && \boldsymbol{c}^T\boldsymbol{x}\\ \text{s.t.} && A\boldsymbol{x} &\ge\boldsymbol{b}\\ && \boldsymbol{x} &\ge \boldsymbol{0} \end{align} }[/math]
- then,
- [math]\displaystyle{ \begin{align} \begin{align} \min && \boldsymbol{c}^T\boldsymbol{x}\\ \text{s.t.} && A\boldsymbol{x} &\ge\boldsymbol{b}\\ && \boldsymbol{x} &\ge \boldsymbol{0} \end{align} &\begin{align} =\\ \\ \\ \end{align}&\quad \begin{align} \max && \boldsymbol{b}^T\boldsymbol{y}\\ \text{s.t.} && A^T\boldsymbol{y} &\le\boldsymbol{c}\\ && \boldsymbol{y} &\ge \boldsymbol{0} \end{align} \end{align} }[/math]
- If there exists an optimal solution to the primal LP:
Matroid
The matroid is a structure shared by a class of optimization problems for which greedy algorithms work.
Kruskal's greedy algorithm for MST
Consider the minimum-weight spanning tree (MST) problem: we are given a connected undirected graph [math]\displaystyle{ G(V,E) }[/math] with positive edge weights [math]\displaystyle{ w:E\rightarrow\mathbb{R}^+ }[/math], and want to find a spanning tree [math]\displaystyle{ T }[/math] of minimum total weight [math]\displaystyle{ \sum_{e\in T}w_e }[/math].
We consider the equivalent maximization problem of finding a spanning tree with maximum weight. To see that this makes the problem no harder, replace the weight [math]\displaystyle{ w_e }[/math] of every edge [math]\displaystyle{ e\in E }[/math] by [math]\displaystyle{ W-w_e }[/math], where [math]\displaystyle{ W }[/math] is a sufficiently large constant greater than all edge weights. Since every spanning tree has the same number of edges, the minimum-weight spanning tree under the modified weights is the maximum-weight spanning tree under the original weights.
The following greedy algorithm solves the maximum-weight spanning tree problem.
Kruskal's Algorithm - [math]\displaystyle{ S=\emptyset }[/math];
- while [math]\displaystyle{ \exists e\in E }[/math] such that [math]\displaystyle{ S\cup\{e\} }[/math] is a forest
- pick such [math]\displaystyle{ e }[/math] with maximum [math]\displaystyle{ w_e }[/math];
- [math]\displaystyle{ S=S\cup\{e\} }[/math];
It is not hard to verify the correctness of this greedy algorithm. But we are more interested in the general framework underlying this algorithm.
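For concreteness, here is a sketch of Kruskal's algorithm above in Python (the vertex numbering and union-find helper are our own choices); the test "[math]\displaystyle{ S\cup\{e\} }[/math] is a forest" becomes a union-find query:

```python
def max_weight_spanning_tree(n, edges):
    """Greedy max-weight spanning tree; edges is a list of (weight, u, v)
    on vertices 0..n-1."""
    parent = list(range(n))

    def find(x):                      # root of x's union-find component
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    S = []
    for w, u, v in sorted(edges, reverse=True):   # heaviest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                  # adding (u,v) keeps S a forest
            parent[ru] = rv
            S.append((w, u, v))
    return S

# a triangle 0-1-2 plus a pendant vertex 3
edges = [(4, 0, 1), (3, 1, 2), (2, 0, 2), (5, 2, 3)]
T = max_weight_spanning_tree(4, edges)
print(sorted(T))   # keeps the edges of weight 5, 4, 3; drops weight 2
```

Running the same routine on weights [math]\displaystyle{ W-w_e }[/math] would give the minimum-weight spanning tree, as described above.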
Matroids
Let [math]\displaystyle{ X }[/math] be a finite set and [math]\displaystyle{ \mathcal{F}\subseteq 2^X }[/math] be a family of subsets of [math]\displaystyle{ X }[/math]. A member set [math]\displaystyle{ S\in\mathcal{F} }[/math] is called maximal if [math]\displaystyle{ S\cup\{x\}\not\in\mathcal{F} }[/math] for any [math]\displaystyle{ x\in X\setminus S }[/math].
For [math]\displaystyle{ Y\subseteq X }[/math], denote [math]\displaystyle{ \mathcal{F}_Y=\{S\in\mathcal{F}\mid S\subseteq Y\} }[/math]. Obviously, [math]\displaystyle{ \mathcal{F}_Y=\mathcal{F}\cap 2^Y\, }[/math].
Definition - A set system [math]\displaystyle{ \mathcal{F}\subseteq 2^X }[/math] is a matroid if it satisfies:
- (hereditary) if [math]\displaystyle{ T\subseteq S\in\mathcal{F} }[/math] then [math]\displaystyle{ T\in\mathcal{F} }[/math];
- (matroid property) for every [math]\displaystyle{ Y\subseteq X }[/math], all maximal [math]\displaystyle{ S\in\mathcal{F}_Y }[/math] have the same [math]\displaystyle{ |S| }[/math].
- A set system [math]\displaystyle{ \mathcal{F}\subseteq 2^X }[/math] is a matroid if it satisfies:
Suppose [math]\displaystyle{ \mathcal{F} }[/math] is a matroid. Some matroid terminologies:
- Each member set [math]\displaystyle{ S\in\mathcal{F} }[/math] is called an independent set.
- A maximal independent subset of a set [math]\displaystyle{ Y\subset X }[/math], i.e., a maximal [math]\displaystyle{ S\in\mathcal{F}_Y }[/math], is called a basis of [math]\displaystyle{ Y }[/math].
- The size of the maximal [math]\displaystyle{ S\in\mathcal{F}_Y }[/math] is called the rank of [math]\displaystyle{ Y }[/math], denoted [math]\displaystyle{ r(Y) }[/math].
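For tiny ground sets, the two conditions of the definition can be checked by brute force. The following sketch (the helper names are ours) tests the hereditary property and the matroid property over all [math]\displaystyle{ Y\subseteq X }[/math]:

```python
from itertools import chain, combinations

def subsets(xs):
    xs = list(xs)
    return [frozenset(s) for s in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def is_matroid(X, F):
    F = {frozenset(S) for S in F}
    # hereditary: every subset of a member is a member
    if not all(T in F for S in F for T in subsets(S)):
        return False
    # matroid property: in every Y, all maximal members of F_Y have equal size
    for Y in subsets(X):
        FY = [S for S in F if S <= Y]
        maximal = [S for S in FY if not any(S < T for T in FY)]
        if maximal and len({len(S) for S in maximal}) != 1:
            return False
    return True

X = {1, 2, 3}
# uniform matroid U_{2,3}: all subsets of {1,2,3} of size <= 2
U23 = [s for s in subsets(X) if len(s) <= 2]
print(is_matroid(X, U23))          # True

# hereditary but not a matroid: {1} and {2,3} are both maximal in F_X
bad = [frozenset(), frozenset({1}), frozenset({2}), frozenset({3}), frozenset({2, 3})]
print(is_matroid(X, bad))          # False
```

(For a hereditary family, "no strict superset in [math]\displaystyle{ \mathcal{F}_Y }[/math]" coincides with the single-element-extension definition of maximality above.)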
Graph matroids
Let [math]\displaystyle{ G(V,E) }[/math] be a graph. Define a set system with ground set [math]\displaystyle{ E }[/math] as
- [math]\displaystyle{ \mathcal{F}=\{S\subseteq E\mid \text{there is no cycle in }S\}. }[/math]
That is, [math]\displaystyle{ \mathcal{F} }[/math] is the set of all forests in [math]\displaystyle{ G }[/math].
We claim that [math]\displaystyle{ \mathcal{F} }[/math] is a matroid.
First, [math]\displaystyle{ \mathcal{F} }[/math] is hereditary since any subgraph of a forest must also be a forest.
We then verify the matroid property of [math]\displaystyle{ \mathcal{F} }[/math]. Let [math]\displaystyle{ Y\subseteq E }[/math] be an arbitrary subgraph of [math]\displaystyle{ G }[/math], and suppose the subgraph [math]\displaystyle{ (V,Y) }[/math] has [math]\displaystyle{ k }[/math] connected components (isolated vertices counted as components). For any maximal forest [math]\displaystyle{ S }[/math] in [math]\displaystyle{ Y }[/math] (i.e., [math]\displaystyle{ S }[/math] is a spanning forest of [math]\displaystyle{ Y }[/math]), it holds that [math]\displaystyle{ |S|=n-k }[/math], where [math]\displaystyle{ n=|V| }[/math]. In other words, for any [math]\displaystyle{ Y\subseteq E }[/math], all maximal members of [math]\displaystyle{ \mathcal{F}_Y }[/math] have the same cardinality.
Therefore, [math]\displaystyle{ \mathcal{F} }[/math] is a matroid. Each independent set (of matroid) is a forest in [math]\displaystyle{ G }[/math]. For any subgraph [math]\displaystyle{ Y\subseteq G }[/math], the rank of [math]\displaystyle{ Y }[/math] is the size of a spanning forest of [math]\displaystyle{ Y }[/math].
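The identity [math]\displaystyle{ |S|=n-k }[/math] can be verified numerically. The sketch below (toy data, our own union-find helper) grows a spanning forest of [math]\displaystyle{ Y }[/math] and counts the components of [math]\displaystyle{ (V,Y) }[/math]:

```python
def spanning_forest_size_and_components(n, Y):
    """Size of a maximal forest in edge set Y, and the number k of
    connected components of (V, Y), for vertices 0..n-1."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    size = 0
    for u, v in Y:
        ru, rv = find(u), find(v)
        if ru != rv:                  # edge joins two trees of the forest
            parent[ru] = rv
            size += 1
    k = len({find(x) for x in range(n)})
    return size, k

n = 6
Y = [(0, 1), (1, 2), (0, 2), (3, 4)]   # a triangle, one edge, one isolated vertex
size, k = spanning_forest_size_and_components(n, Y)
print(size, k)                          # 3 = 6 - 3, as claimed
assert size == n - k
```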
Linear matroids
Let [math]\displaystyle{ A }[/math] be an [math]\displaystyle{ m\times n }[/math] matrix. Define a set system [math]\displaystyle{ \mathcal{F}\subseteq 2^{[n]} }[/math] as
- [math]\displaystyle{ \mathcal{F}=\{S\subseteq [n]\mid S\text{ is a set of linearly independent columns in }A\}. }[/math]
[math]\displaystyle{ \mathcal{F} }[/math] is hereditary since any subset of a set of linearly independent vectors is still linearly independent.
For any subset [math]\displaystyle{ Y\subseteq [n] }[/math] of columns of [math]\displaystyle{ A }[/math], let [math]\displaystyle{ B }[/math] be the submatrix formed by these columns. Then [math]\displaystyle{ \mathcal{F}_Y }[/math] contains all sets of linearly independent columns of [math]\displaystyle{ B }[/math]. Clearly, all maximal such sets have the same size, which is the column-rank of [math]\displaystyle{ B }[/math].
Therefore, [math]\displaystyle{ \mathcal{F} }[/math] is a matroid. Each independent set (of matroid) is a linearly independent set of columns of matrix [math]\displaystyle{ A }[/math]. For any set [math]\displaystyle{ Y\subseteq[n] }[/math] of columns of matrix [math]\displaystyle{ A }[/math], the rank of [math]\displaystyle{ Y }[/math] is the column-rank of the submatrix defined by the columns in [math]\displaystyle{ Y }[/math].
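As a sketch of the rank function of a linear matroid (our own code; exact rational arithmetic is used to avoid floating-point issues), the rank of a column set [math]\displaystyle{ Y }[/math] is computed as the rank of the corresponding submatrix by Gaussian elimination:

```python
from fractions import Fraction

def column_rank(rows):
    """Rank of a matrix given as a list of rows, by Gaussian elimination
    over the rationals."""
    M = [[Fraction(x) for x in row] for row in rows]
    rank, col = 0, 0
    n_rows, n_cols = len(M), len(M[0])
    while rank < n_rows and col < n_cols:
        pivot = next((r for r in range(rank, n_rows) if M[r][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(rank + 1, n_rows):
            f = M[r][col] / M[rank][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[rank])]
        rank, col = rank + 1, col + 1
    return rank

A = [[1, 0, 1, 2],
     [0, 1, 1, 3]]

def r(Y):          # matroid rank of a set Y of column indices of A
    cols = sorted(Y)
    return column_rank([[row[j] for j in cols] for row in A])

print(r({0, 1}), r({0, 2, 3}), r({2}))   # 2 2 1
```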
Weighted matroid maximization
Consider the following weighted matroid maximization problem. Let [math]\displaystyle{ \mathcal{F}\subseteq2^X }[/math] be a matroid. We define positive weights [math]\displaystyle{ w:X\rightarrow\mathbb{R}^+ }[/math] of elements in [math]\displaystyle{ X }[/math]. Our goal is to find an independent set [math]\displaystyle{ S\in\mathcal{F} }[/math] with the maximum accumulated weight [math]\displaystyle{ \sum_{x\in S}w(x) }[/math].
We then introduce the Greedy Algorithm which finds the maximum-weight independent set.
Greedy Algorithm - [math]\displaystyle{ S=\emptyset }[/math];
- while [math]\displaystyle{ \exists x\not\in S }[/math] with [math]\displaystyle{ S\cup\{x\}\in\mathcal{F} }[/math]
- choose such [math]\displaystyle{ x }[/math] with maximum [math]\displaystyle{ w(x) }[/math];
- [math]\displaystyle{ S=S\cup\{x\} }[/math];
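The algorithm above only needs an independence oracle for [math]\displaystyle{ \mathcal{F} }[/math]. A sketch (the oracle interface and the 4-cycle example are our own) that processes elements in decreasing weight, which by the hereditary property is equivalent to repeatedly picking the heaviest addable element:

```python
def greedy_max_weight(X, w, independent):
    """Matroid greedy: scan elements heaviest-first, keep x whenever
    S | {x} stays independent."""
    S = set()
    for x in sorted(X, key=w, reverse=True):
        if independent(S | {x}):
            S.add(x)
    return S

# graphic matroid of a 4-cycle: independent sets are the acyclic edge sets
edges = {'a': (0, 1), 'b': (1, 2), 'c': (2, 3), 'd': (3, 0)}
weights = {'a': 4, 'b': 1, 'c': 3, 'd': 2}

def acyclic(S):
    parent = {}
    def find(x):
        while parent.get(x, x) != x:
            x = parent.get(x, x)
        return x
    for e in S:
        u, v = edges[e]
        ru, rv = find(u), find(v)
        if ru == rv:          # edge closes a cycle
            return False
        parent[ru] = rv
    return True

S = greedy_max_weight(edges, lambda e: weights[e], acyclic)
print(sorted(S))   # ['a', 'c', 'd']: a spanning tree of total weight 9
```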
The correctness of the greedy algorithm is due to the next theorem.
Theorem (Rado 1957; Edmonds 1970) - The greedy algorithm finds an independent set [math]\displaystyle{ S\in\mathcal{F} }[/math] with the maximum weight.
Proof. Suppose the theorem is false. Let [math]\displaystyle{ S }[/math] be the independent set returned by the greedy algorithm and let [math]\displaystyle{ T }[/math] be a maximum-weight independent set.
Suppose [math]\displaystyle{ S=\{x_1,x_2,\ldots,x_m\} }[/math], where the [math]\displaystyle{ x_i }[/math]s are chosen by the algorithm in that order. Then it is easy to see that [math]\displaystyle{ w(x_1)\ge w(x_2)\ge\cdots\ge w(x_m) }[/math].
Suppose [math]\displaystyle{ T=\{y_1,y_2,\ldots,y_\ell\} }[/math], where [math]\displaystyle{ w(y_1)\ge w(y_2)\ge\cdots\ge w(y_\ell) }[/math].
Choose the least index [math]\displaystyle{ k }[/math] such that [math]\displaystyle{ w(x_k)\lt w(y_k) }[/math]. If none exists, then we must have [math]\displaystyle{ \ell\gt m }[/math]; in this case we can let [math]\displaystyle{ k=m+1 }[/math].
In either case we know that the greedy algorithm did not add any of [math]\displaystyle{ y_1,\ldots,y_{k} }[/math] in step [math]\displaystyle{ k }[/math]. Since what it did choose has smaller weight, it must be that [math]\displaystyle{ y_i }[/math], for [math]\displaystyle{ 1\le i\le k }[/math], has the property either that [math]\displaystyle{ y_i\in\{x_1,\ldots,x_{k-1}\} }[/math] or that [math]\displaystyle{ \{x_1,\ldots,x_{k-1},y_i\}\not\in\mathcal{F} }[/math]. In other words, [math]\displaystyle{ \{x_1,\ldots,x_{k-1}\} }[/math] is a basis of [math]\displaystyle{ Y=\{x_1,\ldots,x_{k-1},y_1,\ldots,y_k\} }[/math]. But this contradicts the matroid property, since [math]\displaystyle{ \{y_1,\ldots,y_k\} }[/math], being a subset of [math]\displaystyle{ T }[/math], is also an independent subset of [math]\displaystyle{ Y }[/math] and is larger.
- [math]\displaystyle{ \square }[/math]