Advanced Algorithms (Fall 2016)/Greedy and Local Search

= Set cover =
Given <math>m</math> subsets <math>S_1,S_2,\ldots,S_m\subseteq U</math> of a universe <math>U</math> of size <math>n=|U|</math>, a <math>C\subseteq\{1,2,\ldots,m\}</math> forms a '''set cover''' if <math>U=\bigcup_{i\in C}S_i</math>, that is, <math>C</math> is a sub-collection of sets whose union "covers" all elements in the universe.


Without loss of generality, we always assume that the universe is <math>U=\bigcup_{i=1}^mS_i</math>.


This defines an important optimization problem:
{{Theorem|Set Cover Problem|
*'''Input''': <math>m</math> subsets <math>S_1,S_2,\ldots,S_m\subseteq U</math> of a universe <math>U</math> of size <math>n</math>;
*'''Output''': the smallest <math>C\subseteq\{1,2,\ldots,m\}</math> such that <math>U=\bigcup_{i\in C}S_i</math>.
}}


We can think of each instance as a bipartite graph <math>G(U,\{S_1,S_2,\ldots,S_m\}, E)</math> with <math>n</math> vertices on the left side, each corresponding to an element <math>x\in U</math>, <math>m</math> vertices on the right side, each corresponding to one of the <math>m</math> subsets <math>S_1,S_2,\ldots,S_m</math>, and a bipartite edge connecting <math>x</math> with <math>S_i</math> if and only if <math>x\in S_i</math>. By this translation the set cover problem is precisely the following problem: given as input a bipartite graph <math>G(U,V,E)</math>, find the smallest subset <math>C\subseteq V</math> of vertices on the right side that "covers" all vertices on the left side, i.e. every vertex <math>x\in U</math> on the left side is incident to some vertex in <math>C</math>.


By exchanging the roles of sets and elements in the above interpretation of set cover instances as bipartite graphs, the set cover problem can be translated to the following equivalent hitting set problem:
{{Theorem|Hitting Set Problem|
*'''Input''': <math>n</math> subsets <math>S_1,S_2,\ldots,S_n\subseteq U</math> of a universe <math>U</math> of size <math>m</math>;
*'''Output''': the smallest subset <math>C\subseteq U</math> of elements such that <math>C</math> intersects with every set <math>S_i</math> for <math>1\le i\le n</math>.
}}


== Frequency and Vertex Cover==
Given an instance of set cover problem <math>S_1,S_2,\ldots,S_m\subseteq U</math>, for every element <math>x\in U</math>, its '''frequency''', denoted as <math>frequency(x)</math>, is defined as the number of sets containing <math>x</math>. Formally,
:<math>frequency(x)=|\{i\mid x\in S_i\}|</math>.


In the hitting set version, the frequency should be defined for each set: for a set <math>S_i</math> its frequency <math>frequency(S_i)=|S_i|</math> is just the size of the set <math>S_i</math>.


The set cover problem restricted to the instances with frequency 2 becomes the vertex cover problem.  


Given an undirected graph <math>G(V,E)</math>, a '''vertex cover''' is a subset <math>C\subseteq V</math> of vertices such that every edge <math>uv\in E</math> has at least one endpoint in <math>C</math>.
{{Theorem|Vertex Cover Problem|
*'''Input''': an undirected graph <math>G(V,E)</math>
*'''Output''': the smallest <math>C\subseteq V</math> such that every edge <math>e\in E</math> is incident to at least one vertex in <math>C</math>.
}}
It is easy to compare with the hitting set problem:  
*For graph <math>G(V,E)</math>, its edges <math>e_1,e_2,\ldots,e_n\subseteq V</math> are vertex-sets of size 2.  
*A subset <math>C\subseteq V</math> of vertices is a vertex cover if and only if it is a hitting set for <math>e_1,e_2,\ldots,e_n</math>, i.e. every <math>e_i</math> intersects with <math>C</math>.
Therefore vertex cover is just set cover with frequency 2.


The vertex cover problem is '''NP-hard'''. Its decision version is among [https://en.wikipedia.org/wiki/Karp%27s_21_NP-complete_problems Karp's 21 '''NP-complete''' problems]. Since vertex cover is a special case of set cover, the set cover problem is also '''NP-hard'''.


== Greedy Algorithm==
We present our algorithms in the original set cover setting (instead of the hitting set version).
 
A natural algorithm is the greedy algorithm: sequentially add to the cover <math>C</math> the index <math>i</math> whose set <math>S_i</math> covers the largest number of ''currently uncovered'' elements, until no element is left uncovered.
 
{{Theorem|''GreedyCover''|
:'''Input:''' sets <math>S_1,S_2,\ldots,S_m</math>;
----
:initially, <math>U=\bigcup_{i=1}^mS_i</math>, and <math>C=\emptyset</math>;
:while <math>U\neq\emptyset</math> do
::find <math>i\in\{1,2,\ldots, m\}</math> with the largest <math>|S_i\cap U|</math>;
::let <math>C=C\cup\{i\}</math> and <math>U=U\setminus S_i</math>;
:return <math>C</math>;
}}
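For concreteness, here is a minimal Python sketch of ''GreedyCover'' (the function name <code>greedy_cover</code> and the toy instance are our own, purely for illustration):
<pre>
def greedy_cover(sets):
    """GreedyCover sketch: 'sets' is a list of collections S_1, ..., S_m.
    Returns a list of (0-based) indices C whose union covers U = S_1 ∪ ... ∪ S_m."""
    sets = [set(s) for s in sets]
    uncovered = set().union(*sets)          # U, the set of currently uncovered elements
    cover = []
    while uncovered:
        # greedily pick the set covering the most currently uncovered elements
        i = max(range(len(sets)), key=lambda j: len(sets[j] & uncovered))
        cover.append(i)
        uncovered -= sets[i]
    return cover

# tiny example: prints [0, 2]
print(greedy_cover([{1, 2, 3}, {3, 4}, {4, 5}, {1, 5}]))
</pre>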
 
Obviously the algorithm runs in polynomial time and always returns a set cover. We will then show how good the set cover returned by the algorithm is, compared to the optimal solution, by analyzing its approximation ratio.
 
We define the following notations:
* We enumerate all elements of the universe <math>U</math> as <math>x_1,x_2,\ldots,x_n</math>, in the order in which they are covered in the algorithm.
* For <math>t=1,2,\ldots</math>, let <math>U_t</math> denote the set of uncovered elements in the beginning of the <math>t</math>-th iteration of the algorithm.
* For the <math>k</math>-th element <math>x_k</math> covered, suppose that it is covered by <math>S_i</math> in the <math>t</math>-th iteration, define
::<math>price(x_k)=\frac{1}{|S_i\cap U_t|}</math>
: to be the average "price" to cover element <math>x_k</math> in the algorithm.
 
Observe that if <math>x_k</math> is covered by <math>S_i</math> in the <math>t</math>-th iteration, then precisely <math>|S_i\cap U_t|</math> elements, including <math>x_k</math>, become covered in that iteration, and all these elements have price <math>1/|S_i\cap U_t|</math>. Then it is easy to obtain the following lemma:
{{Theorem|Lemma 1|
:For the set cover <math>C</math> returned by the algorithm, <math>|C|=\sum_{k=1}^nprice(x_k)</math>.
}}
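To see the pricing scheme in action, the following sketch (reusing the toy instance from the <code>greedy_cover</code> sketch above, with exact fractions) records the prices during a run of the greedy algorithm and verifies the identity of Lemma 1:
<pre>
from fractions import Fraction

def greedy_cover_with_prices(sets):
    """Run GreedyCover while recording price(x) = 1/|S_i ∩ U_t| for every covered element x."""
    sets = [set(s) for s in sets]
    uncovered = set().union(*sets)
    cover, price = [], {}
    while uncovered:
        i = max(range(len(sets)), key=lambda j: len(sets[j] & uncovered))
        newly_covered = sets[i] & uncovered
        for x in newly_covered:
            price[x] = Fraction(1, len(newly_covered))   # exact arithmetic
        cover.append(i)
        uncovered -= sets[i]
    return cover, price

cover, price = greedy_cover_with_prices([{1, 2, 3}, {3, 4}, {4, 5}, {1, 5}])
print(len(cover), sum(price.values()))   # Lemma 1: both equal 2 on this instance
</pre>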
This lemma connects the size of the returned set cover to the prices of elements. The next lemma connects the price of each element to the optimal solution.
 
{{Theorem|Lemma 2|
:For each <math>x_k</math>, <math>price(x_k)\le \frac{OPT}{n-k+1}</math>, where <math>OPT</math> is the size of the optimal set cover.
}}
{{Proof|
For an instance <math>S_1,S_2,\ldots,S_m\subseteq U</math> with a universe of size <math>n=|U|</math>, let <math>C^*\subseteq\{1,2,\ldots,m\}</math> denote an optimal set cover. Then
:<math>U=\bigcup_{i\in C^*}S_i</math>.
By averaging principle, there must be an <math>S_i</math> of size
:<math>|S_i|\ge\frac{n}{|C^*|}=\frac{n}{OPT}</math>.
By the greediness of the algorithm, in the first iteration the algorithm must choose a set <math>S_i</math> of at least this size to add to the set cover <math>C</math>, which means that the first covered element <math>x_1</math>, along with all elements covered in the first iteration, is priced as
:<math>price(x_1)=\frac{1}{\max_{i}|S_i|}\le \frac{OPT}{n}</math>.
For the <math>k</math>-th element <math>x_k</math> covered by the algorithm, suppose that it is covered by <math>S_i</math> in the <math>t</math>-th iteration, and the current universe of uncovered elements is <math>U_t</math>. It holds that
:<math>|U_t|\ge n-k+1</math>,
since there are at most <math>k-1</math> elements covered before <math>x_k</math>.
 
The uncovered elements constitute a set cover instance <math>S_1\cap U_t, S_2\cap U_t, \ldots, S_m\cap U_t</math> (some of which may be empty), with universe <math>U_t</math>. Note that this smaller instance has an optimal set cover of size at most <math>OPT</math> (since the optimal set cover for the original instance also covers this smaller instance), and <math>x_k</math> is among the elements covered in the first iteration of the algorithm running on this smaller instance. Also note that the price <math>price(x_k)=\frac{1}{|S_i\cap U_t|}</math> is the same whether we regard it as assigned in the <math>t</math>-th iteration of the algorithm on the original instance or in the first iteration of the algorithm on the smaller instance. Applying the above argument to the smaller instance, we have
<math>
price(x_k)=\frac{1}{|S_i\cap U_t|}\le\frac{OPT}{|U_t|}\le\frac{OPT}{n-k+1}.
</math>
The lemma is proved.
}}
 
 
Combining Lemma 1 and Lemma 2, we have
:<math>
|C|=\sum_{k=1}^nprice(x_k) \le \sum_{k=1}^n  \frac{OPT}{n-k+1}=\sum_{k=1}^n\frac{OPT}{k}=H_n\cdot OPT,
</math>
where <math>H_n=\sum_{k=1}^n\frac{1}{k}\approx\ln n+O(1)</math> is the <math>n</math>-th [http://en.wikipedia.org/wiki/Harmonic_number Harmonic number].
 
This shows that the ''GreedyCover'' algorithm has approximation ratio <math>H_n\approx\ln n</math>.
{{Theorem|Theorem|
:For any set cover instance <math>S_1,S_2,\ldots,S_m\subseteq U</math> with optimal set cover of size <math>OPT</math>, the ''GreedyCover'' algorithm returns a set cover of size
::<math>|C|\le H_n\cdot OPT</math>,
:where <math>n=|U|</math> is the size of the universe and <math>H_n\approx\ln n</math> represents the <math>n</math>-th Harmonic number.
}}
 
Ignoring lower order terms, <math>\ln n</math> is also the best approximation ratio achievable by polynomial time algorithms, assuming that '''NP'''<math>\neq</math>'''P'''.
* [http://www.cs.mun.ca/~yzchen/papers/papers/hardness-approx-lund-yannakakis.pdf Lund and Yannakakis (1994)] showed that there is no poly-time algorithm with approximation ratio <math><\frac{1}{2}\log_2 n</math>, unless all '''NP''' problems have quasi-polynomial time algorithms (which run in time <math>n^{\mathrm{polylog}(n)}</math>).
* [http://www.cs.duke.edu/courses/spring07/cps296.2/papers/p634-feige.pdf Feige (1998)] showed that there is no poly-time algorithm with approximation ratio better than <math>(1-o(1))\ln n</math> with the same complexity assumption.
* [http://courses.cs.tau.ac.il/368-3168/03a/ACT2/articles/raz97subconstant.pdf Raz and Safra (1997)] showed that there is no poly-time algorithm with approximation ratio better than <math>c\ln n</math> for a constant <math>c</math> assuming that '''NP'''<math>\neq</math>'''P'''.
* [http://arxiv.org/pdf/1305.1979.pdf Dinur and Steurer (2014)] showed that there is no poly-time algorithm with approximation ratio better than <math>(1-o(1))\ln n</math> assuming that '''NP'''<math>\neq</math>'''P'''.
 
== Primal-Dual Algorithm==
Given an instance <math>S_1,S_2,\ldots,S_m\subseteq U</math> for set cover, the set cover problem asks for minimizing <math>|C|</math> subject to the constraints that <math>C\subseteq\{1,2,\ldots, m\}</math> and <math>U=\bigcup_{i\in C}S_i</math>, i.e. <math>C</math> is a set cover. We can define a '''dual''' problem on the same instance. The original problem, the set cover problem, is called the '''primal''' problem. Formally, the primal and dual problems are defined as follows:
:'''''Primal''''':
:: '''minimize''' <math>|C|</math>
:: '''subject to''' <math>C\subseteq \{1,2,\ldots,m\}</math>
:::::<math>U=\bigcup_{i\in C}S_i</math>
 
:'''''Dual''''':
:: '''maximize''' <math>|M|</math>
:: '''subject to''' <math>M\subseteq U</math>
:::::<math>|S_i\cap M|\le 1</math>, <math>\forall 1\le i\le m</math>
 
The dual problem is a "maximum matching" problem, where the matching is defined for a set system instead of a graph. Given an instance <math>S_1,S_2,\ldots,S_m\subseteq U</math>, an <math>M\subseteq U</math> is called a '''matching''' for <math>S_1,S_2,\ldots,S_m</math> if <math>|S_i\cap M|\le 1</math> for all <math>i=1,2,\ldots, m</math>.
 
It is easier to see that these two optimization problems are dual to each other if we write them as mathematical programs.
 
For the primal problem (set cover), for each <math>1\le i\le m</math>, let <math>x_i\in\{0,1\}</math> indicate whether <math>i\in C</math>. The set cover problem can be written as the following '''integer linear program (ILP)'''.
:'''''Primal''''':
:: '''minimize''' <math>\sum_{i=1}^m x_i</math>
:: '''subject to''' <math>\sum_{i:v\in S_i}x_i\ge 1</math>, <math>\forall v\in U</math>
:::::<math>x_i\in\{0,1\}</math>, <math>\forall 1\le i\le m</math>
 
For the dual problem (maximum matching), for each <math>v\in U</math>, let <math>y_v\in\{0,1\}</math> indicate whether <math>v\in M</math>. Then the dual problem can be written as the following ILP.
:'''''Dual''''':
:: '''maximize''' <math>\sum_{v\in U}y_v</math>
:: '''subject to''' <math>\sum_{v\in S_i}y_v\le 1</math>, <math>\forall 1\le i\le m</math>
:::::<math>y_v\in\{0,1\}</math>, <math>\forall v\in U</math>
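As a quick sanity check of the two formulations, the following minimal sketch simply evaluates the primal and dual constraints on a small hypothetical instance (the helper names <code>is_set_cover</code> and <code>is_matching</code> and the instance are ours):
<pre>
def is_set_cover(sets, universe, C):
    """Primal feasibility: every element of the universe lies in some chosen set."""
    return all(any(v in sets[i] for i in C) for v in universe)

def is_matching(sets, M):
    """Dual feasibility: every set S_i contains at most one element of M."""
    return all(len(S & M) <= 1 for S in sets)

sets = [{1, 2, 3}, {3, 4}, {4, 5}, {1, 5}]
U = {1, 2, 3, 4, 5}
C = {0, 2}   # indices of chosen sets (0-based): {1,2,3} and {4,5}
M = {2, 4}   # chosen elements
print(is_set_cover(sets, U, C), is_matching(sets, M))   # True True
# weak duality (below) predicts |M| <= |C| whenever both are feasible; here 2 <= 2
</pre>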
 
It is a fundamental fact that for a minimization problem, every feasible solution to the dual problem (which is a maximization problem) gives a lower bound for the optimal solution to the primal problem. This is called the '''weak duality''' phenomenon. The easy direction of the famous "max-flow min-cut" theorem (that the value of every flow is at most the capacity of every cut) is an instance of weak duality.
 
{{Theorem|Theorem (Weak Duality)|
:For every matching <math>M</math> and every set cover <math>C</math> for <math>S_1,S_2,\ldots,S_m\subseteq U</math>, it holds that <math>|M|\le |C|</math>.
}}
{{Proof|
If <math>M\subseteq U</math> is a matching for <math>S_1,S_2,\ldots,S_m\subseteq U</math>, then every <math>S_i</math> intersects with <math>M</math> on at most one element, which means no two elements in <math>M</math> can be covered by the same <math>S_i</math>, and hence each element in <math>M</math> requires at least one distinct <math>S_i</math> to cover it. Therefore, any set cover must use at least <math>|M|</math> sets in order to cover all elements in <math>M</math>.


More formally, for any matching <math>M</math> and set cover <math>C</math>, it holds that
:<math>
|M|=\left|\bigcup_{i\in C}(S_i\cap M)\right|\le \sum_{i\in C}|S_i\cap M|\le |C|,
</math>
where the first equality is because <math>C</math> is a set cover and the last inequality is because <math>M</math> is a matching.
}}
 
As a corollary, every matching <math>M</math> for <math>S_1,S_2,\ldots,S_m\subseteq U</math> is a lower bound for the optimal set cover <math>OPT</math>:
{{Theorem|Corollary|
:Let <math>S_1,S_2,\ldots,S_m\subseteq U</math> be an instance for set cover, and <math>OPT</math> the size of the optimal set cover.
:For every matching <math>M</math> for <math>S_1,S_2,\ldots,S_m</math>, it holds that <math>|M|\le OPT</math>.
}}
 
Now we are ready to present our algorithm. It is a greedy algorithm in the dual world, and a maximal (locally optimal) solution to the dual problem helps us find a good enough solution to the primal problem.
{{Theorem|''DualCover''|
:'''Input:''' sets <math>S_1,S_2,\ldots,S_m\subseteq U</math>;
----
:construct a ''maximal'' matching <math>M\subseteq U</math> such that <math>|S_i\cap M|\le 1</math> for all <math>i=1,2,\ldots, m</math>
::by sequentially adding elements to <math>M</math> until nothing can be added;
:return <math>C=\{i\mid S_i\cap M\neq\emptyset\}</math>
}}
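The following is a minimal Python sketch of ''DualCover''; the maximality check is done naively for clarity, and the function name <code>dual_cover</code> and the toy instance are ours:
<pre>
def dual_cover(sets):
    """DualCover sketch: greedily grow a maximal matching M (|S_i ∩ M| <= 1 for all i),
    then output the indices of all sets that intersect M."""
    sets = [set(s) for s in sets]
    universe = set().union(*sets)
    M = set()
    for x in universe:                                   # try to add each element once
        if all(len(S & (M | {x})) <= 1 for S in sets):   # adding x keeps M a matching
            M.add(x)
    return {i for i, S in enumerate(sets) if S & M}

# example with maximum frequency f = 2; the returned cover has size at most 2 * OPT
print(dual_cover([{1, 2}, {2, 3}, {3, 4}]))
</pre>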
The algorithm constructs the maximal matching <math>M</math> by sequentially adding elements into <math>M</math> until no more elements can be added. This obviously takes polynomial time.
 
It is not so obvious to see that the returned <math>C</math> is always a set cover. This is due to the maximality of the matching <math>M</math>:
* By contradiction, assume that <math>C</math> is not a set cover, which means there is an element <math>x\in U</math> such that <math>x\not\in S_i</math> for all <math>i\in C</math>. This implies that every set <math>S_i</math> containing <math>x</math> is disjoint from <math>M</math>, so <math>x\not\in M</math> and <math>M'=M\cup\{x\}</math> is still a matching, contradicting the maximality of <math>M</math>.
Therefore the <math>C</math> constructed as in the DualCover algorithm must be a set cover.
 
For the maximal matching <math>M</math> constructed by the ''DualCover'' algorithm, the output set cover is the collection of all sets which contain at least one element of <math>M</math>. Recall that the frequency <math>frequency(x)</math> of an element is defined as the number of sets <math>S_i</math> containing <math>x</math>. Then each element <math>x\in M</math> may contribute at most <math>frequency(x)</math> many sets to the set cover <math>C</math>. Therefore, it holds that
:<math>
|C|\le \sum_{x\in M}frequency(x)\le f\cdot |M|\le f\cdot OPT,
</math>
where <math>f=\max_{x\in U}frequency(x)</math> denotes the maximum frequency of all elements.
 
We proved the following <math>f</math>-approximation bound for the ''DualCover'' algorithm on set cover instances with maximum frequency <math>f</math>.
{{Theorem|Theorem|
:For any set cover instance <math>S_1,S_2,\ldots,S_m\subseteq U</math> with optimal set cover of size <math>OPT</math>, the ''DualCover'' algorithm returns a set cover of size
::<math>|C|\le f\cdot OPT</math>,
:where <math>f=\max_{x\in U}frequency(x)=\max_{x\in U}|\{i\mid x\in S_i\}|</math> is the maximum frequency.
}}
 
When the frequency <math>f=2</math>, the set cover problem becomes the vertex cover problem. And the DualCover algorithm works simply as follows:
{{Theorem|''DualCover'' for vertex cover problem|
:'''Input:''' an undirected graph <math>G(V,E)</math>;
----
: initially <math>C=\emptyset</math>;
: while <math>E\neq\emptyset</math>
:: pick an arbitrary edge <math>uv\in E</math> and add both <math>u</math> and <math>v</math> to <math>C</math>;
:: remove all edges in <math>E</math> incident to <math>u</math> or <math>v</math>;
: return <math>C</math>;
}}
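A concise Python sketch of this procedure (iterating over the edges in input order, which implicitly removes the edges already covered; the name <code>vertex_cover_2approx</code> is ours):
<pre>
def vertex_cover_2approx(edges):
    """DualCover specialized to vertex cover: repeatedly take an uncovered edge
    and add both of its endpoints to the cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # the edge (u, v) is still uncovered
            cover.add(u)
            cover.add(v)
    return cover

# example: the path 1-2-3-4; the algorithm returns {1, 2, 3, 4}, while OPT is {2, 3}
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))
</pre>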
 
Since this algorithm is just an implementation of the ''DualCover'' algorithm on the vertex cover instances (set cover instances with frequency <math>f=2</math>), by the analysis of the ''DualCover'' algorithm, it is a 2-approximation algorithm for the vertex cover problem.
 
Ignoring lower order terms, <math>2</math> is also the best approximation ratio achievable by polynomial time algorithms, under certain complexity assumptions.
* [http://www.wisdom.weizmann.ac.il/~dinuri/mypapers/vc.pdf Dinur and Safra (2005)] showed that there is no poly-time algorithm with approximation ratio <math><1.3606</math>, assuming that '''NP'''<math>\neq</math>'''P'''.
* [http://www.cs.nyu.edu/~khot/papers/vc_hard.pdf Khot and Regev (2008)] showed that there is no poly-time algorithm with approximation ratio <math>2-\epsilon</math> for any constant <math>\epsilon</math> assuming the [https://en.wikipedia.org/wiki/Unique_games_conjecture unique games conjecture].


= Scheduling =
We consider the following scheduling problem:
* There are <math>n</math> '''jobs''' to be processed.
* There are <math>m</math> identical parallel '''machines'''. Each machine can start processing jobs at time 0 and can process at most one job at a time.
* Each job <math>j=1,2,\ldots, n</math> must be processed on one of these machines for <math>p_j</math> time units without interruption. <math>p_j</math> is called the '''processing time''' of job <math>j</math>.
In a schedule each job is assigned to a machine to process starting at some time, respecting the above rules. The goal is to complete all jobs as soon as possible.
Suppose each job <math>j</math> is completed at time <math>C_j</math>; the objective is to minimize the '''makespan''' <math>C_{\max}=\max_{j}C_j</math>.
This problem is called '''minimum makespan on identical parallel machines'''. It can be equivalently described as the following load balancing problem:
{{Theorem|Minimum Makespan on Identical Parallel Machines (load balancing version)|
:'''Input''': <math>n</math> positive integers <math>p_1,p_2,\ldots,p_n</math> and a positive integer <math>m</math>;
:'''Output''': an assignment <math>\sigma:[n]\to[m]</math> which minimizes <math>C_{\max}=\max_{i\in[m]}\sum_{j:i=\sigma(j)}p_j</math>.
}}
With the [https://en.wikipedia.org/wiki/Notation_for_theoretic_scheduling_problems <math>\alpha|\beta|\gamma</math> notation for scheduling], this problem is the scheduling problem <math>P||C_{\max}</math>.
The <math>\alpha|\beta|\gamma</math> notation was introduced by [http://www.math.ucsd.edu/~ronspubs/79_03_scheduling_survey.pdf Ron Graham ''et al''.] to model scheduling problems. See this [http://www.cs.uu.nl/docs/vakken/stt/Overview.pdf note] for more details.
The problem is '''NP-hard'''. In particular, already for <math>m=2</math> machines, the problem can be used to solve the '''partition problem''', which is among [https://en.wikipedia.org/wiki/Karp%27s_21_NP-complete_problems Karp's 21 '''NP-complete''' problems].
{{Theorem|Partition Problem|
:'''Input''': a set of <math>n</math> positive integers <math>S=\{x_1,x_2,\ldots,x_n\}</math>;
:Determine whether there is a partition of <math>S</math> into <math>A</math> and <math>B</math> such that <math>\sum_{x\in A}x=\sum_{x\in B}x</math>.
}}
== Graham's ''List'' algorithm==
In a technical report at Bell Labs in 1966, Graham described a natural greedy procedure for scheduling jobs on identical parallel machines and gave an elegant analysis of its performance. It was probably the first approximation algorithm in modern times with a provable approximation ratio. Interestingly, it even predates the discovery of the notion of '''NP'''-hardness.
Graham's ''List'' algorithm takes a list of jobs as input. The '''load''' of a machine is defined as the total processing time of the jobs currently assigned to the machine.
{{Theorem|The ''List'' algorithm (Graham 1966)|
:'''Input''': a ''list'' of jobs <math>j=1,2,\ldots, n</math> with processing times <math>p_1,p_2,\ldots,p_n</math>;
------
:for <math>j=1,2,\ldots,n</math>
:: assign job <math>j</math> to the machine that currently has the smallest load;
}}
In a scheduling language, the ''List'' algorithm can be more simply described as: 
* Whenever a machine becomes idle, it starts processing the next job on the list.
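A minimal Python sketch of the ''List'' algorithm, keeping the machine loads in a heap (the function name <code>list_schedule</code> is ours); the toy instance below already hints at the tightness of the <math>\left(2-\frac{1}{m}\right)</math> bound proved next:
<pre>
import heapq

def list_schedule(p, m):
    """Graham's List algorithm sketch: assign each job, in the given order,
    to the machine with the currently smallest load; return the makespan."""
    loads = [(0, i) for i in range(m)]       # min-heap of (load, machine index)
    heapq.heapify(loads)
    for pj in p:
        load, i = heapq.heappop(loads)       # machine with the smallest load
        heapq.heappush(loads, (load + pj, i))
    return max(load for load, _ in loads)

# m(m-1) unit jobs followed by one job of length m make the (2 - 1/m) bound tight:
print(list_schedule([1, 1, 1, 1, 1, 1, 3], 3))   # makespan 5, while OPT = 3
</pre>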
It is well known that the ''List'' algorithm has approximation ratio <math>\left(2-\frac{1}{m}\right)</math>.
{{Theorem|Theorem|
:For every instance of scheduling <math>n</math> jobs with processing times <math>p_1,p_2,\ldots,p_n</math> on <math>m</math> parallel identical machines, the ''List'' algorithm finds a schedule with makespan <math>C_{\max}\le \left(2-\frac{1}{m}\right)\cdot OPT</math>, where <math>OPT</math> is the makespan of optimal schedules.
}}
{{Proof|
Obviously for any schedule the makespan is at least the maximum processing time:
:<math>OPT\ge \max_{1\le j\le n}p_j</math> 
and by averaging principle, the makespan (maximum load) is at least the average load:
:<math>OPT\ge\frac{1}{m}\sum_{j=1}^np_j</math>.
Suppose that in the schedule given by the ''List'' algorithm, job <math>\ell</math> finishes last, so the makespan <math>C_{\max}=C_\ell</math>, where <math>C_\ell</math> is the completion time of job <math>\ell</math>.
By the greediness of the ''List'' algorithm, before job <math>\ell</math> is scheduled, the machine to which job <math>\ell</math> is going to be assigned has the smallest load. By averaging principle:
:<math>C_\ell-p_\ell\le\frac{1}{m}\sum_{j\neq\ell}p_j</math>.
On the other hand,
:<math>p_\ell\le \max_{1\le j\le n}p_j</math>.
Together, we have
:<math>
C_{\max}=C_\ell\le \frac{1}{m}\sum_{j=1}^np_j+\left(1-\frac{1}{m}\right)p_\ell\le \frac{1}{m}\sum_{j=1}^np_j+\left(1-\frac{1}{m}\right)\max_{1\le j\le n}p_j\le \left(2-\frac{1}{m}\right)\cdot OPT.
</math>
}}
The analysis is tight: you can try to construct a family of instances on which the ''List'' algorithm returns schedules with makespan <math>\left(2-\frac{1}{m}\right)\cdot OPT</math>.
== Local Search ==
Another natural idea for solving optimization problems is local search. Given an instance of an optimization problem, the principle of local search is as follows:
* We start with a feasible solution.
* At each step, we try to improve the solution by modifying it ''locally'' (changing the assignment of a constant number of variables), until nothing can be further improved (thus a '''local optimum''' is reached).
For the scheduling problem, we have the following local search algorithm.
{{Theorem|''LocalSearch'' |
:start with an arbitrary schedule of <math>n</math> jobs to <math>m</math> machines;
:while (true)
::let <math>\ell</math> denote the job that finishes last in the current schedule;
::if there is a machine <math>i</math> such that job <math>\ell</math> can finish earlier if transferred to machine <math>i</math>
:::transfer job <math>\ell</math> to machine <math>i</math>;
::else break;
}}
By a similar analysis to that of the ''List'' algorithm, we can prove the same bound on the approximation ratio of the ''LocalSearch'' algorithm.
Suppose that when the algorithm stops and a local optimum is reached, job <math>\ell</math> finishes last. Then the makespan <math>C_{\max}</math> is achieved by job <math>\ell</math>'s completion time, <math>C_{\max}=C_\ell</math>.
In a local optimum, job <math>\ell</math> cannot be transferred to any other machine to improve its completion time <math>C_\ell</math>. Then by averaging principle, the starting time of job <math>\ell</math>, which is <math>C_\ell-p_\ell</math>, must be no greater than the average load contributed by all jobs except <math>\ell</math>:
:<math>C_\ell-p_\ell\le\frac{1}{m}\sum_{j\neq \ell}p_j</math>.
The rest is precisely the same as the analysis of the ''List'' algorithm. We have
:<math>C_{\max}=C_\ell\le \frac{1}{m}\sum_{j=1}^np_j+\left(1-\frac{1}{m}\right)p_\ell\le OPT+\left(1-\frac{1}{m}\right)OPT=\left(2-\frac{1}{m}\right)\cdot OPT</math>.
So the approximation ratio of the ''LocalSearch'' algorithm is <math>\left(2-\frac{1}{m}\right)</math>.
For local search, a bigger issue is to bound its running time. In the ''LocalSearch'' algorithm for scheduling, we observe that the makespan <math>C_{\max}</math> never increases. Furthermore, in each iteration, either <math>C_{\max}</math> decreases or the number of jobs that complete at time <math>C_{\max}</math> decreases. Therefore the algorithm must terminate within finitely many steps.
In order to improve the running time, we consider the following variant of the local search algorithm.
{{Theorem|''GreedyLocalSearch'' |
:start with an arbitrary schedule of <math>n</math> jobs to <math>m</math> machines;
:while (true)
::let <math>\ell</math> denote the job that finishes last in the current schedule;
::let <math>i</math> denote the machine which completes all its jobs earliest;
::if job <math>\ell</math> can finish earlier by transferring to machine <math>i</math>
:::transfer job <math>\ell</math> to machine <math>i</math>;
::else break;
}}
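A Python sketch of ''GreedyLocalSearch'' in the load balancing formulation; it is a sketch under the assumption that, on the most loaded machine, any of its jobs may be taken as the one finishing last (valid since jobs on a machine can be processed in any order), and the function name <code>greedy_local_search</code> is ours:
<pre>
def greedy_local_search(p, assignment, m):
    """GreedyLocalSearch sketch: repeatedly move a job from the most loaded machine
    to the least loaded one, as long as that job then finishes strictly earlier."""
    assignment = list(assignment)
    loads = [0] * m
    for j, i in enumerate(assignment):
        loads[i] += p[j]
    while True:
        i_max = max(range(m), key=lambda i: loads[i])    # machine defining C_max
        i_min = min(range(m), key=lambda i: loads[i])    # machine finishing earliest
        ell = next(j for j in range(len(p)) if assignment[j] == i_max)
        if loads[i_min] + p[ell] < loads[i_max]:         # job ell would finish earlier
            assignment[ell] = i_min
            loads[i_max] -= p[ell]
            loads[i_min] += p[ell]
        else:
            return max(loads), assignment

# start from a bad schedule that puts jobs 0..3 on machine 0;
# the result stays within the (2 - 1/m) * OPT bound (here makespan 4, OPT = 3)
print(greedy_local_search([3, 1, 1, 1, 1, 1], [0, 0, 0, 0, 1, 2], 3))
</pre>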
The ''GreedyLocalSearch'' algorithm has the same approximation ratio as the ''LocalSearch'' algorithm since it is a special case of the ''LocalSearch'' algorithm.
We let <math>C_{\min}</math> denote the completion time of the machine that finishes all its jobs earliest. It is easy to see that <math>C_{\min}</math> never decreases. More precisely, in each iteration either <math>C_{\min}</math> increases or the number of machines that complete all their jobs at time <math>C_{\min}</math> decreases. In each iteration, if the algorithm does not terminate, the job that finishes last is transferred to the machine that stops at time <math>C_{\min}</math>. Since a job will never be transferred to a machine which stops no earlier than its own starting time, and <math>C_{\min}</math> never decreases, a job will never be transferred twice. Therefore, the ''GreedyLocalSearch'' algorithm must terminate within <math>n</math> iterations.
{{Theorem|Theorem|
:For every instance of scheduling <math>n</math> jobs with processing times <math>p_1,p_2,\ldots,p_n</math> on <math>m</math> parallel identical machines, starting from an arbitrary schedule, the ''GreedyLocalSearch'' algorithm terminates in at most <math>n</math> iterations and always reaches a schedule with makespan <math>C_{\max}\le \left(2-\frac{1}{m}\right)\cdot OPT</math>, where <math>OPT</math> is the makespan of optimal schedules.
}}
The local search algorithm can start with an arbitrary solution. If it starts with a schedule returned by the ''List'' algorithm, it will terminate without making any change to the schedule, yet the approximation ratio <math>(2-1/m)</math> still holds. This provides another proof that the ''List'' algorithm is <math>(2-1/m)</math>-approximation.
In the analysis of the local search algorithm, we actually show that any local optimum has approximation ratio <math>\le (2-1/m)</math> to the global optimum. And the ''List'' algorithm returns a local optimum.
== Longest Processing Time (LPT)==
In the ''List'' algorithm, the jobs are presented in an arbitrary order. Intuitively, the ''List'' algorithm seems to perform better if we process the heavier jobs earlier. This gives us the following '''Longest Processing Time (LPT)''' algorithm.
{{Theorem|''LongestProcessingTime(LPT)''|
:'''Input''': a ''list'' of jobs <math>j=1,2,\ldots, n</math> with processing times <math>p_1\ge p_2\ge \cdots\ge p_n</math> in non-increasing order;
------
:for <math>j=1,2,\ldots,n</math>
:: assign job <math>j</math> to the machine that currently has the smallest load;
}}
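''LPT'' is just ''List'' applied to the jobs sorted in non-increasing order of processing time; a self-contained hedged sketch (the name <code>lpt_schedule</code> and the instance are ours):
<pre>
import heapq

def lpt_schedule(p, m):
    """LPT sketch: sort the jobs in non-increasing order of processing time,
    then greedily assign each job to the currently least loaded machine (as in List)."""
    loads = [(0, i) for i in range(m)]       # min-heap of (load, machine index)
    heapq.heapify(loads)
    for pj in sorted(p, reverse=True):
        load, i = heapq.heappop(loads)
        heapq.heappush(loads, (load + pj, i))
    return max(load for load, _ in loads)

# the instance that was bad for List is scheduled optimally by LPT
print(lpt_schedule([1, 1, 1, 1, 1, 1, 3], 3))   # makespan 3 = OPT
</pre>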
The analysis of the approximation ratio is similar to before, with the extra observation that the job that finishes last has a small processing time, since the jobs are scheduled in non-increasing order of processing times.
Suppose that job <math>\ell</math> finishes last, and the makespan is job <math>\ell</math>'s completion time <math>C_{\max}=C_\ell</math>. Due to the greediness of the algorithm, we still have that
:<math>C_\ell-p_\ell\le\frac{1}{m}\sum_{j=1}^np_j</math>.
Therefore,
:<math>C_{\max}=C_\ell\le \frac{1}{m}\sum_{j=1}^np_j+p_\ell</math>.
We still have the lower bound <math>OPT\ge \frac{1}{m}\sum_{j=1}^np_j</math>. Next we find another lower bound for <math>OPT</math> in terms of <math>p_\ell</math>.
Note that the first <math>m</math> jobs are assigned to all <math>m</math> machines, one job per machine. Without loss of generality we can assume:
* the number of jobs is greater than the number of machines, <math>n>m</math>;
* the makespan is achieved by a job other than the first <math>m</math> jobs, <math>\ell>m</math>;
since otherwise the ''LPT'' algorithm returns an optimal solution.
Therefore <math>p_\ell\le p_{m+1}</math>. And observe that even for the first <math>m+1</math> jobs with processing times <math>p_1\ge p_2\ge\cdots\ge p_{m+1}</math>, the optimal makespan is at least <math>p_m+p_{m+1}</math> by the pigeonhole principle. We have the lower bound for the optimal makespan
:<math>OPT\ge p_m+p_{m+1}\ge2p_{m+1}\ge 2p_{\ell}</math>.
Altogether, we have
:<math>C_{\max}=C_\ell\le \frac{1}{m}\sum_{j=1}^np_j+p_\ell\le OPT+\frac{1}{2}OPT=\frac{3}{2}OPT</math>.
This shows that the ''LPT'' algorithm is a <math>\frac{3}{2}</math>-approximation.
Both the analysis of the ''LPT'' algorithm and approximability of the scheduling problem can be further improved:
* With a more careful analysis, one can show that the ''LPT'' algorithm is <math>\frac{4}{3}</math>-approximation. The trick is to show by case analysis that <math>OPT\ge 3p_\ell</math> or otherwise the LPT algorithm can find an optimal solution.
* By scaling and rounding a dynamic programming, one can obtain a [https://en.wikipedia.org/wiki/Polynomial-time_approximation_scheme PTAS (Polynomial Time Approximation Scheme)] for minimum makespan scheduling on parallel identical machines.
== Online Algorithms and Competitive Ratio==
An advantage of the ''List'' algorithm is that it is an [https://en.wikipedia.org/wiki/Online_algorithm '''online algorithm''']. For an online algorithm, the input arrives one piece at a time, and the algorithm must make decision when a piece of the input arrives without seeing the future pieces. In contrast, the algorithm which can make decision after seeing the entire input is called an '''offline algorithm'''.
For the scheduling problem, the online setting is that the <math>n</math> jobs arrive one by one in a series, and the online scheduling algorithm needs to assign each job to a machine to start processing right after it arrives (or have a buffer of bounded size to temporarily store the incoming jobs). The ''List'' algorithm is an online scheduling algorithm, while the LPT algorithm is not because it needs to sort all jobs according to the processing time.
For online algorithms, a notion similar to the approximation ratio, the '''competitive ratio''' was introduced to measure the performance. For a minimization problem, we say an online algorithm has competitive ratio <math>\alpha</math>, or is <math>\alpha</math>-competitive, if for any input sequence <math>I</math>,
:<math>SOL_I\le\alpha\cdot OPT_I</math>
where <math>SOL_I</math> is the value of the solution returned by the online algorithm with the input <math>I</math> and <math>OPT_I</math> is the value of the solution returned by the optimal offline algorithm on the same input.
The competitive ratio for maximization problem can be similarly defined.
For online scheduling, it is immediate to see that the ''List'' algorithm is <math>(2-1/m)</math>-competitive.
* This competitive ratio is optimal for all deterministic online algorithms on <math>m=2</math> or <math>3</math> machines.
* For large number of machines, better competitive ratios ([https://pdfs.semanticscholar.org/7c45/1526a4b27d804734d999758ff6531819ce44.pdf 1.986], [http://people.csail.mit.edu/karger/Papers/makespan.pdf 1.945], [http://wwwmayr.in.tum.de/personen/albers/papers/ic97.pdf 1.923], [https://pdfs.semanticscholar.org/623f/f99fb6959889e6351a55c7dae56a76ccec8e.pdf 1.9201], [http://wwwmayr.in.tum.de/personen/albers/papers/stoc022.pdf 1.916]) were achieved.
* On the lower bound side, it is known that no deterministic online scheduling algorithm can achieve a competitive ratio better than 1.88 and no randomized online scheduling algorithm can achieve a competitive ratio better than <math>1/(1-1/e)</math>.
See [http://www14.in.tum.de/personen/albers/papers/sched_chapter.pdf a survey of Albers] for more details.

Latest revision as of 06:23, 29 September 2016

Set cover

Given [math]\displaystyle{ m }[/math] subsets [math]\displaystyle{ S_1,S_2,\ldots,S_m\subseteq U }[/math] of a universe [math]\displaystyle{ U }[/math] of size [math]\displaystyle{ n=|U| }[/math], a [math]\displaystyle{ C\subseteq\{1,2,\ldots,m\} }[/math] forms a set cover if [math]\displaystyle{ U=\bigcup_{i\in\mathcal{C}}S_i }[/math], that is, [math]\displaystyle{ C }[/math] is a sub-collection of sets whose union "covers" all elements in the universe.

Without loss of generality, we always assume that the universe is [math]\displaystyle{ U==\bigcup_{i=1}^mS_i }[/math].

This defines an important optimization problem:

Set Cover Problem
  • Input: [math]\displaystyle{ m }[/math] subsets [math]\displaystyle{ S_1,S_2,\ldots,S_m\subseteq U }[/math] of a universe [math]\displaystyle{ U }[/math] of size [math]\displaystyle{ n }[/math];
  • Output: the smallest [math]\displaystyle{ C\subseteq\{1,2,\ldots,m\} }[/math] such that [math]\displaystyle{ U=\bigcup_{i\in C}S_i }[/math].

We can think of each instance as a bipartite graph [math]\displaystyle{ G(U,\{S_1,S_2,\ldots,S_n\}, E) }[/math] with [math]\displaystyle{ n }[/math] vertices on the right side, each corresponding to an element [math]\displaystyle{ x\in U }[/math], [math]\displaystyle{ m }[/math] vertices on the left side, each corresponding to one of the [math]\displaystyle{ m }[/math] subsets [math]\displaystyle{ S_1,S_2,\ldots,S_m }[/math], and there is a bipartite edge connecting [math]\displaystyle{ x }[/math] with [math]\displaystyle{ S_i }[/math] if and only if [math]\displaystyle{ x\in S_i }[/math]. By this translation the set cover problem is precisely the problem of given as input a bipartite graph [math]\displaystyle{ G(U,V,E) }[/math], to find the smallest subset [math]\displaystyle{ C\subseteq V }[/math] of vertices on the right side to "cover" all vertices on the left side, such that every vertex on the left side [math]\displaystyle{ x\in U }[/math] is incident to some vertex in [math]\displaystyle{ C }[/math].

By alternating the roles of sets and elements in the above interpretation of set cover instances as bipartite graphs, the set cover problem can be translated to the following equivalent hitting set problem:

Hitting Set Problem
  • Input: [math]\displaystyle{ n }[/math] subsets [math]\displaystyle{ S_1,S_2,\ldots,S_n\subseteq U }[/math] of a universe [math]\displaystyle{ U }[/math] of size [math]\displaystyle{ m }[/math];
  • Output: the smallest subset [math]\displaystyle{ C\subseteq U }[/math] of elements such that [math]\displaystyle{ C }[/math] intersects with every set [math]\displaystyle{ S_i }[/math] for [math]\displaystyle{ 1\le i\le n }[/math].

Frequency and Vertex Cover

Given an instance of set cover problem [math]\displaystyle{ S_1,S_2,\ldots,S_m\subseteq U }[/math], for every element [math]\displaystyle{ x\in U }[/math], its frequency, denoted as [math]\displaystyle{ frequency(x) }[/math], is defined as the number of sets containing [math]\displaystyle{ X }[/math]. Formally,

[math]\displaystyle{ frequency(x)=|\{i\mid x\in S_i\}| }[/math].

In the hitting set version, the frequency should be defined for each set: for a set [math]\displaystyle{ S_i }[/math] its frequency [math]\displaystyle{ frequency(S_i)=|S_i| }[/math] is just the size of the set [math]\displaystyle{ S_i }[/math].

The set cover problem restricted to the instances with frequency 2 becomes the vertex cover problem.

Given an undirected graph [math]\displaystyle{ G(U,V) }[/math], a vertex cover is a subset [math]\displaystyle{ C\subseteq V }[/math] of vertices such that every edge [math]\displaystyle{ uv\in E }[/math] has at least one endpoint in [math]\displaystyle{ C }[/math].

Vertex Cover Problem
  • Input: an undirected graph [math]\displaystyle{ G(V,E) }[/math]
  • Output: the smallest [math]\displaystyle{ C\subseteq V }[/math] such that every edge [math]\displaystyle{ e\in E }[/math] is incident to at least one vertex in [math]\displaystyle{ C }[/math].

It is easy to compare with the hitting set problem:

  • For graph [math]\displaystyle{ G(V,E) }[/math], its edges [math]\displaystyle{ e_1,e_2,\ldots,e_n\subseteq V }[/math] are vertex-sets of size 2.
  • A subset [math]\displaystyle{ C\subseteq V }[/math] of vertices is a vertex cover if and only if it is a hitting sets for [math]\displaystyle{ e_1,e_2,\ldots,e_n }[/math], i.e. every [math]\displaystyle{ e_i }[/math] intersects with [math]\displaystyle{ C }[/math].

Therefore vertex cover is just set cover with frequency 2.

The vertex cover problem is NP-hard. Its decision version is among Karp's 21 NP-complete problems. Since vertex cover is a special case of set cover, the set cover problem is also NP-hard.

Greedy Algorithm

We present our algorithms in the original set cover setting (instead of the hitting set version).

A natural algorithm is the greedy algorithm: sequentially add such [math]\displaystyle{ i }[/math] to the cover [math]\displaystyle{ C }[/math], where each [math]\displaystyle{ S_i }[/math] covers the largest number of currently uncovered elements, until no element is left uncovered.

GreedyCover
Input: sets [math]\displaystyle{ S_1,S_2,\ldots,S_m }[/math];

initially, [math]\displaystyle{ U=\bigcup_{i=1}^mS_i }[/math], and [math]\displaystyle{ C=\emptyset }[/math];
while [math]\displaystyle{ U\neq\emptyset }[/math] do
find [math]\displaystyle{ i\in\{1,2,\ldots, m\} }[/math] with the largest [math]\displaystyle{ |S_i\cap U| }[/math];
let [math]\displaystyle{ C=C\cup\{i\} }[/math] and [math]\displaystyle{ U=U\setminus S_i }[/math];
return [math]\displaystyle{ C }[/math];

Obviously the algorithm runs in polynomial time and always returns a set cover. We will then show how good the set cover returned by the algorithm compared to the optimal solution by analyzing its approximation ratio.

We define the following notations:

  • We enumerate all elements of the universe [math]\displaystyle{ U }[/math] as [math]\displaystyle{ x_1,x_2,\ldots,x_n }[/math], in the order in which they are covered in the algorithm.
  • For [math]\displaystyle{ t=1,2,\ldots }[/math], let [math]\displaystyle{ U_t }[/math] denote the set of uncovered elements in the beginning of the [math]\displaystyle{ t }[/math]-th iteration of the algorithm.
  • For the [math]\displaystyle{ k }[/math]-th element [math]\displaystyle{ x_k }[/math] covered, supposed that it is covered by [math]\displaystyle{ S_i }[/math] in the [math]\displaystyle{ t }[/math]-th iteration, define
[math]\displaystyle{ price(x_k)=\frac{1}{|S_i\cap U_t|} }[/math]
to be the average "price" to cover element [math]\displaystyle{ x_k }[/math] in the algorithm.

Observe that if [math]\displaystyle{ x_k }[/math] is covered by [math]\displaystyle{ S_i }[/math] in the [math]\displaystyle{ t }[/math]-th iteration, then there are precisely [math]\displaystyle{ |S_i\cap U_t| }[/math] elements, including [math]\displaystyle{ x_k }[/math], become covered in that iteration, and all these elements have price [math]\displaystyle{ 1/|S_i\cap U_t| }[/math]. Then it is easy to have the following lemma:

Lemma 1
For the set cover [math]\displaystyle{ C }[/math] returned by the algorithm, [math]\displaystyle{ |C|=\sum_{k=1}^nprice(x_k) }[/math].

This lemma connect the size of the returned set cover to the prices of elements. The next lemme connects the price of each element to the optimal solution.

Lemma 2
For each [math]\displaystyle{ x_k }[/math], [math]\displaystyle{ price(x_k)\le \frac{OPT}{n-k+1} }[/math], where [math]\displaystyle{ OPT }[/math] is the size of the optimal set cover.
Proof.

For an instance [math]\displaystyle{ S_1,S_2,\ldots,S_m\subseteq U }[/math] with a universe of size [math]\displaystyle{ n=|U| }[/math], let [math]\displaystyle{ C^*\subseteq\{1,2,\ldots,m\} }[/math] denote an optimal set cover. Then

[math]\displaystyle{ U=\bigcup_{i\in C^*}S_i }[/math].

By averaging principle, there must be an [math]\displaystyle{ S_i }[/math] of size

[math]\displaystyle{ |S_i|\ge\frac{n}{|C^*|}=\frac{n}{OPT} }[/math].

By the greediness of the algorithm, in the first iteration the algorithm must choose a set [math]\displaystyle{ S_i }[/math] of at least this size to add to the set cover [math]\displaystyle{ C }[/math], which means the price the element covered at first, [math]\displaystyle{ x_1 }[/math], along with all elements covered in the first iteration, are priced as

[math]\displaystyle{ price(x_1)=\frac{1}{\max_{i}|S_i|}\le \frac{OPT}{n} }[/math].

For the [math]\displaystyle{ k }[/math]-th element covered by the algorithm, supposed that it is covered by in the [math]\displaystyle{ t }[/math]-th iteration, and the current universe for the uncovered elements is [math]\displaystyle{ U_t }[/math], it holds that

[math]\displaystyle{ |U_t|\le n-k+1 }[/math],

since there are at most [math]\displaystyle{ k-1 }[/math] elements covered before [math]\displaystyle{ x_k }[/math].

The uncovered elements constitutes a set cover instance [math]\displaystyle{ S_1\cap U_t, S_2\cap U_t, \ldots, S_m\cap U_t }[/math] (some of which may be empty), with universe [math]\displaystyle{ U_t }[/math]. Note that this smaller instance has an optimal set cover of size at most OPT (since the optimal solution for the original instance must also be an optimal solution for this smaller instance), and [math]\displaystyle{ x_k }[/math] is among the elements covered in the first iteration of the algorithm running on this smaller instance. By the above argument, it holds for the [math]\displaystyle{ price(x_k)=\frac{1}{|S_i\cap U_t|} }[/math] (also note that this value is not changed no matter as in the [math]\displaystyle{ t }[/math]-th integration of the algorithm running on the original instance or as in the first iteration of the algorithm on the smaller instance) that [math]\displaystyle{ price(x_k)=\frac{1}{|S_i\cap U_t|}\le\frac{OPT}{|U_t|}=\frac{OPT}{n-k+1}. }[/math] The lemma is proved.

[math]\displaystyle{ \square }[/math]


Combining Lemma 1 and Lemma 2, we have

[math]\displaystyle{ |C|=\sum_{k=1}^nprice(x_k) \le \sum_{k=1}^n \frac{OPT}{n-k+1}=\sum_{k=1}^n\frac{OPT}{k}=H_n\cdot OPT, }[/math]

where [math]\displaystyle{ H_n=\sum_{k=1}^n\frac{1}{k}\approx\ln n+O(1) }[/math] is the [math]\displaystyle{ n }[/math]-th Harmonic number.

This shows that the GreedyCover algorithm has approximation ratio [math]\displaystyle{ H_n\approx\ln n }[/math].

Theorem
For any set cover instance [math]\displaystyle{ S_1,S_2,\ldots,S_m\subseteq U }[/math] with optimal set cover of size [math]\displaystyle{ OPT }[/math], the GreedyCover returns a set cover of size
[math]\displaystyle{ C\le H_n\cdot {OPT} }[/math],
where [math]\displaystyle{ n=|U| }[/math] is the size of the universe and [math]\displaystyle{ H_n\approx\ln n }[/math] represents the [math]\displaystyle{ n }[/math]-th Harmonic number.

Ignoring lower order terms, [math]\displaystyle{ \ln n }[/math] is also the best approximation ratio achievable by polynomial time algorithms, assuming that NP[math]\displaystyle{ neq }[/math]P.

  • Lund and Yannakakis (1994) showed that there is no poly-time algorithm with approximation ratio [math]\displaystyle{ \lt \frac{1}{2}\log_2 n }[/math], unless all NP problems have quasi-polynomial time algorithms (which runs in time [math]\displaystyle{ n^{\mathrm{polylog}(n)} }[/math]).
  • Feige (1998) showed that there is no poly-time algorithm with approximation ratio better than [math]\displaystyle{ (1-o(1))\ln n }[/math] with the same complexity assumption.
  • Ras and Safra (1997) showed that there is no poly-time algorithm with approximation ratio better than [math]\displaystyle{ c\ln n }[/math] for a constant [math]\displaystyle{ c }[/math] assuming that NP[math]\displaystyle{ \neq }[/math]P.
  • Dinur and Steurer (2014) showed that there is no poly-time algorithm with approximation ratio better than [math]\displaystyle{ (1-o(1))\ln n }[/math] assuming that NP[math]\displaystyle{ \neq }[/math]P.

Primal-Dual Algorithm

Given an instance [math]\displaystyle{ S_1,S_2,\ldots,S_m\subseteq U }[/math] for set cover, the set cover problem asks for minimizing the size of [math]\displaystyle{ |C| }[/math] subject to the constraints that [math]\displaystyle{ C\subseteq\{1,2,\ldots, m\} }[/math] and [math]\displaystyle{ U=\bigcup_{i\in C}S_i }[/math], i.e. [math]\displaystyle{ C }[/math] is a set cover. We can define a dual problem on the same instance. The original problem, the set cover problem is called the primal problem. Formally, the primal and dual problems are defined as follows:

Primal:
minimize [math]\displaystyle{ |C| }[/math]
subject to [math]\displaystyle{ C\subseteq \{1,2,\ldots,m\} }[/math]
[math]\displaystyle{ U=\bigcup_{i\in C}S_i }[/math]
Dual:
maximize [math]\displaystyle{ |M| }[/math]
subject to [math]\displaystyle{ M\subseteq U }[/math]
[math]\displaystyle{ |S_i\cap M|\le 1 }[/math], [math]\displaystyle{ \forall 1\le i\le m }[/math]

The dual problem is a "maximum matching" problem, where the matching is defined for the set system instead of graph. Given an instance [math]\displaystyle{ S_1,S_2,\ldots,S_m\subseteq U }[/math], an [math]\displaystyle{ M\subseteq U }[/math] is called a matching for [math]\displaystyle{ S_1,S_2,\ldots,S_m }[/math] if [math]\displaystyle{ |S_i\cap M|\le 1 }[/math] for all [math]\displaystyle{ i=1,2,\ldots, m }[/math].

It is easier to see these two optimization problems are dual to each other if we write them as mathematical programs.

For the primal problem (set cover), for each [math]\displaystyle{ 1\le i\le m }[/math], let [math]\displaystyle{ x_i\in\{0,1\} }[/math] indicate whether [math]\displaystyle{ i\in C }[/math]. The set cover problem can be written as the following integer linear programming (ILP).

Primal:
minimize [math]\displaystyle{ \sum_{i=1}^m x_i }[/math]
subject to [math]\displaystyle{ \sum_{i:v\in S_i}x_i\ge 1 }[/math], [math]\displaystyle{ \forall v\in U }[/math]
[math]\displaystyle{ x_i\in\{0,1\} }[/math], [math]\displaystyle{ \forall 1\le i\le m }[/math]

For the dual problem (maximum matching), for each [math]\displaystyle{ v\in U }[/math], let [math]\displaystyle{ y_v\in\{0,1\} }[/math] indicate whether [math]\displaystyle{ v\in M }[/math]. Then the dual problem can be written as the following ILP.

Dual:
maximize [math]\displaystyle{ \sum_{v\in U}y_v }[/math]
subject to [math]\displaystyle{ \sum_{v\in S_i}y_v\le 1 }[/math], [math]\displaystyle{ \forall 1\le i\le m }[/math]
[math]\displaystyle{ y_v\in\{0,1\} }[/math], [math]\displaystyle{ \forall v\in U }[/math]

It is fundamental fact that for a minimization problem, every feasible solution to the dual problem (which is a maximization problem) is a lower bound for the optimal solution to the primal problem. This is called the weak duality phenomenon. The easy direction (that every cut is a lower bound for every flow) of the famous "max-flow min-cut" is an instance of weak duality.

Theorem (Weak Duality)
For every matching [math]\displaystyle{ M }[/math] and every vertex cover [math]\displaystyle{ C }[/math] for [math]\displaystyle{ S_1,S_2,\ldots,S_m\subseteq U }[/math], it holds that [math]\displaystyle{ |M|\le |C| }[/math].
Proof.

If [math]\displaystyle{ M\subseteq U }[/math] is a matching for [math]\displaystyle{ S_1,S_2,\ldots,S_m\subseteq U }[/math], then every [math]\displaystyle{ S_i }[/math] intersects with [math]\displaystyle{ M }[/math] on at most one element, which means no two elements in [math]\displaystyle{ M }[/math] can be covered by one [math]\displaystyle{ S_i }[/math], and hence each element in [math]\displaystyle{ M }[/math] will consume at least one distinct [math]\displaystyle{ S_i }[/math] to cover. Therefore, for any set cover, in order to cover all elements in [math]\displaystyle{ M }[/math] must cost at least [math]\displaystyle{ |M| }[/math] sets.

More formally, for any matching [math]\displaystyle{ M }[/math] and set cover [math]\displaystyle{ C }[/math], it holds that

[math]\displaystyle{ |M|=\left|\bigcup_{i\in C}(S_i\cap M)\right|\le \sum_{i\in C}|S_i\cap M|\le |C|, }[/math]

where the first equality is because [math]\displaystyle{ C }[/math] is a set cover and the last inequality is because [math]\displaystyle{ M }[/math] is a matching.

[math]\displaystyle{ \square }[/math]

As a corollary, every matching [math]\displaystyle{ M }[/math] for [math]\displaystyle{ S_1,S_2,\ldots,S_m\subseteq U }[/math] is a lower bound for the optimal set cover [math]\displaystyle{ OPT }[/math]:

Corollary
Let [math]\displaystyle{ S_1,S_2,\ldots,S_m\subseteq U }[/math] be an instance for set cover, and [math]\displaystyle{ OPT }[/math] the size of the optimal set cover.
For every matching [math]\displaystyle{ M }[/math] for [math]\displaystyle{ S_1,S_2,\ldots,S_m }[/math], it holds that [math]\displaystyle{ |M|\le OPT }[/math].

Now we are ready to present our algorithm. It is a greedy algorithm in the dual world. And the maximal (local optimal) solution to the dual problem helps us to find a good enough solution to the primal problem.

DualCover
Input: sets [math]\displaystyle{ S_1,S_2,\ldots,S_m\subseteq U }[/math];

construct a maximal matching [math]\displaystyle{ M\subseteq U }[/math] such that [math]\displaystyle{ |S_i\cap M|\le 1 }[/math] for all [math]\displaystyle{ i=1,2,\ldots, m }[/math]
by sequentially adding elements to [math]\displaystyle{ M }[/math] until nothing can be added;
return [math]\displaystyle{ C=\{i\mid S_i\cap M\neq\emptyset\} }[/math]

The algorithm constructs the maximal matching [math]\displaystyle{ M }[/math] by sequentially adding elements into [math]\displaystyle{ M }[/math] until reaching the maximality. This obviously takes polynomial time.

It is not so obvious to see that the returned [math]\displaystyle{ C }[/math] is always a set cover. This is due to the maximality of the matching [math]\displaystyle{ M }[/math]:

  • By contradiction, assuming that [math]\displaystyle{ C }[/math] is not a set cover, which means there is an element [math]\displaystyle{ x\in U }[/math] such that for all [math]\displaystyle{ i\in C }[/math], [math]\displaystyle{ x\not\in S_i }[/math], which implies that [math]\displaystyle{ x\not\in M }[/math] and the [math]\displaystyle{ M'=M\cap\{x\} }[/math] is still a matching, contradicting the maximality of [math]\displaystyle{ M }[/math].

Therefore the [math]\displaystyle{ C }[/math] constructed as in the DualCover algorithm must be a set cover.

For the maximal matching [math]\displaystyle{ M }[/math] constructed by the DualCover algorithm, the output set cover is the collection of all sets which contain at least one element in [math]\displaystyle{ M }[/math]. Recall that the frequency of an element [math]\displaystyle{ frequency(x) }[/math] is defined as the number of sets [math]\displaystyle{ S_i }[/math] containing [math]\displaystyle{ x }[/math]. Then each element [math]\displaystyle{ x\in M }[/math] may contribute at most [math]\displaystyle{ frequency(x) }[/math] many sets into the set cover [math]\displaystyle{ C }[/math]. Then it holds that

[math]\displaystyle{ |C|\le \sum_{x\in M}frequency(x)\le f\cdot |M|\le f\cdot OPT, }[/math]

where [math]\displaystyle{ f=\max_{x\in U}frequency(x) }[/math] denotes the maximum frequency of all elements.

We proved the following [math]\displaystyle{ f }[/math]-approximation bound for the DualCover algorithm on set cover instances with maximum frequency [math]\displaystyle{ f }[/math].

Theorem
For any set cover instance [math]\displaystyle{ S_1,S_2,\ldots,S_m\subseteq U }[/math] with optimal set cover of size [math]\displaystyle{ OPT }[/math], the DualCover algorithm returns a set cover [math]\displaystyle{ C }[/math] of size
[math]\displaystyle{ |C|\le f\cdot {OPT} }[/math],
where [math]\displaystyle{ f=\max_{x\in U}frequency(x)=\max_{x\in U}|\{i\mid x\in S_i\}| }[/math] is the maximum frequency.

When the maximum frequency is [math]\displaystyle{ f=2 }[/math], the set cover problem becomes the vertex cover problem, and the DualCover algorithm specializes as follows:

DualCover for vertex cover problem
Input: an undirected graph [math]\displaystyle{ G(V,E) }[/math];

initially [math]\displaystyle{ C=\emptyset }[/math];
while [math]\displaystyle{ E\neq\emptyset }[/math]
pick an arbitrary edge [math]\displaystyle{ uv\in E }[/math] and add both [math]\displaystyle{ u }[/math] and [math]\displaystyle{ v }[/math] to [math]\displaystyle{ C }[/math];
remove all edges in [math]\displaystyle{ E }[/math] incident to [math]\displaystyle{ u }[/math] or [math]\displaystyle{ v }[/math];
return [math]\displaystyle{ C }[/math];
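
For concreteness, here is a minimal Python sketch of this special case, assuming (again as an illustrative choice) that the graph is given as a list edges of vertex pairs; the function name vertex_cover_2approx is ours.

def vertex_cover_2approx(edges):
    C = set()
    for u, v in edges:
        # the edge uv is still uncovered only if neither endpoint is in C;
        # such an edge plays the role of the arbitrarily picked edge above
        if u not in C and v not in C:
            C.add(u)
            C.add(v)
    return C

The edges whose two endpoints are both added form a maximal matching of the graph, which is exactly the matching [math]\displaystyle{ M }[/math] in the analysis of the DualCover algorithm.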

Since this algorithm is just an implementation of the DualCover algorithm on the vertex cover instances (set cover instances with frequency [math]\displaystyle{ f=2 }[/math]), by the analysis of the DualCover algorithm, it is a 2-approximation algorithm for the vertex cover problem.

Ignoring lower order terms, [math]\displaystyle{ 2 }[/math] is also the best approximation ratio achievable by polynomial time algorithms, under standard complexity-theoretic assumptions:

  • Dinur and Safra (2005) showed that there is no poly-time algorithm with approximation ratio [math]\displaystyle{ \lt 1.3606 }[/math], assuming that NP[math]\displaystyle{ \neq }[/math]P.
  • Khot and Regev (2008) showed that there is no poly-time algorithm with approximation ratio [math]\displaystyle{ 2-\epsilon }[/math] for any constant [math]\displaystyle{ \epsilon\gt 0 }[/math], assuming the unique games conjecture.

Scheduling

We consider the following scheduling problem:

  • There are [math]\displaystyle{ n }[/math] jobs to be processed.
  • There are [math]\displaystyle{ m }[/math] identical parallel machines. Each machine can start processing jobs at time 0 and can process at most one job at a time.
  • Each job [math]\displaystyle{ j=1,2,\ldots, n }[/math] must be processed on one of these machines for [math]\displaystyle{ p_j }[/math] time units without interruption. [math]\displaystyle{ p_j }[/math] is called the processing time of job [math]\displaystyle{ j }[/math].

In a schedule, each job is assigned to a machine and starts processing at some time, respecting the above rules. The goal is to complete all jobs as soon as possible.

Suppose each job [math]\displaystyle{ j }[/math] is completed at time [math]\displaystyle{ C_j }[/math]; the objective is to minimize the makespan [math]\displaystyle{ C_{\max}=\max_{j}C_j }[/math].

This problem is called minimum makespan on identical parallel machines. It can be equivalently stated in the following simpler form, as a load balancing problem:

Minimum Makespan on Identical Parallel Machines (load balancing version)
Input: [math]\displaystyle{ n }[/math] positive integers [math]\displaystyle{ p_1,p_2,\ldots,p_n }[/math] and a positive integer [math]\displaystyle{ m }[/math];
Output: an assignment [math]\displaystyle{ \sigma:[n]\to[m] }[/math] which minimizes [math]\displaystyle{ C_{\max}=\max_{i\in[m]}\sum_{j:i=\sigma(j)}p_j }[/math].
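
As a quick illustration of the objective, the following small Python sketch (our own helper, not part of the problem statement) evaluates the makespan of a given assignment.

def makespan(p, sigma, m):
    # loads[i] = total processing time of the jobs assigned to machine i
    loads = [0] * m
    for j in range(len(p)):
        loads[sigma[j]] += p[j]
    return max(loads)

For example, makespan([2,3,4], [0,1,0], 2) is 6: machine 0 processes the jobs of sizes 2 and 4, and machine 1 processes the job of size 3.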

With the [math]\displaystyle{ \alpha|\beta|\gamma }[/math] notation for scheduling, this problem is the scheduling problem [math]\displaystyle{ P||C_{\max} }[/math].

The [math]\displaystyle{ \alpha|\beta|\gamma }[/math] notation was introduced by Ron Graham et al. to model scheduling problems. See this note for more details.

The problem is NP-hard. In particular, already for [math]\displaystyle{ m=2 }[/math] it can be used to solve the partition problem, which is among Karp's 21 NP-complete problems.

Partition Problem
Input: a set of [math]\displaystyle{ n }[/math] positive integers [math]\displaystyle{ S=\{x_1,x_2,\ldots,x_n\} }[/math];
Output: determine whether there is a partition of [math]\displaystyle{ S }[/math] into [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math] such that [math]\displaystyle{ \sum_{x\in A}x=\sum_{x\in B}x }[/math].

Graham's List algorithm

In a technical report at Bell Labs in 1966, Graham described a natural greedy procedure for scheduling jobs on identical parallel machines and gave an elegant analysis of its performance. It was probably the first approximation algorithm of modern times with a provable approximation ratio. Interestingly, it even predates the discovery of the notion of NP-hardness.

Graham's List algorithm takes a list of jobs as input. The load of a machine is defined as the total processing time of the jobs currently assigned to the machine.

The List algorithm (Graham 1966)
Input: a list of jobs [math]\displaystyle{ j=1,2,\ldots, n }[/math] with processing times [math]\displaystyle{ p_1,p_2,\ldots,p_n }[/math];

for [math]\displaystyle{ j=1,2,\ldots,n }[/math]
assign job [math]\displaystyle{ j }[/math] to the machine that currently has the smallest load;

In a scheduling language, the List algorithm can be more simply described as:

  • Whenever a machine becomes idle, it starts processing the next job on the list.
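
A minimal Python sketch of the List algorithm follows, assuming the jobs are given as a list p of processing times; using a heap to locate the machine with the smallest load is our implementation choice, not part of the original procedure.

import heapq

def list_schedule(p, m):
    # each heap entry is (current load, machine index), so the machine with
    # the smallest load is always at the top of the heap
    loads = [(0, i) for i in range(m)]
    heapq.heapify(loads)
    sigma = [None] * len(p)              # sigma[j] = machine assigned to job j
    for j, pj in enumerate(p):
        load, i = heapq.heappop(loads)   # machine with the smallest current load
        sigma[j] = i
        heapq.heappush(loads, (load + pj, i))
    return sigma, max(load for load, _ in loads)

For example, list_schedule([2,3,4,1], 2) returns a schedule with makespan 6 (the jobs of sizes 2 and 4 share a machine), while the optimal makespan is 5.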

It is well known that the List algorithm has approximation ratio [math]\displaystyle{ \left(2-\frac{1}{m}\right) }[/math].

Theorem
For every instance of scheduling [math]\displaystyle{ n }[/math] jobs with processing times [math]\displaystyle{ p_1,p_2,\ldots,p_n }[/math] on [math]\displaystyle{ m }[/math] parallel identical machines, the List algorithm finds a schedule with makespan [math]\displaystyle{ C_{\max}\le \left(2-\frac{1}{m}\right)\cdot OPT }[/math], where [math]\displaystyle{ OPT }[/math] is the makespan of optimal schedules.
Proof.

Obviously for any schedule the makespan is at least the maximum processing time:

[math]\displaystyle{ OPT\ge \max_{1\le j\le n}p_j }[/math]

and by the averaging principle, the makespan (the maximum load) is at least the average load:

[math]\displaystyle{ OPT\ge\frac{1}{m}\sum_{j=1}^np_j }[/math].

Suppose that in the schedule given by the List algorithm, job [math]\displaystyle{ \ell }[/math] finishes last, so the makespan is [math]\displaystyle{ C_{\max}=C_\ell }[/math], where [math]\displaystyle{ C_\ell }[/math] is the completion time of job [math]\displaystyle{ \ell }[/math].

By the greediness of the List algorithm, at the moment job [math]\displaystyle{ \ell }[/math] is scheduled, the machine to which job [math]\displaystyle{ \ell }[/math] is assigned has the smallest load among all machines. By the averaging principle, its load at that moment is at most the average over the jobs scheduled so far, hence:

[math]\displaystyle{ C_\ell-p_\ell\le\frac{1}{m}\sum_{j\neq\ell}p_j }[/math].

On the other hand,

[math]\displaystyle{ p_\ell\le \max_{1\le j\le n}p_j }[/math].

Together, we have

[math]\displaystyle{ C_{\max}=C_\ell=(C_\ell-p_\ell)+p_\ell\le \frac{1}{m}\sum_{j\neq\ell}p_j+p_\ell= \frac{1}{m}\sum_{j=1}^np_j+\left(1-\frac{1}{m}\right)p_\ell\le \frac{1}{m}\sum_{j=1}^np_j+\left(1-\frac{1}{m}\right)\max_{1\le j\le n}p_j\le \left(2-\frac{1}{m}\right)\cdot OPT. }[/math]
[math]\displaystyle{ \square }[/math]

The analysis is tight: you can try to construct a family of instances on which the List algorithm returns schedules with makespan at least [math]\displaystyle{ \left(2-\frac{1}{m}\right)\cdot OPT }[/math].

Local Search

Another natural idea for solving optimization problems is local search. Given an instance of an optimization problem, the principle of local search is as follows:

  • We start with a feasible solution.
  • At each step, we try to improve the solution by modifying it locally (changing the assignment of a constant number of variables), until no further improvement can be made (thus a local optimum is reached).

For the scheduling problem, we have the following local search algorithm.

LocalSearch
start with an arbitrary schedule of [math]\displaystyle{ n }[/math] jobs to [math]\displaystyle{ m }[/math] machines;
while (true)
let [math]\displaystyle{ \ell }[/math] denote the job that finishes last in the current schedule;
if there is a machine [math]\displaystyle{ i }[/math] such that job [math]\displaystyle{ \ell }[/math] would finish earlier if transferred to machine [math]\displaystyle{ i }[/math]
transfer job [math]\displaystyle{ \ell }[/math] to machine [math]\displaystyle{ i }[/math];
else break;
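
A minimal Python sketch of the LocalSearch algorithm, under the illustrative assumption that a schedule is an assignment sigma (with sigma[j] the machine of job j) and every machine processes its queue of jobs back to back from time 0, so the job finishing last is the last job in the queue of a maximum-load machine:

def local_search(p, sigma, m):
    if not p:
        return sigma, 0
    # build the per-machine queues and loads from the given assignment
    queues = [[] for _ in range(m)]
    loads = [0] * m
    for j, i in enumerate(sigma):
        queues[i].append(j)
        loads[i] += p[j]
    while True:
        i_max = max(range(m), key=lambda i: loads[i])   # machine achieving the makespan
        ell = queues[i_max][-1]                          # the job finishing last
        # look for any machine on which job ell would finish strictly earlier
        target = next((i for i in range(m) if loads[i] + p[ell] < loads[i_max]), None)
        if target is None:
            break                                        # local optimum reached
        queues[i_max].pop()
        queues[target].append(ell)
        loads[i_max] -= p[ell]
        loads[target] += p[ell]
        sigma[ell] = target
    return sigma, max(loads)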

By a similar analysis to that of the List algorithm, we can give the same bound to the approximation ratio of the LocalSearch algorithm.

Suppose that when the algorithm stops and a local optimum is reached, job [math]\displaystyle{ \ell }[/math] finishes last. Then the makespan is achieved by job [math]\displaystyle{ \ell }[/math]'s completion time: [math]\displaystyle{ C_{\max}=C_\ell }[/math].

In a local optimum, job [math]\displaystyle{ \ell }[/math] cannot be transferred to any other machine to improve its completion time [math]\displaystyle{ C_\ell }[/math], so its starting time [math]\displaystyle{ C_\ell-p_\ell }[/math] is at most the load of every other machine. By the averaging principle, it is therefore no greater than the average processing time of all jobs except [math]\displaystyle{ \ell }[/math]:

[math]\displaystyle{ C_\ell-p_\ell\le\frac{1}{m}\sum_{j\neq \ell}p_j }[/math].

The rest is precisely the same as the analysis of the List algorithm. We have

[math]\displaystyle{ C_{\max}=C_\ell\le \frac{1}{m}\sum_{j=1}^np_j+\left(1-\frac{1}{m}\right)p_\ell\le OPT+\left(1-\frac{1}{m}\right)OPT=\left(2-\frac{1}{m}\right)\cdot OPT }[/math].

So the approximation ratio of the LocalSearch algorithm is [math]\displaystyle{ \left(2-\frac{1}{m}\right) }[/math].

For local search, a bigger issue is to bound the running time. For the LocalSearch algorithm for scheduling, we observe that the makespan [math]\displaystyle{ C_{\max} }[/math] never increases. Furthermore, in each iteration, either [math]\displaystyle{ C_{\max} }[/math] decreases or the number of jobs completing at time [math]\displaystyle{ C_{\max} }[/math] decreases. Therefore the algorithm must terminate within finitely many steps.

In order to improve the running time, we consider the following variant of the local search algorithm.

GreedyLocalSearch
start with an arbitrary schedule of [math]\displaystyle{ n }[/math] jobs to [math]\displaystyle{ m }[/math] machines;
while (true)
let [math]\displaystyle{ \ell }[/math] denote the job that finishes last in the current schedule;
let [math]\displaystyle{ i }[/math] denote the machine that completes all its jobs earliest;
if job [math]\displaystyle{ \ell }[/math] would finish earlier by transferring to machine [math]\displaystyle{ i }[/math]
transfer job [math]\displaystyle{ \ell }[/math] to machine [math]\displaystyle{ i }[/math];
else break;
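
Under the same representation as the LocalSearch sketch above, GreedyLocalSearch differs only in that the last-finishing job is always offered to the machine of smallest load; as the analysis below shows, this small change bounds the number of transfers by [math]\displaystyle{ n }[/math].

def greedy_local_search(p, sigma, m):
    if not p:
        return sigma, 0
    queues = [[] for _ in range(m)]
    loads = [0] * m
    for j, i in enumerate(sigma):
        queues[i].append(j)
        loads[i] += p[j]
    while True:
        i_max = max(range(m), key=lambda i: loads[i])   # machine achieving C_max
        i_min = min(range(m), key=lambda i: loads[i])   # machine achieving C_min
        ell = queues[i_max][-1]                          # the job finishing last
        if loads[i_min] + p[ell] >= loads[i_max]:
            break                                        # local optimum reached
        queues[i_max].pop()
        queues[i_min].append(ell)
        loads[i_max] -= p[ell]
        loads[i_min] += p[ell]
        sigma[ell] = i_min
    return sigma, max(loads)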

The GreedyLocalSearch algorithm has the same approximation ratio as the LocalSearch algorithm since it is a special case of the LocalSearch algorithm.

We let [math]\displaystyle{ C_{\min} }[/math] denote the completion time of the machine that finishes all its jobs earliest. It is easy to see that [math]\displaystyle{ C_{\min} }[/math] never decreases. More precisely, in each iteration either [math]\displaystyle{ C_{\min} }[/math] increases or the number of machines that complete all their jobs at time [math]\displaystyle{ C_{\min} }[/math] decreases. In each iteration, if the algorithm does not terminate, the job that finishes last is transferred to the machine that stops at time [math]\displaystyle{ C_{\min} }[/math]. Since a job is only ever transferred to a machine that stops strictly earlier than the job's starting time, and [math]\displaystyle{ C_{\min} }[/math] never decreases, no job is ever transferred twice. Therefore, the GreedyLocalSearch algorithm must terminate within [math]\displaystyle{ n }[/math] iterations.

Theorem
For every instance of scheduling [math]\displaystyle{ n }[/math] jobs with processing times [math]\displaystyle{ p_1,p_2,\ldots,p_n }[/math] on [math]\displaystyle{ m }[/math] parallel identical machines, starting from an arbitrary schedule, the GreedyLocalSearch algorithm terminates in at most [math]\displaystyle{ n }[/math] iterations and always reaches a schedule with makespan [math]\displaystyle{ C_{\max}\le \left(2-\frac{1}{m}\right)\cdot OPT }[/math], where [math]\displaystyle{ OPT }[/math] is the makespan of optimal schedules.

The local search algorithm can start with an arbitrary solution. If it starts with a schedule returned by the List algorithm, it terminates without making any change to the schedule, yet the approximation ratio [math]\displaystyle{ (2-1/m) }[/math] still holds. This provides another proof that the List algorithm is a [math]\displaystyle{ (2-1/m) }[/math]-approximation.

Indeed, the analysis of the local search algorithm shows that every local optimum is within a factor of [math]\displaystyle{ (2-1/m) }[/math] of the global optimum, and the List algorithm returns a local optimum.

Longest Processing Time (LPT)

In the List algorithm, the jobs are presented in an arbitrary order. Intuitively, the List algorithm should perform better if the heavier jobs are processed earlier. This gives us the following Longest Processing Time (LPT) algorithm.

LongestProcessingTime(LPT)
Input: a list of jobs [math]\displaystyle{ j=1,2,\ldots, n }[/math] with processing times [math]\displaystyle{ p_1\ge p_2\ge \cdots\ge p_n }[/math] in non-increasing order;

for [math]\displaystyle{ j=1,2,\ldots,n }[/math]
assign job [math]\displaystyle{ j }[/math] to the machine that currently has the smallest load;
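
A minimal Python sketch of the LPT rule, reusing the hypothetical list_schedule function from the sketch of the List algorithm above: sort the jobs in non-increasing order of processing time, run the List algorithm, and translate the assignment back to the original job indices.

def lpt_schedule(p, m):
    # process the jobs in non-increasing order of processing time
    order = sorted(range(len(p)), key=lambda j: p[j], reverse=True)
    sigma_sorted, cmax = list_schedule([p[j] for j in order], m)
    # translate the assignment back to the original job indices
    sigma = [None] * len(p)
    for pos, j in enumerate(order):
        sigma[j] = sigma_sorted[pos]
    return sigma, cmax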

The analysis of the approximation ratio is similar to before, with the extra observation that the job finishing last has a small processing time, since the jobs are scheduled in non-increasing order of processing time.

Suppose that job [math]\displaystyle{ \ell }[/math] finishes last, so the makespan is job [math]\displaystyle{ \ell }[/math]'s completion time [math]\displaystyle{ C_{\max}=C_\ell }[/math]. Due to the greediness of the algorithm, we still have that

[math]\displaystyle{ C_\ell-p_\ell\le\frac{1}{m}\sum_{j=1}^np_j }[/math].

Therefore,

[math]\displaystyle{ C_{\max}=C_\ell\le \frac{1}{m}\sum_{j=1}^np_j+p_\ell }[/math].

We still have the lower bound [math]\displaystyle{ OPT\ge \frac{1}{m}\sum_{j=1}^np_j }[/math]. Next we find a better lower bound of [math]\displaystyle{ OPT }[/math] in terms of [math]\displaystyle{ p_\ell }[/math].

Note that the first [math]\displaystyle{ m }[/math] jobs are assigned to the [math]\displaystyle{ m }[/math] machines, one job per machine. Without loss of generality we can assume:

  • the number of jobs is greater than the number of machines, [math]\displaystyle{ n\gt m }[/math];
  • the makespan is achieved by a job other than the first [math]\displaystyle{ m }[/math] jobs, [math]\displaystyle{ \ell\gt m }[/math];

since otherwise the LPT algorithm already returns an optimal solution.

Therefore [math]\displaystyle{ p_\ell\le p_{m+1} }[/math]. Observe that even for scheduling only the first [math]\displaystyle{ m+1 }[/math] jobs with processing times [math]\displaystyle{ p_1\ge p_2\ge\cdots\ge p_{m+1} }[/math], by the pigeonhole principle some machine must process two of these jobs, so the optimal makespan is at least [math]\displaystyle{ p_m+p_{m+1} }[/math]. This gives the lower bound for the optimal makespan

[math]\displaystyle{ OPT\ge p_m+p_{m+1}\ge2p_{m+1}\ge 2p_{\ell} }[/math].

Altogether, we have

[math]\displaystyle{ C_{\max}=C_\ell\le \frac{1}{m}\sum_{j=1}^np_j+p_\ell\le OPT+\frac{1}{2}OPT=\frac{3}{2}OPT }[/math].

This shows that the LPT algorithm is a [math]\displaystyle{ \frac{3}{2} }[/math]-approximation.

Both the analysis of the LPT algorithm and approximability of the scheduling problem can be further improved:

  • With a more careful analysis, one can show that the LPT algorithm is [math]\displaystyle{ \frac{4}{3} }[/math]-approximation. The trick is to show by case analysis that [math]\displaystyle{ OPT\ge 3p_\ell }[/math] or otherwise the LPT algorithm can find an optimal solution.
  • By scaling and rounding combined with dynamic programming, one can obtain a PTAS (Polynomial Time Approximation Scheme) for minimum makespan scheduling on identical parallel machines.

Online Algorithms and Competitive Ratio

An advantage of the List algorithm is that it is an online algorithm. For an online algorithm, the input arrives one piece at a time, and the algorithm must make its decision when a piece of the input arrives, without seeing the future pieces. In contrast, an algorithm that can make its decisions after seeing the entire input is called an offline algorithm.

For the scheduling problem, the online setting is that the [math]\displaystyle{ n }[/math] jobs arrive one by one in a sequence, and the online scheduling algorithm needs to assign each job to a machine right after it arrives (or with only a bounded-size buffer to temporarily store incoming jobs). The List algorithm is an online scheduling algorithm, while the LPT algorithm is not, because it needs to sort all jobs according to their processing times.

For online algorithms, a notion similar to the approximation ratio, the competitive ratio was introduced to measure the performance. For a minimization problem, we say an online algorithm has competitive ratio [math]\displaystyle{ \alpha }[/math], or is [math]\displaystyle{ \alpha }[/math]-competitive, if for any input sequence [math]\displaystyle{ I }[/math],

[math]\displaystyle{ SOL_I\le\alpha\cdot OPT_I }[/math]

where [math]\displaystyle{ SOL_I }[/math] is the value of the solution returned by the online algorithm on input [math]\displaystyle{ I }[/math] and [math]\displaystyle{ OPT_I }[/math] is the value of the solution returned by the optimal offline algorithm on the same input. The competitive ratio for maximization problems is defined similarly.

For online scheduling, it is immediate to see that the List algorithm is [math]\displaystyle{ (2-1/m) }[/math]-competitive.

  • This competitive ratio is optimal for all deterministic online algorithms on [math]\displaystyle{ m=2 }[/math] or [math]\displaystyle{ 3 }[/math] machines.
  • For a larger number of machines, better competitive ratios (1.986, 1.945, 1.923, 1.9201, 1.916) have been achieved.
  • On the lower bound side, it is known that no deterministic online scheduling algorithm can achieve a competitive ratio better than 1.88 and no randomized online scheduling algorithm can achieve a competitive ratio better than [math]\displaystyle{ 1/(1-1/e) }[/math].

See a survey of Albers for more details.