高级算法 (Fall 2021)/Min-Cut and Max-Cut
= Graph Cut =
Let <math>G(V, E)</math> be an undirected graph. A subset <math>C\subseteq E</math> of edges is a '''cut''' of graph <math>G</math> if <math>G</math> becomes ''disconnected'' after deleting all edges in <math>C</math>.


Let <math>\{S,T\}</math> be a '''bipartition''' of <math>V</math> into nonempty subsets <math>S,T\subseteq V</math>, where <math>S\cap T=\emptyset</math> and <math>S\cup T=V</math>.  A cut <math>C</math> is specified by this bipartition as
:<math>C=E(S,T)\,</math>,
where <math>E(S,T)</math> denotes the set of "crossing edges" with one endpoint in each of <math>S</math> and <math>T</math>, formally defined as
:<math>E(S,T)=\{uv\in E\mid u\in S, v\in T\}</math>.


Given a graph <math>G</math>, there might be many cuts in <math>G</math>, and we are interested in finding the '''minimum''' or '''maximum''' cut.
 
= Min-Cut =
The '''min-cut problem''', also called the '''global minimum cut problem''', is defined as follows.
{{Theorem|Min-cut problem|
*'''Input''': an undirected graph <math>G(V,E)</math>;
*'''Output''': a cut <math>C</math> in <math>G</math> with the smallest size <math>|C|</math>.
}}
 
Equivalently, the problem asks to find a bipartition of <math>V</math> into disjoint non-empty subsets <math>S</math> and <math>T</math> that minimizes <math>|E(S,T)|</math>.
 
We consider the problem in a slightly more general setting, where the input graphs <math>G</math> can be '''multi-graphs''', meaning that there could be multiple '''parallel edges''' between two vertices <math>u</math> and <math>v</math>. The cuts in multi-graphs are defined in the same way as before, and the cost of a cut <math>C</math> is given by the total number of edges (including parallel edges) in <math>C</math>. Equivalently, one may think of a multi-graph as a graph with integer edge weights, where the cost of a cut <math>C</math> is the total weight of all edges in <math>C</math>.
 
A canonical deterministic algorithm for this problem is through the [http://en.wikipedia.org/wiki/Max-flow_min-cut_theorem max-flow min-cut theorem]. The max-flow algorithm finds a minimum '''<math>s</math>-<math>t</math> cut''', which disconnects a '''source''' <math>s\in V</math> from a '''sink''' <math>t\in V</math>, both specified as part of the input. A global min-cut can be found by exhaustively computing the minimum <math>s</math>-<math>t</math> cut for an arbitrarily fixed source <math>s</math> and every possible sink <math>t\neq s</math>. This takes <math>(n-1)\times</math>max-flow time where <math>n=|V|</math> is the number of vertices.
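To make this concrete, the following is a minimal Python sketch of the max-flow based approach (not part of the original notes; it assumes the networkx library, and the function name is ours). Parallel edges are modeled as integer capacities on a directed graph.
<pre>
# A sketch of global min-cut via (n-1) s-t max-flow computations,
# assuming the networkx library; the function name is hypothetical.
import networkx as nx

def global_min_cut_by_max_flow(G):
    """G: an undirected networkx (Multi)Graph; returns the size of a global min-cut."""
    # model each undirected (parallel) edge as unit capacity in both directions
    D = nx.DiGraph()
    for u, v in G.edges():
        for a, b in ((u, v), (v, u)):
            if D.has_edge(a, b):
                D[a][b]['capacity'] += 1
            else:
                D.add_edge(a, b, capacity=1)
    vertices = list(G.nodes())
    s = vertices[0]  # arbitrarily fixed source
    # one min s-t cut (max-flow) computation for every possible sink t != s
    return min(nx.minimum_cut_value(D, s, t) for t in vertices[1:])
</pre>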
 
The fastest known deterministic algorithm for the minimum cut problem on multi-graphs is the [https://en.wikipedia.org/wiki/Stoer–Wagner_algorithm Stoer–Wagner algorithm], which achieves an <math>O(mn+n^2\log n)</math> time complexity where <math>m=|E|</math> is the total number of edges (counting the parallel edges).
 
If we restrict the input to be '''simple graphs''' (meaning there are no parallel edges) with no edge weights, there are better algorithms. A deterministic algorithm of [https://dl.acm.org/citation.cfm?id=2746588 Ken-ichi Kawarabayashi and Mikkel Thorup], published in STOC 2015, achieves a near-linear (in the number of edges) time complexity.
 
== Karger's ''Contraction'' algorithm ==
We will describe a simple and elegant randomized algorithm for the min-cut problem. The algorithm is due to [http://people.csail.mit.edu/karger/ David Karger].
 
Let <math>G(V, E)</math> be a '''multi-graph''', which allows multiple '''parallel edges''' between two distinct vertices <math>u</math> and <math>v</math> but does not allow any '''self-loops''': edges that adjoin a vertex to itself. A multi-graph <math>G</math> can be represented by an adjacency matrix <math>A</math>, in which each non-diagonal entry <math>A(u,v)</math> takes nonnegative integer values instead of just 0 or 1, representing the number of parallel edges between <math>u</math> and <math>v</math> in <math>G</math>, and all diagonal entries <math>A(v,v)=0</math> (since there is no self-loop).
 
Given a multi-graph <math>G(V,E)</math> and an edge <math>e\in E</math>, we define the following '''contraction''' operator Contract(<math>G</math>, <math>e</math>), which transforms <math>G</math> into a new multi-graph.
{{Theorem|The contraction operator ''Contract''(<math>G</math>, <math>e</math>)|
:say <math>e=uv</math>:
:*replace <math>\{u,v\}</math> by a new vertex <math>x</math>;
:*for every edge (no matter parallel or not) of the form <math>uw</math> or <math>vw</math> that connects one of <math>\{u,v\}</math> to a vertex <math>w\in V\setminus\{u,v\}</math>, replace it by a new edge <math>xw</math>;
:*the rest of the graph remains unchanged.
}}
 
In other words, <math>Contract(G,uv)</math> merges the two vertices <math>u</math> and <math>v</math> into a new vertex <math>x</math>, which inherits all edges incident to <math>u</math> or <math>v</math> in the original graph <math>G</math>, except for the parallel edges between <math>u</math> and <math>v</math> themselves. Now you should see why we consider multi-graphs instead of simple graphs: even if we start with a simple graph without parallel edges, the contraction operator may create parallel edges.
 
The contraction operator is illustrated by the following picture:
[[Image:Contract.png|600px|center]]
 
Karger's algorithm uses a simple idea:
*At each step we randomly select an edge in the current multi-graph to contract until there are only two vertices left.
*The parallel edges between these two remaining vertices must be a cut of the original graph.
*We return this cut and hope that with good chance this gives us a minimum cut.
The following is the pseudocode for Karger's algorithm.
{{Theorem|''RandomContract'' (Karger 1993)|
:'''Input:''' multi-graph <math>G(V,E)</math>;
----
:while <math>|V|>2</math> do
:* choose an edge <math>uv\in E</math> uniformly at random;
:* <math>G=Contract(G,uv)</math>;
:return <math>C=E</math> (the parallel edges between the only two vertices in <math>V</math>);
}}
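To make the procedure concrete, here is a minimal Python sketch of ''RandomContract'' (our own representation; the helper names are hypothetical). The multigraph is stored as a dict of collections.Counter objects, where adj[u][v] is the number of parallel edges between u and v; merging two vertices then rewrites only one row, matching the <math>O(n)</math> bound per contraction claimed in the exercise below.
<pre>
# A sketch of Contract and RandomContract (helper names are ours).
# adj[u][v] = number of parallel edges between vertices u and v.
import random
from collections import Counter

def contract(adj, u, v):
    """Merge v into u, discarding the parallel edges between u and v."""
    del adj[u][v], adj[v][u]
    for w, cnt in adj[v].items():    # redirect every edge vw to uw
        adj[u][w] += cnt
        adj[w][u] += cnt
        del adj[w][v]
    del adj[v]

def random_contract(adj):
    """Karger's RandomContract; returns the size of the produced cut."""
    adj = {u: Counter(nbrs) for u, nbrs in adj.items()}  # work on a copy
    while len(adj) > 2:
        # a uniformly random edge: pick endpoint u proportional to deg(u),
        # then neighbor v proportional to the number of parallel edges uv
        u = random.choices(list(adj), [sum(c.values()) for c in adj.values()])[0]
        v = random.choices(list(adj[u]), list(adj[u].values()))[0]
        contract(adj, u, v)
    u = next(iter(adj))
    return sum(adj[u].values())      # parallel edges between the last two vertices
</pre>
Each run returns the size of some cut of the original graph; by the analysis below, it is a minimum cut with probability at least <math>2/(n(n-1))</math>.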
 
Another way of looking at the contraction operator Contract(<math>G</math>,<math>e</math>) is that we are dealing with classes of vertices. Let <math>V=\{v_1,v_2,\ldots,v_n\}</math> be the set of all vertices. We start with <math>n</math> vertex classes <math>S_1,S_2,\ldots, S_n</math>, where each class <math>S_i=\{v_i\}</math> contains a single vertex. By calling <math>Contract(G,uv)</math>, where <math>u\in S_i</math> and <math>v\in S_j</math> for distinct <math>i\neq j</math>, we take the union of <math>S_i</math> and <math>S_j</math>. The edges in the contracted multi-graph are the edges that cross between different vertex classes.
 
This view of contraction is illustrated by the following picture:
[[Image:Contract_class.png|600px|center]]
 
The following claim is left as an exercise for the class:
:{|border="2" width="100%" cellspacing="4" cellpadding="3" rules="all" style="margin:1em 1em 1em 0; border:solid 1px #AAAAAA; border-collapse:collapse;empty-cells:show;"
|
*With suitable choice of data structures, each operation <math>Contract(G,e)</math> can be implemented within running time <math>O(n)</math> where <math>n=|V|</math> is the number of vertices.
|}
 
In the above '''''RandomContract''''' algorithm, there are precisely <math>n-2</math> contractions. Therefore, we have the following time upper bound.
{{Theorem|Theorem|
: For any multigraph with <math>n</math> vertices, the running time of the '''''RandomContract''''' algorithm is <math>O(n^2)</math>.
}}
We emphasize that this is the time complexity of a ''single run'' of the algorithm: later we will see that we may need to run the algorithm many times to guarantee a desirable accuracy.
 
== Analysis of accuracy ==
We now analyze the performance of the above algorithm. Since the algorithm is '''''randomized''''', its output cut is a random variable even when the input is fixed, so ''the output may not always be correct''. We want to give a theoretical guarantee of the chance that the algorithm returns a correct answer on an arbitrary input.
 
More precisely, on an arbitrarily fixed input multi-graph <math>G</math>, we want to answer the following question rigorously:
:<math>p_{\text{correct}}=\Pr[\,\text{a minimum cut is returned by }RandomContract\,]\ge ?</math>
 
To answer this question, we prove a stronger statement: for arbitrarily fixed input multi-graph <math>G</math> and a particular minimum cut <math>C</math> in <math>G</math>,
:<math>p_{C}=\Pr[\,C\mbox{ is returned by }RandomContract\,]\ge ?</math>
Obviously this will imply the previous lower bound for <math>p_{\text{correct}}</math> because the event in <math>p_{C}</math> implies the event in <math>p_{\text{correct}}</math>.
:{|border="2" width="100%" cellspacing="4" cellpadding="3" rules="all" style="margin:1em 1em 1em 0; border:solid 1px #AAAAAA; border-collapse:collapse;empty-cells:show;"
|
*In the above argument we use the simple law in probability that <math>\Pr[A]\le \Pr[B]</math> if <math>A\subseteq B</math>, i.e. event <math>A</math> implies event <math>B</math>.
|}
 
We introduce the following notations:
*Let <math>e_1,e_2,\ldots,e_{n-2}</math> denote the sequence of random edges chosen to contract in a run of the ''RandomContract'' algorithm.
*Let <math>G_1=G</math> denote the original input multi-graph, and for <math>i=1,2,\ldots,n-2</math>, let <math>G_{i+1}=Contract(G_{i},e_i)</math> be the multigraph after the <math>i</math>-th contraction.
Obviously <math>e_1,e_2,\ldots,e_{n-2}</math> are random variables, and they are the ''only'' random choices used in the algorithm: together with the input <math>G</math>, they uniquely determine the sequence of multi-graphs <math>G_1,G_2,\ldots,G_{n-1}</math> in every iteration as well as the final output.
 
We now compute the probability <math>p_C</math> by decomposing it into more elementary events involving <math>e_1,e_2,\ldots,e_{n-2}</math>. This is due to the following proposition.
{{Theorem
|Proposition 1|
:If <math>C</math> is a minimum cut in a multi-graph <math>G</math> and <math>e\not\in C</math>, then <math>C</math> is still a minimum cut in the contracted graph <math>G'=Contract(G,e)</math>.
}}
{{Proof|
We first observe that contraction will never create new cuts: every cut in the contracted graph <math>G'</math> must also be a cut in the original graph <math>G</math>.
 
We then observe that a cut <math>C</math> in <math>G</math> "survives" in the contracted graph <math>G'</math> if and only if the contracted edge <math>e\not\in C</math>.
 
Both observations are easy to verify by the definition of contraction operator (in particular, easier to verify if we take the vertex class interpretation). The detailed proofs are left as an exercise.
}}
 
Recall that <math>e_1,e_2,\ldots,e_{n-2}</math> denote the sequence of random edges chosen to contract in a run of the ''RandomContract'' algorithm.
 
By Proposition 1, the event <math>\mbox{``}C\mbox{ is returned by }RandomContract\mbox{''}\,</math> is equivalent to the event <math>\mbox{``}e_i\not\in C\mbox{ for all }i=1,2,\ldots,n-2\mbox{''}</math>. Therefore:
:<math>
\begin{align}
p_C
&=
\Pr[\,C\mbox{ is returned by }{RandomContract}\,]\\
&=
\Pr[\,e_i\not\in C\mbox{ for all }i=1,2,\ldots,n-2\,]\\
&=
\prod_{i=1}^{n-2}\Pr[e_i\not\in C\mid \forall j<i, e_j\not\in C].
\end{align}
</math>
The last equation is due to the so-called '''chain rule''' in probability.
:{|border="2" width="100%" cellspacing="4" cellpadding="3" rules="all" style="margin:1em 1em 1em 0; border:solid 1px #AAAAAA; border-collapse:collapse;empty-cells:show;"
|
*The '''chain rule''', also known as the '''law of progressive conditioning''', is the following proposition: for a sequence of events (not necessarily independent) <math>A_1,A_2,\ldots,A_n</math>,
::<math>\Pr[\forall i, A_i]=\prod_{i=1}^n\Pr[A_i\mid \forall j<i, A_j]</math>.
:It is a simple consequence of the definition of conditional probability. By definition of conditional probability,
::<math>\Pr[A_n\mid \forall j<n, A_j]=\frac{\Pr[\forall i, A_i]}{\Pr[\forall j<n, A_j]}</math>,
:and equivalently we have
::<math>\Pr[\forall i, A_i]=\Pr[\forall j<n, A_j]\cdot\Pr[A_n\mid \forall j<n, A_j]</math>.
:Recursively applying this to <math>\Pr[\forall j<n, A_j]</math>, we obtain the chain rule.
|}
 
Back to the analysis of probability <math>p_C</math>.
 
Now our task is to give a lower bound to each <math>p_i=\Pr[e_i\not\in C\mid \forall j<i, e_j\not\in C]</math>. The condition <math>\mbox{``}\forall j<i, e_j\not\in C\mbox{''}</math> means that the min-cut <math>C</math> survives the first <math>i-1</math> contractions <math>e_1,e_2,\ldots,e_{i-1}</math>, which by Proposition 1 means that <math>C</math> is also a min-cut in the multi-graph <math>G_i</math> obtained from applying the first <math>(i-1)</math> contractions.
 
Then the conditional probability <math>p_i=\Pr[e_i\not\in C\mid \forall j<i, e_j\not\in C]</math> is the probability that no edge in <math>C</math> is hit when a uniform random edge in the current multi-graph is chosen assuming that <math>C</math> is a minimum cut in the current multi-graph. Intuitively this probability should be bounded from below, because as a min-cut <math>C</math> should be sparse among all edges. This intuition is justified by the following proposition.
 
{{Theorem
|Proposition 2|
:If <math>C</math> is a min-cut in a multi-graph <math>G(V,E)</math>, then <math>|E|\ge \frac{|V||C|}{2}</math>.
}}
{{Proof|
:It must hold that the degree of each vertex <math>v\in V</math> is at least <math>|C|</math>, or otherwise the set of edges incident to <math>v</math> forms a cut of size smaller than <math>|C|</math> which separates <math>\{v\}</math> from the rest of the graph, contradicting that <math>C</math> is a min-cut. And the bound <math>|E|\ge \frac{|V||C|}{2}</math> follows directly from applying the [https://en.wikipedia.org/wiki/Handshaking_lemma handshaking lemma] to the fact that every vertex in <math>G</math> has degree at least <math>|C|</math>.
}}
 
Let <math>V_i</math> and <math>E_i</math> denote the vertex set and edge set of the multi-graph <math>G_i</math> respectively, and recall that <math>G_i</math> is the multi-graph obtained from applying the first <math>(i-1)</math> contractions. Obviously <math>|V_{i}|=n-i+1</math>. And due to Proposition 2, <math>|E_i|\ge \frac{|V_i||C|}{2}</math> if <math>C</math> is still a min-cut in <math>G_i</math>.
 
The probability <math>p_i=\Pr[e_i\not\in C\mid \forall j<i, e_j\not\in C]</math> can be computed as
:<math>
\begin{align}
p_i
&=1-\frac{|C|}{|E_i|}\\
&\ge1-\frac{2}{|V_i|}\\
&=1-\frac{2}{n-i+1}
\end{align},</math>
where the inequality is due to Proposition 2.
 
We can now put everything together. We arbitrarily fix the input multi-graph <math>G</math> and any particular minimum cut <math>C</math> in <math>G</math>.
:<math>\begin{align}
p_{\text{correct}}
&=\Pr[\,\text{a minimum cut is returned by }RandomContract\,]\\
&\ge
\Pr[\,C\mbox{ is returned by }{RandomContract}\,]\\
&=
\Pr[\,e_i\not\in C\mbox{ for all }i=1,2,\ldots,n-2\,]\\
&=
\prod_{i=1}^{n-2}\Pr[e_i\not\in C\mid \forall j<i, e_j\not\in C]\\
&\ge
\prod_{i=1}^{n-2}\left(1-\frac{2}{n-i+1}\right)\\
&=
\prod_{k=3}^{n}\frac{k-2}{k}\\
&= \frac{2}{n(n-1)}.
\end{align}</math>
 
This gives us the following theorem.
{{Theorem
|Theorem|
: For any multigraph with <math>n</math> vertices, the ''RandomContract'' algorithm returns a minimum cut with probability at least <math>\frac{2}{n(n-1)}</math>.
}}
At first glance this seems to be a miserable chance of success. However, notice that there may be exponentially many cuts in a graph (because potentially every nonempty subset <math>S\subset V</math> corresponds to a cut <math>C=E(S,\overline{S})</math>), and Karger's algorithm effectively reduces this exponentially large space of feasible solutions to one of quadratic size, an exponential improvement!
 
We can run ''RandomContract'' independently for <math>t=\frac{n(n-1)\ln n}{2}</math> times and return the smallest cut ever returned. The probability that a minimum cut is found is at least:
 
:<math>\begin{align}
&\quad 1-\Pr[\,\mbox{all }t\mbox{ independent runs of } RandomContract\mbox{ fail to find a min-cut}\,] \\
&= 1-\Pr[\,\mbox{a single run of }{RandomContract}\mbox{ fails}\,]^{t} \\
&\ge 1- \left(1-\frac{2}{n(n-1)}\right)^{\frac{n(n-1)\ln n}{2}} \\
&\ge 1-\frac{1}{n}.
\end{align}</math>
 
Recall that a run of the ''RandomContract'' algorithm takes <math>O(n^2)</math> time. Altogether this gives us a randomized algorithm which runs in time <math>O(n^4\log n)</math> and finds a minimum cut [https://en.wikipedia.org/wiki/With_high_probability '''with high probability'''].
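As a sketch (reusing the hypothetical random_contract from the sketch above), the repetition scheme is a one-liner:
<pre>
# Boosting by independent repetition: run RandomContract t = n(n-1)ln(n)/2
# times and keep the smallest cut found (a min-cut with probability >= 1-1/n).
import math

def min_cut_whp(adj):
    n = len(adj)
    t = math.ceil(n * (n - 1) * math.log(n) / 2)
    return min(random_contract(adj) for _ in range(t))
</pre>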
 
== A Corollary by the Probabilistic Method ==
The analysis of Karger's algorithm implies the following combinatorial proposition for the number of distinct minimum cuts in a graph.
{{Theorem|Corollary|
:For any graph <math>G(V,E)</math> of <math>n</math> vertices, the number of distinct minimum cuts in <math>G</math> is at most <math>\frac{n(n-1)}{2}</math>.
}}
{{Proof|
Let <math>\mathcal{C}</math> denote the set of all minimum cuts in <math>G</math>. For each min-cut <math>C\in\mathcal{C}</math>, let <math>A_C</math> denote the event "<math>C</math> is returned by ''RandomContract''", whose probability is given by
:<math>p_C=\Pr[A_C]\,</math>.
 
Clearly we have:
* for any distinct <math>C,D\in\mathcal{C}</math>, <math>A_C\,</math> and <math>A_{D}\,</math> are '''disjoint events'''; and
* the union <math>\bigcup_{C\in\mathcal{C}}A_C</math> is precisely the event "a minimum cut is returned by ''RandomContract''", whose probability is given by
::<math>p_{\text{correct}}=\Pr[\,\text{a minimum cut is returned by } RandomContract\,]</math>.
Due to the [https://en.wikipedia.org/wiki/Probability_axioms#Third_axiom '''additivity of probability'''], it holds that
:<math>
p_{\text{correct}}=\sum_{C\in\mathcal{C}}\Pr[A_C]=\sum_{C\in\mathcal{C}}p_C.
</math>
 
By the analysis of Karger's algorithm, we know <math>p_C\ge\frac{2}{n(n-1)}</math>. And since <math>p_{\text{correct}}</math> is a well defined probability, due to the [https://en.wikipedia.org/wiki/Probability_axioms#Second_axiom '''unitarity of probability'''], it must hold that <math>p_{\text{correct}}\le 1</math>. Therefore,
:<math>1\ge p_{\text{correct}}=\sum_{C\in\mathcal{C}}p_C\ge|\mathcal{C}|\frac{2}{n(n-1)}</math>,
which means <math>|\mathcal{C}|\le\frac{n(n-1)}{2}</math>.
}}
 
Note that the statement of this theorem has no randomness at all, while the proof consists of a randomized procedure. This is an example of [http://en.wikipedia.org/wiki/Probabilistic_method the probabilistic method].
 
== Fast Min-Cut ==
In the analysis of the ''RandomContract'' algorithm, recall that we lower bound the probability <math>p_C</math> that a min-cut <math>C</math> is returned by ''RandomContract'' by the following '''telescoping product''':
:<math>p_C\ge\prod_{i=1}^{n-2}\left(1-\frac{2}{n-i+1}\right)</math>.
Here the index <math>i</math> corresponds to the <math>i</math>-th contraction. The factor <math>\left(1-\frac{2}{n-i+1}\right)</math> is decreasing in <math>i</math>, which means:
* The probability of success only deteriorates when the graph becomes "too contracted", that is, when the number of remaining vertices gets small.
This motivates us to consider the following modification of the algorithm: first use random contractions to reduce the number of vertices to a moderately small number, and then recursively find a min-cut in this smaller instance. This may seem a mere restatement of exactly what we have been doing. Inspired by the idea of boosting the accuracy via independent repetition, however, here we apply the recursion on ''two'' smaller instances generated independently.
 
The algorithm obtained in this way is called ''FastCut''. We first define a procedure to randomly contract edges until only <math>t</math> vertices are left.
 
{{Theorem|''RandomContract''<math>(G, t)</math>|
:'''Input:''' multi-graph <math>G(V,E)</math>, and integer <math>t\ge 2</math>;
----
:while <math>|V|>t</math> do
:* choose an edge <math>uv\in E</math> uniformly at random;
:* <math>G=Contract(G,uv)</math>;
:return <math>G</math>;
}}
 
The ''FastCut'' algorithm is recursively defined as follows.
{{Theorem|''FastCut''<math>(G)</math>|
:'''Input:''' multi-graph <math>G(V,E)</math>;
----
:if <math>|V|\le 6</math> then return a min-cut by brute force;
:else let <math>t=\left\lceil1+|V|/\sqrt{2}\right\rceil</math>;
:: <math>G_1=RandomContract(G,t)</math>;
:: <math>G_2=RandomContract(G,t)</math>;
::return the smaller one of <math>FastCut(G_1)</math> and <math>FastCut(G_2)</math>;
}}
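Here is a minimal Python sketch of ''FastCut'', under the same Counter-based multigraph representation and the hypothetical contract helper used in the ''RandomContract'' sketch above.
<pre>
# A sketch of FastCut (helper names are ours).
import math, random
from collections import Counter

def random_contract_to(adj, t):
    """RandomContract(G, t): contract random edges until t vertices remain."""
    adj = {u: Counter(nbrs) for u, nbrs in adj.items()}  # work on a copy
    while len(adj) > t:
        u = random.choices(list(adj), [sum(c.values()) for c in adj.values()])[0]
        v = random.choices(list(adj[u]), list(adj[u].values()))[0]
        contract(adj, u, v)
    return adj

def fast_cut(adj):
    n = len(adj)
    if n <= 6:   # brute force: try all bipartitions {S, V \ S}
        verts = list(adj)
        return min(sum(cnt for u in S for w, cnt in adj[u].items() if w not in S)
                   for mask in range(1, 2 ** (n - 1))
                   for S in [{verts[i] for i in range(n) if mask >> i & 1}])
    t = math.ceil(1 + n / math.sqrt(2))
    # recurse on two independently contracted instances, keep the smaller cut
    return min(fast_cut(random_contract_to(adj, t)),
               fast_cut(random_contract_to(adj, t)))
</pre>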
 
As before, all <math>G</math> are multigraphs.
 
Fix a min-cut <math>C</math> in the original multigraph <math>G</math>. By the same analysis as in the case of ''RandomContract'', we have
:<math>
\begin{align}
&\Pr[C\text{ survives all contractions in }RandomContract(G,t)]\\
=
&\prod_{i=1}^{n-t}\Pr[C\text{ survives the }i\text{-th contraction}\mid C\text{ survives the first }(i-1)\text{ contractions}]\\
\ge
&\prod_{i=1}^{n-t}\left(1-\frac{2}{n-i+1}\right)\\
=
&\prod_{k=t+1}^{n}\frac{k-2}{k}\\
=
&\frac{t(t-1)}{n(n-1)}.
\end{align}
</math>
When <math>t=\left\lceil1+n/\sqrt{2}\right\rceil</math>, this probability is at least <math>1/2</math>, since then <math>t(t-1)\ge n(n-1)/2</math>. Indeed, <math>t</math> is chosen precisely so as to make this probability at least <math>1/2</math>, which will be crucial in the following analysis of accuracy.
 
We denote by <math>A</math> and <math>B</math> the following events:
:<math>
\begin{align}
A:
&\quad C\text{  survives all contractions in }RandomContract(G,t);\\
B:
&\quad\text{size of min-cut is unchanged after }RandomContract(G,t);
\end{align}
</math>
Clearly, <math>A</math> implies <math>B</math>, and by the above analysis <math>\Pr[B]\ge\Pr[A]\ge\frac{1}{2}</math>.
 
We denote by <math>p(n)</math> the lower bound on the probability that <math>FastCut(G)</math> succeeds for a multigraph of <math>n</math> vertices, that is
:<math>
p(n)
=\min_{G: |V|=n}\Pr[\,FastCut(G)\text{ returns a min-cut in }G\,].
</math>
Suppose that <math>G</math> is the multigraph that achieves the minimum in the above definition. The following recurrence holds for <math>p(n)</math>.
:<math>
\begin{align}
p(n)
&=
\Pr[\,FastCut(G)\text{ returns a min-cut in }G\,]\\
&=
\Pr[\,\text{ a min-cut of }G\text{ is returned by }FastCut(G_1)\text{ or }FastCut(G_2)\,]\\
&\ge
1-\left(1-\Pr[B\wedge FastCut(G_1)\text{ returns a min-cut in }G_1\,]\right)^2\\
&\ge
1-\left(1-\Pr[A\wedge FastCut(G_1)\text{ returns a min-cut in }G_1\,]\right)^2\\
&=
1-\left(1-\Pr[A]\Pr[ FastCut(G_1)\text{ returns a min-cut in }G_1\mid A]\right)^2\\
&\ge
1-\left(1-\frac{1}{2}p\left(\left\lceil1+n/\sqrt{2}\right\rceil\right)\right)^2,
\end{align}
</math>
where <math>A</math> and <math>B</math> are defined as above such that <math>\Pr[A]\ge\frac{1}{2}</math>.
 
The base case is that <math>p(n)=1</math> for <math>n\le 6</math>. By induction it is easy to prove that
:<math>
p(n)=\Omega\left(\frac{1}{\log n}\right).
</math>
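Here is a sketch of that induction (our reconstruction, with an unspecified constant <math>c>0</math>). Assume <math>p(m)\ge c/\ln m</math> for all <math>m<n</math> and let <math>t=\left\lceil1+n/\sqrt{2}\right\rceil</math>. Since <math>p-\frac{p^2}{4}</math> is increasing for <math>p\le 2</math>, the recurrence gives
:<math>
p(n)\ge 1-\left(1-\frac{p(t)}{2}\right)^2\ge \frac{c}{\ln t}-\frac{c^2}{4\ln^2 t}.
</math>
Since <math>\ln t=\ln n-\frac{\ln 2}{2}+o(1)</math>, the gain <math>\frac{c}{\ln t}-\frac{c}{\ln n}=\Theta\left(\frac{1}{\ln^2 n}\right)</math> absorbs the loss term <math>\frac{c^2}{4\ln^2 t}</math> once <math>c</math> is chosen small enough, which completes the induction.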
 
Recall that we can implement an edge contraction in <math>O(n)</math> time, thus it is easy to verify the following recurrence for the time complexity:
:<math>
T(n)=2T\left(\left\lceil1+n/\sqrt{2}\right\rceil\right)+O(n^2),
</math>
where <math>T(n)</math> denotes the running time of <math>FastCut(G)</math> on a multigraph <math>G</math> of <math>n</math> vertices.
 
By induction with the base case <math>T(n)=O(1)</math> for <math>n\le 6</math>, it is easy to verify that <math>T(n)=O(n^2\log n)</math>.
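One way to see this bound is to unroll the recurrence: at recursion depth <math>i</math> there are <math>2^i</math> subproblems on roughly <math>n/2^{i/2}</math> vertices each, so every level of the recursion costs <math>O(n^2)</math> in total, and the recursion depth is <math>\log_{\sqrt{2}}n=O(\log n)</math>:
:<math>
T(n)=\sum_{i=0}^{O(\log n)}2^i\cdot O\left(\left(\frac{n}{2^{i/2}}\right)^2\right)=\sum_{i=0}^{O(\log n)}O(n^2)=O(n^2\log n).
</math>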
 
{{Theorem
|Theorem|
: For any multigraph with <math>n</math> vertices, the ''FastCut'' algorithm returns a minimum cut with probability <math>\Omega\left(\frac{1}{\log n}\right)</math> in time <math>O(n^2\log n)</math>.
}}
 
At this point, we see that the name ''FastCut'' is misleading, because it is actually slower than the original ''RandomContract'' algorithm; it is only the chance of successfully finding a min-cut that is much better (improved from <math>\Omega(1/n^2)</math> to <math>\Omega(1/\log n)</math>).
 
Given any input multi-graph, by independently running the ''FastCut'' algorithm <math>O((\log n)^2)</math> times and returning the smallest cut found, we obtain an algorithm which runs in time <math>O(n^2\log^3n)</math> and returns a min-cut with probability <math>1-O(1/n)</math>, i.e. with high probability.
 
Recall that the running time of the best known deterministic algorithm for min-cut on multi-graphs is <math>O(mn+n^2\log n)</math>. On dense graphs, the randomized algorithm outperforms the best known deterministic algorithm.
 
Finally, Karger further improves this and obtains a near-linear (in the number of edges) time [https://arxiv.org/abs/cs/9812007 randomized algorithm] for minimum cut in multi-graphs.
 
= Max-Cut =
The '''maximum cut problem''', in short the '''max-cut problem''', is defined as follows.
{{Theorem|Max-cut problem|
*'''Input''': an undirected graph <math>G(V,E)</math>;
*'''Output''': a bipartition of <math>V</math> into disjoint subsets <math>S</math> and <math>T</math> that maximizes <math>|E(S,T)|</math>.
}}
 
The problem is a typical MAX-CSP, an optimization version of the [https://en.wikipedia.org/wiki/Constraint_satisfaction_problem constraint satisfaction problem]. An instance of CSP consists of:
* a set of variables <math>x_1,x_2,\ldots,x_n</math> usually taking values from some finite domain;
* a sequence of constraints (predicates) <math>C_1,C_2,\ldots, C_m</math> defined on those variables.
The MAX-CSP asks to find an assignment of values to variables <math>x_1,x_2,\ldots,x_n</math> which maximizes the number of satisfied constraints.
 
In particular, when the variables <math>x_1,x_2,\ldots,x_n</math> take Boolean values <math>\{0,1\}</math> and every constraint is a binary constraint <math>\cdot\neq\cdot</math> in the form <math>x_i\neq x_j</math>, then the MAX-CSP is precisely the max-cut problem.
 
Unlike the min-cut problem, which can be solved in polynomial time, the max-cut is known to be [https://en.wikipedia.org/wiki/NP-hardness '''NP-hard''']. Its decision version is among the [https://en.wikipedia.org/wiki/Karp%27s_21_NP-complete_problems 21 '''NP-complete''' problems found by Karp]. This means we should not hope for a polynomial-time algorithm for solving the problem if [https://en.wikipedia.org/wiki/P_versus_NP_problem a famous conjecture in computational complexity] is correct. And due to another [https://en.wikipedia.org/wiki/BPP_(complexity)#Problems less famous conjecture in computational complexity], randomization alone probably cannot help this situation either.
 
We may compromise our goal and allow the algorithm to ''not always find the optimal solution''. However, we still want to guarantee that the algorithm ''always returns a relatively good solution on all possible instances''. This notion is formally captured by '''approximation algorithms''' and the '''approximation ratio'''.
 
== Greedy algorithm ==
A natural heuristic for solving the max-cut problem is to sequentially join the vertices to one of the two disjoint subsets <math>S</math> and <math>T</math> so as to ''greedily'' maximize the ''current'' number of edges crossing between <math>S</math> and <math>T</math>.
 
To state the algorithm, we overload the definition <math>E(S,T)</math>. Given an undirected graph <math>G(V,E)</math>, for any disjoint subsets <math>S,T\subseteq V</math> of vertices, we define
:<math>E(S,T)=\{uv\in E\mid u\in S, v\in T\}</math>.
 
We also assume that the vertices are ordered arbitrarily as <math>V=\{v_1,v_2,\ldots,v_n\}</math>.
 
The greedy heuristic is described as follows.
{{Theorem|''GreedyMaxCut''|
:'''Input:''' undirected graph <math>G(V,E)</math>,
:::with an arbitrary order of vertices <math>V=\{v_1,v_2,\ldots,v_n\}</math>;
----
:initially <math>S=T=\emptyset</math>;
:for <math>i=1,2,\ldots,n</math>
::<math>v_i</math> joins one of <math>S,T</math> to maximize the current <math>|E(S,T)|</math> (breaking ties arbitrarily);
}}
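A minimal Python sketch of the greedy heuristic (our own representation: the graph is a dict mapping each vertex to its set of neighbors):
<pre>
# A sketch of GreedyMaxCut; 'order' is the arbitrary vertex order v_1, ..., v_n.
def greedy_max_cut(adj, order):
    S, T = set(), set()
    for v in order:
        gain_S = sum(1 for w in adj[v] if w in T)  # crossing edges added if v joins S
        gain_T = sum(1 for w in adj[v] if w in S)  # crossing edges added if v joins T
        (S if gain_S >= gain_T else T).add(v)      # ties broken toward S
    return S, T
</pre>
For instance, on the triangle greedy_max_cut({1: {2, 3}, 2: {1, 3}, 3: {1, 2}}, [1, 2, 3]) returns a cut of size 2, which is optimal.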
 
The algorithm certainly runs in polynomial time.
 
Without any guarantee of how well the solution returned by the algorithm approximates the optimal solution, the algorithm is only a heuristic, not an '''approximation algorithm'''.
 
=== Approximation ratio ===
For now we restrict ourselves to the max-cut problem, although the notion applies more generally.
 
Let <math>G</math> be an arbitrary instance of the max-cut problem. Let <math>OPT_G</math> denote the size of the max-cut in graph <math>G</math>. More precisely,
:<math>OPT_G=\max_{S\subseteq V}|E(S,\overline{S})|</math>.
Let <math>SOL_G</math> be the size of the cut <math>|E(S,T)|</math> returned by the ''GreedyMaxCut'' algorithm on input graph <math>G</math>.
 
As a maximization problem it is trivial that <math>SOL_G\le OPT_G</math> for all <math>G</math>. To guarantee that the ''GreedyMaxCut'' gives good approximation of optimal solution, we need the other direction:
{{Theorem|Approximation ratio|
:We say that the '''approximation ratio''' of the ''GreedyMaxCut'' algorithm is <math>\alpha</math>, or ''GreedyMaxCut'' is an '''<math>\alpha</math>-approximation''' algorithm, for some <math>0<\alpha\le 1</math>, if
::<math>\frac{SOL_G}{OPT_G}\ge \alpha</math> for every possible instance <math>G</math> of max-cut.
}}


With this notion, we now try to analyze the approximation ratio of the ''GreedyMaxCut'' algorithm.
 
A difficulty in applying this notion in our analysis is that the definition of approximation ratio compares the solution returned by the algorithm with the '''optimal solution'''; however, in the analysis we can hardly carry out such comparisons directly, because computing the optimal solution is '''NP-hard''' and there is no easy way to express it (e.g. as a closed form).
 
A popular way (usually the first step in analyzing an approximation ratio) to avoid this difficulty is, instead of directly comparing to the optimal solution, to compare to an '''upper bound''' on the optimal solution (for a minimization problem, this needs to be a lower bound), that is, to compare to something which is even better than the optimal solution (and which therefore need not be realized by any feasible solution).


For the max-cut problem, a simple upper bound to <math>OPT_G</math> is <math>|E|</math>, the number of all edges. This is a trivial upper bound of max-cut since any cut is a subset of edges.


Let <math>G(V,E)</math> be the input graph and <math>V=\{v_1,v_2,\ldots,v_n\}</math>. Initially <math>S_1=T_1=\emptyset</math>, and for <math>i=1,2,\ldots,n</math>, we let <math>S_{i+1}</math> and <math>T_{i+1}</math> be the respective <math>S</math> and <math>T</math> after <math>v_i</math> joins one of <math>S,T</math>. More precisely,
* <math>S_{i+1}=S_i\cup\{v_i\}</math> and <math>T_{i+1}=T_i\,</math> if <math>|E(S_{i}\cup\{v_i\},T_i)|>|E(S_{i},T_i\cup\{v_i\})|</math>;
* <math>S_{i+1}=S_i\,</math> and <math>T_{i+1}=T_i\cup\{v_i\}</math> otherwise.
Finally, the size of the cut returned by the greedy algorithm is given by
:<math>SOL_G=|E(S_{n+1},T_{n+1})|</math>.


We first observe that we can count the number of edges <math>|E|</math> by summing up the contributions of the individual <math>v_i</math>'s.
{{Theorem|Proposition 1|
:<math>|E| = \sum_{i=1}^n\left(|E(S_i,\{v_i\})|+|E(T_i,\{v_i\})|\right)</math>.
}}
{{Proof|
Note that <math>S_i\cup T_i=\{v_1,v_2,\ldots,v_{i-1}\}</math>, i.e. <math>S_i</math> and <math>T_i</math> together contain precisely those vertices preceding <math>v_i</math>. Therefore, by taking the sum
:<math>\sum_{i=1}^n\left(|E(S_i,\{v_i\})|+|E(T_i,\{v_i\})|\right)</math>,
we effectively enumerate all pairs <math>(v_j,v_i)</math> such that <math>v_jv_i\in E</math> and <math>j<i</math>. The total number is precisely <math>|E|</math>.
}}


We then observe that <math>SOL_G</math> can be decomposed into contributions of the individual <math>v_i</math>'s in the same way.
{{Theorem|Proposition 2|
:<math>SOL_G = \sum_{i=1}^n\max\left(|E(S_i, \{v_i\})|,|E(T_i, \{v_i\})|\right)</math>.
}}
{{Proof|
It is easy to observe that <math>E(S_i,T_i)\subseteq E(S_{i+1},T_{i+1})</math>, i.e. once an edge joins the cut between the current <math>S,T</math> it never drops from the cut in the future.


We then define
:<math>\Delta_i= |E(S_{i+1},T_{i+1})|-|E(S_i,T_i)|=|E(S_{i+1},T_{i+1})\setminus E(S_i,T_i)|</math>
to be the contribution of <math>v_i</math> in the final cut.


It holds that
:<math>\sum_{i=1}^n\Delta_i=|E(S_{n+1},T_{n+1})|-|E(S_{1},T_{1})|=|E(S_{n+1},T_{n+1})|=SOL_G</math>.
On the other hand, due to the greedy rule:
* <math>S_{i+1}=S_i\cup\{v_i\}</math> and <math>T_{i+1}=T_i\,</math> if <math>|E(S_{i}\cup\{v_i\},T_i)|>|E(S_{i},T_i\cup\{v_i\})|</math>;
* <math>S_{i+1}=S_i\,</math> and <math>T_{i+1}=T_i\cup\{v_i\}</math> otherwise;
it holds that
:<math>\Delta_i=|E(S_{i+1},T_{i+1})\setminus E(S_i,T_i)| = \max\left(|E(S_i, \{v_i\})|,|E(T_i, \{v_i\})|\right)</math>.
Together the proposition follows.
}}


Combining the above Proposition 1 and Proposition 2, we have
:<math>
\begin{align}
SOL_G
&= \sum_{i=1}^n\max\left(|E(S_i, \{v_i\})|,|E(T_i, \{v_i\})|\right)\\
&\ge \frac{1}{2}\sum_{i=1}^n\left(|E(S_i, \{v_i\})|+|E(T_i, \{v_i\})|\right)\\
&=\frac{1}{2}|E|\\
&\ge\frac{1}{2}OPT_G.
\end{align}
</math>


{{Theorem|Theorem|
:The ''GreedyMaxCut'' is a <math>0.5</math>-approximation algorithm for the max-cut problem.
}}


This is not the best approximation ratio achieved by polynomial-time algorithms for max-cut.
* The best approximation ratio known to be achievable in polynomial time is that of the [http://www-math.mit.edu/~goemans/PAPERS/maxcut-jacm.pdf Goemans-Williamson algorithm], which relies on rounding an [https://en.wikipedia.org/wiki/Semidefinite_programming SDP] relaxation of the max-cut, and achieves an approximation ratio <math>\alpha^*\approx 0.878</math>, where <math>\alpha^*</math> is an irrational number whose precise value is given by <math>\alpha^*=\frac{2}{\pi}\inf_{x\in[-1,1]}\frac{\arccos(x)}{1-x}</math>.
* Assuming the [https://en.wikipedia.org/wiki/Unique_games_conjecture unique games conjecture], there does not exist any polynomial-time algorithm for max-cut with approximation ratio <math>\alpha>\alpha^*</math>.


== Derandomization by conditional expectation ==
There is a probabilistic interpretation of the greedy algorithm, which may explain why we use a greedy scheme for max-cut and why it works for finding an approximate max-cut.


Given an undirected graph <math>G(V,E)</math>, let us calculate the average size of cuts in <math>G</math>. For every vertex <math>v\in V</math> let <math>X_v\in\{0,1\}</math> be a ''uniform'' and ''independent'' random bit which indicates whether <math>v</math> joins <math>S</math> or <math>T</math>. This gives us a uniform random bipartition of <math>V</math> into <math>S</math> and <math>T</math>.


The size of the random cut <math>|E(S,T)|</math> is given by
:<math>
|E(S,T)| = \sum_{uv\in E} I[X_u\neq X_v],
</math>
where <math>I[X_u\neq X_v]</math> is the Boolean indicator random variable that indicates whether event <math>X_u\neq X_v</math> occurs.


Due to '''linearity of expectation''',
:<math>
\mathbb{E}[|E(S,T)|]=\sum_{uv\in E} \mathbb{E}[I[X_u\neq X_v]] =\sum_{uv\in E} \Pr[X_u\neq X_v]=\frac{|E|}{2}.
</math>
Recall that <math>|E|</math> is a trivial upper bound for the max-cut <math>OPT_G</math>. Due to the above argument, we have
:<math>
\mathbb{E}[|E(S,T)|]\ge\frac{OPT_G}{2}.
</math>
:{|border="2" width="100%" cellspacing="4" cellpadding="3" rules="all" style="margin:1em 1em 1em 0; border:solid 1px #AAAAAA; border-collapse:collapse;empty-cells:show;"
|
*In the above argument we use a few probability propositions.
: '''linearity of expectation:'''
:: Let <math>\boldsymbol{X}=(X_1,X_2,\ldots,X_n)</math> be a random vector. Then
:::<math>\mathbb{E}\left[\sum_{i=1}^nc_iX_i\right]=\sum_{i=1}^nc_i\mathbb{E}[X_i]</math>,
::where <math>c_1,c_2,\ldots,c_n</math> are scalars.
::That is, the order of computations of expectation and linear (affine) function of a random vector can be exchanged.  
::Note that this property ignores the dependency between random variables, and hence is very useful.
:'''Expectation of indicator random variable:'''
::We usually use the notation <math>I[A]</math> to represent the Boolean indicator random variable that indicates whether the event <math>A</math> occurs: i.e. <math>I[A]=1</math> if event <math>A</math> occurs and <math>I[A]=0</math> otherwise.
::It is easy to see that <math>\mathbb{E}[I[A]]=\Pr[A]</math>. The expectation of an indicator random variable equals the probability of the event it indicates.
|}
 
By the above analysis, the average (under the uniform distribution) size of all cuts in any graph <math>G</math> must be at least <math>\frac{OPT_G}{2}</math>. Due to '''the probabilistic method''', in particular '''the averaging principle''', there must exist a bipartition of <math>V</math> into <math>S</math> and <math>T</math> whose cut <math>E(S,T)</math> is of size at least <math>\frac{OPT_G}{2}</math>. The next question is how to find such a bipartition <math>\{S,T\}</math> ''algorithmically''.
 
We still fix an arbitrary order of all vertices as <math>V=\{v_1,v_2,\ldots,v_n\}</math>. Recall that each vertex <math>v_i</math> is associated with a uniform and independent random bit <math>X_{v_i}</math> to indicate whether <math>v_i</math> joins <math>S</math> or <math>T</math>. We want to fix the value of <math>X_{v_i}</math> one after another to construct a bipartition <math>\{\hat{S},\hat{T}\}</math> of <math>V</math> such that
:<math>|E(\hat{S},\hat{T})|\ge\mathbb{E}[|E(S,T)|]\ge\frac{OPT_G}{2}</math>.
 
We start with the first vertex <math>v_1</math> and its random variable <math>X_{v_1}</math>. By the '''law of total expectation''',
:<math>
\mathbb{E}[E(S,T)]=\frac{1}{2}\mathbb{E}[E(S,T)\mid X_{v_1}=0]+\frac{1}{2}\mathbb{E}[E(S,T)\mid X_{v_1}=1].
</math>
There must exist an assignment <math>x_1\in\{0,1\}</math> of <math>X_{v_1}</math> such that
:<math>\mathbb{E}[E(S,T)\mid X_{v_1}=x_1]\ge \mathbb{E}[E(S,T)]</math>.
We can apply this argument repeatedly. In general, for any <math>i\le n</math> and any particular partial assignment <math>x_1,x_2,\ldots,x_{i-1}\in\{0,1\}</math> of <math>X_{v_1},X_{v_2},\ldots,X_{v_{i-1}}</math>, by the law of total expectation
:<math>
\begin{align}
\mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{i-1}}=x_{i-1}]
=
&\frac{1}{2}\mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{i-1}}=x_{i-1}, X_{v_{i}}=0]\\
&+\frac{1}{2}\mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{i-1}}=x_{i-1}, X_{v_{i}}=1].
\end{align}
</math>
There must exist an assignment <math>x_{i}\in\{0,1\}</math> of <math>X_{v_i}</math> such that
:<math>
\mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{i}}=x_{i}]\ge \mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{i-1}}=x_{i-1}].
</math>
By this argument, we can find a sequence <math>x_1,x_2,\ldots,x_n\in\{0,1\}</math> of bits which forms a ''monotone path'':
:<math>
\mathbb{E}[E(S,T)]\le \cdots \le \mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{i-1}}=x_{i-1}] \le \mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{i}}=x_{i}] \le \cdots \le  \mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{n}}=x_{n}].
</math>
We already know that the first step of this monotone path satisfies <math>\mathbb{E}[E(S,T)]\ge\frac{OPT_G}{2}</math>. And in the last step of the monotone path <math>\mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{n}}=x_{n}]</math>, since all random bits have been fixed, a bipartition <math>(\hat{S},\hat{T})</math> is determined by the assignment <math>x_1,\ldots, x_n</math>, so the expectation has no effect except just returning the size of that cut <math>|E(\hat{S},\hat{T})|</math>. Thus we have found a cut <math>E(\hat{S},\hat{T})</math> such that <math>|E(\hat{S},\hat{T})|\ge \frac{OPT_G}{2}</math>.
 
We translate the procedure of constructing this monotone path of conditional expectation to the following algorithm.  
{{Theorem|''MonotonePath''|
:'''Input:''' undirected graph <math>G(V,E)</math>,
:::with an arbitrary order of vertices <math>V=\{v_1,v_2,\ldots,v_n\}</math>;
----
:initially <math>S=T=\emptyset</math>;
:for <math>i=1,2,\ldots,n</math>
::<math>v_i</math> joins one of <math>S,T</math> to maximize the average size of cut conditioning on the choices made so far by the vertices <math>v_1,v_2,\ldots,v_i</math>;
}}
We leave it as an exercise to verify that the choice of each <math>v_i</math> (to join which one of <math>S,T</math>) in the ''MonotonePath'' algorithm (which maximizes the average size of cut conditioning on the choices made so far by the vertices <math>v_1,v_2,\ldots,v_i</math>) must be the same choice made by <math>v_i</math> in the ''GreedyMaxCut'' algorithm (which maximizes the current <math>|E(S,T)|</math>).
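To make the derandomization concrete, here is a minimal Python sketch of ''MonotonePath'' that computes the conditional expectations explicitly (helper names are ours): a decided edge contributes 1 or 0, and an edge with an undecided endpoint contributes exactly 1/2, since the undecided bits are still independent and uniform.
<pre>
# A sketch of derandomization by conditional expectation (helper names are ours).
def cond_exp(edges, S, T):
    """E[|E(S,T)|] conditioned on the vertices in S and T being already fixed."""
    exp = 0.0
    for u, v in edges:
        if u in S | T and v in S | T:          # both endpoints decided
            exp += 1 if (u in S) != (v in S) else 0
        else:                                  # an endpoint is still random
            exp += 0.5
    return exp

def monotone_path(vertices, edges):
    S, T = set(), set()
    for v in vertices:
        # fix X_v to the side that does not decrease the conditional expectation
        if cond_exp(edges, S | {v}, T) >= cond_exp(edges, S, T | {v}):
            S.add(v)
        else:
            T.add(v)
    return S, T
</pre>
Comparing the two conditional expectations, the 1/2-contributions of the still-undecided edges appear on both sides and cancel, leaving exactly the greedy comparison made in ''GreedyMaxCut''.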
 
Therefore, the greedy algorithm for max-cut is in fact a derandomization of the above average-case argument.
 
== Derandomization by pairwise independence ==
We still construct a random bipartition of <math>V</math> into <math>S</math> and <math>T</math>. But this time the random choices have '''bounded independence'''.
 
For each vertex <math>v\in V</math>, we use a Boolean random variable <math>Y_v\in\{0,1\}</math> to indicate whether <math>v</math> joins <math>S</math> or <math>T</math>. The dependencies between the <math>Y_v</math>'s are to be specified later.
 
By linearity of expectation, regardless of the dependencies between <math>Y_v</math>'s, it holds that:
:<math>
\mathbb{E}[|E(S,T)|]=\sum_{uv\in E} \Pr[Y_u\neq Y_v].
</math>
In order to have the average cut <math>\mathbb{E}[|E(S,T)|]=\frac{|E|}{2}</math> as in the fully random case, we only need <math>\Pr[Y_u\neq Y_v]=\frac{1}{2}</math>. This merely requires that the Boolean random variables <math>Y_v</math>'s are uniform and '''pairwise independent''' instead of being '''mutually independent'''.
 
The <math>n</math> pairwise independent random bits <math>\{Y_v\}_{v\in V}</math> can be constructed by at most <math>k=\lceil\log (n+1)\rceil</math> mutually independent random bits <math>X_1,X_2,\ldots,X_k\in\{0,1\}</math> by the following standard routine.
 
{{Theorem|Theorem|
:Let <math>X_1, X_2, \ldots, X_k\in\{0,1\}</math> be mutually independent uniform random bits.
:Let <math>S_1, S_2, \ldots, S_{2^k-1}\subseteq \{1,2,\ldots,k\}</math> enumerate the <math>2^k-1</math> nonempty subsets of <math>\{1,2,\ldots,k\}</math>.  
:For each <math>1\le i\le2^k-1</math>, let
::<math>Y_i=\bigoplus_{j\in S_i}X_j=\left(\sum_{j\in S_i}X_j\right)\bmod 2.</math>
:Then <math>Y_1,Y_2,\ldots,Y_{2^k-1}</math> are pairwise independent uniform random bits.
}}
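As a sketch (the function name is ours), this routine expands <math>k</math> independent bits into <math>2^k-1</math> pairwise independent ones, indexing the nonempty subsets <math>S_i</math> by the binary representation of <math>i</math>:
<pre>
# k mutually independent fair bits -> 2^k - 1 pairwise independent fair bits.
import random

def pairwise_independent_bits(k):
    X = [random.randrange(2) for _ in range(k)]   # mutually independent bits
    # Y_i = XOR of X_j over the bit positions j that are set in i
    return [sum(X[j] for j in range(k) if i >> j & 1) % 2
            for i in range(1, 2 ** k)]
</pre>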
 
If <math>Y_v</math> for each vertex <math>v\in V</math> is constructed in this way from at most <math>k=\lceil\log (n+1)\rceil</math> mutually independent random bits <math>X_1,X_2,\ldots,X_k\in\{0,1\}</math>, then the <math>Y_v</math>'s are uniform and pairwise independent, and by the above calculation it holds for the corresponding bipartition <math>\{S,T\}</math> of <math>V</math> that
:<math>
\mathbb{E}[|E(S,T)|]=\sum_{uv\in E} \Pr[Y_u\neq Y_v]=\frac{|E|}{2}.
</math>
Note that the average is taken over the random choices of <math>X_1,X_2,\ldots,X_k\in\{0,1\}</math> (because they are the only random choices used to construct the bipartition <math>\{S,T\}</math>). By the probabilistic method, there must exist an assignment of <math>X_1,X_2,\ldots,X_k\in\{0,1\}</math> such that the corresponding <math>Y_v</math>'s, and the bipartition <math>\{S,T\}</math> of <math>V</math> indicated by the <math>Y_v</math>'s, satisfy
:<math>|E(S,T)|\ge \frac{|E|}{2}\ge\frac{OPT_G}{2}</math>.
 
This gives us the following algorithm, which exhaustively searches a smaller solution space of size <math>2^k=O(n)</math>.
{{Theorem|Algorithm|
:Enumerate vertices as <math>V=\{v_1,v_2,\ldots,v_n\}</math>;
:let <math>k=\lceil\log (n+1)\rceil</math>;
:for all <math>\vec{x}\in\{0,1\}^k</math>
::initialize <math>S_{\vec{x}}=T_{\vec{x}}=\emptyset</math>;
::for <math>i=1, 2, \ldots, n</math>
:::if <math>\bigoplus_{j:\lfloor i/2^j\rfloor\bmod 2=1}x_j=1</math> then <math>v_i</math> joins <math>S_{\vec{x}}</math>;
:::else <math>v_i</math> joins <math>T_{\vec{x}}</math>;
:return the <math>\{S_{\vec{x}},T_{\vec{x}}\}</math> with the largest <math>|E(S_{\vec{x}},T_{\vec{x}})|</math>;
}}
The algorithm has approximation ratio 1/2 and runs in polynomial time.
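Putting things together, here is a minimal Python sketch of this exhaustive search (the names are ours; edges is a list of vertex pairs):
<pre>
# A sketch of the derandomized 1/2-approximation for max-cut: enumerate the
# 2^k = O(n) seeds x, derive the pairwise independent sides from x, keep the best.
import math

def derandomized_max_cut(vertices, edges):
    n = len(vertices)
    k = math.ceil(math.log2(n + 1))
    best, best_S = -1, None
    for x in range(2 ** k):                    # all seeds x in {0,1}^k
        # vertex v_i joins S iff the XOR of x_j, over the set bits j of i, is 1
        S = {v for i, v in enumerate(vertices, start=1)
             if bin(x & i).count('1') % 2 == 1}
        cut = sum(1 for u, v in edges if (u in S) != (v in S))
        if cut > best:
            best, best_S = cut, S
    return best_S, best
</pre>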


If we restrict the input to be simple graphs (meaning there are no parallel edges) with no edge weights, there are better algorithms. A deterministic algorithm of Ken-ichi Kawarabayashi and Mikkel Thorup, published in STOC 2015, achieves near-linear (in the number of edges) time complexity.

Karger's Contraction algorithm

We will describe a simple and elegant randomized algorithm for the min-cut problem. The algorithm is due to David Karger.

Let [math]\displaystyle{ G(V, E) }[/math] be a multi-graph, which allows multiple parallel edges between two distinct vertices [math]\displaystyle{ u }[/math] and [math]\displaystyle{ v }[/math] but does not allow any self-loops, i.e. edges that adjoin a vertex to itself. A multi-graph [math]\displaystyle{ G }[/math] can be represented by an adjacency matrix [math]\displaystyle{ A }[/math], in which each non-diagonal entry [math]\displaystyle{ A(u,v) }[/math] takes a nonnegative integer value instead of just 0 or 1, representing the number of parallel edges between [math]\displaystyle{ u }[/math] and [math]\displaystyle{ v }[/math] in [math]\displaystyle{ G }[/math], and all diagonal entries [math]\displaystyle{ A(v,v)=0 }[/math] (since there are no self-loops).

Given a multi-graph [math]\displaystyle{ G(V,E) }[/math] and an edge [math]\displaystyle{ e\in E }[/math], we define the following contraction operator Contract([math]\displaystyle{ G }[/math], [math]\displaystyle{ e }[/math]), which transforms [math]\displaystyle{ G }[/math] into a new multi-graph.

The contraction operator Contract([math]\displaystyle{ G }[/math], [math]\displaystyle{ e }[/math])
say [math]\displaystyle{ e=uv }[/math]:
  • replace [math]\displaystyle{ \{u,v\} }[/math] by a new vertex [math]\displaystyle{ x }[/math];
  • for every edge (parallel or not) of the form [math]\displaystyle{ uw }[/math] or [math]\displaystyle{ vw }[/math] that connects one of [math]\displaystyle{ \{u,v\} }[/math] to a vertex [math]\displaystyle{ w\in V\setminus\{u,v\} }[/math], replace it by a new edge [math]\displaystyle{ xw }[/math];
  • the rest of the graph does not change.

In other words, [math]\displaystyle{ Contract(G,uv) }[/math] merges the two vertices [math]\displaystyle{ u }[/math] and [math]\displaystyle{ v }[/math] into a new vertex [math]\displaystyle{ x }[/math], which inherits all edges incident to [math]\displaystyle{ u }[/math] or [math]\displaystyle{ v }[/math] in the original graph [math]\displaystyle{ G }[/math] except for the parallel edges between them. Now you should see why we consider multi-graphs instead of simple graphs: even if we start with a simple graph without parallel edges, the contraction operator may create parallel edges.

[Figure: illustration of the contraction operator.]

Karger's algorithm uses a simple idea:

  • At each step we randomly select an edge in the current multi-graph to contract until there are only two vertices left.
  • The parallel edges between these two remaining vertices must be a cut of the original graph.
  • We return this cut and hope that with good chance this gives us a minimum cut.

The following is the pseudocode for Karger's algorithm.

RandomContract (Karger 1993)
Input: multi-graph [math]\displaystyle{ G(V,E) }[/math];

while [math]\displaystyle{ |V|\gt 2 }[/math] do
  • choose an edge [math]\displaystyle{ uv\in E }[/math] uniformly at random;
  • [math]\displaystyle{ G=Contract(G,uv) }[/math];
return [math]\displaystyle{ C=E }[/math] (the parallel edges between the only two vertices in [math]\displaystyle{ V }[/math]);
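
The following is a minimal Python sketch of RandomContract, included for concreteness; it is not part of the original notes. The representation (a set of vertex labels plus an edge list in which parallel edges appear repeatedly) and the function name are our own choices, and the sketch assumes a connected multi-graph with at least two vertices. It is also not the O(n)-per-contraction implementation discussed below, just the simplest correct one.

import random

def random_contract(vertices, edges):
    # Karger's RandomContract: repeatedly contract a uniform random edge
    # until two vertex classes remain; return the edges crossing them.
    vertices = set(vertices)
    edges = list(edges)                  # parallel edges appear multiple times
    while len(vertices) > 2:
        u, v = random.choice(edges)      # uniform over all (parallel) edges
        vertices.remove(v)               # Contract(G, uv): merge v into u
        edges = [(u if a == v else a, u if b == v else b) for (a, b) in edges]
        edges = [(a, b) for (a, b) in edges if a != b]   # drop self-loops
    return edges                         # the returned cut C

For example, random_contract({1, 2, 3, 4}, [(1, 2), (2, 3), (3, 4), (4, 1)]) returns a cut of the 4-cycle of size 2, which here is a minimum cut.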

Another way of looking at the contraction operator Contract([math]\displaystyle{ G }[/math],[math]\displaystyle{ e }[/math]) is that we are dealing with classes of vertices. Let [math]\displaystyle{ V=\{v_1,v_2,\ldots,v_n\} }[/math] be the set of all vertices. We start with [math]\displaystyle{ n }[/math] vertex classes [math]\displaystyle{ S_1,S_2,\ldots, S_n }[/math], where each class [math]\displaystyle{ S_i=\{v_i\} }[/math] contains a single vertex. By calling [math]\displaystyle{ Contract(G,uv) }[/math], where [math]\displaystyle{ u\in S_i }[/math] and [math]\displaystyle{ v\in S_j }[/math] for distinct [math]\displaystyle{ i\neq j }[/math], we take the union of [math]\displaystyle{ S_i }[/math] and [math]\displaystyle{ S_j }[/math]. The edges in the contracted multi-graph are the edges that cross between different vertex classes.

[Figure: the vertex-class view of contraction.]

The following claim is left as an exercise for the class:

  • With suitable choice of data structures, each operation [math]\displaystyle{ Contract(G,e) }[/math] can be implemented within running time [math]\displaystyle{ O(n) }[/math] where [math]\displaystyle{ n=|V| }[/math] is the number of vertices.

In the above RandomContract algorithm, there are precisely [math]\displaystyle{ n-2 }[/math] contractions. Therefore, we have the following time upper bound.

Theorem
For any multigraph with [math]\displaystyle{ n }[/math] vertices, the running time of the RandomContract algorithm is [math]\displaystyle{ O(n^2) }[/math].

We emphasize that this is the time complexity of a single running of the algorithm: later we will see that we may need to run this algorithm many times to guarantee a desirable accuracy.

Analysis of accuracy

We now analyze the performance of the above algorithm. Since the algorithm is randomized, its output cut is a random variable even when the input is fixed, so the output may not always be correct. We want to give a theoretical guarantee of the chance that the algorithm returns a correct answer on an arbitrary input.

More precisely, on an arbitrarily fixed input multi-graph [math]\displaystyle{ G }[/math], we want to answer the following question rigorously:

[math]\displaystyle{ p_{\text{correct}}=\Pr[\,\text{a minimum cut is returned by }RandomContract\,]\ge ? }[/math]

To answer this question, we prove a stronger statement: for arbitrarily fixed input multi-graph [math]\displaystyle{ G }[/math] and a particular minimum cut [math]\displaystyle{ C }[/math] in [math]\displaystyle{ G }[/math],

[math]\displaystyle{ p_{C}=\Pr[\,C\mbox{ is returned by }RandomContract\,]\ge ? }[/math]

Obviously this will imply the previous lower bound for [math]\displaystyle{ p_{\text{correct}} }[/math] because the event in [math]\displaystyle{ p_{C} }[/math] implies the event in [math]\displaystyle{ p_{\text{correct}} }[/math].

  • In above argument we use the simple law in probability that [math]\displaystyle{ \Pr[A]\le \Pr[B] }[/math] if [math]\displaystyle{ A\subseteq B }[/math], i.e. event [math]\displaystyle{ A }[/math] implies event [math]\displaystyle{ B }[/math].

We introduce the following notations:

  • Let [math]\displaystyle{ e_1,e_2,\ldots,e_{n-2} }[/math] denote the sequence of random edges chosen to contract in a running of RandomContract algorithm.
  • Let [math]\displaystyle{ G_1=G }[/math] denote the original input multi-graph, and for [math]\displaystyle{ i=1,2,\ldots,n-2 }[/math], let [math]\displaystyle{ G_{i+1}=Contract(G_{i},e_i) }[/math] be the multi-graph after the [math]\displaystyle{ i }[/math]-th contraction.

Obviously [math]\displaystyle{ e_1,e_2,\ldots,e_{n-2} }[/math] are random variables, and they are the only random choices used in the algorithm: together with the input [math]\displaystyle{ G }[/math], they uniquely determine the sequence of multi-graphs [math]\displaystyle{ G_1,G_2,\ldots,G_{n-1} }[/math] in every iteration as well as the final output.

We now compute the probability [math]\displaystyle{ p_C }[/math] by decomposing it into more elementary events involving [math]\displaystyle{ e_1,e_2,\ldots,e_{n-2} }[/math]. This is due to the following proposition.

Proposition 1
If [math]\displaystyle{ C }[/math] is a minimum cut in a multi-graph [math]\displaystyle{ G }[/math] and [math]\displaystyle{ e\not\in C }[/math], then [math]\displaystyle{ C }[/math] is still a minimum cut in the contracted graph [math]\displaystyle{ G'=Contract(G,e) }[/math].
Proof.

We first observe that contraction will never create new cuts: every cut in the contracted graph [math]\displaystyle{ G' }[/math] must also be a cut in the original graph [math]\displaystyle{ G }[/math].

We then observe that a cut [math]\displaystyle{ C }[/math] in [math]\displaystyle{ G }[/math] "survives" in the contracted graph [math]\displaystyle{ G' }[/math] if and only if the contracted edge [math]\displaystyle{ e\not\in C }[/math].

Both observations are easy to verify by the definition of contraction operator (in particular, easier to verify if we take the vertex class interpretation). The detailed proofs are left as an exercise.

[math]\displaystyle{ \square }[/math]

Recall that [math]\displaystyle{ e_1,e_2,\ldots,e_{n-2} }[/math] denote the sequence of random edges chosen to contract in a running of RandomContract algorithm.

By Proposition 1, the event [math]\displaystyle{ \mbox{``}C\mbox{ is returned by }RandomContract\mbox{''}\, }[/math] is equivalent to the event [math]\displaystyle{ \mbox{``}e_i\not\in C\mbox{ for all }i=1,2,\ldots,n-2\mbox{''} }[/math]. Therefore:

[math]\displaystyle{ \begin{align} p_C &= \Pr[\,C\mbox{ is returned by }{RandomContract}\,]\\ &= \Pr[\,e_i\not\in C\mbox{ for all }i=1,2,\ldots,n-2\,]\\ &= \prod_{i=1}^{n-2}\Pr[e_i\not\in C\mid \forall j\lt i, e_j\not\in C]. \end{align} }[/math]

The last equation is due to the so-called chain rule in probability.

  • The chain rule, also known as the law of progressive conditioning, is the following proposition: for a sequence of events (not necessarily independent) [math]\displaystyle{ A_1,A_2,\ldots,A_n }[/math],
[math]\displaystyle{ \Pr[\forall i, A_i]=\prod_{i=1}^n\Pr[A_i\mid \forall j\lt i, A_j] }[/math].
It is a simple consequence of the definition of conditional probability. By definition of conditional probability,
[math]\displaystyle{ \Pr[A_n\mid \forall j\lt n, A_j]=\frac{\Pr[\forall i, A_i]}{\Pr[\forall j\lt n, A_j]} }[/math],
and equivalently we have
[math]\displaystyle{ \Pr[\forall i, A_i]=\Pr[\forall j\lt n, A_j]\cdot\Pr[A_n\mid \forall j\lt n, A_j] }[/math].
Recursively applying this to [math]\displaystyle{ \Pr[\forall j\lt n, A_j] }[/math], we obtain the chain rule.

Back to the analysis of probability [math]\displaystyle{ p_C }[/math].

Now our task is to give a lower bound to each [math]\displaystyle{ p_i=\Pr[e_i\not\in C\mid \forall j\lt i, e_j\not\in C] }[/math]. The condition [math]\displaystyle{ \mbox{``}\forall j\lt i, e_j\not\in C\mbox{''} }[/math] means the min-cut [math]\displaystyle{ C }[/math] survives the first [math]\displaystyle{ i-1 }[/math] contractions [math]\displaystyle{ e_1,e_2,\ldots,e_{i-1} }[/math], which due to Proposition 1 means that [math]\displaystyle{ C }[/math] is also a min-cut in the multi-graph [math]\displaystyle{ G_i }[/math] obtained from applying the first [math]\displaystyle{ (i-1) }[/math] contractions.

Then the conditional probability [math]\displaystyle{ p_i=\Pr[e_i\not\in C\mid \forall j\lt i, e_j\not\in C] }[/math] is the probability that no edge in [math]\displaystyle{ C }[/math] is hit when a uniform random edge in the current multi-graph is chosen assuming that [math]\displaystyle{ C }[/math] is a minimum cut in the current multi-graph. Intuitively this probability should be bounded from below, because as a min-cut [math]\displaystyle{ C }[/math] should be sparse among all edges. This intuition is justified by the following proposition.

Proposition 2
If [math]\displaystyle{ C }[/math] is a min-cut in a multi-graph [math]\displaystyle{ G(V,E) }[/math], then [math]\displaystyle{ |E|\ge \frac{|V||C|}{2} }[/math].
Proof.
It must hold that the degree of each vertex [math]\displaystyle{ v\in V }[/math] is at least [math]\displaystyle{ |C| }[/math], or otherwise the set of edges incident to [math]\displaystyle{ v }[/math] forms a cut of size smaller than [math]\displaystyle{ |C| }[/math] which separates [math]\displaystyle{ \{v\} }[/math] from the rest of the graph, contradicting that [math]\displaystyle{ C }[/math] is a min-cut. And the bound [math]\displaystyle{ |E|\ge \frac{|V||C|}{2} }[/math] follows directly from applying the handshaking lemma to the fact that every vertex in [math]\displaystyle{ G }[/math] has degree at least [math]\displaystyle{ |C| }[/math].
[math]\displaystyle{ \square }[/math]

Let [math]\displaystyle{ V_i }[/math] and [math]\displaystyle{ E_i }[/math] denote the vertex set and edge set of the multi-graph [math]\displaystyle{ G_i }[/math] respectively, and recall that [math]\displaystyle{ G_i }[/math] is the multi-graph obtained from applying the first [math]\displaystyle{ (i-1) }[/math] contractions. Obviously [math]\displaystyle{ |V_{i}|=n-i+1 }[/math]. And due to Proposition 2, [math]\displaystyle{ |E_i|\ge \frac{|V_i||C|}{2} }[/math] if [math]\displaystyle{ C }[/math] is still a min-cut in [math]\displaystyle{ G_i }[/math].

The probability [math]\displaystyle{ p_i=\Pr[e_i\not\in C\mid \forall j\lt i, e_j\not\in C] }[/math] can be computed as

[math]\displaystyle{ \begin{align} p_i &=1-\frac{|C|}{|E_i|}\\ &\ge1-\frac{2}{|V_i|}\\ &=1-\frac{2}{n-i+1} \end{align}, }[/math]

where the inequality is due to Proposition 2.

We now can put everything together. We arbitrarily fix the input multi-graph [math]\displaystyle{ G }[/math] and any particular minimum cut [math]\displaystyle{ C }[/math] in [math]\displaystyle{ G }[/math].

[math]\displaystyle{ \begin{align} p_{\text{correct}} &=\Pr[\,\text{a minimum cut is returned by }RandomContract\,]\\ &\ge \Pr[\,C\mbox{ is returned by }{RandomContract}\,]\\ &= \Pr[\,e_i\not\in C\mbox{ for all }i=1,2,\ldots,n-2\,]\\ &= \prod_{i=1}^{n-2}\Pr[e_i\not\in C\mid \forall j\lt i, e_j\not\in C]\\ &\ge \prod_{i=1}^{n-2}\left(1-\frac{2}{n-i+1}\right)\\ &= \prod_{k=3}^{n}\frac{k-2}{k}\\ &= \frac{2}{n(n-1)}. \end{align} }[/math]

This gives us the following theorem.

Theorem
For any multigraph with [math]\displaystyle{ n }[/math] vertices, the RandomContract algorithm returns a minimum cut with probability at least [math]\displaystyle{ \frac{2}{n(n-1)} }[/math].

At first glance this seems to be a miserable chance of success. However, notice that there may be exponentially many cuts in a graph (since potentially every nonempty subset [math]\displaystyle{ S\subset V }[/math] corresponds to a cut [math]\displaystyle{ C=E(S,\overline{S}) }[/math]), and Karger's algorithm effectively reduces this exponentially large space of feasible solutions to one of quadratic size, an exponential improvement!

We can run RandomContract independently [math]\displaystyle{ t=\frac{n(n-1)\ln n}{2} }[/math] times and return the smallest cut found. The probability that a minimum cut is found is at least:

[math]\displaystyle{ \begin{align} &\quad 1-\Pr[\,\mbox{all }t\mbox{ independent runnings of } RandomContract\mbox{ fail to find a min-cut}\,] \\ &= 1-\Pr[\,\mbox{a single running of }{RandomContract}\mbox{ fails}\,]^{t} \\ &\ge 1- \left(1-\frac{2}{n(n-1)}\right)^{\frac{n(n-1)\ln n}{2}} \\ &\ge 1-\frac{1}{n}. \end{align} }[/math]
The last inequality uses the fact that [math]\displaystyle{ 1-x\le e^{-x} }[/math] for all real [math]\displaystyle{ x }[/math].

Recall that a running of the RandomContract algorithm takes [math]\displaystyle{ O(n^2) }[/math] time. Altogether this gives us a randomized algorithm which runs in time [math]\displaystyle{ O(n^4\log n) }[/math] and finds a minimum cut with high probability.
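
For concreteness, the repetition wrapper can be sketched as follows, reusing the hypothetical random_contract above (again, the names and representation are our own; the number of repetitions follows the analysis):

import math

def min_cut_whp(vertices, edges):
    # Run RandomContract about n(n-1)ln(n)/2 times; keep the smallest cut found.
    n = len(vertices)
    t = int(n * (n - 1) * math.log(n) / 2) + 1
    best = None
    for _ in range(t):
        cut = random_contract(vertices, edges)
        if best is None or len(cut) < len(best):
            best = cut
    return best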

A Corollary by the Probabilistic Method

The analysis of Karger's algorithm implies the following combinatorial proposition for the number of distinct minimum cuts in a graph.

Corollary
For any graph [math]\displaystyle{ G(V,E) }[/math] of [math]\displaystyle{ n }[/math] vertices, the number of distinct minimum cuts in [math]\displaystyle{ G }[/math] is at most [math]\displaystyle{ \frac{n(n-1)}{2} }[/math].
Proof.

Let [math]\displaystyle{ \mathcal{C} }[/math] denote the set of all minimum cuts in [math]\displaystyle{ G }[/math]. For each min-cut [math]\displaystyle{ C\in\mathcal{C} }[/math], let [math]\displaystyle{ A_C }[/math] denote the event "[math]\displaystyle{ C }[/math] is returned by RandomContract", whose probability is given by

[math]\displaystyle{ p_C=\Pr[A_C]\, }[/math].

Clearly we have:

  • for any distinct [math]\displaystyle{ C,D\in\mathcal{C} }[/math], [math]\displaystyle{ A_C\, }[/math] and [math]\displaystyle{ A_{D}\, }[/math] are disjoint events; and
  • the union [math]\displaystyle{ \bigcup_{C\in\mathcal{C}}A_C }[/math] is precisely the event "a minimum cut is returned by RandomContract", whose probability is given by
[math]\displaystyle{ p_{\text{correct}}=\Pr[\,\text{a minimum cut is returned by } RandomContract\,] }[/math].

Due to the additivity of probability, it holds that

[math]\displaystyle{ p_{\text{correct}}=\sum_{C\in\mathcal{C}}\Pr[A_C]=\sum_{C\in\mathcal{C}}p_C. }[/math]

By the analysis of Karger's algorithm, we know [math]\displaystyle{ p_C\ge\frac{2}{n(n-1)} }[/math]. And since [math]\displaystyle{ p_{\text{correct}} }[/math] is a well defined probability, due to the unitarity of probability, it must hold that [math]\displaystyle{ p_{\text{correct}}\le 1 }[/math]. Therefore,

[math]\displaystyle{ 1\ge p_{\text{correct}}=\sum_{C\in\mathcal{C}}p_C\ge|\mathcal{C}|\frac{2}{n(n-1)} }[/math],

which means [math]\displaystyle{ |\mathcal{C}|\le\frac{n(n-1)}{2} }[/math].

[math]\displaystyle{ \square }[/math]

Note that the statement of this theorem has no randomness at all, while the proof consists of a randomized procedure. This is an example of the probabilistic method.

Fast Min-Cut

In the analysis of RandomContract algorithm, recall that we lower bound the probability [math]\displaystyle{ p_C }[/math] that a min-cut [math]\displaystyle{ C }[/math] is returned by RandomContract by the following telescopic product:

[math]\displaystyle{ p_C\ge\prod_{i=1}^{n-2}\left(1-\frac{2}{n-i+1}\right) }[/math].

Here the index [math]\displaystyle{ i }[/math] corresponds to the [math]\displaystyle{ i }[/math]th contraction. The factor [math]\displaystyle{ \left(1-\frac{2}{n-i+1}\right) }[/math] is decreasing in [math]\displaystyle{ i }[/math], which means:

  • The probability of success only degrades when the graph becomes "too contracted", that is, when the number of remaining vertices becomes small.

This motivates us to consider the following modification to the algorithm: first use random contractions to reduce the number of vertices to a moderately small number, and then recursively find a min-cut in this smaller instance. On its own this is just a restatement of what we have been doing. Inspired by the idea of boosting the accuracy via independent repetition, we instead apply the recursion on two smaller instances generated independently.

The algorithm obtained in this way is called FastCut. We first define a procedure to randomly contract edges until [math]\displaystyle{ t }[/math] vertices are left.

RandomContract[math]\displaystyle{ (G, t) }[/math]
Input: multi-graph [math]\displaystyle{ G(V,E) }[/math], and integer [math]\displaystyle{ t\ge 2 }[/math];

while [math]\displaystyle{ |V|\gt t }[/math] do
  • choose an edge [math]\displaystyle{ uv\in E }[/math] uniformly at random;
  • [math]\displaystyle{ G=Contract(G,uv) }[/math];
return [math]\displaystyle{ G }[/math];

The FastCut algorithm is recursively defined as follows.

FastCut[math]\displaystyle{ (G) }[/math]
Input: multi-graph [math]\displaystyle{ G(V,E) }[/math];

if [math]\displaystyle{ |V|\le 6 }[/math] then return a min-cut by brute force;
else let [math]\displaystyle{ t=\left\lceil1+|V|/\sqrt{2}\right\rceil }[/math];
[math]\displaystyle{ G_1=RandomContract(G,t) }[/math];
[math]\displaystyle{ G_2=RandomContract(G,t) }[/math];
return the smaller one of [math]\displaystyle{ FastCut(G_1) }[/math] and [math]\displaystyle{ FastCut(G_2) }[/math];

As before, all [math]\displaystyle{ G }[/math] are multigraphs.
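
Below is a self-contained Python sketch of FastCut under the same hypothetical edge-list representation as before; contract_to generalizes the random_contract sketch by stopping at t vertex classes, and the brute-force base case enumerates all bipartitions of at most 6 vertices. It assumes a connected multi-graph with at least two vertices.

import math
import random

def contract_to(vertices, edges, t):
    # RandomContract(G, t): contract uniform random edges until t classes remain.
    vertices = set(vertices)
    edges = list(edges)
    while len(vertices) > t:
        u, v = random.choice(edges)
        vertices.remove(v)                       # merge v into u
        edges = [(u if a == v else a, u if b == v else b) for (a, b) in edges]
        edges = [(a, b) for (a, b) in edges if a != b]
    return vertices, edges

def brute_force_min_cut(vertices, edges):
    # Try all bipartitions {S, T} (up to swapping sides); feasible for <= 6 vertices.
    vs = list(vertices)
    best = None
    for mask in range(1, 2 ** (len(vs) - 1)):    # vs[-1] always stays on the T side
        S = {vs[i] for i in range(len(vs)) if (mask >> i) & 1}
        cut = [(a, b) for (a, b) in edges if (a in S) != (b in S)]
        if best is None or len(cut) < len(best):
            best = cut
    return best

def fast_cut(vertices, edges):
    if len(vertices) <= 6:
        return brute_force_min_cut(vertices, edges)
    t = math.ceil(1 + len(vertices) / math.sqrt(2))
    c1 = fast_cut(*contract_to(vertices, edges, t))
    c2 = fast_cut(*contract_to(vertices, edges, t))
    return c1 if len(c1) <= len(c2) else c2

Note that the edges returned are edges of a contracted graph; their number equals the size of the corresponding cut in the original graph.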

Fix a min-cut [math]\displaystyle{ C }[/math] in the original multigraph [math]\displaystyle{ G }[/math]. By the same analysis as in the case of RandomContract, we have

[math]\displaystyle{ \begin{align} &\Pr[C\text{ survives all contractions in }RandomContract(G,t)]\\ = &\prod_{i=1}^{n-t}\Pr[C\text{ survives the }i\text{-th contraction}\mid C\text{ survives the first }(i-1)\text{ contractions}]\\ \ge &\prod_{i=1}^{n-t}\left(1-\frac{2}{n-i+1}\right)\\ = &\prod_{k=t+1}^{n}\frac{k-2}{k}\\ = &\frac{t(t-1)}{n(n-1)}. \end{align} }[/math]

When [math]\displaystyle{ t=\left\lceil1+n/\sqrt{2}\right\rceil }[/math], this probability is at least [math]\displaystyle{ 1/2 }[/math]; indeed, [math]\displaystyle{ t }[/math] is chosen precisely to make this probability at least [math]\displaystyle{ 1/2 }[/math], which is crucial in the following analysis of accuracy.

We denote by [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math] the following events:

[math]\displaystyle{ \begin{align} A: &\quad C\text{ survives all contractions in }RandomContract(G,t);\\ B: &\quad\text{size of min-cut is unchanged after }RandomContract(G,t); \end{align} }[/math]

Clearly, [math]\displaystyle{ A }[/math] implies [math]\displaystyle{ B }[/math] and by above analysis [math]\displaystyle{ \Pr[B]\ge\Pr[A]\ge\frac{1}{2} }[/math].

We denote by [math]\displaystyle{ p(n) }[/math] the worst-case probability that [math]\displaystyle{ FastCut(G) }[/math] succeeds on a multigraph of [math]\displaystyle{ n }[/math] vertices, that is

[math]\displaystyle{ p(n) =\min_{G: |V|=n}\Pr[\,FastCut(G)\text{ returns a min-cut in }G\,]. }[/math]

Suppose that [math]\displaystyle{ G }[/math] is the multigraph that achieves the minimum in the above definition. The following recurrence holds for [math]\displaystyle{ p(n) }[/math].

[math]\displaystyle{ \begin{align} p(n) &= \Pr[\,FastCut(G)\text{ returns a min-cut in }G\,]\\ &= \Pr[\,\text{ a min-cut of }G\text{ is returned by }FastCut(G_1)\text{ or }FastCut(G_2)\,]\\ &\ge 1-\left(1-\Pr[B\wedge FastCut(G_1)\text{ returns a min-cut in }G_1\,]\right)^2\\ &\ge 1-\left(1-\Pr[A\wedge FastCut(G_1)\text{ returns a min-cut in }G_1\,]\right)^2\\ &= 1-\left(1-\Pr[A]\Pr[ FastCut(G_1)\text{ returns a min-cut in }G_1\mid A]\right)^2\\ &\ge 1-\left(1-\frac{1}{2}p\left(\left\lceil1+n/\sqrt{2}\right\rceil\right)\right)^2, \end{align} }[/math]

where [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math] are defined as above such that [math]\displaystyle{ \Pr[A]\ge\frac{1}{2} }[/math].

The base case is that [math]\displaystyle{ p(n)=1 }[/math] for [math]\displaystyle{ n\le 6 }[/math]. By induction it is easy to prove that

[math]\displaystyle{ p(n)=\Omega\left(\frac{1}{\log n}\right). }[/math]

Recall that we can implement an edge contraction in [math]\displaystyle{ O(n) }[/math] time, thus it is easy to verify the following recursion of time complexity:

[math]\displaystyle{ T(n)=2T\left(\left\lceil1+n/\sqrt{2}\right\rceil\right)+O(n^2), }[/math]

where [math]\displaystyle{ T(n) }[/math] denotes the running time of [math]\displaystyle{ FastCut(G) }[/math] on a multigraph [math]\displaystyle{ G }[/math] of [math]\displaystyle{ n }[/math] vertices.

By induction with the base case [math]\displaystyle{ T(n)=O(1) }[/math] for [math]\displaystyle{ n\le 6 }[/math], it is easy to verify that [math]\displaystyle{ T(n)=O(n^2\log n) }[/math].

Theorem
For any multigraph with [math]\displaystyle{ n }[/math] vertices, the FastCut algorithm returns a minimum cut with probability [math]\displaystyle{ \Omega\left(\frac{1}{\log n}\right) }[/math] in time [math]\displaystyle{ O(n^2\log n) }[/math].

At this point, we see that the name FastCut is a bit misleading: a single run is actually slower than the original RandomContract algorithm; what improves is the chance of successfully finding a min-cut (from [math]\displaystyle{ \Omega(1/n^2) }[/math] to [math]\displaystyle{ \Omega(1/\log n) }[/math]).

Given any input multi-graph, running the FastCut algorithm independently for [math]\displaystyle{ O((\log n)^2) }[/math] times and returning the smallest cut found gives an algorithm which runs in time [math]\displaystyle{ O(n^2\log^3n) }[/math] and returns a min-cut with probability [math]\displaystyle{ 1-O(1/n) }[/math], i.e. with high probability.

Recall that the running time of the best known deterministic algorithm for min-cut on multi-graphs is [math]\displaystyle{ O(mn+n^2\log n) }[/math]. On dense graphs, the randomized algorithm outperforms the best known deterministic algorithm.

Finally, Karger further improved this and obtained a near-linear (in the number of edges) time randomized algorithm for minimum cut in multi-graphs.

Max-Cut

The maximum cut problem, in short the max-cut problem, is defined as follows.

Max-cut problem
  • Input: an undirected graph [math]\displaystyle{ G(V,E) }[/math];
  • Output: a bipartition of [math]\displaystyle{ V }[/math] into disjoint subsets [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math] that maximizes [math]\displaystyle{ |E(S,T)| }[/math].

The problem is a typical MAX-CSP, an optimization version of the constraint satisfaction problem. An instance of CSP consists of:

  • a set of variables [math]\displaystyle{ x_1,x_2,\ldots,x_n }[/math] usually taking values from some finite domain;
  • a sequence of constraints (predicates) [math]\displaystyle{ C_1,C_2,\ldots, C_m }[/math] defined on those variables.

The MAX-CSP asks to find an assignment of values to variables [math]\displaystyle{ x_1,x_2,\ldots,x_n }[/math] which maximizes the number of satisfied constraints.

In particular, when the variables [math]\displaystyle{ x_1,x_2,\ldots,x_n }[/math] take Boolean values [math]\displaystyle{ \{0,1\} }[/math] and every constraint is a binary inequality constraint [math]\displaystyle{ \cdot\neq\cdot }[/math] of the form [math]\displaystyle{ x_i\neq x_j }[/math], the MAX-CSP is precisely the max-cut problem.

Unlike the min-cut problem, which can be solved in polynomial time, max-cut is known to be NP-hard. Its decision version is among the 21 NP-complete problems found by Karp. This means we should not hope for a polynomial-time algorithm for solving the problem if the famous conjecture P≠NP in computational complexity is correct. And due to another less famous conjecture in computational complexity, randomization alone probably cannot help this situation either.

We may compromise our goal and allow the algorithm to not always find the optimal solution. However, we still want to guarantee that the algorithm always returns a relatively good solution on all possible instances. This notion is formally captured by approximation algorithms and the approximation ratio.

Greedy algorithm

A natural heuristic for solving max-cut is to sequentially join the vertices to one of the two disjoint subsets [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math], each time greedily maximizing the current number of edges crossing between [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math].

To state the algorithm, we overload the definition [math]\displaystyle{ E(S,T) }[/math]. Given an undirected graph [math]\displaystyle{ G(V,E) }[/math], for any disjoint subsets [math]\displaystyle{ S,T\subseteq V }[/math] of vertices, we define

[math]\displaystyle{ E(S,T)=\{uv\in E\mid u\in S, v\in T\} }[/math].

We also assume that the vertices are ordered arbitrarily as [math]\displaystyle{ V=\{v_1,v_2,\ldots,v_n\} }[/math].

The greedy heuristic is then described as follows.

GreedyMaxCut
Input: undirected graph [math]\displaystyle{ G(V,E) }[/math],
with an arbitrary order of vertices [math]\displaystyle{ V=\{v_1,v_2,\ldots,v_n\} }[/math];

initially [math]\displaystyle{ S=T=\emptyset }[/math];
for [math]\displaystyle{ i=1,2,\ldots,n }[/math]
[math]\displaystyle{ v_i }[/math] joins one of [math]\displaystyle{ S,T }[/math] to maximize the current [math]\displaystyle{ |E(S,T)| }[/math] (breaking ties arbitrarily);

The algorithm certainly runs in polynomial time.
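
A minimal Python sketch of GreedyMaxCut (the naming and representation are our own; a simple graph is given as a vertex list, which fixes the scan order, and an edge list):

def greedy_max_cut(vertices, edges):
    S, T = set(), set()
    for v in vertices:
        # count edges from v to the vertices already placed on each side
        to_S = sum(1 for (a, b) in edges if (a == v and b in S) or (b == v and a in S))
        to_T = sum(1 for (a, b) in edges if (a == v and b in T) or (b == v and a in T))
        # joining S gains to_T crossing edges; joining T gains to_S
        if to_T >= to_S:
            S.add(v)
        else:
            T.add(v)
    return S, T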

Without any guarantee of how good the solution returned by the algorithm approximates the optimal solution, the algorithm is only a heuristics, not an approximation algorithm.

Approximation ratio

For now we restrict ourselves to the max-cut problem, although the notion applies more generally.

Let [math]\displaystyle{ G }[/math] be an arbitrary instance of the max-cut problem. Let [math]\displaystyle{ OPT_G }[/math] denote the size of the max-cut in graph [math]\displaystyle{ G }[/math]. More precisely,

[math]\displaystyle{ OPT_G=\max_{S\subseteq V}|E(S,\overline{S})| }[/math].

Let [math]\displaystyle{ SOL_G }[/math] be the size of the cut [math]\displaystyle{ |E(S,T)| }[/math] returned by the GreedyMaxCut algorithm on input graph [math]\displaystyle{ G }[/math].

As a maximization problem it is trivial that [math]\displaystyle{ SOL_G\le OPT_G }[/math] for all [math]\displaystyle{ G }[/math]. To guarantee that the GreedyMaxCut gives good approximation of optimal solution, we need the other direction:

Approximation ratio
We say that the approximation ratio of the GreedyMaxCut algorithm is [math]\displaystyle{ \alpha }[/math], or GreedyMaxCut is an [math]\displaystyle{ \alpha }[/math]-approximation algorithm, for some [math]\displaystyle{ 0\lt \alpha\le 1 }[/math], if
[math]\displaystyle{ \frac{SOL_G}{OPT_G}\ge \alpha }[/math] for every possible instance [math]\displaystyle{ G }[/math] of max-cut.

With this notion, we now try to analyze the approximation ratio of the GreedyMaxCut algorithm.

A difficulty in applying this notion in our analysis is that the definition of approximation ratio compares the solution returned by the algorithm with the optimal solution; however, computing the optimal solution is NP-hard, so there is no easy way (e.g. a closed form) to get hold of it in the analysis.

A popular step (usually the first step of analyzing an approximation ratio) to avoid this dilemma is, instead of directly comparing to the optimal solution, to compare to an upper bound of the optimal solution (for a minimization problem, this needs to be a lower bound), that is, to compare to something which may be even better than the optimal solution (and hence may not be realized by any feasible solution).

For the max-cut problem, a simple upper bound to [math]\displaystyle{ OPT_G }[/math] is [math]\displaystyle{ |E| }[/math], the number of all edges. This is a trivial upper bound of max-cut since any cut is a subset of edges.

Let [math]\displaystyle{ G(V,E) }[/math] be the input graph and [math]\displaystyle{ V=\{v_1,v_2,\ldots,v_n\} }[/math]. Initially [math]\displaystyle{ S_1=T_1=\emptyset }[/math]. And for [math]\displaystyle{ i=1,2,\ldots,n }[/math], we let [math]\displaystyle{ S_{i+1} }[/math] and [math]\displaystyle{ T_{i+1} }[/math] be the respective [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math] after [math]\displaystyle{ v_i }[/math] joins one of [math]\displaystyle{ S,T }[/math]. More precisely,

  • [math]\displaystyle{ S_{i+1}=S_i\cup\{v_i\} }[/math] and [math]\displaystyle{ T_{i+1}=T_i\, }[/math] if [math]\displaystyle{ |E(S_{i}\cup\{v_i\},T_i)|\gt |E(S_{i},T_i\cup\{v_i\})| }[/math];
  • [math]\displaystyle{ S_{i+1}=S_i\, }[/math] and [math]\displaystyle{ T_{i+1}=T_i\cup\{v_i\} }[/math] if otherwise.

Finally, the size of the cut returned by the greedy algorithm is given by

[math]\displaystyle{ SOL_G=|E(S_{n+1},T_{n+1})| }[/math].

We first observe that we can count the number of edges [math]\displaystyle{ |E| }[/math] by summing up the contributions of the individual [math]\displaystyle{ v_i }[/math]'s.

Proposition 1
[math]\displaystyle{ |E| = \sum_{i=1}^n\left(|E(S_i,\{v_i\})|+|E(T_i,\{v_i\})|\right) }[/math].
Proof.

Note that [math]\displaystyle{ S_i\cup T_i=\{v_1,v_2,\ldots,v_{i-1}\} }[/math], i.e. [math]\displaystyle{ S_i }[/math] and [math]\displaystyle{ T_i }[/math] together contain precisely those vertices preceding [math]\displaystyle{ v_i }[/math]. Therefore, by taking the sum

[math]\displaystyle{ \sum_{i=1}^n\left(|E(S_i,\{v_i\})|+|E(T_i,\{v_i\})|\right) }[/math],

we effectively enumerate all pairs [math]\displaystyle{ (v_j,v_i) }[/math] such that [math]\displaystyle{ v_jv_i\in E }[/math] and [math]\displaystyle{ j\lt i }[/math]. The total number is precisely [math]\displaystyle{ |E| }[/math].

[math]\displaystyle{ \square }[/math]

We then observe that [math]\displaystyle{ SOL_G }[/math] can be decomposed into contributions of the individual [math]\displaystyle{ v_i }[/math]'s in the same way.

Proposition 2
[math]\displaystyle{ SOL_G = \sum_{i=1}^n\max\left(|E(S_i, \{v_i\})|,|E(T_i, \{v_i\})|\right) }[/math].
Proof.

It is easy to observe that [math]\displaystyle{ E(S_i,T_i)\subseteq E(S_{i+1},T_{i+1}) }[/math], i.e. once an edge joins the cut between the current [math]\displaystyle{ S,T }[/math] it never drops from the cut in the future.

We then define

[math]\displaystyle{ \Delta_i= |E(S_{i+1},T_{i+1})|-|E(S_i,T_i)|=|E(S_{i+1},T_{i+1})\setminus E(S_i,T_i)| }[/math]

to be the contribution of [math]\displaystyle{ v_i }[/math] in the final cut.

It holds that

[math]\displaystyle{ \sum_{i=1}^n\Delta_i=|E(S_{n+1},T_{n+1})|-|E(S_{1},T_{1})|=|E(S_{n+1},T_{n+1})|=SOL_G }[/math].

On the other hand, due to the greedy rule:

  • [math]\displaystyle{ S_{i+1}=S_i\cup\{v_i\} }[/math] and [math]\displaystyle{ T_{i+1}=T_i\, }[/math] if [math]\displaystyle{ |E(S_{i}\cup\{v_i\},T_i)|\gt |E(S_{i},T_i\cup\{v_i\})| }[/math];
  • [math]\displaystyle{ S_{i+1}=S_i\, }[/math] and [math]\displaystyle{ T_{i+1}=T_i\cup\{v_i\} }[/math] if otherwise;

it holds that

[math]\displaystyle{ \Delta_i=|E(S_{i+1},T_{i+1})\setminus E(S_i,T_i)| = \max\left(|E(S_i, \{v_i\})|,|E(T_i, \{v_i\})|\right) }[/math].

Together the proposition follows.

[math]\displaystyle{ \square }[/math]

Combining the above Proposition 1 and Proposition 2, we have

[math]\displaystyle{ \begin{align} SOL_G &= \sum_{i=1}^n\max\left(|E(S_i, \{v_i\})|,|E(T_i, \{v_i\})|\right)\\ &\ge \frac{1}{2}\sum_{i=1}^n\left(|E(S_i, \{v_i\})|+|E(T_i, \{v_i\})|\right)\\ &=\frac{1}{2}|E|\\ &\ge\frac{1}{2}OPT_G. \end{align} }[/math]
Theorem
The GreedyMaxCut is a [math]\displaystyle{ 0.5 }[/math]-approximation algorithm for the max-cut problem.

This is not the best approximation ratio achieved by polynomial-time algorithms for max-cut.

  • The best approximation ratio known to be achievable by a polynomial-time algorithm is that of the Goemans-Williamson algorithm, which relies on rounding an SDP relaxation of max-cut, and achieves an approximation ratio [math]\displaystyle{ \alpha^*\approx 0.878 }[/math], where [math]\displaystyle{ \alpha^* }[/math] is an irrational number whose precise value is given by [math]\displaystyle{ \alpha^*=\frac{2}{\pi}\inf_{x\in[-1,1)}\frac{\arccos(x)}{1-x} }[/math].
  • Assuming the unique game conjecture, there does not exist any polynomial-time algorithm for max-cut with approximation ratio [math]\displaystyle{ \alpha\gt \alpha^* }[/math].

Derandomization by conditional expectation

There is a probabilistic interpretation of the greedy algorithm, which may explain why we use the greedy scheme for max-cut and why it works for finding an approximate max-cut.

Given an undirected graph [math]\displaystyle{ G(V,E) }[/math], let us calculate the average size of cuts in [math]\displaystyle{ G }[/math]. For every vertex [math]\displaystyle{ v\in V }[/math] let [math]\displaystyle{ X_v\in\{0,1\} }[/math] be a uniform and independent random bit which indicates whether [math]\displaystyle{ v }[/math] joins [math]\displaystyle{ S }[/math] or [math]\displaystyle{ T }[/math]. This gives us a uniform random bipartition of [math]\displaystyle{ V }[/math] into [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math].

The size of the random cut [math]\displaystyle{ |E(S,T)| }[/math] is given by

[math]\displaystyle{ |E(S,T)| = \sum_{uv\in E} I[X_u\neq X_v], }[/math]

where [math]\displaystyle{ I[X_u\neq X_v] }[/math] is the Boolean indicator random variable that indicates whether event [math]\displaystyle{ X_u\neq X_v }[/math] occurs.

Due to linearity of expectation,

[math]\displaystyle{ \mathbb{E}[|E(S,T)|]=\sum_{uv\in E} \mathbb{E}[I[X_u\neq X_v]] =\sum_{uv\in E} \Pr[X_u\neq X_v]=\frac{|E|}{2}. }[/math]

Recall that [math]\displaystyle{ |E| }[/math] is a trivial upper bound for the max-cut [math]\displaystyle{ OPT_G }[/math]. Due to the above argument, we have

[math]\displaystyle{ \mathbb{E}[|E(S,T)|]\ge\frac{OPT_G}{2}. }[/math]
  • In the above argument we use a few probability propositions.
linearity of expectation:
Let [math]\displaystyle{ \boldsymbol{X}=(X_1,X_2,\ldots,X_n) }[/math] be a random vector. Then
[math]\displaystyle{ \mathbb{E}\left[\sum_{i=1}^nc_iX_i\right]=\sum_{i=1}^nc_i\mathbb{E}[X_i] }[/math],
where [math]\displaystyle{ c_1,c_2,\ldots,c_n }[/math] are scalars.
That is, taking the expectation and applying a linear (affine) function to a random vector can be done in either order.
Note that this property ignores the dependency between random variables, and hence is very useful.
Expectation of indicator random variable:
We usually use the notation [math]\displaystyle{ I[A] }[/math] to represent the Boolean indicator random variable that indicates whether the event [math]\displaystyle{ A }[/math] occurs: i.e. [math]\displaystyle{ I[A]=1 }[/math] if event [math]\displaystyle{ A }[/math] occurs and [math]\displaystyle{ I[A]=0 }[/math] if otherwise.
It is easy to see that [math]\displaystyle{ \mathbb{E}[I[A]]=\Pr[A] }[/math]. The expectation of an indicator random variable equals the probability of the event it indicates.
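
As a quick sanity check of this calculation, one can sample uniform random bipartitions and observe that the empirical average cut size approaches [math]\displaystyle{ |E|/2 }[/math] (a sketch; the function name and representation are our own):

import random

def uniform_cut_size(vertices, edges):
    # sample one uniform random bipartition via independent fair bits X_v
    x = {v: random.randint(0, 1) for v in vertices}
    return sum(1 for (u, v) in edges if x[u] != x[v])

Averaging uniform_cut_size over many samples approximates E[|E(S,T)|] = |E|/2.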

By the above analysis, the average (under the uniform distribution) size of all cuts in any graph [math]\displaystyle{ G }[/math] must be at least [math]\displaystyle{ \frac{OPT_G}{2} }[/math]. Due to the probabilistic method, in particular the averaging principle, there must exist a bipartition of [math]\displaystyle{ V }[/math] into [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math] whose cut [math]\displaystyle{ E(S,T) }[/math] has size at least [math]\displaystyle{ \frac{OPT_G}{2} }[/math]. The next question is how to find such a bipartition [math]\displaystyle{ \{S,T\} }[/math] algorithmically.

We still fix an arbitrary order of all vertices as [math]\displaystyle{ V=\{v_1,v_2,\ldots,v_n\} }[/math]. Recall that each vertex [math]\displaystyle{ v_i }[/math] is associated with a uniform and independent random bit [math]\displaystyle{ X_{v_i} }[/math] to indicate whether [math]\displaystyle{ v_i }[/math] joins [math]\displaystyle{ S }[/math] or [math]\displaystyle{ T }[/math]. We want to fix the value of [math]\displaystyle{ X_{v_i} }[/math] one after another to construct a bipartition [math]\displaystyle{ \{\hat{S},\hat{T}\} }[/math] of [math]\displaystyle{ V }[/math] such that

[math]\displaystyle{ |E(\hat{S},\hat{T})|\ge\mathbb{E}[|E(S,T)|]\ge\frac{OPT_G}{2} }[/math].

We start with the first vertex [math]\displaystyle{ v_1 }[/math] and its random variable [math]\displaystyle{ X_{v_1} }[/math]. By the law of total expectation,

[math]\displaystyle{ \mathbb{E}[E(S,T)]=\frac{1}{2}\mathbb{E}[E(S,T)\mid X_{v_1}=0]+\frac{1}{2}\mathbb{E}[E(S,T)\mid X_{v_1}=1]. }[/math]

There must exist an assignment [math]\displaystyle{ x_1\in\{0,1\} }[/math] of [math]\displaystyle{ X_{v_1} }[/math] such that

[math]\displaystyle{ \mathbb{E}[E(S,T)\mid X_{v_1}=x_1]\ge \mathbb{E}[E(S,T)] }[/math].

We can apply this argument repeatedly. In general, for any [math]\displaystyle{ i\le n }[/math] and any particular partial assignment [math]\displaystyle{ x_1,x_2,\ldots,x_{i-1}\in\{0,1\} }[/math] of [math]\displaystyle{ X_{v_1},X_{v_2},\ldots,X_{v_{i-1}} }[/math], by the law of total expectation

[math]\displaystyle{ \begin{align} \mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{i-1}}=x_{i-1}] = &\frac{1}{2}\mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{i-1}}=x_{i-1}, X_{v_{i}}=0]\\ &+\frac{1}{2}\mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{i-1}}=x_{i-1}, X_{v_{i}}=1]. \end{align} }[/math]

There must exist an assignment [math]\displaystyle{ x_{i}\in\{0,1\} }[/math] of [math]\displaystyle{ X_{v_i} }[/math] such that

[math]\displaystyle{ \mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{i}}=x_{i}]\ge \mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{i-1}}=x_{i-1}]. }[/math]

By this argument, we can find a sequence [math]\displaystyle{ x_1,x_2,\ldots,x_n\in\{0,1\} }[/math] of bits which forms a monotone path:

[math]\displaystyle{ \mathbb{E}[E(S,T)]\le \cdots \le \mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{i-1}}=x_{i-1}] \le \mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{i}}=x_{i}] \le \cdots \le \mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{n}}=x_{n}]. }[/math]

We already know that the first step of this monotone path satisfies [math]\displaystyle{ \mathbb{E}[E(S,T)]\ge\frac{OPT_G}{2} }[/math]. As for the last step of the monotone path, [math]\displaystyle{ \mathbb{E}[E(S,T)\mid X_{v_1}=x_1,\ldots, X_{v_{n}}=x_{n}] }[/math], since all random bits have been fixed, a bipartition [math]\displaystyle{ (\hat{S},\hat{T}) }[/math] is determined by the assignment [math]\displaystyle{ x_1,\ldots, x_n }[/math], so the expectation is simply the size of that cut [math]\displaystyle{ |E(\hat{S},\hat{T})| }[/math]. We have thus found a cut [math]\displaystyle{ E(\hat{S},\hat{T}) }[/math] such that [math]\displaystyle{ |E(\hat{S},\hat{T})|\ge \frac{OPT_G}{2} }[/math].

We translate the procedure of constructing this monotone path of conditional expectation to the following algorithm.

MonotonePath
Input: undirected graph [math]\displaystyle{ G(V,E) }[/math],
with an arbitrary order of vertices [math]\displaystyle{ V=\{v_1,v_2,\ldots,v_n\} }[/math];

initially [math]\displaystyle{ S=T=\emptyset }[/math];
for [math]\displaystyle{ i=1,2,\ldots,n }[/math]
[math]\displaystyle{ v_i }[/math] joins one of [math]\displaystyle{ S,T }[/math] to maximize the average size of cut conditioning on the choices made so far by the vertices [math]\displaystyle{ v_1,v_2,\ldots,v_i }[/math];

We leave it as an exercise to verify that the choice of each [math]\displaystyle{ v_i }[/math] (which of [math]\displaystyle{ S,T }[/math] to join) in the MonotonePath algorithm, which maximizes the average size of cut conditioning on the choices made so far by the vertices [math]\displaystyle{ v_1,v_2,\ldots,v_i }[/math], must be the same as the choice made by [math]\displaystyle{ v_i }[/math] in the GreedyMaxCut algorithm, which maximizes the current [math]\displaystyle{ |E(S,T)| }[/math].
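
To make this equivalence concrete, here is a Python sketch of MonotonePath (the naming and representation are our own). Given a partial assignment of the bits, the conditional expectation of the cut size equals the number of already-decided crossing edges plus half the number of edges with at least one undecided endpoint, so it can be computed exactly:

def cond_exp(edges, x):
    # E[|E(S,T)|] conditioned on the partial assignment x of vertex bits
    decided = sum(1 for (a, b) in edges if a in x and b in x and x[a] != x[b])
    undecided = sum(1 for (a, b) in edges if a not in x or b not in x)
    return decided + undecided / 2       # each undecided edge crosses w.p. 1/2

def monotone_path(vertices, edges):
    x = {}
    for v in vertices:                   # fix the bits X_v one after another
        x[v] = 0
        e0 = cond_exp(edges, x)
        x[v] = 1
        e1 = cond_exp(edges, x)
        x[v] = 0 if e0 >= e1 else 1      # the conditional expectation never drops
    S = {v for v, b in x.items() if b == 0}
    return S, set(vertices) - S

Comparing e0 and e1 only involves the edges from v to the already-placed vertices, which is exactly the comparison made by GreedyMaxCut.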

Therefore, the greedy algorithm for max-cut is in fact a derandomization of the average-case argument, by the method of conditional expectations.

Derandomization by pairwise independence

We still construct a random bipartition of [math]\displaystyle{ V }[/math] into [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math]. But this time the random choices have bounded independence.

For each vertex [math]\displaystyle{ v\in V }[/math], we use a Boolean random variable [math]\displaystyle{ Y_v\in\{0,1\} }[/math] to indicate whether [math]\displaystyle{ v }[/math] joins [math]\displaystyle{ S }[/math] or [math]\displaystyle{ T }[/math]. The dependencies between the [math]\displaystyle{ Y_v }[/math]'s are to be specified later.

By linearity of expectation, regardless of the dependencies between [math]\displaystyle{ Y_v }[/math]'s, it holds that:

[math]\displaystyle{ \mathbb{E}[|E(S,T)|]=\sum_{uv\in E} \Pr[Y_u\neq Y_v]. }[/math]

In order to have the average cut [math]\displaystyle{ \mathbb{E}[|E(S,T)|]=\frac{|E|}{2} }[/math] as in the fully random case, we only need [math]\displaystyle{ \Pr[Y_u\neq Y_v]=\frac{1}{2} }[/math]. This requires merely that the Boolean random variables [math]\displaystyle{ Y_v }[/math]'s are uniform and pairwise independent, rather than mutually independent.

The [math]\displaystyle{ n }[/math] pairwise independent random bits [math]\displaystyle{ \{Y_v\}_{v\in V} }[/math] can be constructed by at most [math]\displaystyle{ k=\lceil\log (n+1)\rceil }[/math] mutually independent random bits [math]\displaystyle{ X_1,X_2,\ldots,X_k\in\{0,1\} }[/math] by the following standard routine.

Theorem
Let [math]\displaystyle{ X_1, X_2, \ldots, X_k\in\{0,1\} }[/math] be mutually independent uniform random bits.
Let [math]\displaystyle{ S_1, S_2, \ldots, S_{2^k-1}\subseteq \{1,2,\ldots,k\} }[/math] enumerate the [math]\displaystyle{ 2^k-1 }[/math] nonempty subsets of [math]\displaystyle{ \{1,2,\ldots,k\} }[/math].
For each [math]\displaystyle{ 1\le i\le2^k-1 }[/math], let
[math]\displaystyle{ Y_i=\bigoplus_{j\in S_i}X_j=\left(\sum_{j\in S_i}X_j\right)\bmod 2. }[/math]
Then [math]\displaystyle{ Y_1,Y_2,\ldots,Y_{2^k-1} }[/math] are pairwise independent uniform random bits.
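
A Python sketch of this construction (not in the original notes; here the subset [math]\displaystyle{ S_i }[/math] is encoded by the binary representation of the index i):

import random

def pairwise_independent_bits(k):
    # From k mutually independent fair bits, derive 2^k - 1 pairwise
    # independent fair bits, one for each nonempty subset of {1, ..., k}.
    X = [random.randint(0, 1) for _ in range(k)]
    Y = {}
    for i in range(1, 2 ** k):           # i encodes the nonempty subset S_i
        bits = [X[j] for j in range(k) if (i >> j) & 1]
        Y[i] = sum(bits) % 2             # XOR, i.e. parity, of the selected X_j's
    return Y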

If [math]\displaystyle{ Y_v }[/math] for each vertex [math]\displaystyle{ v\in V }[/math] is constructed in this way from at most [math]\displaystyle{ k=\lceil\log (n+1)\rceil }[/math] mutually independent random bits [math]\displaystyle{ X_1,X_2,\ldots,X_k\in\{0,1\} }[/math], then the [math]\displaystyle{ Y_v }[/math]'s are uniform and pairwise independent, and by the above calculation, it holds for the corresponding bipartition [math]\displaystyle{ \{S,T\} }[/math] of [math]\displaystyle{ V }[/math] that

[math]\displaystyle{ \mathbb{E}[|E(S,T)|]=\sum_{uv\in E} \Pr[Y_u\neq Y_v]=\frac{|E|}{2}. }[/math]

Note that the average is taken over the random choices of [math]\displaystyle{ X_1,X_2,\ldots,X_k\in\{0,1\} }[/math] (because they are the only random choices used to construct the bipartition [math]\displaystyle{ \{S,T\} }[/math]). By the probabilistic method, there must exist an assignment of [math]\displaystyle{ X_1,X_2,\ldots,X_k\in\{0,1\} }[/math] such that the corresponding [math]\displaystyle{ Y_v }[/math]'s, and the bipartition [math]\displaystyle{ \{S,T\} }[/math] of [math]\displaystyle{ V }[/math] indicated by the [math]\displaystyle{ Y_v }[/math]'s, satisfy

[math]\displaystyle{ |E(S,T)|\ge \frac{|E|}{2}\ge\frac{OPT_G}{2} }[/math].

This gives us the following algorithm, which performs an exhaustive search in a smaller solution space of size [math]\displaystyle{ 2^k=O(n) }[/math].

Algorithm
Enumerate vertices as [math]\displaystyle{ V=\{v_1,v_2,\ldots,v_n\} }[/math];
let [math]\displaystyle{ k=\lceil\log (n+1)\rceil }[/math];
for all [math]\displaystyle{ \vec{x}\in\{0,1\}^k }[/math]
initialize [math]\displaystyle{ S_{\vec{x}}=T_{\vec{x}}=\emptyset }[/math];
for [math]\displaystyle{ i=1, 2, \ldots, n }[/math]
if [math]\displaystyle{ \bigoplus_{j:\lfloor i/2^{j-1}\rfloor\bmod 2=1}x_j=1 }[/math] then [math]\displaystyle{ v_i }[/math] joins [math]\displaystyle{ S_{\vec{x}} }[/math];
else [math]\displaystyle{ v_i }[/math] joins [math]\displaystyle{ T_{\vec{x}} }[/math];
return the [math]\displaystyle{ \{S_{\vec{x}},T_{\vec{x}}\} }[/math] with the largest [math]\displaystyle{ |E(S_{\vec{x}},T_{\vec{x}})| }[/math];

The algorithm has approximation ratio 1/2 and runs in polynomial time: there are only [math]\displaystyle{ 2^k=O(n) }[/math] assignments [math]\displaystyle{ \vec{x} }[/math] to enumerate, and each takes [math]\displaystyle{ O(n+|E|) }[/math] time to evaluate.
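
Putting everything together, a Python sketch of the whole derandomized algorithm (our own naming; note that i & x selects exactly the bits x_j with ⌊i/2^(j-1)⌋ mod 2 = 1, so its parity equals the XOR in the pseudocode):

def derandomized_max_cut(vertices, edges):
    n = len(vertices)
    k = n.bit_length()                   # k = ceil(log2(n+1))
    vs = list(vertices)                  # v_1, ..., v_n in an arbitrary order
    best, best_S = -1, set()
    for x in range(2 ** k):              # enumerate all seeds (x_1, ..., x_k)
        S = set()
        for i in range(1, n + 1):
            y = bin(i & x).count("1") % 2    # Y_i as XOR of the selected seed bits
            if y == 1:
                S.add(vs[i - 1])
        size = sum(1 for (a, b) in edges if (a in S) != (b in S))
        if size > best:
            best, best_S = size, S
    return best_S, set(vertices) - best_S

By the analysis above, the best bipartition found cuts at least |E|/2 edges.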