# Conditional Probability

In probability theory, the word "condition" is a verb. "Conditioning on the event ..." means that it is assumed that the event occurs.

 Definition (conditional probability) The conditional probability that event ${\displaystyle {\mathcal {E}}_{1}}$ occurs given that event ${\displaystyle {\mathcal {E}}_{2}}$ occurs is ${\displaystyle \Pr[{\mathcal {E}}_{1}\mid {\mathcal {E}}_{2}]={\frac {\Pr[{\mathcal {E}}_{1}\wedge {\mathcal {E}}_{2}]}{\Pr[{\mathcal {E}}_{2}]}}.}$

The conditional probability is well-defined only if ${\displaystyle \Pr[{\mathcal {E}}_{2}]\neq 0}$.

For independent events ${\displaystyle {\mathcal {E}}_{1}}$ and ${\displaystyle {\mathcal {E}}_{2}}$, it holds that

${\displaystyle \Pr[{\mathcal {E}}_{1}\mid {\mathcal {E}}_{2}]={\frac {\Pr[{\mathcal {E}}_{1}\wedge {\mathcal {E}}_{2}]}{\Pr[{\mathcal {E}}_{2}]}}={\frac {\Pr[{\mathcal {E}}_{1}]\cdot \Pr[{\mathcal {E}}_{2}]}{\Pr[{\mathcal {E}}_{2}]}}=\Pr[{\mathcal {E}}_{1}].}$

This supports our intuition that for two independent events, whether one of them occurs does not affect the chance of the other.
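As a concrete check, the definition and the independence identity can be verified by exact enumeration. The following is a minimal Python sketch; the two-dice events are our own illustration:

```python
from fractions import Fraction
from itertools import product

# Sample space: all ordered outcomes of two fair dice.
omega = list(product(range(1, 7), repeat=2))

def pr(event):
    """Exact probability of an event, given as a predicate on outcomes."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

def pr_cond(e1, e2):
    """Pr[E1 | E2] = Pr[E1 and E2] / Pr[E2], well-defined since Pr[E2] != 0."""
    return pr(lambda w: e1(w) and e2(w)) / pr(e2)

e1 = lambda w: w[0] % 2 == 0   # first die is even
e2 = lambda w: w[1] % 2 == 0   # second die is even

# E1 and E2 are independent, so conditioning on E2 changes nothing.
assert pr_cond(e1, e2) == pr(e1) == Fraction(1, 2)
```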

## Law of total probability

The following fact is known as the law of total probability. It computes the probability by averaging over all possible cases.

 Theorem (law of total probability) Let ${\displaystyle {\mathcal {E}}_{1},{\mathcal {E}}_{2},\ldots ,{\mathcal {E}}_{n}}$ be mutually disjoint events whose union ${\displaystyle \bigvee _{i=1}^{n}{\mathcal {E}}_{i}=\Omega }$ is the sample space. Then for any event ${\displaystyle {\mathcal {E}}}$, ${\displaystyle \Pr[{\mathcal {E}}]=\sum _{i=1}^{n}\Pr[{\mathcal {E}}\mid {\mathcal {E}}_{i}]\cdot \Pr[{\mathcal {E}}_{i}].}$
Proof.
 Since ${\displaystyle {\mathcal {E}}_{1},{\mathcal {E}}_{2},\ldots ,{\mathcal {E}}_{n}}$ are mutually disjoint and ${\displaystyle \bigvee _{i=1}^{n}{\mathcal {E}}_{i}=\Omega }$, events ${\displaystyle {\mathcal {E}}\wedge {\mathcal {E}}_{1},{\mathcal {E}}\wedge {\mathcal {E}}_{2},\ldots ,{\mathcal {E}}\wedge {\mathcal {E}}_{n}}$ are also mutually disjoint, and ${\displaystyle {\mathcal {E}}=\bigvee _{i=1}^{n}\left({\mathcal {E}}\wedge {\mathcal {E}}_{i}\right)}$. Then ${\displaystyle \Pr[{\mathcal {E}}]=\sum _{i=1}^{n}\Pr[{\mathcal {E}}\wedge {\mathcal {E}}_{i}],}$ which according to the definition of conditional probability, is ${\displaystyle \sum _{i=1}^{n}\Pr[{\mathcal {E}}\mid {\mathcal {E}}_{i}]\cdot \Pr[{\mathcal {E}}_{i}]}$.
${\displaystyle \square }$

The law of total probability provides a standard tool for breaking a probability into sub-cases, which often simplifies the analysis.
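For example, the law can be checked by exact enumeration over two fair dice, partitioning by the value of the first die (our own illustration):

```python
from fractions import Fraction
from itertools import product

omega = list(product(range(1, 7), repeat=2))

def pr(event):
    """Exact probability of an event, given as a predicate on outcomes."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

# Partition the sample space by the value of the first die.
cases = [lambda w, i=i: w[0] == i for i in range(1, 7)]
target = lambda w: w[0] + w[1] == 7   # the event E: the sum is 7

# Law of total probability: Pr[E] = sum_i Pr[E | E_i] * Pr[E_i].
total = sum(pr(lambda w: target(w) and c(w)) / pr(c) * pr(c) for c in cases)
assert total == pr(target) == Fraction(1, 6)
```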

## A Chain of Conditioning

By the definition of conditional probability, ${\displaystyle \Pr[A\mid B]={\frac {\Pr[A\wedge B]}{\Pr[B]}}}$. Thus, ${\displaystyle \Pr[A\wedge B]=\Pr[B]\cdot \Pr[A\mid B]}$. This suggests that we can compute the probability of the AND of events via conditional probabilities. Formally, we have the following theorem:

 Theorem Let ${\displaystyle {\mathcal {E}}_{1},{\mathcal {E}}_{2},\ldots ,{\mathcal {E}}_{n}}$ be any ${\displaystyle n}$ events. Then ${\displaystyle \Pr \left[\bigwedge _{i=1}^{n}{\mathcal {E}}_{i}\right]=\prod _{k=1}^{n}\Pr \left[{\mathcal {E}}_{k}\mid \bigwedge _{i<k}{\mathcal {E}}_{i}\right].}$
Proof.
 It holds that ${\displaystyle \Pr[A\wedge B]=\Pr[B]\cdot \Pr[A\mid B]}$. Thus, let ${\displaystyle A={\mathcal {E}}_{n}}$ and ${\displaystyle B={\mathcal {E}}_{1}\wedge {\mathcal {E}}_{2}\wedge \cdots \wedge {\mathcal {E}}_{n-1}}$, then ${\displaystyle \Pr[{\mathcal {E}}_{1}\wedge {\mathcal {E}}_{2}\wedge \cdots \wedge {\mathcal {E}}_{n}]=\Pr[{\mathcal {E}}_{1}\wedge {\mathcal {E}}_{2}\wedge \cdots \wedge {\mathcal {E}}_{n-1}]\cdot \Pr \left[{\mathcal {E}}_{n}\mid \bigwedge _{i<n}{\mathcal {E}}_{i}\right].}$ Recursively applying this equation to ${\displaystyle \Pr[{\mathcal {E}}_{1}\wedge {\mathcal {E}}_{2}\wedge \cdots \wedge {\mathcal {E}}_{n-1}]}$ until there is only ${\displaystyle {\mathcal {E}}_{1}}$ left, the theorem is proved.
${\displaystyle \square }$
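A classic use of this chain of conditioning is drawing without replacement. The probability that the first three cards of a shuffled deck are all aces can be computed two ways (the example is our own):

```python
from fractions import Fraction

# Chain rule: Pr[E1 ^ E2 ^ E3] = Pr[E1] * Pr[E2 | E1] * Pr[E3 | E1 ^ E2].
# E_i = "the i-th card drawn is an ace", from a 52-card deck with 4 aces.
chain = Fraction(4, 52) * Fraction(3, 51) * Fraction(2, 50)

# Direct count: ordered ace triples over all ordered card triples.
direct = Fraction(4 * 3 * 2, 52 * 51 * 50)

assert chain == direct == Fraction(1, 5525)
```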

# Polynomial Identity Testing (PIT)

Consider the following problem of Polynomial Identity Testing (PIT):

• Input: two ${\displaystyle n}$-variate polynomials ${\displaystyle f,g\in \mathbb {F} [x_{1},x_{2},\ldots ,x_{n}]}$ of degree ${\displaystyle d}$.
• Output: "yes" if ${\displaystyle f\equiv g}$, and "no" if otherwise.

Here ${\displaystyle \mathbb {F} [x_{1},x_{2},\ldots ,x_{n}]}$ denotes the ring of multi-variate polynomials over the field ${\displaystyle \mathbb {F} }$. The most natural way to represent an ${\displaystyle n}$-variate polynomial of degree ${\displaystyle d}$ is to write it as a sum of monomials:

${\displaystyle f(x_{1},x_{2},\ldots ,x_{n})=\sum _{i_{1},i_{2},\ldots ,i_{n}\geq 0 \atop i_{1}+i_{2}+\cdots +i_{n}\leq d}a_{i_{1},i_{2},\ldots ,i_{n}}x_{1}^{i_{1}}x_{2}^{i_{2}}\cdots x_{n}^{i_{n}}}$.

The degree or total degree of a monomial ${\displaystyle a_{i_{1},i_{2},\ldots ,i_{n}}x_{1}^{i_{1}}x_{2}^{i_{2}}\cdots x_{n}^{i_{n}}}$ is given by ${\displaystyle i_{1}+i_{2}+\cdots +i_{n}}$, and the degree of a polynomial ${\displaystyle f}$ is the maximum degree over its monomials with nonzero coefficients.

Alternatively, we can consider the following equivalent problem:

• Input: a polynomial ${\displaystyle f\in \mathbb {F} [x_{1},x_{2},\ldots ,x_{n}]}$ of degree ${\displaystyle d}$.
• Output: "yes" if ${\displaystyle f\equiv 0}$, and "no" if otherwise.

If ${\displaystyle f}$ is written explicitly as a sum of monomials, then the problem is trivial. Again we allow ${\displaystyle f}$ to be represented in product form.

 Example The Vandermonde matrix ${\displaystyle M=M(x_{1},x_{2},\ldots ,x_{n})}$ is defined by ${\displaystyle M_{ij}=x_{i}^{j-1}}$, that is ${\displaystyle M={\begin{bmatrix}1&x_{1}&x_{1}^{2}&\dots &x_{1}^{n-1}\\1&x_{2}&x_{2}^{2}&\dots &x_{2}^{n-1}\\1&x_{3}&x_{3}^{2}&\dots &x_{3}^{n-1}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&x_{n}&x_{n}^{2}&\dots &x_{n}^{n-1}\end{bmatrix}}}$. Let ${\displaystyle f}$ be the polynomial defined as ${\displaystyle f(x_{1},\ldots ,x_{n})=\det(M)=\prod _{j<i}(x_{i}-x_{j}).}$ It is pretty easy to evaluate ${\displaystyle f(x_{1},x_{2},\ldots ,x_{n})}$ on any particular ${\displaystyle x_{1},x_{2},\ldots ,x_{n}}$, however it is prohibitively expensive to symbolically expand ${\displaystyle f(x_{1},\ldots ,x_{n})}$ to its sum-of-monomial form.
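The Vandermonde identity can be checked directly for a small instance, comparing a brute-force determinant against the product form (a minimal sketch; the helper names and the test point are our own):

```python
from itertools import permutations

def det(m):
    """Determinant by the Leibniz formula (fine for small matrices)."""
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        inversions = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        term = (-1) ** inversions
        for i in range(n):
            term *= m[i][p[i]]
        total += term
    return total

def f(xs):
    """The product form: prod over j < i of (x_i - x_j)."""
    prod = 1
    for i in range(len(xs)):
        for j in range(i):
            prod *= xs[i] - xs[j]
    return prod

xs = [2, 3, 5, 7]
m = [[x ** j for j in range(len(xs))] for x in xs]   # Vandermonde matrix
assert det(m) == f(xs) == 240
```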

## Schwartz-Zippel Theorem

Here is a very simple randomized algorithm, due to Schwartz and Zippel.

 Randomized algorithm for multi-variate PIT fix an arbitrary set ${\displaystyle S\subseteq \mathbb {F} }$ whose size will be fixed later; pick ${\displaystyle r_{1},r_{2},\ldots ,r_{n}\in S}$ uniformly and independently at random; if ${\displaystyle f({\vec {r}})=f(r_{1},r_{2},\ldots ,r_{n})=0}$ then return “yes” else return “no”;

This algorithm requires only the evaluation of ${\displaystyle f}$ at a single point ${\displaystyle {\vec {r}}}$, and it is always correct if ${\displaystyle f\equiv 0}$.

In the theorem below, we will see that if ${\displaystyle f\not \equiv 0}$ then the algorithm is incorrect with probability at most ${\displaystyle {\frac {d}{|S|}}}$, where ${\displaystyle d}$ is the degree of the polynomial ${\displaystyle f}$.
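Here is a minimal Python sketch of the algorithm, tested on the Vandermonde polynomial in product form; the helper names, the repetition count, and the choice of ${\displaystyle S}$ are our own assumptions for illustration:

```python
import math
import random

def pit_is_zero(f, n, S, trials=20, rng=random):
    """Schwartz-Zippel test of whether the n-variate black-box polynomial f
    is identically zero. One-sided error: a 'True' answer is wrong with
    probability at most (d/|S|)**trials for a degree-d polynomial."""
    for _ in range(trials):
        point = [rng.choice(S) for _ in range(n)]
        if f(*point) != 0:
            return False   # a nonzero evaluation is conclusive
    return True

# The Vandermonde determinant in product form (degree d = n*(n-1)/2 = 6).
def f(*xs):
    return math.prod(xs[i] - xs[j] for i in range(len(xs)) for j in range(i))

random.seed(0)            # fixed seed, for reproducibility of this sketch
S = range(1, 10**6)       # a large evaluation set, so d/|S| is tiny
assert pit_is_zero(f, 4, S) is False                           # f not identically 0
assert pit_is_zero(lambda *xs: f(*xs) - f(*xs), 4, S) is True  # f - f == 0
```

Testing ${\displaystyle f\equiv g}$ reduces to testing ${\displaystyle f-g\equiv 0}$, which is what the last line illustrates.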

 Schwartz-Zippel Theorem Let ${\displaystyle f\in \mathbb {F} [x_{1},x_{2},\ldots ,x_{n}]}$ be a multivariate polynomial of degree ${\displaystyle d}$ over a field ${\displaystyle \mathbb {F} }$ such that ${\displaystyle f\not \equiv 0}$. Fix any finite set ${\displaystyle S\subset \mathbb {F} }$, and let ${\displaystyle r_{1},r_{2},\ldots ,r_{n}}$ be chosen uniformly and independently at random from ${\displaystyle S}$. Then ${\displaystyle \Pr[f(r_{1},r_{2},\ldots ,r_{n})=0]\leq {\frac {d}{|S|}}.}$
Proof.
 We prove by induction on ${\displaystyle n}$, the number of variables. For ${\displaystyle n=1}$, assuming that ${\displaystyle f\not \equiv 0}$, due to the fundamental theorem of algebra, the degree-${\displaystyle d}$ polynomial ${\displaystyle f(x)}$ has at most ${\displaystyle d}$ roots, thus ${\displaystyle \Pr[f(r)=0]\leq {\frac {d}{|S|}}.}$ Assume the induction hypothesis for multi-variate polynomials of up to ${\displaystyle n-1}$ variables. An ${\displaystyle n}$-variate polynomial ${\displaystyle f(x_{1},x_{2},\ldots ,x_{n})}$ can be represented as ${\displaystyle f(x_{1},x_{2},\ldots ,x_{n})=\sum _{i=0}^{k}x_{n}^{i}f_{i}(x_{1},x_{2},\ldots ,x_{n-1})}$, where ${\displaystyle k}$ is the largest power of ${\displaystyle x_{n}}$ appearing in ${\displaystyle f}$, which means that the degree of ${\displaystyle f_{k}}$ is at most ${\displaystyle d-k}$ and ${\displaystyle f_{k}\not \equiv 0}$. In particular, we write ${\displaystyle f}$ as a sum of two parts: ${\displaystyle f(x_{1},x_{2},\ldots ,x_{n})=x_{n}^{k}f_{k}(x_{1},x_{2},\ldots ,x_{n-1})+{\bar {f}}(x_{1},x_{2},\ldots ,x_{n})}$, where both ${\displaystyle f_{k}}$ and ${\displaystyle {\bar {f}}}$ are polynomials, such that, as argued above, the degree of ${\displaystyle f_{k}}$ is at most ${\displaystyle d-k}$ and ${\displaystyle f_{k}\not \equiv 0}$; and ${\displaystyle {\bar {f}}(x_{1},x_{2},\ldots ,x_{n})=\sum _{i=0}^{k-1}x_{n}^{i}f_{i}(x_{1},x_{2},\ldots ,x_{n-1})}$, so that ${\displaystyle {\bar {f}}(x_{1},x_{2},\ldots ,x_{n})}$ has no ${\displaystyle x_{n}^{k}}$ factor in any term.
By the law of total probability, it holds that {\displaystyle {\begin{aligned}&\Pr[f(r_{1},r_{2},\ldots ,r_{n})=0]\\=&\Pr[f({\vec {r}})=0\mid f_{k}(r_{1},r_{2},\ldots ,r_{n-1})=0]\cdot \Pr[f_{k}(r_{1},r_{2},\ldots ,r_{n-1})=0]\\&+\Pr[f({\vec {r}})=0\mid f_{k}(r_{1},r_{2},\ldots ,r_{n-1})\neq 0]\cdot \Pr[f_{k}(r_{1},r_{2},\ldots ,r_{n-1})\neq 0].\end{aligned}}} Note that ${\displaystyle f_{k}(r_{1},r_{2},\ldots ,r_{n-1})}$ is a polynomial in ${\displaystyle n-1}$ variables of degree at most ${\displaystyle d-k}$ such that ${\displaystyle f_{k}\not \equiv 0}$. By the induction hypothesis, we have {\displaystyle {\begin{aligned}(*)&\qquad &\Pr[f_{k}(r_{1},r_{2},\ldots ,r_{n-1})=0]\leq {\frac {d-k}{|S|}}.\end{aligned}}} For the second case, recall that ${\displaystyle {\bar {f}}(x_{1},\ldots ,x_{n})}$ has no ${\displaystyle x_{n}^{k}}$ factor in any term, thus the condition ${\displaystyle f_{k}(r_{1},r_{2},\ldots ,r_{n-1})\neq 0}$ guarantees that ${\displaystyle f(r_{1},\ldots ,r_{n-1},x_{n})=x_{n}^{k}f_{k}(r_{1},r_{2},\ldots ,r_{n-1})+{\bar {f}}(r_{1},\ldots ,r_{n-1},x_{n})=g_{r_{1},\ldots ,r_{n-1}}(x_{n})}$ is a single-variate polynomial such that the degree of ${\displaystyle g_{r_{1},\ldots ,r_{n-1}}(x_{n})}$ is ${\displaystyle k}$ and ${\displaystyle g_{r_{1},\ldots ,r_{n-1}}\not \equiv 0}$, for which we already know that the probability that ${\displaystyle g_{r_{1},\ldots ,r_{n-1}}(r_{n})=0}$ is at most ${\displaystyle {\frac {k}{|S|}}}$. Therefore, {\displaystyle {\begin{aligned}(**)&\qquad &\Pr[f({\vec {r}})=0\mid f_{k}(r_{1},r_{2},\ldots ,r_{n-1})\neq 0]=\Pr[g_{r_{1},\ldots ,r_{n-1}}(r_{n})=0\mid f_{k}(r_{1},r_{2},\ldots ,r_{n-1})\neq 0]\leq {\frac {k}{|S|}}.\end{aligned}}} Substituting both ${\displaystyle (*)}$ and ${\displaystyle (**)}$ back into the total probability, we have ${\displaystyle \Pr[f(r_{1},r_{2},\ldots ,r_{n})=0]\leq {\frac {d-k}{|S|}}+{\frac {k}{|S|}}={\frac {d}{|S|}},}$ which proves the theorem.
In the above proof, for the second case that ${\displaystyle f_{k}(r_{1},\ldots ,r_{n-1})\neq 0}$, we used a "probabilistic argument" to deal with the random choices in the condition. Here we give a more rigorous proof by enumerating all elementary events when applying the law of total probability. You may make your own judgement as to which proof is better. By the law of total probability, {\displaystyle {\begin{aligned}&\Pr[f({\vec {r}})=0]\\=&\sum _{x_{1},\ldots ,x_{n-1}\in S}\Pr[f({\vec {r}})=0\mid \forall i<n,r_{i}=x_{i}]\cdot \Pr[\forall i<n,r_{i}=x_{i}]\\=&\sum _{x_{1},\ldots ,x_{n-1}\in S \atop f_{k}(x_{1},\ldots ,x_{n-1})=0}\Pr[f({\vec {r}})=0\mid \forall i<n,r_{i}=x_{i}]\cdot \Pr[\forall i<n,r_{i}=x_{i}]\\&+\sum _{x_{1},\ldots ,x_{n-1}\in S \atop f_{k}(x_{1},\ldots ,x_{n-1})\neq 0}\Pr[f({\vec {r}})=0\mid \forall i<n,r_{i}=x_{i}]\cdot \Pr[\forall i<n,r_{i}=x_{i}]\\\leq &\Pr[f_{k}(r_{1},\ldots ,r_{n-1})=0]+\sum _{x_{1},\ldots ,x_{n-1}\in S \atop f_{k}(x_{1},\ldots ,x_{n-1})\neq 0}\Pr[f(x_{1},\ldots ,x_{n-1},r_{n})=0]\cdot \Pr[\forall i<n,r_{i}=x_{i}].\end{aligned}}} We have argued that ${\displaystyle f_{k}\not \equiv 0}$ and the degree of ${\displaystyle f_{k}}$ is at most ${\displaystyle d-k}$. By the induction hypothesis, we have ${\displaystyle \Pr[f_{k}(r_{1},\ldots ,r_{n-1})=0]\leq {\frac {d-k}{|S|}}.}$ And for every fixed ${\displaystyle x_{1},\ldots ,x_{n-1}\in S}$ such that ${\displaystyle f_{k}(x_{1},\ldots ,x_{n-1})\neq 0}$, we have argued that ${\displaystyle f(x_{1},\ldots ,x_{n-1},x_{n})}$ is a polynomial in ${\displaystyle x_{n}}$ of degree ${\displaystyle k}$, thus ${\displaystyle \Pr[f(x_{1},\ldots ,x_{n-1},r_{n})=0]\leq {\frac {k}{|S|}},}$ which holds for all such ${\displaystyle x_{1},\ldots ,x_{n-1}\in S}$; therefore the weighted average ${\displaystyle \sum _{x_{1},\ldots ,x_{n-1}\in S \atop f_{k}(x_{1},\ldots ,x_{n-1})\neq 0}\Pr[f(x_{1},\ldots ,x_{n-1},r_{n})=0]\cdot \Pr[\forall i<n,r_{i}=x_{i}]\leq {\frac {k}{|S|}}.}$ Substituting these inequalities back into the total probability, we have ${\displaystyle \Pr[f({\vec {r}})=0]\leq {\frac {d-k}{|S|}}+{\frac {k}{|S|}}={\frac {d}{|S|}}.}$
${\displaystyle \square }$

# Min-Cut in a Graph

Let ${\displaystyle G(V,E)}$ be a multi-graph, which allows parallel edges between two distinct vertices ${\displaystyle u}$ and ${\displaystyle v}$ but does not allow any self-loop, i.e. an edge connecting a vertex to itself. Such a multi-graph can be represented by data structures like an adjacency matrix ${\displaystyle A}$, where ${\displaystyle A}$ is symmetric (undirected graph) with zero diagonal, and each entry ${\displaystyle A(u,v)}$ is a nonnegative integer giving the number of edges between vertices ${\displaystyle u}$ and ${\displaystyle v}$.

A cut in a multi-graph ${\displaystyle G(V,E)}$ is an edge set ${\displaystyle C\subseteq E}$, which can be equivalently defined as

• there exists a nonempty ${\displaystyle S\subset V}$, such that ${\displaystyle C=\{uv\in E\mid u\in S,v\not \in S\}}$; or
• removing ${\displaystyle C}$ disconnects ${\displaystyle G}$, that is, ${\displaystyle G'(V,E\setminus C)}$ is disconnected.

The min-cut or minimum cut problem is defined as follows:

• Input: a multi-graph ${\displaystyle G(V,E)}$;
• Output: a cut ${\displaystyle C}$ in ${\displaystyle G}$ with the minimum size ${\displaystyle |C|}$.

The problem itself is well-defined on simple graphs (without parallel edges), and our main goal is indeed to solve min-cut in simple graphs. However, as we shall see, the algorithm creates parallel edges during its execution, even if it starts with a simple graph.

A canonical deterministic algorithm for this problem is through the max-flow min-cut theorem: a global minimum cut is the minimum over all pairs ${\displaystyle s,t}$ of the ${\displaystyle s}$-${\displaystyle t}$ min-cut, and each ${\displaystyle s}$-${\displaystyle t}$ min-cut equals the ${\displaystyle s}$-${\displaystyle t}$ max-flow.

## Karger's Min-Cut Algorithm

We will introduce a very simple and elegant algorithm discovered by David Karger.

We define an operation on multi-graphs called contraction: For a multigraph ${\displaystyle G(V,E)}$, for any edge ${\displaystyle uv\in E}$, let ${\displaystyle contract(G,uv)}$ be a new multigraph obtained by:

• replacing the vertices ${\displaystyle u}$ and ${\displaystyle v}$ by a new vertex ${\displaystyle x\not \in V}$;
• for each ${\displaystyle w\not \in \{u,v\}}$ replacing any edge ${\displaystyle uw}$ or ${\displaystyle vw}$ by the edge ${\displaystyle xw}$;
• removing all parallel edges between ${\displaystyle u}$ and ${\displaystyle v}$ in ${\displaystyle E}$;
• the rest of the graph remains unchanged.

To conclude, the ${\displaystyle contract(G,uv)}$ operation merges the two vertices ${\displaystyle u}$ and ${\displaystyle v}$ into a new vertex which inherits the old neighborhoods of both ${\displaystyle u}$ and ${\displaystyle v}$, except that all the parallel edges between ${\displaystyle u}$ and ${\displaystyle v}$ are removed.

Perhaps a better way to look at contraction is to interpret it as a merging of equivalence classes of vertices. Initially every vertex is in a distinct equivalence class. Upon a call to ${\displaystyle contract(G,uv)}$, the two equivalence classes corresponding to ${\displaystyle u}$ and ${\displaystyle v}$ are merged, and only those edges crossing between different equivalence classes are counted as valid edges in the graph.

 RandomContract (Karger 1993) while ${\displaystyle |V|>2}$ do choose an edge ${\displaystyle uv\in E}$ uniformly at random; ${\displaystyle G=contract(G,uv)}$; return ${\displaystyle C=E}$ (the parallel edges between the only two remaining vertices in ${\displaystyle V}$);

A multi-graph can be maintained by appropriate data structures such that each contraction takes ${\displaystyle O(n)}$ time, where ${\displaystyle n}$ is the number of vertices, so the algorithm terminates in time ${\displaystyle O(n^{2})}$. We leave this as an exercise.
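The contraction operation and the RandomContract loop can be sketched as follows. This is a minimal Python implementation using a dict-of-dicts adjacency structure (`adj[u][v]` counts the parallel edges between `u` and `v`); the helper names and the test graph are our own, and the code is not optimized to meet the ${\displaystyle O(n)}$ per-contraction bound discussed above:

```python
import random

def contract(adj, u, v):
    """contract(G, uv): merge v into u, removing all parallel edges uv."""
    for w, m in adj.pop(v).items():
        if w == u:
            continue
        adj[u][w] = adj[u].get(w, 0) + m
        adj[w][u] = adj[w].get(u, 0) + m
        del adj[w][v]
    adj[u].pop(v, None)

def random_contract(adj, rng=random):
    """Karger's RandomContract: contract uniformly random edges until two
    vertices remain; return the number of edges between them (a candidate
    cut size). Mutates its argument."""
    while len(adj) > 2:
        edges = [(u, v) for u in adj for v in adj[u] if u < v]
        weights = [adj[u][v] for u, v in edges]   # parallel-edge counts
        u, v = rng.choices(edges, weights=weights)[0]
        contract(adj, u, v)
    u, v = list(adj)
    return adj[u].get(v, 0)

# Two triangles joined by a single bridge: the unique min-cut has size 1.
g = {0: {1: 1, 2: 1}, 1: {0: 1, 2: 1}, 2: {0: 1, 1: 1, 3: 1},
     3: {2: 1, 4: 1, 5: 1}, 4: {3: 1, 5: 1}, 5: {3: 1, 4: 1}}
rng = random.Random(1)
runs = [random_contract({u: dict(nb) for u, nb in g.items()}, rng)
        for _ in range(200)]
assert min(runs) == 1   # independent repetition boosts the success probability
```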

## Analysis of accuracy

For convenience, we assume that each edge has a unique "identity" ${\displaystyle e}$. When an edge ${\displaystyle uv\in E}$ is contracted into a new vertex ${\displaystyle x}$, each adjacent edge ${\displaystyle uw}$ of ${\displaystyle u}$ (or adjacent edge ${\displaystyle vw}$ of ${\displaystyle v}$) is replaced by ${\displaystyle xw}$, and the identity ${\displaystyle e}$ of the edge ${\displaystyle uw}$ (or ${\displaystyle vw}$) is transferred to the new edge ${\displaystyle xw}$ replacing it. When referring to a cut ${\displaystyle C}$, we consider ${\displaystyle C}$ as a set of edge identities ${\displaystyle e}$, so that a cut ${\displaystyle C}$ is changed by the algorithm only if some of its edges are removed during contraction.

We first prove some lemmas.

 Lemma 1 If ${\displaystyle C}$ is a cut in a multi-graph ${\displaystyle G}$ and ${\displaystyle e\not \in C}$, then ${\displaystyle C}$ is still a cut in ${\displaystyle G'=contract(G,e)}$.
Proof.
 It is easy to verify that ${\displaystyle C}$ is a cut in ${\displaystyle G'=contract(G,e)}$ if none of its edges is lost during the contraction. Since ${\displaystyle C}$ is a cut in ${\displaystyle G(V,E)}$, there exists a nonempty vertex set ${\displaystyle S\subset V}$ and its complement ${\displaystyle {\bar {S}}=V\setminus S}$ such that ${\displaystyle C=\{uv\mid u\in S,v\in {\bar {S}}\}}$. And if ${\displaystyle e\not \in C}$, it must hold that either ${\displaystyle e\in G[S]}$ or ${\displaystyle e\in G[{\bar {S}}]}$, where ${\displaystyle G[S]}$ and ${\displaystyle G[{\bar {S}}]}$ are the subgraphs induced by ${\displaystyle S}$ and ${\displaystyle {\bar {S}}}$ respectively. In both cases none of the edges in ${\displaystyle C}$ is removed in ${\displaystyle G'=contract(G,e)}$.
${\displaystyle \square }$
 Lemma 2 The size of min-cut in ${\displaystyle G'=contract(G,e)}$ is at least as large as the size of min-cut in ${\displaystyle G}$, i.e. contraction never reduces the size of min-cut.
Proof.
 Note that every cut in the contracted graph ${\displaystyle G'}$ is also a cut in the original graph ${\displaystyle G}$: a bipartition of the vertices of ${\displaystyle G'}$ corresponds to a bipartition of ${\displaystyle V}$ in which merged vertices stay on the same side, with the same crossing edges. Hence the minimum over the cuts of ${\displaystyle G'}$ cannot be smaller than the minimum over the cuts of ${\displaystyle G}$.
${\displaystyle \square }$
 Lemma 3 If ${\displaystyle C}$ is a min-cut in a multi-graph ${\displaystyle G(V,E)}$, then ${\displaystyle |E|\geq {\frac {|V||C|}{2}}}$.
Proof.
 It must hold that the degree of each vertex ${\displaystyle v\in V}$ is at least ${\displaystyle |C|}$, or otherwise the set of adjacent edges of ${\displaystyle v}$ forms a cut which separates ${\displaystyle v}$ from the rest of ${\displaystyle V}$ and has size less than ${\displaystyle |C|}$, contradicting the assumption that ${\displaystyle C}$ is a min-cut. The bound ${\displaystyle |E|\geq {\frac {|V||C|}{2}}}$ then follows since ${\displaystyle 2|E|=\sum _{v\in V}\deg(v)\geq |V||C|}$.
${\displaystyle \square }$

For a multigraph ${\displaystyle G(V,E)}$, fix a minimum cut ${\displaystyle C}$ (there might be more than one minimum cut); we analyze the probability that ${\displaystyle C}$ is returned by the above algorithm.

Initially ${\displaystyle |V|=n}$. We say that the min-cut ${\displaystyle C}$ "survives" a random contraction if none of the edges in ${\displaystyle C}$ is chosen to be contracted. After ${\displaystyle (i-1)}$ contractions, denote the current multigraph as ${\displaystyle G_{i}(V_{i},E_{i})}$. Suppose that ${\displaystyle C}$ survives the first ${\displaystyle (i-1)}$ contractions; according to Lemmas 1 and 2, ${\displaystyle C}$ must be a minimum cut in the current multi-graph ${\displaystyle G_{i}}$. Then due to Lemma 3, the current number of edges satisfies ${\displaystyle |E_{i}|\geq |V_{i}||C|/2}$. Uniformly choosing an edge ${\displaystyle e\in E_{i}}$ to contract, the probability that the ${\displaystyle i}$-th contraction contracts an edge in ${\displaystyle C}$ is given by:

${\displaystyle \Pr _{e\in E_{i}}[e\in C]={\frac {|C|}{|E_{i}|}}\leq |C|\cdot {\frac {2}{|V_{i}||C|}}={\frac {2}{|V_{i}|}}.}$

Therefore, conditioning on ${\displaystyle C}$ surviving the first ${\displaystyle (i-1)}$ contractions, the probability that ${\displaystyle C}$ survives the ${\displaystyle i}$-th contraction is at least ${\displaystyle 1-2/|V_{i}|}$. Note that ${\displaystyle |V_{i}|=n-i+1}$, because each contraction decreases the number of vertices by 1.

The probability that no edge in the minimum cut ${\displaystyle C}$ is ever contracted is:

{\displaystyle {\begin{aligned}&\Pr[\,C{\mbox{ survives all }}(n-2){\mbox{ contractions}}]\\&=\prod _{i=1}^{n-2}\Pr[\,C{\mbox{ survives the }}i{\mbox{-th contraction}}\mid C{\mbox{ survives the first }}(i-1){\mbox{ contractions}}]\\&\geq \prod _{i=1}^{n-2}\left(1-{\frac {2}{|V_{i}|}}\right)\\&=\prod _{i=1}^{n-2}\left(1-{\frac {2}{n-i+1}}\right)\\&=\prod _{k=3}^{n}{\frac {k-2}{k}}\\&={\frac {2}{n(n-1)}}.\end{aligned}}}

This gives the following theorem.

 Theorem For any multigraph with ${\displaystyle n}$ vertices, the RandomContract algorithm returns a minimum cut with probability at least ${\displaystyle {\frac {2}{n(n-1)}}}$.

Run RandomContract independently ${\displaystyle n(n-1)/2}$ times and return the smallest of the returned cuts. The probability that a minimum cut is found is at least:

{\displaystyle {\begin{aligned}1-\Pr[{\mbox{failed every time}}]&=1-\Pr[{RandomContract}{\mbox{ fails}}]^{n(n-1)/2}\\&\geq 1-\left(1-{\frac {2}{n(n-1)}}\right)^{n(n-1)/2}\\&\geq 1-{\frac {1}{e}}.\end{aligned}}}

A constant probability!
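The last step uses the standard bound ${\displaystyle (1-1/t)^{t}\leq 1/e}$ with ${\displaystyle t=n(n-1)/2}$. A quick numeric sanity check:

```python
import math

# Each run succeeds with probability at least p = 2/(n(n-1)) = 1/t.
# After t = n(n-1)/2 independent runs, the failure probability is
# (1 - 1/t)^t <= 1/e, so the overall success probability is >= 1 - 1/e.
for n in (3, 10, 100, 1000):
    t = n * (n - 1) // 2
    assert (1 - 1 / t) ** t <= 1 / math.e
```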

## A Corollary by the Probabilistic Method

Karger's algorithm and its analysis imply the following combinatorial theorem regarding the number of distinct minimum cuts in a graph.

 Corollary For any graph ${\displaystyle G(V,E)}$ of ${\displaystyle n}$ vertices, the number of distinct minimum cuts in ${\displaystyle G}$ is at most ${\displaystyle {\frac {n(n-1)}{2}}}$.
Proof.
 For each minimum cut ${\displaystyle C}$ in ${\displaystyle G}$, we define ${\displaystyle {\mathcal {E}}_{C}}$ to be the event that RandomContract returns ${\displaystyle C}$. Due to the analysis of RandomContract, ${\displaystyle \Pr[{\mathcal {E}}_{C}]\geq {\frac {2}{n(n-1)}}}$. The events ${\displaystyle {\mathcal {E}}_{C}}$ are mutually disjoint for distinct ${\displaystyle C}$, and the event that RandomContract returns a min-cut is the disjoint union of ${\displaystyle {\mathcal {E}}_{C}}$ over all min-cuts ${\displaystyle C}$. Therefore, {\displaystyle {\begin{aligned}&\Pr[{\mbox{ RandomContract returns a min-cut}}]\\=&\sum _{{\mbox{min-cut }}C{\mbox{ in }}G}\Pr[{\mathcal {E}}_{C}]\\\geq &\sum _{{\mbox{min-cut }}C{\mbox{ in }}G}{\frac {2}{n(n-1)}},\end{aligned}}} which must be no greater than 1 for a well-defined probability space. This means the total number of min-cuts in ${\displaystyle G}$ must be no greater than ${\displaystyle {\frac {n(n-1)}{2}}}$.
${\displaystyle \square }$

Note that the statement of this theorem has no randomness at all, however the proof involves a randomized algorithm. This is an example of the probabilistic method.

## Fast Min-Cut

In the analysis of RandomContract, we have the following observation:

• The probability of success only gets worse as the graph becomes smaller: the per-contraction risk ${\displaystyle 2/|V_{i}|}$ grows as vertices are merged away, so most of the risk is incurred in the final contractions.

This motivates us to consider the following modification of the algorithm: first use random contractions to reduce the number of vertices to a moderately small number, and then recursively find a min-cut in this smaller instance. On its own this is just a restatement of what we have been doing; the new idea, inspired by boosting the accuracy via independent repetition, is to apply the recursion to two smaller instances generated independently.

The algorithm obtained in this way is called FastCut. We first define a procedure to randomly contract edges until only ${\displaystyle t}$ vertices are left.

 RandomContract${\displaystyle (G,t)}$ while ${\displaystyle |V|>t}$ do choose an edge ${\displaystyle uv\in E}$ uniformly at random; ${\displaystyle G=contract(G,uv)}$; return ${\displaystyle G}$;

The FastCut algorithm is recursively defined as follows.

 FastCut${\displaystyle (G)}$ if ${\displaystyle |V|\leq 6}$ then return a min-cut by brute force; else let ${\displaystyle t=\left\lceil 1+|V|/{\sqrt {2}}\right\rceil }$; ${\displaystyle G_{1}=RandomContract(G,t)}$; ${\displaystyle G_{2}=RandomContract(G,t)}$; return the smaller one of ${\displaystyle FastCut(G_{1})}$ and ${\displaystyle FastCut(G_{2})}$;

As before, all ${\displaystyle G}$ are multigraphs.
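FastCut can be sketched end-to-end as follows. This is a minimal Python implementation using a dict-of-dicts multigraph representation (`adj[u][v]` counts parallel edges); the helper names and the test graph are our own, and the code favors clarity over the asymptotic bounds analyzed below:

```python
import math
import random
from itertools import combinations

def contract(adj, u, v):
    """Merge vertex v into u, removing the parallel edges between them."""
    for w, m in adj.pop(v).items():
        if w == u:
            continue
        adj[u][w] = adj[u].get(w, 0) + m
        adj[w][u] = adj[w].get(u, 0) + m
        del adj[w][v]
    adj[u].pop(v, None)

def random_contract_to(adj, t, rng):
    """RandomContract(G, t): contract uniformly random edges until t vertices remain."""
    adj = {u: dict(nb) for u, nb in adj.items()}   # work on a copy
    while len(adj) > t:
        edges = [(u, v) for u in adj for v in adj[u] if u < v]
        weights = [adj[u][v] for u, v in edges]    # edge multiplicities
        u, v = rng.choices(edges, weights=weights)[0]
        contract(adj, u, v)
    return adj

def brute_force_min_cut(adj):
    """Exact min-cut size by enumerating all vertex bipartitions."""
    vs = list(adj)
    return min(
        sum(m for u in S for w, m in adj[u].items() if w not in S)
        for k in range(1, len(vs))
        for S in map(set, combinations(vs, k))
    )

def fast_cut(adj, rng):
    """FastCut: two independent contractions down to ~n/sqrt(2) vertices, recurse."""
    if len(adj) <= 6:
        return brute_force_min_cut(adj)
    t = math.ceil(1 + len(adj) / math.sqrt(2))
    return min(fast_cut(random_contract_to(adj, t, rng), rng) for _ in range(2))

# An 8-cycle has min-cut size 2 (any two edges disconnect it).
n = 8
cycle = {i: {(i - 1) % n: 1, (i + 1) % n: 1} for i in range(n)}
rng = random.Random(42)
assert min(fast_cut(cycle, rng) for _ in range(30)) == 2
```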

Let ${\displaystyle C}$ be a min-cut in the original multigraph ${\displaystyle G}$. By the same analysis as in the case of RandomContract, we have

{\displaystyle {\begin{aligned}&\Pr[C{\text{ survives all contractions in }}RandomContract(G,t)]\\=&\prod _{i=1}^{n-t}\Pr[C{\text{ survives the }}i{\text{-th contraction}}\mid C{\text{ survives the first }}(i-1){\text{ contractions}}]\\\geq &\prod _{i=1}^{n-t}\left(1-{\frac {2}{n-i+1}}\right)\\=&\prod _{k=t+1}^{n}{\frac {k-2}{k}}\\=&{\frac {t(t-1)}{n(n-1)}}.\end{aligned}}}

When ${\displaystyle t=\left\lceil 1+n/{\sqrt {2}}\right\rceil }$, we have ${\displaystyle t(t-1)\geq \left(1+n/{\sqrt {2}}\right)\cdot {\frac {n}{\sqrt {2}}}\geq {\frac {n(n-1)}{2}}}$, so this probability is at least ${\displaystyle 1/2}$.

We use ${\displaystyle p(n)}$ to denote the probability that ${\displaystyle C}$ is returned by ${\displaystyle FastCut(G)}$, where ${\displaystyle G}$ is a multigraph of ${\displaystyle n}$ vertices. We then have the following recursion for ${\displaystyle p(n)}$.

{\displaystyle {\begin{aligned}p(n)&=\Pr[C{\text{ is returned by }}{\textit {FastCut}}(G)]\\&=1-\left(1-\Pr[C{\text{ survives in }}G_{1}\wedge C={\textit {FastCut}}(G_{1})]\right)^{2}\\&=1-\left(1-\Pr[C{\text{ survives in }}G_{1}]\Pr[C={\textit {FastCut}}(G_{1})\mid C{\text{ survives in }}G_{1}]\right)^{2}\\&\geq 1-\left(1-{\frac {1}{2}}p\left(\left\lceil 1+n/{\sqrt {2}}\right\rceil \right)\right)^{2},\end{aligned}}}

where the last inequality is due to the fact that ${\displaystyle \Pr[C{\text{ survives all contractions in }}RandomContract(G,t)]\geq 1/2}$, together with our earlier observation in the analysis of RandomContract that if the min-cut ${\displaystyle C}$ survives all of the first ${\displaystyle (n-t)}$ contractions, then ${\displaystyle C}$ remains a min-cut in the resulting multigraph.

The base case is that ${\displaystyle p(n)=1}$ for ${\displaystyle n\leq 6}$. Solving this recursion of ${\displaystyle p(n)}$ (or proving by induction) gives us that

${\displaystyle p(n)=\Omega \left({\frac {1}{\log n}}\right).}$
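This bound can be sanity-checked numerically by iterating the recursion (a small sketch; the function name and the constant in the check are our own, and the constant in the ${\displaystyle \Omega (\cdot )}$ is not optimized):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n):
    """The recursive lower bound on FastCut's success probability:
    p(n) >= 1 - (1 - p(ceil(1 + n/sqrt(2)))/2)^2, with p(n) = 1 for n <= 6."""
    if n <= 6:
        return 1.0
    q = p(math.ceil(1 + n / math.sqrt(2)))
    return 1 - (1 - q / 2) ** 2

# p(n) decays only logarithmically: p(n) * log(n) stays bounded below.
for n in (10**2, 10**3, 10**4, 10**6):
    assert p(n) * math.log(n) > 0.5
```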

Recall that we can implement an edge contraction in ${\displaystyle O(n)}$ time, thus it is easy to verify the following recursion of time complexity:

${\displaystyle T(n)=2T\left(\left\lceil 1+n/{\sqrt {2}}\right\rceil \right)+O(n^{2}),}$

where ${\displaystyle T(n)}$ denotes the running time of ${\displaystyle FastCut(G)}$ on a multigraph ${\displaystyle G}$ of ${\displaystyle n}$ vertices.

Solving the recursion of ${\displaystyle T(n)}$ with the base case ${\displaystyle T(n)=O(1)}$ for ${\displaystyle n\leq 6}$, we have ${\displaystyle T(n)=O(n^{2}\log n)}$.

Therefore, for a multigraph ${\displaystyle G}$ of ${\displaystyle n}$ vertices, the algorithm ${\displaystyle FastCut(G)}$ returns a min-cut in ${\displaystyle G}$ with probability ${\displaystyle \Omega \left({\frac {1}{\log n}}\right)}$ in time ${\displaystyle O(n^{2}\log n)}$. Repeating this independently ${\displaystyle O(\log ^{2}n)}$ times and returning the smallest cut found, we have an algorithm which runs in time ${\displaystyle O(n^{2}\log ^{3}n)}$ and returns a min-cut with probability ${\displaystyle 1-O(1/n)}$, i.e. with high probability.