Randomized Algorithms (Fall 2011)/Graph Coloring and Randomized Algorithms (Spring 2013)/Introduction and Probability Space

=Graph Colorings=
A '''proper coloring''' of a graph <math>G(V,E)</math> is a mapping <math>f:V\rightarrow[q]</math> for some integer <math>q</math>, satisfying that <math>f(u)\neq f(v)</math> for all <math>uv\in E</math>.


We consider the problem of sampling a uniformly random proper coloring of a given graph. We will later see that this is useful for counting the number of proper colorings of a given graph, which is a fundamental combinatorial problem with important applications in statistical physics.


Let's first consider the decision version of the problem. That is, given as input a graph <math>G(V,E)</math>, decide whether there exists a proper <math>q</math>-coloring of <math>G</math>. Denote by <math>\Delta</math> the maximum degree of <math>G</math>.
* If <math>q\ge \Delta+1</math>, a proper coloring always exists. Moreover, one can be found by a simple greedy algorithm (see the sketch after this list).
* If <math>q=\Delta</math>, <math>G</math> has a proper coloring unless it contains a <math>(\Delta+1)</math>-clique or it is an odd cycle ([http://en.wikipedia.org/wiki/Brooks'_theorem Brooks' Theorem]).
* If <math>q<\Delta</math>, the problem is NP-hard.
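
To make the greedy bound concrete, here is a minimal Python sketch (our own illustration, not part of the original notes); the function name <code>greedy_coloring</code> and the adjacency-dict representation are our own choices. Scanning the vertices in any order, each vertex sees at most <math>\Delta</math> already-colored neighbors, so with <math>q\ge\Delta+1</math> colors a free color always exists.

<pre>
# A minimal sketch of the greedy argument: with q >= Delta+1 colors,
# every vertex still has a free color when it is processed.
def greedy_coloring(adj, q):
    """adj: dict mapping each vertex to the set of its neighbors; q: number of colors."""
    color = {}
    for v in adj:                                            # any vertex order works
        used = {color[u] for u in adj[v] if u in color}      # colors taken by colored neighbors
        color[v] = next(c for c in range(q) if c not in used)  # exists whenever q >= deg(v)+1
    return color

# Example: a 5-cycle has maximum degree 2, so 3 colors always suffice.
cycle = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
print(greedy_coloring(cycle, 3))
</pre>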


Sampling a random coloring is at least as hard as deciding its existence, so we do not expect to solve the sampling problem when <math>q<\Delta</math>. The decision problem for the case <math>q=\Delta</math> is already nontrivial. Thus people are interested mainly in the case <math>q\ge \Delta+1</math>.


The following is a natural Markov chain for sampling proper colorings.


{{Theorem|Markov Chain for Graph Coloring <math>\mathcal{M}_c</math>|
:Start with a proper coloring of <math>G(V,E)</math>. At each step:
# Pick a vertex <math>v\in V</math> and a color <math>c\in[q]</math> uniformly at random.
# Change the color of <math>v</math> to <math>c</math> if the resulting coloring is proper; do nothing otherwise.
}}


For a fixed graph <math>G(V,E)</math>, the state space of the above Markov chain is the set of all proper colorings of <math>G</math> with <math>q</math> colors.
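
The transition rule of <math>\mathcal{M}_c</math> is easy to simulate. Below is a small Python sketch (our own illustration, not part of the notes): <code>step</code> performs one transition, assuming the current coloring is stored as a dict from vertices to colors and the graph as an adjacency dict.

<pre>
import random

def step(adj, coloring, q):
    """One transition of M_c: propose (v, c) uniformly, accept iff the coloring stays proper."""
    v = random.choice(list(adj))                 # vertex chosen uniformly at random
    c = random.randrange(q)                      # color chosen uniformly at random
    if all(coloring[u] != c for u in adj[v]):    # c is not used by any neighbor of v
        coloring[v] = c                          # recolor; otherwise do nothing
    return coloring
</pre>

Starting from any proper coloring (for instance, the greedy one above) and iterating <code>step</code> simulates the chain.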


{{Theorem|Lemma|
The following hold for <math>\mathcal{M}_c</math>:
# Aperiodic.
# The transition matrix is symmetric.
# Irreducible if <math>q\ge \Delta+2</math>.
}}


The following are the two most important conjectures regarding this problem.
{{Theorem|Conjecture|
#<math>\mathcal{M}_c</math> has mixing time <math>O(n\ln n)</math> whenever <math>q\ge\Delta+2</math>.
# Random sampling of proper graph colorings can be done in polynomial time whenever <math>q\ge\Delta+1</math>.
}}


These two conjectures are still open. People approach them by relaxing the requirement on the number of colors <math>q</math>: intuitively, the larger <math>q</math> is, the more freedom we have, and the weaker the dependencies between non-adjacent vertices.


=Coupling: <math>q\ge 4\Delta+1</math>=
We couple the two copies of <math>\mathcal{M}_c</math>, <math>X_t</math> and <math>Y_t</math>, by the following coupling rule:
* At each step, let <math>X_t</math> and <math>Y_t</math> choose the same vertex <math>v</math> and same color <math>c</math>.


Note that under this coupling, it is possible that <math>v</math> is recolored to <math>c</math> in both chains, in exactly one of the two chains, or in neither chain. We can separate these cases by looking at the Hamming distance between the two colorings (a small simulation sketch follows the case list below).


Let <math>d(X_t,Y_t)</math> be the number of vertices whose colors in <math>X_t</math> and <math>Y_t</math> disagree. For each step, for any choice of vertex <math>v</math> and color <math>c</math>, there are three possible cases:
* '''"Good move"''' (<math>d(X_t,Y_t)</math> decreases by 1): <math>X_t</math> disagree with <math>Y_t</math> at <math>v</math>, and <math>c</math> is not used in the colors of <math>v</math> in both <math>X_t</math> and <math>Y_t</math>.
* '''"Bad move"''' (<math>d(X_t,Y_t)</math> increases by 1):
* '''"Neutral move"''' (<math>d(X_t,Y_t)</math> is unchanged):


=Introduction=

This course will study ''Randomized Algorithms'': algorithms that use randomness in their computation.

;Why do we use randomness in computation?
* Randomized algorithms can be simpler than deterministic ones.
:(median selection, primality testing, load balancing, etc.)
* Randomized algorithms can be faster than the best known deterministic algorithms.
:(min-cut, checking matrix multiplication, polynomial identity testing, etc.)
* Randomized algorithms may lead us to smart deterministic algorithms.
:(hashing, derandomization, SL=L, Lovász Local Lemma, etc.)
* Randomized algorithms can do things that deterministic algorithms cannot do.
:(routing, volume estimation, communication complexity, data streams, etc.)
* Randomness is present in the input.
:(average-case analysis, smoothed analysis, learning, etc.)
* Some deterministic problems are random in nature.
:(counting, inference, etc.)
* ...

;How is randomness used in computation?
* To hit a witness/certificate.
:(identity testing, fingerprinting, primality testing, etc.)
* To avoid the worst case or to deal with adversaries.
:(randomized quick sort, perfect hashing, etc.)
* To simulate random samples.
:(random walk, Markov chain Monte Carlo, approximate counting, etc.)
* To enumerate/construct solutions.
:(the probabilistic method, min-cut, etc.)
* ...

== Principles in probability theory ==

The course is organized according to the sophistication of the probabilistic tools used. We do this for two reasons: first, for randomized algorithms the analysis is usually more difficult and involved than the algorithm itself; and second, getting familiar with these probability principles will help you understand the true reasons why the smart algorithms are designed the way they are.

* '''Basic probability theory''': probability space, events, the union bound, independence, conditional probability.
* '''Moments and deviations''': random variables, expectation, linearity of expectation, Markov's inequality, variance, second moment method.
* '''The probabilistic method''': averaging principle, threshold phenomena, Lovász Local Lemma.
* '''Concentrations''': Chernoff-Hoeffding bound, martingales, Azuma's inequality, bounded difference method.
* '''Markov chains and random walks''': Markov chains, random walks, hitting/cover time, mixing time.

=Probability Space=

The axiomatic foundation of probability theory was laid by [http://en.wikipedia.org/wiki/Andrey_Kolmogorov Kolmogorov], one of the greatest mathematicians of the 20th century, who advanced many very different fields of mathematics.

{{Theorem|Definition (Probability Space)|
A '''probability space''' is a triple <math>(\Omega,\Sigma,\Pr)</math>.
*<math>\Omega</math> is a set, called the '''sample space'''.
*<math>\Sigma\subseteq 2^{\Omega}</math> is the set of all '''events''', satisfying:
*:(K1). <math>\Omega\in\Sigma</math> and <math>\emptyset\in\Sigma</math>. (The ''certain'' event and the ''impossible'' event.)
*:(K2). If <math>A,B\in\Sigma</math>, then <math>A\cap B, A\cup B, A-B\in\Sigma</math>. (The intersection, union, and difference of two events are events.)
* A '''probability measure''' <math>\Pr:\Sigma\rightarrow\mathbb{R}</math> is a function that maps each event to a nonnegative real number, satisfying
*:(K3). <math>\Pr(\Omega)=1</math>.
*:(K4). If <math>A\cap B=\emptyset</math> (such events are called ''disjoint'' events), then <math>\Pr(A\cup B)=\Pr(A)+\Pr(B)</math>.
*:(K5*). For a decreasing sequence of events <math>A_1\supset A_2\supset \cdots\supset A_n\supset\cdots</math> with <math>\bigcap_n A_n=\emptyset</math>, it holds that <math>\lim_{n\rightarrow \infty}\Pr(A_n)=0</math>.
}}
The sample space <math>\Omega</math> is the set of all possible outcomes of the random process modeled by the probability space. An event is a subset of <math>\Omega</math>. The statements (K1)--(K5*) are the axioms of probability. A probability space is well defined as long as these axioms are satisfied.

;Example
:Consider the probability space defined by rolling a fair six-sided die. The sample space is <math>\Omega=\{1,2,3,4,5,6\}</math>, and <math>\Sigma</math> is the power set <math>2^{\Omega}</math>. For any event <math>A\in\Sigma</math>, its probability is given by <math>\Pr(A)=\frac{|A|}{6}.</math>
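
As a quick sanity check of the axioms on this example (our own snippet, not part of the notes), the code below takes <math>\Sigma=2^{\Omega}</math> and <math>\Pr(A)=|A|/6</math> and verifies (K3), (K4), and the complement rule proved below.

<pre>
# Sanity check of the Kolmogorov axioms for the six-faced die example.
Omega = {1, 2, 3, 4, 5, 6}

def Pr(A):
    return len(A) / 6                     # uniform probability measure

assert Pr(Omega) == 1                     # (K3)
A, B = {1, 2}, {5, 6}                     # two disjoint events
assert Pr(A | B) == Pr(A) + Pr(B)         # (K4) additivity for disjoint events
assert Pr(Omega - A) == 1 - Pr(A)         # the complement rule (Proposition below)
</pre>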
;Remark
* In general, the set <math>\Omega</math> may be continuous, but we only consider '''discrete''' probability in this lecture, so we assume that <math>\Omega</math> is either finite or countably infinite.
* In many cases (such as the above example), <math>\Sigma=2^{\Omega}</math>, i.e. the events enumerate all subsets of <math>\Omega</math>. But in general, a probability space is well-defined by any <math>\Sigma</math> satisfying (K1) and (K2). Such a <math>\Sigma</math> is called a <math>\sigma</math>-algebra defined on <math>\Omega</math>.
* The last axiom (K5*) is redundant if <math>\Sigma</math> is finite, thus it is only essential when there are infinitely many events. The role of axiom (K5*) in probability theory is like that of [http://en.wikipedia.org/wiki/Zorn's_lemma Zorn's Lemma] (or equivalently the [http://en.wikipedia.org/wiki/Axiom_of_choice Axiom of Choice]) in axiomatic set theory.

Laws of probability can be deduced from the above axiom system. Denote <math>\bar{A}=\Omega-A</math>.

{{Theorem|Proposition|
:<math>\Pr(\bar{A})=1-\Pr(A)</math>.
}}
{{Proof|
By Axiom (K4), <math>\Pr(\bar{A})+\Pr(A)=\Pr(\Omega)</math>, which equals 1 according to Axiom (K3); thus <math>\Pr(\bar{A})+\Pr(A)=1</math>. The proposition follows.
}}

Exercise: Deduce other useful laws for probability from the axioms. For example, <math>A\subseteq B\Longrightarrow\Pr(A)\le\Pr(B)</math>.

;Notation
An event <math>A\subseteq\Omega</math> can be represented as <math>A=\{a\in\Omega\mid \mathcal{E}(a)\}</math> with a predicate <math>\mathcal{E}</math>.

The predicate notation for probability is
:<math>\Pr[\mathcal{E}]=\Pr(\{a\in\Omega\mid \mathcal{E}(a)\})</math>.

During the lecture, we mostly use the predicate notation instead of the subset notation.

== Independence ==

{{Theorem|Definition (Independent events)|
:Two events <math>\mathcal{E}_1</math> and <math>\mathcal{E}_2</math> are '''independent''' if and only if
::<math>\begin{align}
\Pr\left[\mathcal{E}_1 \wedge \mathcal{E}_2\right]
&=
\Pr[\mathcal{E}_1]\cdot\Pr[\mathcal{E}_2].
\end{align}</math>
}}
This definition can be generalized to any number of events:
{{Theorem|Definition (Mutually independent events)|
:Events <math>\mathcal{E}_1, \mathcal{E}_2, \ldots, \mathcal{E}_n</math> are '''mutually independent''' if and only if, for any subset <math>I\subseteq\{1,2,\ldots,n\}</math>,
::<math>\begin{align}
\Pr\left[\bigwedge_{i\in I}\mathcal{E}_i\right]
&=
\prod_{i\in I}\Pr[\mathcal{E}_i].
\end{align}</math>
}}

Note that in probability theory, "mutual independence" is <font color="red">not</font> equivalent to "pairwise independence"; we will return to this point later in the course.
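
A standard illustration of this gap (our own example, not taken from the notes): for two independent fair bits and their XOR, every pair of the three events below is independent, yet the three events are not mutually independent.

<pre>
from itertools import product

Omega = list(product([0, 1], repeat=2))             # two independent fair coin flips

def Pr(event):
    return sum(1 for w in Omega if event(w)) / len(Omega)

E1 = lambda w: w[0] == 1                             # first flip is heads
E2 = lambda w: w[1] == 1                             # second flip is heads
E3 = lambda w: (w[0] ^ w[1]) == 1                    # the XOR of the two flips is 1

# every pair of events is independent ...
assert Pr(lambda w: E1(w) and E2(w)) == Pr(E1) * Pr(E2)
assert Pr(lambda w: E1(w) and E3(w)) == Pr(E1) * Pr(E3)
assert Pr(lambda w: E2(w) and E3(w)) == Pr(E2) * Pr(E3)
# ... but the three events are not mutually independent
assert Pr(lambda w: E1(w) and E2(w) and E3(w)) != Pr(E1) * Pr(E2) * Pr(E3)
</pre>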

= Checking Matrix Multiplication =

Consider the following problem:
* '''Input''': three <math>n\times n</math> matrices <math>A</math>, <math>B</math>, and <math>C</math>.
* '''Output''': return "yes" if <math>C=AB</math> and "no" otherwise.

A naive way of checking the equality is to first compute <math>AB</math> and then compare the result with <math>C</math>. The (asymptotically) fastest matrix multiplication algorithm known today runs in time <math>O(n^{2.376})</math>, so this naive check takes asymptotically the same amount of time.

== Freivalds Algorithm ==

The following is a very simple randomized algorithm, due to Freivalds, running in only <math>O(n^2)</math> time:

{{Theorem|Algorithm (Freivalds)|
*pick a vector <math>r \in\{0, 1\}^n</math> uniformly at random;
*if <math>A(Br) = Cr</math> then return "yes" else return "no";
}}
The product <math>A(Br)</math> is computed by first computing <math>Br</math> and then <math>A(Br)</math>. The running time is <math>O(n^2)</math> because the algorithm performs 3 matrix-vector multiplications in total.
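
A minimal Python/NumPy sketch of one round of the check (our own code; the function name <code>freivalds_round</code> is not from the notes). Only matrix-vector products are used, so the cost is <math>O(n^2)</math>.

<pre>
import numpy as np

def freivalds_round(A, B, C, rng=None):
    """One round of Freivalds' check: compare A(Br) with Cr for a random 0/1 vector r."""
    if rng is None:
        rng = np.random.default_rng()
    n = C.shape[0]
    r = rng.integers(0, 2, size=n)        # r chosen uniformly at random from {0,1}^n
    return np.array_equal(A @ (B @ r), C @ r)
</pre>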

If <math>AB=C</math> then <math>A(Br) = Cr</math> for any <math>r \in\{0, 1\}^n</math>, thus the algorithm returns "yes" on any positive instance (<math>AB=C</math>). But if <math>AB \neq C</math> then the algorithm makes a mistake if it happens to choose an <math>r</math> such that <math>ABr = Cr</math>. The following lemma states that the probability of this event is bounded.

{{Theorem|Lemma|
:If <math>AB\neq C</math>, then for a uniformly random <math>r \in\{0, 1\}^n</math>,
::<math>\Pr[ABr = Cr]\le \frac{1}{2}</math>.
}}
{{Proof|
Let <math>D=AB-C</math>. The event <math>ABr=Cr</math> is equivalent to the event <math>Dr=\boldsymbol{0}</math>. It therefore suffices to show that for any <math>D\neq \boldsymbol{0}</math>, it holds that <math>\Pr[Dr = \boldsymbol{0}]\le \frac{1}{2}</math>.

Since <math>D\neq \boldsymbol{0}</math>, it must have at least one non-zero entry; suppose that <math>D(i,j)\neq 0</math>.

Assume that the event <math>Dr=\boldsymbol{0}</math> occurs. In particular, the <math>i</math>-th entry of <math>Dr</math> is
:<math>(Dr)_{i}=\sum_{k=1}^nD(i,k)r_k=0,</math>
so <math>r_j</math> is determined by
:<math>r_j=-\frac{1}{D(i,j)}\sum_{k\neq j}D(i,k)r_k.</math>
Once all <math>r_k</math> with <math>k\neq j</math> are fixed, there is a unique solution for <math>r_j</math>. That is, conditioning on any values of <math>r_k</math>, <math>k\neq j</math>, there is at most '''one''' value of <math>r_j\in\{0,1\}</math> satisfying <math>Dr=\boldsymbol{0}</math>. On the other hand, <math>r_j</math> is chosen from the '''two''' values <math>\{0,1\}</math> uniformly and independently at random. Therefore, with probability at least <math>\frac{1}{2}</math> the choice of <math>r</math> fails to give us <math>Dr=\boldsymbol{0}</math>. That is, <math>\Pr[ABr=Cr]=\Pr[Dr=\boldsymbol{0}]\le\frac{1}{2}</math>.
}}
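
The bound in the lemma can be checked exhaustively on a small instance (our own snippet, purely illustrative): for a fixed nonzero <math>D</math>, at most half of the vectors <math>r\in\{0,1\}^n</math> satisfy <math>Dr=\boldsymbol{0}</math>.

<pre>
from itertools import product
import numpy as np

# Exhaustive check of Pr[Dr = 0] <= 1/2 for one small nonzero matrix D.
D = np.array([[1, -1, 0],
              [0,  2, 0],
              [0,  0, 0]])
n = D.shape[1]
hits = sum(1 for bits in product([0, 1], repeat=n)
           if not np.any(D @ np.array(bits)))
print(hits / 2 ** n)                      # prints 0.25, which is <= 1/2
</pre>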

When <math>AB=C</math>, Freivalds' algorithm always returns "yes"; and when <math>AB\neq C</math>, it returns "no" with probability at least 1/2.

To improve its accuracy, we can run Freivalds' algorithm <math>k</math> times, each time with an ''independent'' random <math>r\in\{0,1\}^n</math>, and return "yes" iff all runs pass the test.

{{Theorem|Freivalds' Algorithm (multi-round)|
*pick <math>k</math> vectors <math>r_1,r_2,\ldots,r_k \in\{0, 1\}^n</math> uniformly and independently at random;
*if <math>A(Br_i) = Cr_i</math> for all <math>i=1,\ldots,k</math> then return "yes" else return "no";
}}

If <math>AB=C</math>, then the algorithm returns "yes" with probability 1. If <math>AB\neq C</math>, then by independence, the probability that all <math>r_i</math> satisfy <math>ABr_i=Cr_i</math> is at most <math>2^{-k}</math>, so the algorithm returns "no" with probability at least <math>1-2^{-k}</math>. Choosing <math>k=O(\log n)</math>, the algorithm runs in time <math>O(n^2\log n)</math> and has one-sided error (false positives only) bounded by <math>\frac{1}{\mathrm{poly}(n)}</math>.
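
Putting the rounds together (again our own sketch, with a hypothetical function name <code>freivalds</code>): each extra independent round halves the one-sided error, giving the <math>2^{-k}</math> bound above.

<pre>
import numpy as np

def freivalds(A, B, C, k):
    """k independent rounds; false positives occur with probability at most 2**(-k)."""
    rng = np.random.default_rng()
    n = C.shape[0]
    for _ in range(k):
        r = rng.integers(0, 2, size=n)
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                  # found a witness that AB != C
    return True                           # all k rounds passed
</pre>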