# Randomized Algorithms (Fall 2015)/Moment and Deviation

# Tail Inequalities

When applying probabilistic analysis, we often want a bound of the form $\Pr[X\ge t]\le\epsilon$ for some random variable $X$ (think of $X$ as a cost such as the running time of a randomized algorithm). We call this a **tail bound**, or a **tail inequality**.

Besides directly computing the probability $\Pr[X\ge t]$, we want to have some general way of estimating tail probabilities from some measurable information regarding the random variables.

## Markov's Inequality

One of the most natural pieces of information about a random variable is its expectation, which is the first moment of the random variable. Markov's inequality draws a tail bound for a random variable from its expectation.

**Theorem (Markov's Inequality)**
- Let $X$ be a random variable assuming only nonnegative values. Then, for all $t>0$,
$$\Pr[X\ge t]\le\frac{\mathbf{E}[X]}{t}.$$

**Proof.** Let $Y$ be the indicator such that
$$Y=\begin{cases}1 & \text{if }X\ge t,\\ 0 & \text{otherwise.}\end{cases}$$
It holds that $Y\le\frac{X}{t}$. Since $Y$ is 0-1 valued, $\mathbf{E}[Y]=\Pr[Y=1]=\Pr[X\ge t]$. Therefore,
$$\Pr[X\ge t]=\mathbf{E}[Y]\le\mathbf{E}\left[\frac{X}{t}\right]=\frac{\mathbf{E}[X]}{t}.$$
$\square$

### Example (from Las Vegas to Monte Carlo)

Let $A$ be a Las Vegas randomized algorithm for a decision problem $f$, whose expected running time is within $T(n)$ on any input of size $n$. We transform $A$ to a Monte Carlo randomized algorithm $B$ with bounded one-sided error as follows:

- $B(x)$:
  - Run $A(x)$ for $2T(n)$ long, where $n$ is the size of $x$.
  - If $A(x)$ returned within $2T(n)$ time, then return what $A(x)$ just returned, else return 1.

Since $A$ is Las Vegas, its output is always correct, thus $B(x)$ only errs when it returns 1, so the error is one-sided. The error probability is bounded by the probability that $A(x)$ runs longer than $2T(n)$. Since the expected running time of $A(x)$ is at most $T(n)$, due to Markov's inequality,
$$\Pr[\text{running time of }A(x)\ge 2T(n)]\le\frac{T(n)}{2T(n)}=\frac{1}{2},$$
thus the error probability is bounded by $\frac{1}{2}$.
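
This transformation is easy to try out in code. Below is a minimal Python sketch, where `las_vegas_toy` is a hypothetical stand-in for the Las Vegas algorithm $A$ (its expected number of loop iterations plays the role of $T(n)$); it is an illustration, not part of the original notes.

```python
import random

def las_vegas_toy(x, budget):
    """Hypothetical Las Vegas algorithm A: always correct when it answers,
    but its running time (number of loop iterations) is random.
    Returns (answer, steps) or (None, budget) if the budget is exhausted."""
    steps = 0
    while steps < budget:
        steps += 1
        if random.random() < 0.5:      # each step certifies w.p. 1/2,
            return x % 2 == 0, steps   # so the expected running time is 2
    return None, budget

def monte_carlo(x):
    """Monte Carlo wrapper B: run A for twice its expected running time;
    if A has not answered by then, give up and return 1 (here: True).
    By Markov's inequality, Pr[A runs longer than 2T] <= 1/2."""
    T = 2                              # expected running time of las_vegas_toy
    answer, _ = las_vegas_toy(x, budget=2 * T)
    return True if answer is None else answer

# B errs only on inputs whose true answer is False: the error is one-sided.
trials = 10000
errors = sum(monte_carlo(3) is True for _ in range(trials))
print("error rate on a 'False' input:", errors / trials)  # well below 1/2
```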

### Generalization

For any random variable $X$ and an arbitrary non-negative real function $h$, $h(X)$ is a non-negative random variable. Applying Markov's inequality, we directly have that
$$\Pr[h(X)\ge t]\le\frac{\mathbf{E}[h(X)]}{t}.$$

This trivial application of Markov's inequality gives us a powerful tool for proving tail inequalities. With a function $h$ which extracts more information about the random variable, we can prove sharper tail inequalities.

## Variance

**Definition (variance)**
- The **variance** of a random variable $X$ is defined as
$$\mathbf{Var}[X]=\mathbf{E}\left[(X-\mathbf{E}[X])^2\right]=\mathbf{E}\left[X^2\right]-(\mathbf{E}[X])^2.$$
- The **standard deviation** of random variable $X$ is
$$\delta=\sqrt{\mathbf{Var}[X]}.$$

We have seen that due to the linearity of expectations, the expectation of the sum of random variables is the sum of their expectations. It is natural to ask whether this is true for variances. We find that the variance of a sum has an extra term, called the covariance.

**Definition (covariance)**
- The **covariance** of two random variables $X$ and $Y$ is
$$\mathbf{Cov}(X,Y)=\mathbf{E}\left[(X-\mathbf{E}[X])(Y-\mathbf{E}[Y])\right].$$

We have the following theorem for the variance of sums.

**Theorem**
- For any two random variables $X$ and $Y$,
$$\mathbf{Var}[X+Y]=\mathbf{Var}[X]+\mathbf{Var}[Y]+2\mathbf{Cov}(X,Y).$$
- Generally, for any random variables $X_1,X_2,\ldots,X_n$,
$$\mathbf{Var}\left[\sum_{i=1}^nX_i\right]=\sum_{i=1}^n\mathbf{Var}[X_i]+\sum_{i\ne j}\mathbf{Cov}(X_i,X_j).$$

**Proof.** The equation for two variables is directly due to the definitions of variance and covariance. The equation for $n$ variables can be deduced from the equation for two variables by induction. $\square$

We will see that when the random variables are independent, the variance of the sum is equal to the sum of the variances. To prove this, we first establish a very useful result regarding the expectation of a product.

**Theorem**
- For any two independent random variables $X$ and $Y$,
$$\mathbf{E}[X\cdot Y]=\mathbf{E}[X]\cdot\mathbf{E}[Y].$$

**Proof.** For discrete $X$ and $Y$, by independence,
$$
\begin{aligned}
\mathbf{E}[X\cdot Y]
&=\sum_{x,y}xy\Pr[X=x\wedge Y=y]\\
&=\sum_{x,y}xy\Pr[X=x]\Pr[Y=y]\\
&=\left(\sum_{x}x\Pr[X=x]\right)\left(\sum_{y}y\Pr[Y=y]\right)\\
&=\mathbf{E}[X]\cdot\mathbf{E}[Y].
\end{aligned}
$$
$\square$

With the above theorem, we can show that the covariance of two independent variables is always zero.

**Theorem**
- For any two independent random variables $X$ and $Y$,
$$\mathbf{Cov}(X,Y)=0.$$

**Proof.**
$$\mathbf{Cov}(X,Y)=\mathbf{E}\left[(X-\mathbf{E}[X])(Y-\mathbf{E}[Y])\right]=\mathbf{E}[X-\mathbf{E}[X]]\cdot\mathbf{E}[Y-\mathbf{E}[Y]]=0,$$
where the second equality holds because $X-\mathbf{E}[X]$ and $Y-\mathbf{E}[Y]$ are independent and both have expectation zero. $\square$

We then have the following theorem for the variance of the sum of pairwise independent random variables.

**Theorem**
- For **pairwise** independent random variables $X_1,X_2,\ldots,X_n$,
$$\mathbf{Var}\left[\sum_{i=1}^nX_i\right]=\sum_{i=1}^n\mathbf{Var}[X_i].$$

**Remark**
- The theorem holds for **pairwise** independent random variables, a much weaker independence requirement than **mutual** independence. This makes the variance-based probability tools work even for weakly random cases; a classic example is given in the sketch below. We will see what this exactly means in future lectures.
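
As a concrete illustration, the following Python snippet checks the theorem on the standard example of three pairwise independent (but not mutually independent) bits $X_1$, $X_2$, $X_3=X_1\oplus X_2$. The construction is classic; the code itself is our illustration, not from the notes.

```python
import itertools

# X1, X2 are independent fair bits; X3 = X1 XOR X2. The three bits are
# pairwise independent but NOT mutually independent (X3 is determined
# by X1 and X2). The four outcomes below are equally likely.
outcomes = [(x1, x2, x1 ^ x2) for x1, x2 in itertools.product([0, 1], repeat=2)]

def var(f):
    """Exact variance of f(outcome) under the uniform distribution."""
    vals = [f(o) for o in outcomes]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

lhs = var(lambda o: sum(o))                            # Var[X1 + X2 + X3]
rhs = sum(var(lambda o, i=i: o[i]) for i in range(3))  # sum of Var[Xi]
print(lhs, rhs)  # both 0.75: the theorem needs only pairwise independence
```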

### Variance of binomial distribution

For a Bernoulli trial with parameter $p$,
$$X=\begin{cases}1 & \text{with probability }p,\\ 0 & \text{with probability }1-p.\end{cases}$$

The variance is
$$\mathbf{Var}[X]=\mathbf{E}\left[X^2\right]-(\mathbf{E}[X])^2=\mathbf{E}[X]-(\mathbf{E}[X])^2=p-p^2=p(1-p).$$

Let $Y$ be a binomial random variable with parameters $n$ and $p$, i.e. $Y=\sum_{i=1}^nX_i$, where the $X_i$'s are i.i.d. Bernoulli trials with parameter $p$. The variance is
$$\mathbf{Var}[Y]=\mathbf{Var}\left[\sum_{i=1}^nX_i\right]=\sum_{i=1}^n\mathbf{Var}[X_i]=np(1-p).$$
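
A quick sanity check of the formula $\mathbf{Var}[Y]=np(1-p)$, computed exactly from the binomial distribution; the snippet (with arbitrarily chosen $n$ and $p$) is our illustration, not from the notes.

```python
from math import comb

def binomial_variance(n, p):
    """Exact Var[Y] for Y ~ Bin(n, p), computed from the distribution."""
    probs = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    mean = sum(k * q for k, q in enumerate(probs))
    return sum((k - mean) ** 2 * q for k, q in enumerate(probs))

n, p = 20, 0.3
print(binomial_variance(n, p), n * p * (1 - p))  # both 4.2
```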

## Chebyshev's inequality

Given both the expectation and the variance of a random variable, one can derive a stronger tail bound, known as Chebyshev's Inequality.

**Theorem (Chebyshev's Inequality)**
- For any $t>0$,
$$\Pr\left[|X-\mathbf{E}[X]|\ge t\right]\le\frac{\mathbf{Var}[X]}{t^2}.$$

**Proof.** Observe that
$$\Pr[|X-\mathbf{E}[X]|\ge t]=\Pr\left[(X-\mathbf{E}[X])^2\ge t^2\right].$$
Since $(X-\mathbf{E}[X])^2$ is a nonnegative random variable, we can apply Markov's inequality, such that
$$\Pr\left[(X-\mathbf{E}[X])^2\ge t^2\right]\le\frac{\mathbf{E}\left[(X-\mathbf{E}[X])^2\right]}{t^2}=\frac{\mathbf{Var}[X]}{t^2}.$$
$\square$
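
To get a feel for the relative strength of the two inequalities, the snippet below compares Markov's bound, Chebyshev's bound, and the exact tail probability $\Pr\left[Y\ge\frac{3}{4}n\right]$ for $Y\sim\mathrm{Bin}(n,1/2)$; the numbers are illustrative choices of ours, not from the notes.

```python
import math

n = 100
mean, var = n / 2, n / 4      # E[Y] = np and Var[Y] = np(1-p) with p = 1/2
t = 3 * n / 4                 # tail event: Y >= 3n/4

exact = sum(math.comb(n, k) for k in range(int(t), n + 1)) / 2 ** n
markov = mean / t                        # Pr[Y >= t] <= E[Y] / t
chebyshev = var / (t - mean) ** 2        # Pr[|Y - E[Y]| >= t - E[Y]] <= Var / (t - E[Y])^2

print(f"exact ~ {exact:.2e}, Markov <= {markov:.3f}, Chebyshev <= {chebyshev:.3f}")
```

Chebyshev's bound ($0.04$) is far sharper than Markov's ($2/3$) here, at the price of requiring the variance in addition to the expectation.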

# Median Selection

The selection problem is the problem of finding the $k$th smallest element in a set $S$ of $n$ elements. A typical case of the selection problem is finding the **median**.

**Definition**
- The median of a set $S$ is the $\left\lceil\frac{n}{2}\right\rceil$th smallest element in the sorted order of $S$.

The median can be found in $O(n\log n)$ time by sorting. There is a linear-time deterministic algorithm, the "median of medians" algorithm, which is quite sophisticated. Here we introduce a much simpler randomized algorithm which also runs in linear time.

## The LazySelect algorithm

We introduce a randomized median selection algorithm called **LazySelect**, which is a variant of a randomized algorithm due to Floyd and Rivest.

The idea of this algorithm is random sampling. For a set $S$, let $m\in S$ denote the median. We observe that if we can find two elements $d,u\in S$ satisfying the following properties:

- The median is between $d$ and $u$ in the sorted order, i.e. $d\le m\le u$;
- The total number of elements between $d$ and $u$ is small, specifically for $C=\{x\in S\mid d\le x\le u\}$, $|C|=o\left(\frac{n}{\log n}\right)$.

Provided $d$ and $u$ with these two properties, within linear time we can compute the ranks of $d$ and $u$ in $S$, construct $C$, and sort $C$ (sorting $C$ costs $o(n)$ time since $|C|=o(n/\log n)$). Therefore, the median $m$ of $S$ can be picked from $C$ in linear time.

So how can we select such elements $d$ and $u$ from $S$? Certainly sorting would give us the elements, but isn't that exactly what we want to avoid in the first place?

Observe that $d$ and $u$ are only asked to roughly satisfy some constraints. This hints that maybe we can construct a *sketch* of $S$ which is small enough to sort cheaply and roughly represents $S$, and then pick $d$ and $u$ from this sketch. We construct the sketch by randomly sampling a relatively small number of elements from $S$. Then the strategy of the algorithm is outlined by:

- Sample a set $R$ of elements from $S$.
- Sort $R$ and choose $d$ and $u$ somewhere around the median of $R$.
- If $d$ and $u$ have the desirable properties, we can compute the median in linear time, or otherwise the algorithm fails.

The parameters to be fixed are: the size of $R$ (small enough to sort in linear time and large enough to contain sufficient information about $S$); and the orders of $d$ and $u$ in $R$ (not too close, so that the median lies between them, and not too far apart, so that $C$ is sortable in linear time).

We choose the size of $R$ as $n^{3/4}$, and $d$ and $u$ are within a $\sqrt{n}$ range around the median of $R$.

**LazySelect**

**Input:** a set $S$ of $n$ elements over a totally ordered domain.

1. Pick a multi-set $R$ of $\left\lceil n^{3/4}\right\rceil$ elements in $S$, chosen independently and uniformly at random with replacement, and sort $R$.
2. Let $d$ be the $\left\lfloor\frac{1}{2}n^{3/4}-\sqrt{n}\right\rfloor$-th smallest element in $R$, and let $u$ be the $\left\lceil\frac{1}{2}n^{3/4}+\sqrt{n}\right\rceil$-th smallest element in $R$.
3. Construct $C=\{x\in S\mid d\le x\le u\}$ and compute the ranks $r_d=|\{x\in S\mid x<d\}|$ and $r_u=|\{x\in S\mid x<u\}|$.
4. If $r_d>\frac{n}{2}$ or $r_u<\frac{n}{2}$ or $|C|>4n^{3/4}$ then return FAIL.
5. Sort $C$ and return the $\left(\left\lfloor\frac{n}{2}\right\rfloor-r_d+1\right)$-th element in the sorted order of $C$.

"Sample with replacement" (有放回采样) means that after sampling an element, we put the element back to the set. In this way, each sampled element is independently and identically distributed (*i.i.d*) (独立同分布). In the above algorithm, this is for our convenience of analysis.

## Analysis

The algorithm always terminates in linear time because each line of the algorithm costs at most linear time. The last three lines guarantee that the algorithm returns the correct median if it does not fail.

We then only need to bound the probability that the algorithm returns a FAIL. Let $m\in S$ be the median of $S$. By Line 4, we know that the algorithm returns a FAIL if and only if at least one of the following events occurs:

- $\mathcal{E}_1$: $Y_1=|\{x\in R\mid x\le m\}|<\frac{1}{2}n^{3/4}-\sqrt{n}$;
- $\mathcal{E}_2$: $Y_2=|\{x\in R\mid x\ge m\}|<\frac{1}{2}n^{3/4}-\sqrt{n}$;
- $\mathcal{E}_3$: $|C|>4n^{3/4}$.

$\mathcal{E}_3$ directly follows the third condition in Line 4. $\mathcal{E}_1$ and $\mathcal{E}_2$ are a bit tricky. The first condition in Line 4 is that $r_d>\frac{n}{2}$, which looks not exactly the same as $\mathcal{E}_1$, but both $\mathcal{E}_1$ and the event $r_d>\frac{n}{2}$ are equivalent to the same event: the $\left\lfloor\frac{1}{2}n^{3/4}-\sqrt{n}\right\rfloor$-th smallest element in $R$ is greater than $m$; thus they are actually equivalent. Similarly, $\mathcal{E}_2$ is equivalent to the second condition of Line 4.

We now bound the probabilities of these events one by one.

**Lemma 1**
- $\Pr[\mathcal{E}_1]\le\frac{1}{4}n^{-1/4}$.

**Proof.** Let $X_i$ be the $i$th sampled element in Line 1 of the algorithm. Let $Y_i$ be an indicator random variable such that
$$Y_i=\begin{cases}1 & \text{if }X_i\le m,\\ 0 & \text{otherwise.}\end{cases}$$
It is obvious that $Y_1=\sum_{i=1}^{n^{3/4}}Y_i$, where $Y_1$ is as defined in $\mathcal{E}_1$. For every $X_i$, there are $\left\lceil\frac{n}{2}\right\rceil$ elements in $S$ that are less than or equal to the median. The probability that $Y_i=1$ is
$$p=\Pr[Y_i=1]=\Pr[X_i\le m]=\frac{1}{n}\left\lceil\frac{n}{2}\right\rceil,$$
which is within the range of $\left[\frac{1}{2},\frac{1}{2}+\frac{1}{2n}\right]$. Thus
$$\mathbf{E}[Y_1]=n^{3/4}p\ge\frac{1}{2}n^{3/4}.$$

The event $\mathcal{E}_1$ is defined as $Y_1<\frac{1}{2}n^{3/4}-\sqrt{n}$.

Note that the $Y_i$'s are Bernoulli trials, and $Y_1$ is the sum of $n^{3/4}$ Bernoulli trials, which follows the binomial distribution with parameters $n^{3/4}$ and $p$. Thus, the variance is
$$\mathbf{Var}[Y_1]=n^{3/4}p(1-p)\le\frac{1}{4}n^{3/4}.$$

Applying Chebyshev's inequality,
$$\Pr[\mathcal{E}_1]=\Pr\left[Y_1<\frac{1}{2}n^{3/4}-\sqrt{n}\right]\le\Pr\left[\left|Y_1-\mathbf{E}[Y_1]\right|>\sqrt{n}\right]\le\frac{\mathbf{Var}[Y_1]}{n}\le\frac{1}{4}n^{-1/4}.$$
$\square$

By a similar analysis, we can obtain the following bound for the event $\mathcal{E}_2$.

**Lemma 2**
- $\Pr[\mathcal{E}_2]\le\frac{1}{4}n^{-1/4}$.

We now bound the probability of the event $\mathcal{E}_3$.

**Lemma 3**
- $\Pr[\mathcal{E}_3]\le\frac{1}{2}n^{-1/4}$.

**Proof.** The event $\mathcal{E}_3$ is defined as $|C|>4n^{3/4}$, which by the Pigeonhole Principle implies that at least one of the following must be true:

- $\mathcal{E}_3'$: at least $2n^{3/4}$ elements of $C$ are greater than $m$;
- $\mathcal{E}_3''$: at least $2n^{3/4}$ elements of $C$ are smaller than $m$.

We bound the probability that $\mathcal{E}_3'$ occurs; the second event will have the same bound by symmetry.

Recall that $C$ is the region in $S$ between $d$ and $u$. If there are at least $2n^{3/4}$ elements of $C$ greater than the median $m$ of $S$, then the rank of $u$ in the sorted order of $S$ must be at least $\frac{1}{2}n+2n^{3/4}$, and thus $R$ has at least $\frac{1}{2}n^{3/4}-\sqrt{n}$ samples among the $\frac{1}{2}n-2n^{3/4}$ largest elements in $S$.

Let $X_i\in\{0,1\}$ indicate whether the $i$th sample is among the $\frac{1}{2}n-2n^{3/4}$ largest elements in $S$. Let $X=\sum_{i=1}^{n^{3/4}}X_i$ be the number of samples in $R$ among the $\frac{1}{2}n-2n^{3/4}$ largest elements in $S$. It holds that
$$p=\Pr[X_i=1]=\frac{\frac{1}{2}n-2n^{3/4}}{n}=\frac{1}{2}-2n^{-1/4}.$$

$X$ is a binomial random variable with
$$\mathbf{E}[X]=n^{3/4}p=\frac{1}{2}n^{3/4}-2\sqrt{n},$$
and
$$\mathbf{Var}[X]=n^{3/4}p(1-p)=n^{3/4}\left(\frac{1}{4}-4n^{-1/2}\right)<\frac{1}{4}n^{3/4}.$$

Applying Chebyshev's inequality,
$$\Pr[\mathcal{E}_3']=\Pr\left[X\ge\frac{1}{2}n^{3/4}-\sqrt{n}\right]\le\Pr\left[\left|X-\mathbf{E}[X]\right|\ge\sqrt{n}\right]\le\frac{\mathbf{Var}[X]}{n}\le\frac{1}{4}n^{-1/4}.$$

Symmetrically, we have that $\Pr[\mathcal{E}_3'']\le\frac{1}{4}n^{-1/4}$.

Applying the union bound,
$$\Pr[\mathcal{E}_3]\le\Pr[\mathcal{E}_3']+\Pr[\mathcal{E}_3'']\le\frac{1}{2}n^{-1/4}.$$
$\square$

Combining the three bounds and applying the union bound to them, the probability that the algorithm returns a FAIL is at most
$$\Pr[\mathcal{E}_1]+\Pr[\mathcal{E}_2]+\Pr[\mathcal{E}_3]\le\frac{1}{4}n^{-1/4}+\frac{1}{4}n^{-1/4}+\frac{1}{2}n^{-1/4}=n^{-1/4}.$$

Therefore the algorithm always terminates in linear time and returns the correct median with high probability.
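
Using the `lazy_select` sketch given earlier, the failure probability can also be checked empirically; for $n\approx 10^4$ the bound $n^{-1/4}$ is about $0.1$, and the observed FAIL rate is typically far smaller. This experiment is our illustration, not from the notes.

```python
import random

# Assumes the lazy_select function from the sketch above is in scope.
trials = 200
fails = sum(lazy_select(random.sample(range(10 ** 6), 10001)) is None
            for _ in range(trials))
print("observed FAIL rate:", fails / trials)  # bound: n^(-1/4) ~ 0.1
```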

# Erdős–Rényi Random Graphs

Consider a graph $G(V,E)$ which is randomly generated as follows:

- $V=\{1,2,\ldots,n\}$;
- $\forall\{u,v\}\in{V\choose 2}$, $uv\in E$ independently with probability $p$.

Such a graph is denoted as $G(n,p)$. This is called the **Erdős–Rényi model** or **$G(n,p)$ model** for random graphs.

Informally, the presence of each edge of $G(n,p)$ is determined by an independent coin flip (with probability $p$ of HEADs).
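
Sampling from the model is exactly this coin-flipping process; below is a minimal Python sketch (our illustration, not from the notes).

```python
import itertools
import random

def gnp(n, p):
    """Sample G(n, p): each of the C(n, 2) possible edges is included
    independently with probability p."""
    return {e for e in itertools.combinations(range(n), 2)
            if random.random() < p}

G = gnp(10, 0.3)
print(len(G), "edges out of 45 possible")
```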

## Monotone properties

A graph property is a predicate of graphs which depends only on the structure of the graph.

**Definition**
- Let $\mathcal{G}_n=2^{V\choose 2}$, where $V=\{1,2,\ldots,n\}$, be the set of all possible graphs on $n$ vertices. A **graph property** is a boolean function $P:\mathcal{G}_n\to\{0,1\}$ which is invariant under permutation of vertices, i.e. $P(G)=P(H)$ whenever $G$ is isomorphic to $H$.

We are interested in monotone properties, i.e., those properties such that adding edges never changes a graph from having the property to not having it.

**Definition**
- A graph property $P$ is **monotone** if for any $G\subseteq H$, both on $n$ vertices, $G$ having property $P$ implies $H$ having property $P$.

By seeing the property as a function mapping a set of edges to a numerical value in $\{0,1\}$, a monotone property is just a monotonically increasing set function.

Some examples of monotone graph properties:

- Hamiltonian;
- $k$-clique;
- contains a subgraph isomorphic to some $H$;
- non-planar;
- chromatic number $\chi(G)>k$ (i.e., not $k$-colorable);
- girth $<\ell$.

From the last two properties, you can see another reason that the Erdős theorem (that there exist graphs with both large girth and large chromatic number) is unintuitive.

Some examples of **non-**monotone graph properties:

- Eulerian;
- contains an *induced* subgraph isomorphic to some $H$.

For all monotone graph properties, we have the following theorem.

**Theorem**
- Let $P$ be a monotone graph property. Suppose $G_1=G(n,p_1)$, $G_2=G(n,p_2)$, and $0\le p_1\le p_2\le 1$. Then
$$\Pr[P(G_1)]\le\Pr[P(G_2)].$$

Although the statement in the theorem looks very natural, it is difficult to evaluate the probability that a random graph has some property. However, the theorem can be very easily proved by using the idea of coupling, a proof technique in probability theory which compares two unrelated random variables by forcing them to be related.

**Proof.** For any $\{u,v\}\in{[n]\choose 2}$, let $X_{\{u,v\}}$ be independently and uniformly distributed over the continuous interval $[0,1]$. Let $uv\in G_1$ if and only if $X_{\{u,v\}}\in[0,p_1]$ and let $uv\in G_2$ if and only if $X_{\{u,v\}}\in[0,p_2]$.

It is obvious that $G_1\sim G(n,p_1)$ and $G_2\sim G(n,p_2)$. For any $\{u,v\}$, $uv\in G_1$ means that $X_{\{u,v\}}\in[0,p_1]\subseteq[0,p_2]$, which implies that $uv\in G_2$. Thus, $G_1\subseteq G_2$.

Since $P$ is monotone, $P(G_1)=1$ implies $P(G_2)=1$. Thus,
$$\Pr[P(G_1)=1]\le\Pr[P(G_2)=1].$$
$\square$
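
The coupling in the proof is constructive and can be run directly; the sketch below (our illustration, not from the notes) draws one uniform variable per vertex pair and uses it to decide both graphs at once.

```python
import itertools
import random

def coupled_gnp(n, p1, p2):
    """Coupling from the proof: a single uniform X_uv in [0,1] per vertex
    pair decides membership in both graphs, so G1 is marginally G(n, p1),
    G2 is marginally G(n, p2), and G1 is always a subgraph of G2."""
    X = {e: random.random() for e in itertools.combinations(range(n), 2)}
    G1 = {e for e, x in X.items() if x <= p1}
    G2 = {e for e, x in X.items() if x <= p2}
    return G1, G2

G1, G2 = coupled_gnp(20, 0.1, 0.4)
print(G1 <= G2)  # always True, hence Pr[P(G1)] <= Pr[P(G2)] for monotone P
```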

# Threshold phenomenon

One of the most fascinating phenomena of random graphs is that for many natural graph properties, the random graph suddenly changes from almost always not having the property to almost always having the property as $p$ grows within a very small range.

A monotone graph property $P(G)$ is said to have the **threshold** $p(n)$ if

- when $p\ll p(n)$, $\Pr[P(G(n,p))]\to 0$ as $n\to\infty$ ($G(n,p)$ almost always does not have $P$); and
- when $p\gg p(n)$, $\Pr[P(G(n,p))]\to 1$ as $n\to\infty$ ($G(n,p)$ almost always has $P$).

The classic method for proving the threshold is the so-called second moment method (Chebyshev's inequality).

## Threshold for 4-clique

**Theorem**
- The threshold for a random graph $G(n,p)$ to contain a 4-clique is $p=n^{-2/3}$.

We formulate the problem as follows. For any 4-subset of vertices $S\in{V\choose 4}$, let $X_S$ be the indicator random variable such that
$$X_S=\begin{cases}1 & \text{if }S\text{ is a clique},\\ 0 & \text{otherwise.}\end{cases}$$

Let $X=\sum_{S\in{V\choose 4}}X_S$ be the total number of 4-cliques in $G$.

It is sufficient to prove the following lemma.

**Lemma**
- If $p=o(n^{-2/3})$, then $\Pr[X\ge 1]\to 0$ as $n\to\infty$.
- If $p=\omega(n^{-2/3})$, then $\Pr[X\ge 1]\to 1$ as $n\to\infty$.

**Proof.** The first claim is proved by the first moment method (expectation and Markov's inequality) and the second claim is proved by the second moment method (Chebyshev's inequality).

Every 4-clique has 6 edges, thus for any $S\in{V\choose 4}$,

- $\mathbf{E}[X_S]=\Pr[X_S=1]=p^6$.

By the linearity of expectation,

- $\mathbf{E}[X]=\sum_{S\in{V\choose 4}}\mathbf{E}[X_S]={n\choose 4}p^6$.

Applying Markov's inequality,

- $\Pr[X\ge 1]\le\mathbf{E}[X]=O(n^4p^6)=o(1)$, if $p=o(n^{-2/3})$.

The first claim is proved.

To prove the second claim, it is equivalent to show that $\Pr[X=0]=o(1)$ if $p=\omega(n^{-2/3})$. By Chebyshev's inequality,

- $\Pr[X=0]\le\Pr[|X-\mathbf{E}[X]|\ge\mathbf{E}[X]]\le\frac{\mathbf{Var}[X]}{(\mathbf{E}[X])^2}$,

where the variance is computed as

- $\mathbf{Var}[X]=\mathbf{Var}\left[\sum_{S\in{V\choose 4}}X_S\right]=\sum_{S\in{V\choose 4}}\mathbf{Var}[X_S]+\sum_{S\ne T}\mathbf{Cov}(X_S,X_T)$.

For any $S\in{V\choose 4}$,

- $\mathbf{Var}[X_S]=\mathbf{E}[X_S^2]-(\mathbf{E}[X_S])^2\le\mathbf{E}[X_S^2]=\mathbf{E}[X_S]=p^6$. Thus the first term of the above formula is $\sum_{S\in{V\choose 4}}\mathbf{Var}[X_S]=O(n^4p^6)$.

We now compute the covariances. For any $S,T\in{V\choose 4}$ with $S\ne T$:

- Case 1: $|S\cap T|\le 1$, so $S$ and $T$ do not share any edges. $X_S$ and $X_T$ are independent, thus $\mathbf{Cov}(X_S,X_T)=0$.
- Case 2: $|S\cap T|=2$, so $S$ and $T$ share an edge. Since $|S\cup T|=6$, there are $O(n^6)$ pairs of such $S$ and $T$.
$$\mathbf{Cov}(X_S,X_T)=\mathbf{E}[X_SX_T]-\mathbf{E}[X_S]\mathbf{E}[X_T]\le\mathbf{E}[X_SX_T]=\Pr[X_S=1\wedge X_T=1]=p^{11},$$
since there are 11 edges in the union of two 4-cliques that share a common edge. The contribution of these pairs is $O(n^6p^{11})$.
- Case 3: $|S\cap T|=3$, so $S$ and $T$ share a triangle. Since $|S\cup T|=5$, there are $O(n^5)$ pairs of such $S$ and $T$. By the same argument,
$$\mathbf{Cov}(X_S,X_T)\le\Pr[X_S=1\wedge X_T=1]=p^{9},$$
since there are 9 edges in the union of two 4-cliques that share a triangle. The contribution of these pairs is $O(n^5p^9)$.

Putting all these together,
$$\mathbf{Var}[X]=O\left(n^4p^6+n^6p^{11}+n^5p^9\right).$$

And
$$\frac{\mathbf{Var}[X]}{(\mathbf{E}[X])^2}=O\left(n^{-4}p^{-6}+n^{-2}p^{-1}+n^{-3}p^{-3}\right),$$
which is $o(1)$ if $p=\omega(n^{-2/3})$. The second claim is also proved. $\square$
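
The threshold behavior is already visible at modest sizes. The brute-force experiment below (our illustration; the constants are arbitrary choices) samples $G(n,p)$ at $p=c\cdot n^{-2/3}$ for small and large $c$.

```python
import itertools
import random

def has_4_clique(n, p):
    """Sample G(n, p) and check for a 4-clique by brute force
    (fine for the small n used here)."""
    adj = {e for e in itertools.combinations(range(n), 2)
           if random.random() < p}
    return any(all(e in adj for e in itertools.combinations(S, 2))
               for S in itertools.combinations(range(n), 4))

n, trials = 30, 30
for c in [0.2, 1.0, 5.0]:               # p = c * n^(-2/3)
    p = c * n ** (-2 / 3)
    hits = sum(has_4_clique(n, p) for _ in range(trials))
    print(f"c = {c}: 4-clique found in {hits}/{trials} samples")
```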

## Threshold for balanced subgraphs

The above theorem can be generalized to any "balanced" subgraphs.

**Definition**
- The **density** of a graph $G(V,E)$, denoted $\rho(G)$, is defined as $\rho(G)=\frac{|E|}{|V|}$.
- A graph $G(V,E)$ is **balanced** if $\rho(H)\le\rho(G)$ for all subgraphs $H$ of $G$.
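
These definitions are easy to check by brute force for small graphs; in the sketch below (our illustration, not from the notes), it suffices to examine induced subgraphs, since for a fixed vertex set, keeping all available edges maximizes the density.

```python
import itertools

def rho(num_vertices, edges):
    """Density rho(G) = |E| / |V|."""
    return len(edges) / num_vertices

def is_balanced(n, edges):
    """Check rho(H) <= rho(G) for every subgraph H of G (vertices 0..n-1).
    Only induced subgraphs need checking: they maximize the density
    over all subgraphs on the same vertex set."""
    bound = rho(n, edges)
    for k in range(1, n + 1):
        for W in itertools.combinations(range(n), k):
            sub = [e for e in edges if e[0] in W and e[1] in W]
            if rho(k, sub) > bound:
                return False
    return True

K4 = list(itertools.combinations(range(4), 2))
print(is_balanced(4, K4))  # True: cliques are balanced
```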

Cliques are balanced, because $\frac{{k\choose 2}}{k}\le\frac{{n\choose 2}}{n}$ for any $k\le n$. The threshold for containing a 4-clique is a direct corollary of the following general theorem.

**Theorem (Erdős–Rényi 1960)**
- Let $H$ be a balanced graph with $k$ vertices and $\ell$ edges. The threshold for the property that a random graph $G(n,p)$ contains a (not necessarily induced) subgraph isomorphic to $H$ is $p=n^{-k/\ell}$.

**Sketch of proof.** For any $S\in{V\choose k}$, let $X_S$ indicate whether $G_S$ (the subgraph of $G$ induced by $S$) contains a subgraph isomorphic to $H$. Then

- $p^{\ell}\le\mathbf{E}[X_S]\le k!p^{\ell}$, since there are at most $k!$ ways to match the substructure.

Note that $k$ does not depend on $n$. Thus, $\mathbf{E}[X_S]=\Theta(p^{\ell})$. Let $X=\sum_{S\in{V\choose k}}X_S$ be the number of $H$-subgraphs.

- $\mathbf{E}[X]=\Theta(n^kp^{\ell})$.

By Markov's inequality, $\Pr[X\ge 1]\le\mathbf{E}[X]=\Theta(n^kp^{\ell})$, which is $o(1)$ when $p\ll n^{-k/\ell}$.

By Chebyshev's inequality, $\Pr[X=0]\le\frac{\mathbf{Var}[X]}{(\mathbf{E}[X])^2}$, where

- $\mathbf{Var}[X]=\sum_{S\in{V\choose k}}\mathbf{Var}[X_S]+\sum_{S\ne T}\mathbf{Cov}(X_S,X_T)$.

The first term $\sum_{S\in{V\choose k}}\mathbf{Var}[X_S]\le\sum_{S\in{V\choose k}}\mathbf{E}[X_S^2]=\sum_{S\in{V\choose k}}\mathbf{E}[X_S]=\mathbf{E}[X]=\Theta(n^kp^{\ell})$.

For the covariances, $\mathbf{Cov}(X_S,X_T)\ne 0$ only if $|S\cap T|=i$ for $2\le i\le k-1$. Note that $|S\cap T|=i$ implies that $|S\cup T|=2k-i$. And for balanced $H$, since no subgraph of $H$ is denser than $H$ itself, the $i$ shared vertices span at most $\frac{i\ell}{k}$ edges of the two copies of $H$, so the two copies involve at least $2\ell-\frac{i\ell}{k}$ edges in total. Thus, $\mathbf{Cov}(X_S,X_T)\le\mathbf{E}[X_SX_T]=O\left(p^{2\ell-i\ell/k}\right)$, since $k$ is a constant. And,
$$\sum_{S\ne T}\mathbf{Cov}(X_S,X_T)=\sum_{i=2}^{k-1}O\left(n^{2k-i}p^{2\ell-i\ell/k}\right).$$

Therefore, when $p\gg n^{-k/\ell}$,

- $\frac{\mathbf{Var}[X]}{(\mathbf{E}[X])^2}\le O\left(n^{-k}p^{-\ell}\right)+\sum_{i=2}^{k-1}O\left(n^{-i}p^{-i\ell/k}\right)=o(1)$,

and thus $\Pr[X=0]=o(1)$, which proves the second claim. $\square$