Randomized Algorithms (Fall 2015)/Concentration of Measure


Conditional Expectations

The conditional expectation of a random variable $X$ with respect to an event $\mathcal{E}$ is defined by

$\mathbf{E}[X\mid\mathcal{E}]=\sum_{x}x\Pr[X=x\mid\mathcal{E}].$

In particular, if the event $\mathcal{E}$ is $Y=a$, the conditional expectation

$\mathbf{E}[X\mid Y=a]$

defines a function

$f(a)=\mathbf{E}[X\mid Y=a].$

Thus, $\mathbf{E}[X\mid Y]$ can be regarded as a random variable $f(Y)$.

Example
Suppose that we uniformly sample a human from all human beings. Let $X$ be his/her height, and let $Y$ be the country where he/she is from. For any country $a$, $\mathbf{E}[X\mid Y=a]$ gives the average height of that country. And $\mathbf{E}[X\mid Y]$ is the random variable which can be defined in either of the following two ways:
  • We choose a human uniformly at random from all human beings, and $\mathbf{E}[X\mid Y]$ is the average height of the country where he/she comes from.
  • We choose a country at random with a probability proportional to its population, and $\mathbf{E}[X\mid Y]$ is the average height of the chosen country.

The following proposition states some fundamental facts about conditional expectation.

Proposition (fundamental facts about conditional expectation)
Let $X$, $Y$ and $Z$ be arbitrary random variables. Let $f$ and $g$ be arbitrary functions. Then
  1. $\mathbf{E}[X]=\mathbf{E}[\mathbf{E}[X\mid Y]]$.
  2. $\mathbf{E}[X\mid Z]=\mathbf{E}[\mathbf{E}[X\mid Y,Z]\mid Z]$.
  3. $\mathbf{E}[g(X)f(X,Y)]=\mathbf{E}[g(X)\mathbf{E}[f(X,Y)\mid X]]$.

The proposition can be formally verified by directly computing these expectations. Although the equations look formal, their intuitive interpretations are very clear.

The first equation:

$\mathbf{E}[X]=\mathbf{E}[\mathbf{E}[X\mid Y]]$

says that there are two ways to compute an average. Suppose again that $X$ is the height of a uniform random human and $Y$ is the country where he/she is from. There are two ways to compute the average human height: one is to directly average over the heights of all humans; the other is to first compute the average height for each country, and then average over these heights weighted by the populations of the countries.
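
As a quick sanity check of the identity $\mathbf{E}[X]=\mathbf{E}[\mathbf{E}[X\mid Y]]$, the following small Python sketch computes an average height in both ways on made-up data (the country names, populations and heights below are purely illustrative assumptions):

  import random

  # made-up data: country -> (population, mean height in cm); purely illustrative
  countries = {'A': (1000, 170.0), 'B': (3000, 165.0), 'C': (500, 180.0)}

  rng = random.Random(0)
  # build the whole population; each person's height is sampled around the country mean
  people = [(c, rng.gauss(mu, 5.0))
            for c, (pop, mu) in countries.items() for _ in range(pop)]

  # way 1: directly average over the heights of all humans, i.e. E[X]
  direct = sum(h for _, h in people) / len(people)

  # way 2: average height per country, then average weighted by population, i.e. E[E[X|Y]]
  total, count = {}, {}
  for c, h in people:
      total[c] = total.get(c, 0.0) + h
      count[c] = count.get(c, 0) + 1
  avg = {c: total[c] / count[c] for c in total}            # E[X | Y = c]
  weighted = sum(count[c] * avg[c] for c in avg) / len(people)

  print(direct, weighted)   # the two numbers coincide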

The second equation:

$\mathbf{E}[X\mid Z]=\mathbf{E}[\mathbf{E}[X\mid Y,Z]\mid Z]$

is the same as the first one, restricted to a particular subspace. As in the previous example, in addition to the height $X$ and the country $Y$, let $Z$ be the gender of the individual. Thus, $\mathbf{E}[X\mid Z]$ is the average height of a human being of a given sex. Again, this can be computed either directly or on a country-by-country basis.

The third equation:

$\mathbf{E}[g(X)f(X,Y)]=\mathbf{E}[g(X)\mathbf{E}[f(X,Y)\mid X]]$

looks obscure at first glance, especially considering that $X$ and $Y$ are not necessarily independent. Nevertheless, the equation follows from the simple fact that conditioning on any $X=a$, the function value $g(X)=g(a)$ becomes a constant, and thus can be safely taken outside the expectation due to the linearity of expectation. For any value $X=a$,

$\mathbf{E}[g(X)f(X,Y)\mid X=a]=\mathbf{E}[g(a)f(X,Y)\mid X=a]=g(a)\mathbf{E}[f(X,Y)\mid X=a].$

Taking the expectation over $X$ on both sides and applying the first equation proves the third one.

The proposition holds in more general cases when $Y$ and $Z$ are sequences of random variables.

Martingales

"Martingale" originally refers to a betting strategy in which the gambler doubles his bet after every loss. Assuming unlimited wealth, this strategy is guaranteed to eventually have a positive net profit. For example, starting from an initial stake 1, after losses, if the th bet wins, then it gives a net profit of

which is a positive number.

However, the assumption of unlimited wealth is unrealistic. For limited wealth, with geometrically increasing bet, it is very likely to end up bankrupt. You should never try this strategy in real life.

Suppose that the gambler is allowed to use any strategy. His stake on the next bet is decided based on the results of all the bets so far. This gives us a highly dependent sequence of random variables $X_0,X_1,\ldots$, where $X_0$ is his initial capital, and $X_i$ represents his capital after the $i$-th bet. Up to different betting strategies, $X_i$ can be arbitrarily dependent on $X_0,\ldots,X_{i-1}$. However, as long as the game is fair, namely, winning and losing with equal chances, conditioning on the past variables $X_0,\ldots,X_{i-1}$, we will expect no change in the value of the present variable $X_i$ on average. Random variables satisfying this property are called a martingale sequence.
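
A minimal simulation of such a fair game illustrates this: under a hypothetical strategy (double the stake after a loss, reset after a win, never bet more than the current capital), the capital after $n$ bets averages out to the initial capital.

  import random

  def play(rounds=100, capital=10.0, rng=random):
      """Fair game: each bet wins or loses with probability 1/2.
      Hypothetical strategy: double the stake after a loss, reset to 1 after a win,
      but never bet more than the current capital."""
      stake = 1.0
      for _ in range(rounds):
          if capital <= 0:
              break
          bet = min(stake, capital)
          if rng.random() < 0.5:
              capital += bet
              stake = 1.0
          else:
              capital -= bet
              stake = 2 * bet
      return capital

  rng = random.Random(1)
  runs = [play(rng=rng) for _ in range(100000)]
  print(sum(runs) / len(runs))   # close to the initial capital 10.0: E[X_n] = X_0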

Definition (martingale)
A sequence of random variables $X_0,X_1,\ldots$ is a martingale if for all $i>0$,
$\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]=X_{i-1}.$

Examples

coin flips
A fair coin is flipped a number of times. Let $Z_j\in\{-1,1\}$ denote the outcome of the $j$-th flip, where $+1$ stands for HEADs and $-1$ for TAILs. Let
$X_0=0\quad\text{and}\quad X_i=\sum_{j\le i}Z_j$.
The random variables $X_0,X_1,X_2,\ldots$ define a martingale.
Proof
We first observe that $\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]=\mathbf{E}[X_i\mid X_{i-1}]$, which intuitively says that the future of the walk depends only on its current position, not on how it got there. This property is also called the Markov property of stochastic processes. Then
$\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]=\mathbf{E}[X_{i-1}+Z_i\mid X_{i-1}]=X_{i-1}+\mathbf{E}[Z_i]=X_{i-1},$
since the coin is fair and thus $\mathbf{E}[Z_i]=0$.
Polya's urn scheme
Consider an urn (just a container) that initially contains $b$ black balls and $w$ white balls. At each step, we uniformly select a ball from the urn, and replace the ball with $c$ balls of the same color. Let $X_0=b/(b+w)$, and let $X_i$ be the fraction of black balls in the urn after the $i$-th step. The sequence $X_0,X_1,\ldots$ is a martingale (a small simulation sketch is given after these examples).
edge exposure in a random graph
Consider a random graph $G$ generated as follows. Let $[n]$ be the set of vertices, and let $m=\binom{n}{2}$ be the number of all possible edges. For convenience, we enumerate these potential edges by $e_1,\ldots,e_m$. For each potential edge $e_j$, we independently flip a fair coin to decide whether the edge $e_j$ appears in $G$. Let $I_j$ be the random variable that indicates whether $e_j\in G$. We are interested in some graph-theoretical parameter, say the chromatic number, of the random graph $G$. Let $\chi(G)$ be the chromatic number of $G$. Let $X_0=\mathbf{E}[\chi(G)]$, and for each $i\ge 1$, let $X_i=\mathbf{E}[\chi(G)\mid I_1,\ldots,I_i]$, namely, the expected chromatic number of the random graph after fixing the first $i$ edges. This process is called the edge exposure of a random graph, as we "expose" the edges one by one in a random graph.

It is nontrivial to formally verify that the edge exposure sequence for a random graph is a martingale. However, we will later see that this construction can be put into a more general context.
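
Returning to Polya's urn: a quick simulation (assuming, for concreteness, that each drawn ball is returned together with one extra ball of its color) confirms that the expected fraction of black balls never drifts from its initial value, as the martingale property predicts.

  import random

  def polya(steps=200, black=1, white=1, rng=random):
      """Polya's urn: draw a uniform ball, return it plus one more of the same color;
      report the final fraction of black balls."""
      for _ in range(steps):
          if rng.random() < black / (black + white):
              black += 1
          else:
              white += 1
      return black / (black + white)

  rng = random.Random(0)
  fractions = [polya(rng=rng) for _ in range(50000)]
  print(sum(fractions) / len(fractions))   # about 0.5, the initial fraction of black balls
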

Generalizations

The martingale can be generalized to be with respect to another sequence of random variables.

Definition (martingale, general version)
A sequence of random variables $Y_0,Y_1,\ldots$ is a martingale with respect to the sequence $X_0,X_1,\ldots$ if, for all $i\ge 0$, the following conditions hold:
  • $Y_i$ is a function of $X_0,X_1,\ldots,X_i$;
  • $\mathbf{E}[Y_{i+1}\mid X_0,\ldots,X_i]=Y_i$.

Therefore, a sequence $X_0,X_1,\ldots$ is a martingale if it is a martingale with respect to itself.

The purpose of this generalization is that we are usually more interested in a function of a sequence of random variables, rather than the sequence itself.

Azuma's Inequality

We introduce a martingale tail inequality, called Azuma's inequality.

Azuma's Inequality
Let $X_0,X_1,\ldots$ be a martingale such that, for all $k\ge 1$,
$|X_k-X_{k-1}|\le c_k.$
Then
$\Pr\left[|X_n-X_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^{n}c_k^2}\right).$
Before formally proving this theorem, some comments are in order. First, unlike the Chernoff bounds, there is no assumption of independence. This shows the power of martingale inequalities.

Second, the condition that

$|X_k-X_{k-1}|\le c_k$

is central to the proof. This condition is sometimes called the bounded difference condition. If we think of the martingale $X_0,X_1,\ldots$ as a process evolving through time, where $X_i$ gives some measurement at time $i$, the bounded difference condition states that the process does not make big jumps. Azuma's inequality says that if so, then it is unlikely that the process wanders far from its starting point.

A special case is when the differences are bounded by a constant. The following corollary is directly implied by Azuma's inequality.

Corollary
Let $X_0,X_1,\ldots$ be a martingale such that, for all $k\ge 1$,
$|X_k-X_{k-1}|\le c.$
Then
$\Pr\left[|X_n-X_0|\ge ct\sqrt{n}\right]\le 2e^{-t^2/2}.$

This corollary states that for any martingale sequence whose differences are bounded by a constant $c$, the probability that it deviates $ct\sqrt{n}$ far away from the starting point after $n$ steps is bounded by $2e^{-t^2/2}$.
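
For a concrete numeric check of the corollary, take the simplest constant-difference martingale, the $\pm1$ random walk from the coin-flip example (so $c=1$): the probability that $|X_n-X_0|\ge t\sqrt{n}$ should stay below $2e^{-t^2/2}$, and a simulation agrees.

  import math
  import random

  n, t, trials = 400, 2.0, 20000
  rng = random.Random(0)
  exceed = 0
  for _ in range(trials):
      walk = sum(rng.choice((-1, 1)) for _ in range(n))      # X_n - X_0 for a +/-1 walk
      if abs(walk) >= t * math.sqrt(n):
          exceed += 1
  print(exceed / trials, "<=", 2 * math.exp(-t * t / 2))     # empirical tail vs. the bound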

Generalization

Azuma's inequality can be generalized to a martingale with respect to another sequence.

Azuma's Inequality (general version)
Let $Y_0,Y_1,\ldots$ be a martingale with respect to the sequence $X_0,X_1,\ldots$ such that, for all $k\ge 1$,
$|Y_k-Y_{k-1}|\le c_k.$
Then
$\Pr\left[|Y_n-Y_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^{n}c_k^2}\right).$

The Proof of Azuma's Inequality

We will only give the formal proof of the non-generalized version. The proof of the general version is almost identical, with the only difference that we work with the random sequence $Y_0,Y_1,\ldots$ conditioning on the sequence $X_0,X_1,\ldots$.

The proof of Azuma's Inequality uses several ideas which are also used in the proof of the Chernoff bounds. We first observe that the total deviation of the martingale sequence can be represented as the sum of the differences in every step. Thus, as with the Chernoff bounds, we are looking for a bound on the deviation of a sum of random variables. The strategy of the proof is almost the same as the proof of the Chernoff bounds: we first apply Markov's inequality to the moment generating function, then we bound the moment generating function, and at last we optimize the parameter of the moment generating function. However, unlike the Chernoff bounds, the martingale differences are not independent any more, so we replace the use of independence in the Chernoff bound by the martingale property. The proof is detailed as follows.

In order to bound the probability of $|X_n-X_0|\ge t$, we first bound the upper tail $\Pr[X_n-X_0\ge t]$. The bound for the lower tail can be symmetrically proved with $X_i$ replaced by $-X_i$.

Represent the deviation as the sum of differences

We define the martingale difference sequence: for $i\ge 1$, let

$Y_i=X_i-X_{i-1}.$

It holds that

$\mathbf{E}[Y_i\mid X_0,\ldots,X_{i-1}]=\mathbf{E}[X_i-X_{i-1}\mid X_0,\ldots,X_{i-1}]=\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]-\mathbf{E}[X_{i-1}\mid X_0,\ldots,X_{i-1}]=X_{i-1}-X_{i-1}=0.$

The second to the last equation is due to the fact that $X_0,X_1,\ldots$ is a martingale and the definition of conditional expectation.

Let $Z_n$ be the accumulated differences

$Z_n=\sum_{i=1}^{n}Y_i.$

The deviation $(X_n-X_0)$ can be computed by the accumulated differences:

$X_n-X_0=\sum_{i=1}^{n}(X_i-X_{i-1})=\sum_{i=1}^{n}Y_i=Z_n.$

We then only need to upper bound the probability of the event $Z_n\ge t$.

Apply Markov's inequality to the moment generating function

The event $Z_n\ge t$ is equivalent to the event $e^{\lambda Z_n}\ge e^{\lambda t}$ for $\lambda>0$. Applying Markov's inequality, we have

$\Pr[Z_n\ge t]=\Pr\left[e^{\lambda Z_n}\ge e^{\lambda t}\right]\le\frac{\mathbf{E}\left[e^{\lambda Z_n}\right]}{e^{\lambda t}}.$

This is exactly the same as what we did to prove the Chernoff bound. Next, we need to bound the moment generating function $\mathbf{E}\left[e^{\lambda Z_n}\right]$.

Bound the moment generating functions

The moment generating function

$\mathbf{E}\left[e^{\lambda Z_n}\right]=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda Z_n}\mid X_0,\ldots,X_{n-1}\right]\right]=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda(Z_{n-1}+Y_n)}\mid X_0,\ldots,X_{n-1}\right]\right]=\mathbf{E}\left[e^{\lambda Z_{n-1}}\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right].$

The first and the last equations are due to the fundamental facts about conditional expectation proved in the first section.

We then upper bound $\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]$ by a constant. To do so, we need the following technical lemma, which is proved by the convexity of $e^{\lambda y}$.

Lemma
Let $Y$ be a random variable such that $\mathbf{E}[Y]=0$ and $|Y|\le c$. Then for $\lambda>0$,
$\mathbf{E}\left[e^{\lambda Y}\right]\le e^{\lambda^2c^2/2}.$
Proof.
Observe that for $\lambda>0$, the function $e^{\lambda y}$ of the variable $y$ is convex in the interval $[-c,c]$. We draw a line between the two endpoints $(-c,e^{-\lambda c})$ and $(c,e^{\lambda c})$. The curve of $e^{\lambda y}$ lies entirely below this line. Thus,

$e^{\lambda y}\le\frac{c-y}{2c}e^{-\lambda c}+\frac{c+y}{2c}e^{\lambda c}=\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{y}{2c}\left(e^{\lambda c}-e^{-\lambda c}\right).$

Since $\mathbf{E}[Y]=0$, we have

$\mathbf{E}\left[e^{\lambda Y}\right]\le\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{\mathbf{E}[Y]}{2c}\left(e^{\lambda c}-e^{-\lambda c}\right)=\frac{e^{\lambda c}+e^{-\lambda c}}{2}.$

By expanding both sides as Taylor series, it can be verified that $\frac{e^{\lambda c}+e^{-\lambda c}}{2}\le e^{\lambda^2c^2/2}$.

Apply the above lemma to the random variable

$(Y_n\mid X_0,\ldots,X_{n-1}).$

We have already shown that its expectation $\mathbf{E}[Y_n\mid X_0,\ldots,X_{n-1}]=0$, and by the bounded difference condition of Azuma's inequality, we have $|Y_n|=|X_n-X_{n-1}|\le c_n$. Thus, due to the above lemma, it holds that

$\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\le e^{\lambda^2c_n^2/2}.$

Back to our analysis of the expectation $\mathbf{E}\left[e^{\lambda Z_n}\right]$, we have

$\mathbf{E}\left[e^{\lambda Z_n}\right]=\mathbf{E}\left[e^{\lambda Z_{n-1}}\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]\le e^{\lambda^2c_n^2/2}\cdot\mathbf{E}\left[e^{\lambda Z_{n-1}}\right].$

Applying the same analysis to $\mathbf{E}\left[e^{\lambda Z_{n-1}}\right]$, we can solve the above recursion to get

$\mathbf{E}\left[e^{\lambda Z_n}\right]\le\prod_{k=1}^{n}e^{\lambda^2c_k^2/2}=\exp\left(\frac{\lambda^2}{2}\sum_{k=1}^{n}c_k^2\right).$

Going back to the Markov's inequality,

$\Pr[Z_n\ge t]\le\frac{\mathbf{E}\left[e^{\lambda Z_n}\right]}{e^{\lambda t}}\le\exp\left(\frac{\lambda^2}{2}\sum_{k=1}^{n}c_k^2-\lambda t\right).$

We then only need to choose a proper $\lambda>0$.

Optimization

By choosing $\lambda=\frac{t}{\sum_{k=1}^{n}c_k^2}$, we have that

$\exp\left(\frac{\lambda^2}{2}\sum_{k=1}^{n}c_k^2-\lambda t\right)=\exp\left(-\frac{t^2}{2\sum_{k=1}^{n}c_k^2}\right).$

Thus, the probability

$\Pr[X_n-X_0\ge t]=\Pr[Z_n\ge t]\le\exp\left(-\frac{t^2}{2\sum_{k=1}^{n}c_k^2}\right).$

The upper tail of Azuma's inequality is proved. By replacing $X_i$ with $-X_i$, the lower tail can be treated just as the upper tail. Applying the union bound, Azuma's inequality is proved.

The Doob martingales

The following definition describes a very general approach for constructing an important type of martingales.

Definition (The Doob sequence)
The Doob sequence of a function $f$ with respect to a sequence of random variables $X_1,\ldots,X_n$ is defined by
$Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i],\quad 0\le i\le n.$
In particular, $Y_0=\mathbf{E}[f(X_1,\ldots,X_n)]$ and $Y_n=f(X_1,\ldots,X_n)$.

The Doob sequence of a function defines a martingale. That is,

$\mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}]=Y_{i-1}$

for any $0<i\le n$.

To prove this claim, we recall the definition that $Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i]$, thus,

$\mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}]=\mathbf{E}\big[\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i]\;\big|\;X_1,\ldots,X_{i-1}\big]=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i-1}]=Y_{i-1},$

where the second equation is due to the fundamental fact about conditional expectation introduced in the first section.

The Doob martingale describes a very natural procedure to determine a function value of a sequence of random variables. Suppose that we want to predict the value of a function $f(X_1,\ldots,X_n)$ of random variables $X_1,\ldots,X_n$. The Doob sequence $Y_0,Y_1,\ldots,Y_n$ represents a sequence of refined estimates of the value of $f(X_1,\ldots,X_n)$, gradually using more information on the values of the random variables $X_1,\ldots,X_n$. The first element $Y_0$ is just the expectation of $f(X_1,\ldots,X_n)$. Element $Y_i$ is the expected value of $f(X_1,\ldots,X_n)$ when the values of $X_1,\ldots,X_i$ are known, and $Y_n=f(X_1,\ldots,X_n)$ when $f(X_1,\ldots,X_n)$ is fully determined by $X_1,\ldots,X_n$.
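
The following short sketch makes the Doob sequence concrete for one hypothetical choice of $f$: the number of HEADs in $n$ fair coin flips. Conditioning on the first $i$ flips, the unrevealed flips contribute $(n-i)/2$ in expectation, so each $Y_i$ has a closed form; the sequence starts at $\mathbf{E}[f]=n/2$ and ends at the realized value of $f$.

  import random

  def doob_sequence(flips):
      """Doob sequence Y_0,...,Y_n for f = number of HEADs, revealing flips one by one.
      Given X_1,...,X_i, we have E[f | X_1,...,X_i] = (#HEADs so far) + (n - i)/2."""
      n = len(flips)
      heads = 0
      seq = [n / 2]                        # Y_0 = E[f]
      for i, x in enumerate(flips, start=1):
          heads += x
          seq.append(heads + (n - i) / 2)  # Y_i
      return seq

  rng = random.Random(0)
  flips = [rng.randint(0, 1) for _ in range(10)]   # 1 = HEADs, 0 = TAILs
  print(doob_sequence(flips))   # starts at 5.0, ends at the realized number of HEADs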

The following two Doob martingales arise in evaluating the parameters of random graphs.

edge exposure martingale
Let $G$ be a random graph on $n$ vertices. Let $f$ be a real-valued function of graphs, such as the chromatic number, the number of triangles, the size of the largest clique or independent set, etc. Denote $m=\binom{n}{2}$. Fix an arbitrary numbering of the potential edges between the $n$ vertices, and denote the edges as $e_1,\ldots,e_m$. Let
$X_i=\begin{cases}1&\text{if }e_i\in G,\\0&\text{otherwise.}\end{cases}$
Let $Y_0=\mathbf{E}[f(G)]$ and for $i=1,2,\ldots,m$, let $Y_i=\mathbf{E}[f(G)\mid X_1,\ldots,X_i]$.
The sequence $Y_0,Y_1,\ldots,Y_m$ gives a Doob martingale that is commonly called the edge exposure martingale.
vertex exposure martingale
Instead of revealing edges one at a time, we could reveal the set of edges connected to a given vertex, one vertex at a time. Suppose that the vertex set is $[n]$. Let $G_i$ be the subgraph of $G$ induced by the vertex set $[i]$, i.e. the first $i$ vertices.
Let $Y_0=\mathbf{E}[f(G)]$ and for $i=1,2,\ldots,n$, let $Y_i=\mathbf{E}[f(G)\mid G_i]$.
The sequence $Y_0,Y_1,\ldots,Y_n$ gives a Doob martingale that is commonly called the vertex exposure martingale.

Chromatic number

The random graph $G(n,p)$ is the graph on $n$ vertices $[n]$, obtained by selecting each pair of vertices to be an edge, randomly and independently, with probability $p$. We denote $G\sim G(n,p)$ if $G$ is generated in this way.

Theorem [Shamir and Spencer (1987)]
Let $G\sim G(n,p)$. Let $\chi(G)$ be the chromatic number of $G$. Then
$\Pr\left[|\chi(G)-\mathbf{E}[\chi(G)]|\ge t\sqrt{n}\right]\le 2e^{-t^2/2}.$
Proof.
Consider the vertex exposure martingale

$Y_i=\mathbf{E}[\chi(G)\mid G_i],$

where each $G_i$ exposes the induced subgraph of $G$ on the vertex set $[i]$. A single vertex can always be given a new color so that the graph is still properly colored, thus the bounded difference condition

$|Y_i-Y_{i-1}|\le 1$

is satisfied. Now apply Azuma's inequality for the martingale $Y_0,Y_1,\ldots,Y_n$ with respect to $G_1,\ldots,G_n$.

For $t=\omega(1)$, the theorem states that the chromatic number of a random graph is tightly concentrated around its mean. The proof gives no clue as to where the mean is. This actually shows how powerful the martingale inequalities are: we can prove that a distribution is concentrated around its expectation without actually knowing the expectation.

Hoeffding's Inequality

The following theorem states the so-called Hoeffding's inequality. It is a generalized version of the Chernoff bounds. Recall that the Chernoff bounds hold for the sum of independent trials. When the random variables are not trials, Hoeffding's inequality is still useful, since it holds for the sum of any independent random variables whose ranges are bounded.

Hoeffding's inequality
Let $X=\sum_{i=1}^{n}X_i$, where $X_1,\ldots,X_n$ are independent random variables with $a_i\le X_i\le b_i$ for each $1\le i\le n$. Let $\mu=\mathbf{E}[X]$. Then
$\Pr[|X-\mu|\ge t]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^{n}(b_i-a_i)^2}\right).$
Proof.
Define the Doob martingale sequence $Y_i=\mathbf{E}\left[\sum_{j=1}^{n}X_j\;\Big|\;X_1,\ldots,X_i\right]$. Obviously $Y_0=\mu$ and $Y_n=X$. By the independence of the $X_j$'s,

$|Y_i-Y_{i-1}|=\left|X_i-\mathbf{E}[X_i]\right|\le b_i-a_i.$

Applying Azuma's inequality for the martingale $Y_0,\ldots,Y_n$ with respect to $X_1,\ldots,X_n$, Hoeffding's inequality is proved.
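
As a numeric illustration of the bound (with arbitrarily chosen parameters), take $X_i$ independent and uniform on $[0,1]$, so $a_i=0$, $b_i=1$ and $\mu=n/2$; the inequality gives $\Pr[|X-\mu|\ge t]\le 2e^{-t^2/(2n)}$, and a simulation stays comfortably below it.

  import math
  import random

  n, t, trials = 100, 20.0, 50000
  rng = random.Random(0)
  exceed = 0
  for _ in range(trials):
      s = sum(rng.random() for _ in range(n))   # X = sum of n independent Uniform[0,1]
      if abs(s - n / 2) >= t:                   # mu = n/2
          exceed += 1
  bound = 2 * math.exp(-t * t / (2 * n))        # 2 exp(-t^2 / (2 sum_i (b_i - a_i)^2))
  print(exceed / trials, "<=", bound)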

The Bounded Difference Method

Combining Azuma's inequality with the construction of Doob martingales, we have the powerful Bounded Difference Method for concentration of measure.

For arbitrary random variables

Given a sequence of random variables $X_1,\ldots,X_n$ and a function $f$, the Doob sequence constructs a martingale from them. Combining this construction with Azuma's inequality, we can get a very powerful theorem called "the method of averaged bounded differences", which bounds the concentration of an arbitrary function of arbitrary random variables (which do not necessarily form a martingale).

Theorem (Method of averaged bounded differences)
Let $X_1,\ldots,X_n$ be arbitrary random variables and let $f$ be a function of $X_1,\ldots,X_n$ satisfying that, for all $1\le i\le n$,
$\left|\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i]-\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i-1}]\right|\le c_i.$
Then
$\Pr\left[\left|f(X_1,\ldots,X_n)-\mathbf{E}[f(X_1,\ldots,X_n)]\right|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^{n}c_i^2}\right).$
Proof.
Define the Doob martingale sequence $Y_0,Y_1,\ldots,Y_n$ by setting $Y_0=\mathbf{E}[f(X_1,\ldots,X_n)]$ and, for $1\le i\le n$, $Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i]$. Then the above theorem is a restatement of Azuma's inequality holding for the martingale $Y_0,Y_1,\ldots,Y_n$.

For independent random variables

The condition of bounded averaged differences is usually hard to check. This severely limits the usefulness of the method. To overcome this, we introduce a property which is much easier to check, called the Lipschitz condition.

Definition (Lipschitz condition)
A function $f(x_1,\ldots,x_n)$ satisfies the Lipschitz condition, if for any $x_1,\ldots,x_n$ and any $y_i$,

$\left|f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)\right|\le 1.$
In other words, the function satisfies the Lipschitz condition if an arbitrary change in the value of any one argument does not change the value of the function by more than 1.

The difference bound of 1 can be replaced by arbitrary constants, which gives a generalized version of the Lipschitz condition.

Definition (Lipschitz condition, general version)
A function $f(x_1,\ldots,x_n)$ satisfies the Lipschitz condition with constants $c_i$, $1\le i\le n$, if for any $x_1,\ldots,x_n$ and any $y_i$,

$\left|f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)\right|\le c_i.$
The following "method of bounded differences" can be developed for functions satisfying the Lipschitz condition. Unfortunately, in order to imply the condition of averaged bounded differences from the Lipschitz condition, we have to restrict the method to independent random variables.

Corollary (Method of bounded differences)
Let $X_1,\ldots,X_n$ be $n$ independent random variables and let $f$ be a function satisfying the Lipschitz condition with constants $c_i$, $1\le i\le n$. Then
$\Pr\left[\left|f(X_1,\ldots,X_n)-\mathbf{E}[f(X_1,\ldots,X_n)]\right|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^{n}c_i^2}\right).$
Proof.
For convenience, we denote that $X_{i..j}=(X_i,X_{i+1},\ldots,X_j)$ for any $1\le i\le j\le n$.

We first show that the Lipschitz condition with constants $c_i$, $1\le i\le n$, implies another condition called the averaged Lipschitz condition (ALC): for any $a_i,b_i$ and any $1\le i\le n$,

$\left|\mathbf{E}[f(X_{1..n})\mid X_{1..i-1},X_i=a_i]-\mathbf{E}[f(X_{1..n})\mid X_{1..i-1},X_i=b_i]\right|\le c_i.$

And this condition implies the averaged bounded difference condition: for all $1\le i\le n$,

$\left|\mathbf{E}[f(X_{1..n})\mid X_{1..i}]-\mathbf{E}[f(X_{1..n})\mid X_{1..i-1}]\right|\le c_i.$

Then by applying the method of averaged bounded differences, the corollary can be proved.

For any $a$, by the law of total expectation and the independence of the $X_i$'s,

$\mathbf{E}[f(X_{1..n})\mid X_{1..i-1},X_i=a]=\sum_{a_{i+1},\ldots,a_n}f(X_{1..i-1},a,a_{i+1..n})\cdot\Pr[X_{i+1..n}=a_{i+1..n}].$

Let $a=a_i$ and $a=b_i$, and take the difference. Then

$\left|\mathbf{E}[f(X_{1..n})\mid X_{1..i-1},X_i=a_i]-\mathbf{E}[f(X_{1..n})\mid X_{1..i-1},X_i=b_i]\right|=\left|\sum_{a_{i+1},\ldots,a_n}\big(f(X_{1..i-1},a_i,a_{i+1..n})-f(X_{1..i-1},b_i,a_{i+1..n})\big)\cdot\Pr[X_{i+1..n}=a_{i+1..n}]\right|\le c_i,$

where the last inequality is due to the Lipschitz condition.

Thus, the Lipschitz condition is transformed to the ALC. We then deduce the averaged bounded difference condition from the ALC.

By the law of total expectation,

$\mathbf{E}[f(X_{1..n})\mid X_{1..i-1}]=\sum_{a}\mathbf{E}[f(X_{1..n})\mid X_{1..i-1},X_i=a]\cdot\Pr[X_i=a\mid X_{1..i-1}].$

We can trivially write $\mathbf{E}[f(X_{1..n})\mid X_{1..i}]$ as

$\mathbf{E}[f(X_{1..n})\mid X_{1..i}]=\sum_{a}\mathbf{E}[f(X_{1..n})\mid X_{1..i}]\cdot\Pr[X_i=a\mid X_{1..i-1}].$

Hence, the difference is

$\left|\mathbf{E}[f(X_{1..n})\mid X_{1..i}]-\mathbf{E}[f(X_{1..n})\mid X_{1..i-1}]\right|\le\sum_{a}\left|\mathbf{E}[f(X_{1..n})\mid X_{1..i}]-\mathbf{E}[f(X_{1..n})\mid X_{1..i-1},X_i=a]\right|\cdot\Pr[X_i=a\mid X_{1..i-1}]\le c_i,$

where the last inequality is due to the ALC.

The averaged bounded difference condition is implied. Applying the method of averaged bounded differences, the corollary follows.

Applications

Occupancy problem

Throwing $m$ balls uniformly and independently at random into $n$ bins, we ask for the occupancies of the bins by the balls. In particular, we are interested in the number of empty bins.

This problem can be described equivalently as follows. Let $f:[m]\to[n]$ be a uniform random function. We ask for the number of $i\in[n]$ such that $f^{-1}(i)$ is empty.

For any $i\in[n]$, let $X_i$ indicate the emptiness of bin $i$. Let $X=\sum_{i=1}^{n}X_i$ be the number of empty bins.

By the linearity of expectation,

$\mathbf{E}[X]=\sum_{i=1}^{n}\mathbf{E}[X_i]=\sum_{i=1}^{n}\Pr[\text{bin }i\text{ is empty}]=n\left(1-\frac{1}{n}\right)^m.$

We want to know how $X$ deviates from this expectation. The complication here is that the $X_i$'s are not independent. So we alternatively look at a sequence of independent random variables $Y_1,\ldots,Y_m$, where $Y_j\in[n]$ represents the bin into which the $j$-th ball falls. Clearly $X$ is a function of $Y_1,\ldots,Y_m$.

We then observe that changing the value of any $Y_j$ can change the value of $X$ by at most 1, because moving one ball only affects the bin it leaves and the bin it enters, and the net change in the number of empty bins is at most 1. Thus, as a function of the independent random variables $Y_1,\ldots,Y_m$, $X$ satisfies the Lipschitz condition. Applying the method of bounded differences, it holds that

$\Pr\left[\left|X-n\left(1-\frac{1}{n}\right)^m\right|\ge t\sqrt{m}\right]\le 2e^{-t^2/2}.$

Thus, for sufficiently large $n$ and $m$, the number of empty bins is tightly concentrated around
$n\left(1-\frac{1}{n}\right)^m\approx\frac{n}{e^{m/n}}.$
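
A short simulation of the occupancy problem (taking, for concreteness, $m=n=1000$) shows the number of empty bins staying within the $t\sqrt{m}$ window around $n(1-1/n)^m$ at least as often as the bound promises.

  import math
  import random

  n = m = 1000
  t, trials = 2.0, 5000
  expected = n * (1 - 1 / n) ** m                 # E[X] = n (1 - 1/n)^m, about n/e
  rng = random.Random(0)
  exceed = 0
  for _ in range(trials):
      occupied = [False] * n
      for _ in range(m):
          occupied[rng.randrange(n)] = True       # throw each ball into a uniform bin
      empty = n - sum(occupied)
      if abs(empty - expected) >= t * math.sqrt(m):
          exceed += 1
  print(exceed / trials, "<=", 2 * math.exp(-t * t / 2))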

Pattern Matching

Let $X=(X_1,\ldots,X_n)$ be a sequence of characters chosen independently and uniformly at random from an alphabet $\Sigma$, where $s=|\Sigma|$. Let $\pi\in\Sigma^k$ be an arbitrarily fixed string of $k$ characters from $\Sigma$, called a pattern. Let $Y$ be the number of occurrences of the pattern $\pi$ as a substring of the random string $X$.

By the linearity of expectation, it is obvious that

$\mathbf{E}[Y]=(n-k+1)\frac{1}{s^k}.$

We now look at the concentration of $Y$. The complication again lies in the dependencies between the matches. Yet we will see that $Y$ is tightly concentrated around its expectation if $k$ is relatively small compared to $n$.

For a fixed pattern $\pi$, the random variable $Y$ is a function of the independent random variables $(X_1,\ldots,X_n)$. Any character $X_i$ participates in no more than $k$ matches, thus changing the value of any $X_i$ can affect the value of $Y$ by at most $k$. So $Y$ satisfies the Lipschitz condition with constant $k$. Applying the method of bounded differences,

$\Pr\left[\left|Y-\frac{n-k+1}{s^k}\right|\ge tk\sqrt{n}\right]\le 2e^{-t^2/2}.$
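
A simulation sketch with an arbitrarily chosen small alphabet and pattern confirms that the count of (possibly overlapping) occurrences is centered at $(n-k+1)/s^k$.

  import random

  sigma = "ab"                       # alphabet of size s = 2 (illustrative choice)
  pattern = "aba"                    # a fixed pattern of length k = 3
  n, k, s = 2000, len(pattern), len(sigma)
  expected = (n - k + 1) / s ** k    # E[Y] by linearity of expectation

  rng = random.Random(0)
  counts = []
  for _ in range(1000):
      x = "".join(rng.choice(sigma) for _ in range(n))
      # count occurrences of the pattern as a substring (overlaps allowed)
      counts.append(sum(1 for i in range(n - k + 1) if x[i:i + k] == pattern))
  print(expected, sum(counts) / len(counts))   # the empirical mean matches E[Y]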

Combining unit vectors

Let $v_1,\ldots,v_n$ be $n$ unit vectors from some normed space. That is, $\|v_i\|=1$ for any $1\le i\le n$, where $\|\cdot\|$ denotes the vector norm (e.g. $\ell_1$, $\ell_2$, $\ell_\infty$) of the space.

Let $\epsilon_1,\ldots,\epsilon_n\in\{-1,+1\}$ be independently chosen with $\Pr[\epsilon_i=-1]=\Pr[\epsilon_i=1]=1/2$.

Let

$v=\epsilon_1v_1+\cdots+\epsilon_nv_n,$

and

$X=\|v\|.$

This kind of construction is very useful in combinatorial proofs of metric problems. We will show that by this construction, the random variable $X$ is well concentrated around its mean.

$X$ is a function of the independent random variables $\epsilon_1,\ldots,\epsilon_n$. By the triangle inequality for norms, it is easy to verify that changing the sign of a single $\epsilon_i$ can only change the value of $X$ by at most $2\|v_i\|=2$, thus $X$ satisfies the Lipschitz condition with constant 2. The concentration result follows by applying the method of bounded differences:

$\Pr\left[|X-\mathbf{E}[X]|\ge 2t\sqrt{n}\right]\le 2e^{-t^2/2}.$
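
A small simulation (with hypothetical unit vectors chosen as random directions in the Euclidean plane) illustrates the bound: flipping one sign $\epsilon_i$ changes $X$ by at most 2, and the simulated $X$ rarely strays $2t\sqrt{n}$ from its mean.

  import math
  import random

  n = 100
  rng = random.Random(0)
  # n arbitrary unit vectors in the Euclidean plane (illustrative choice)
  vs = [(math.cos(a), math.sin(a)) for a in (rng.uniform(0, 2 * math.pi) for _ in range(n))]

  def X():
      """X = || eps_1 v_1 + ... + eps_n v_n || with independent uniform signs eps_i."""
      sx = sy = 0.0
      for vx, vy in vs:
          eps = rng.choice((-1, 1))
          sx += eps * vx
          sy += eps * vy
      return math.hypot(sx, sy)

  samples = [X() for _ in range(20000)]
  mean = sum(samples) / len(samples)                      # estimate of E[X]
  t = 2.0
  tail = sum(abs(x - mean) >= 2 * t * math.sqrt(n) for x in samples) / len(samples)
  print(tail, "<=", 2 * math.exp(-t * t / 2))             # bounded differences with c_i = 2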

The Johnson-Lindenstrauss Theorem

Consider a problem as follows: we have a set of $n$ points in a high-dimensional Euclidean space $\mathbf{R}^d$. We want to project the points onto a space of low dimension $\mathbf{R}^k$ in such a way that the pairwise distances of the points are approximately the same as before.

Formally, we are looking for a map $f:\mathbf{R}^d\to\mathbf{R}^k$ such that for any pair of original points $u,v$, $\|f(u)-f(v)\|$ distorts little from $\|u-v\|$, where $\|\cdot\|$ is the Euclidean norm, i.e. $\|u-v\|=\sqrt{(u_1-v_1)^2+(u_2-v_2)^2+\cdots+(u_d-v_d)^2}$ is the distance between $u$ and $v$ in Euclidean space.

This problem has various important applications in both theory and practice. In many tasks, the data points are drawn from a high dimensional space, however, computations on high-dimensional data are usually hard due to the infamous "curse of dimensionality". The computational tasks can be greatly eased if we can project the data points onto a space of low dimension while the pairwise relations between the points are approximately preserved.

The Johnson-Lindenstrauss Theorem states that it is possible to project $n$ points in a space of arbitrarily high dimension onto an $O(\log n)$-dimensional space, such that the pairwise distances between the points are approximately preserved.

Johnson-Lindenstrauss Theorem
For any $0<\epsilon<1$ and any positive integer $n$, let $k$ be a positive integer such that
$k\ge 4\left(\frac{\epsilon^2}{2}-\frac{\epsilon^3}{3}\right)^{-1}\ln n.$
Then for any set $V$ of $n$ points in $\mathbf{R}^d$, there is a map $f:\mathbf{R}^d\to\mathbf{R}^k$ such that for all $u,v\in V$,
$(1-\epsilon)\|u-v\|^2\le\|f(u)-f(v)\|^2\le(1+\epsilon)\|u-v\|^2.$
Furthermore, this map can be found in expected polynomial time.

The map is done by random projection. There are several ways of applying the random projection. We adopt the one in the original Johnson-Lindenstrauss paper.

The projection (due to Johnson-Lindenstrauss)
Let $A$ be a random $k\times d$ matrix that projects $\mathbf{R}^d$ onto a uniform random $k$-dimensional subspace.
Multiply $A$ by a fixed scalar $\sqrt{\frac{d}{k}}$. For every $v\in\mathbf{R}^d$, $v$ is mapped to $\sqrt{\frac{d}{k}}Av$.

The projected point $\sqrt{\frac{d}{k}}Av$ is a vector in $\mathbf{R}^k$.

The purpose of multiplying by the scalar $\sqrt{\frac{d}{k}}$ is to guarantee that $\mathbf{E}\left[\left\|\sqrt{\frac{d}{k}}Av\right\|^2\right]=\|v\|^2$.

Besides the uniform random subspace, there are other choices of random projections known to have good performances, including:

  • A matrix whose entries follow i.i.d. normal distributions. (Due to Indyk-Motwani)
  • A matrix whose entries are i.i.d. $\pm1$. (Due to Achlioptas)

In both cases, the matrix is also multiplied by a fixed scalar for normalization.
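
Below is a sketch of the Gaussian variant (in the spirit of Indyk-Motwani) in plain Python. The entries of the matrix are i.i.d. $N(0,1)$, and the normalization scalar is taken here to be $1/\sqrt{k}$, which preserves squared norms in expectation; this choice of scalar is an assumption of the sketch and differs from the $\sqrt{d/k}$ used for the subspace projection above.

  import math
  import random

  def gaussian_projection(points, k, rng=random):
      """Map d-dimensional points to k dimensions by a random matrix with i.i.d.
      N(0,1) entries, scaled by 1/sqrt(k) so that E[ ||f(v)||^2 ] = ||v||^2."""
      d = len(points[0])
      A = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(k)]
      s = 1.0 / math.sqrt(k)
      return [[s * sum(A[i][j] * p[j] for j in range(d)) for i in range(k)] for p in points]

  def dist(u, v):
      return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

  rng = random.Random(0)
  d, k, n = 500, 60, 20
  points = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]
  proj = gaussian_projection(points, k, rng)
  ratios = [dist(proj[i], proj[j]) / dist(points[i], points[j])
            for i in range(n) for j in range(i + 1, n)]
  print(min(ratios), max(ratios))   # all pairwise distances are distorted only mildly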


We present a proof due to Dasgupta-Gupta, which is much simpler than the original proof of Johnson-Lindenstrauss. The proof is for the projection onto a uniform random subspace. The idea of the proof is outlined as follows:

  1. To bound the distortions to pairwise distances, it is sufficient to bound the distortions to the length of unit vectors.
  2. The projection of a fixed unit vector onto a uniform random subspace is identically distributed as the projection of a uniform random unit vector onto a fixed subspace. We can fix the subspace to be the one spanned by the first k coordinates, thus it is sufficient to bound the length (norm) of the first k coordinates of a uniform random unit vector.
  3. Prove that for a uniform random unit vector, the squared length of its first k coordinates is concentrated around its expectation.
From pairwise distances to norms of unit vectors

Let $w$ be a vector in the original space $\mathbf{R}^d$, and let $A$ be the random $k\times d$ matrix that projects $\mathbf{R}^d$ onto a uniformly random $k$-dimensional subspace of $\mathbf{R}^d$. We only need to show that

$\Pr\left[\left\|\sqrt{\tfrac{d}{k}}Aw\right\|^2<(1-\epsilon)\|w\|^2\right]\le\frac{1}{n^2}\quad\text{and}\quad\Pr\left[\left\|\sqrt{\tfrac{d}{k}}Aw\right\|^2>(1+\epsilon)\|w\|^2\right]\le\frac{1}{n^2}.$

Think of $w$ as $w=u-v$ for some pair of points $u,v\in V$. Then by applying the union bound to all $\binom{n}{2}$ pairs of points in $V$, the random projection $f$ violates the distortion requirement with probability at most

$\binom{n}{2}\cdot\frac{2}{n^2}=1-\frac{1}{n},$

so $f$ has the desirable low distortion with probability at least $\frac{1}{n}$. Thus, the low-distortion embedding can be found by trying for expected $n$ times (recalling the analysis of the geometric distribution).

We can further simplify the problem by normalizing $w$. Note that for nonzero $w$'s, the statement that

$(1-\epsilon)\|w\|^2\le\left\|\sqrt{\tfrac{d}{k}}Aw\right\|^2\le(1+\epsilon)\|w\|^2$

is equivalent to

$(1-\epsilon)\frac{k}{d}\le\left\|A\frac{w}{\|w\|}\right\|^2\le(1+\epsilon)\frac{k}{d}.$

Thus, we only need to bound the distortions for unit vectors, i.e. the vectors $w\in\mathbf{R}^d$ with $\|w\|=1$. The rest of the proof is to prove the following lemma for a unit vector in $\mathbf{R}^d$.

Lemma 3.1
For any unit vector $w\in\mathbf{R}^d$, it holds that
  • $\Pr\left[\|Aw\|^2<(1-\epsilon)\frac{k}{d}\right]\le\frac{1}{n^2}$;
  • $\Pr\left[\|Aw\|^2>(1+\epsilon)\frac{k}{d}\right]\le\frac{1}{n^2}$.
As we argued above, this lemma implies the Johnson-Lindenstrauss Theorem.

Random projection of a fixed unit vector ≡ fixed projection of a random unit vector

Let $w$ be a fixed unit vector in $\mathbf{R}^d$. Let $A$ be a random matrix which projects the points in $\mathbf{R}^d$ onto a uniformly random $k$-dimensional subspace of $\mathbf{R}^d$.

Let $Y$ be a uniformly random unit vector in $\mathbf{R}^d$. Let $B$ be the fixed matrix which extracts the first $k$ coordinates of the vectors in $\mathbf{R}^d$, i.e. for any $Y=(Y_1,Y_2,\ldots,Y_d)$, $BY=(Y_1,Y_2,\ldots,Y_k)$.

In other words, $Aw$ is a random projection of a fixed unit vector, and $BY$ is a fixed projection of a uniformly random unit vector.

A key observation is that:

Observation
The distribution of $\|Aw\|$ is the same as the distribution of $\|BY\|$.

The proof of this observation is omitted here.

With this observation, it is sufficient to work on the subspace of the first $k$ coordinates of the uniformly random unit vector $Y\in\mathbf{R}^d$. Our task is now reduced to the following lemma.

Lemma 3.2
Let $Y=(Y_1,Y_2,\ldots,Y_d)$ be a uniformly random unit vector in $\mathbf{R}^d$. Let $Z=(Y_1,Y_2,\ldots,Y_k)$ be the projection of $Y$ to the subspace of the first $k$ coordinates of $\mathbf{R}^d$.
Then
  • $\Pr\left[\|Z\|^2<(1-\epsilon)\frac{k}{d}\right]\le\frac{1}{n^2}$;
  • $\Pr\left[\|Z\|^2>(1+\epsilon)\frac{k}{d}\right]\le\frac{1}{n^2}$.
Due to the above observation, Lemma 3.2 implies Lemma 3.1 and thus proves the Johnson-Lindenstrauss theorem.

Note that $\|Z\|^2=\sum_{i=1}^{k}Y_i^2$. Due to the linearity of expectation,

$\mathbf{E}\left[\|Z\|^2\right]=\sum_{i=1}^{k}\mathbf{E}\left[Y_i^2\right].$

Since $Y$ is a uniform random unit vector, it holds that $\sum_{i=1}^{d}Y_i^2=\|Y\|^2=1$. And due to the symmetry, all the $\mathbf{E}[Y_i^2]$'s are equal. Thus, $\mathbf{E}[Y_i^2]=\frac{1}{d}$ for all $i$. Therefore,

$\mathbf{E}\left[\|Z\|^2\right]=\frac{k}{d}.$

Lemma 3.2 actually states that $\|Z\|^2$ is well concentrated around its expectation.

Concentration of the norm of the first $k$ entries of a uniform random unit vector

We now prove Lemma 3.2. Specifically, we will prove the $(1-\epsilon)$ direction:

$\Pr\left[\|Z\|^2<(1-\epsilon)\frac{k}{d}\right]\le\frac{1}{n^2}.$

The $(1+\epsilon)$ direction is proved with the same argument.

Due to the discussion in the last section, this can be interpreted as a concentration bound for $\|Z\|^2$, which is a sum of $Y_1^2,Y_2^2,\ldots,Y_k^2$. This hints us to use Chernoff-like bounds. However, for a uniformly random unit vector $Y$, the $Y_i^2$'s are not independent (because of the constraint that $\|Y\|^2=1$). We overcome this by generating uniform unit vectors from independent normal distributions.

The following is a very useful fact regarding the generation of uniform unit vectors.

Generating uniform unit vector
Let $X_1,X_2,\ldots,X_d$ be i.i.d. random variables, each drawn from the normal distribution $N(0,1)$. Let $X=(X_1,X_2,\ldots,X_d)$. Then
$Y=\frac{X}{\|X\|}$
is a uniformly random unit vector.
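
A short sketch of this generation procedure, also checking the computation $\mathbf{E}[\|Z\|^2]=k/d$ from the previous section (the dimensions below are arbitrary):

  import math
  import random

  def uniform_unit_vector(d, rng=random):
      """Generate a uniform random unit vector in R^d by normalizing i.i.d. N(0,1)'s."""
      x = [rng.gauss(0.0, 1.0) for _ in range(d)]
      norm = math.sqrt(sum(v * v for v in x))
      return [v / norm for v in x]

  rng = random.Random(0)
  d, k, trials = 200, 20, 20000
  samples = [sum(v * v for v in uniform_unit_vector(d, rng)[:k]) for _ in range(trials)]
  print(sum(samples) / trials, "vs", k / d)   # E[ ||Z||^2 ] = k/d for the first k coordinates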

Then for $Z=(Y_1,\ldots,Y_k)$,

$\|Z\|^2=Y_1^2+\cdots+Y_k^2=\frac{X_1^2+\cdots+X_k^2}{X_1^2+\cdots+X_d^2}.$

To avoid writing a lot of $(1-\epsilon)$'s, we write $\beta=1-\epsilon$. The first inequality (the lower tail) of Lemma 3.2 can be written as:

$\Pr\left[\|Z\|^2<\frac{\beta k}{d}\right]=\Pr\left[\frac{X_1^2+\cdots+X_k^2}{X_1^2+\cdots+X_d^2}<\frac{\beta k}{d}\right]=\Pr\left[\beta k\sum_{i=1}^{d}X_i^2-d\sum_{i=1}^{k}X_i^2>0\right]\le\frac{1}{n^2}.$

The probability is a tail probability of a sum of $d$ independent variables. The $X_i^2$'s are not 0-1 valued variables, thus we cannot directly apply the Chernoff bounds. However, the following two key ingredients of the Chernoff bounds are satisfiable for the above sum:

  • The $X_i^2$'s are independent.
  • Because the $X_i$'s are normal, it is known that the moment generating functions for the $X_i^2$'s can be computed as follows:
Fact 3.3
If $X$ follows the normal distribution $N(0,1)$, then $\mathbf{E}\left[e^{\lambda X^2}\right]=(1-2\lambda)^{-\frac{1}{2}}$, for $\lambda\in\left(-\infty,\frac{1}{2}\right)$.

Therefore, we can re-apply the technique of the Chernoff bound (applying Markov's inequality to the moment generating function and optimizing the parameter $\lambda$) to bound the probability $\Pr\left[\beta k\sum_{i=1}^{d}X_i^2-d\sum_{i=1}^{k}X_i^2>0\right]$: for any $\lambda>0$ with $\lambda\beta k<\frac{1}{2}$,

$\Pr\left[\beta k\sum_{i=1}^{d}X_i^2-d\sum_{i=1}^{k}X_i^2>0\right]=\Pr\left[\exp\left\{\lambda\left(\beta k\sum_{i=1}^{d}X_i^2-d\sum_{i=1}^{k}X_i^2\right)\right\}>1\right]\le\mathbf{E}\left[\exp\left\{\lambda\left(\beta k\sum_{i=1}^{d}X_i^2-d\sum_{i=1}^{k}X_i^2\right)\right\}\right]$
$=\mathbf{E}\left[e^{\lambda\beta kX_1^2}\right]^{d-k}\cdot\mathbf{E}\left[e^{\lambda(\beta k-d)X_1^2}\right]^{k}=(1-2\lambda\beta k)^{-\frac{d-k}{2}}(1-2\lambda(\beta k-d))^{-\frac{k}{2}},$

where the inequality is by Markov's inequality, the next equation uses the independence of the $X_i$'s, and the last equation is due to Fact 3.3.

The last term is minimized when

$\lambda=\frac{1-\beta}{2\beta(d-k\beta)},$

so that

$(1-2\lambda\beta k)^{-\frac{d-k}{2}}(1-2\lambda(\beta k-d))^{-\frac{k}{2}}=\beta^{\frac{k}{2}}\left(1+\frac{(1-\beta)k}{d-k}\right)^{\frac{d-k}{2}}\le\exp\left(\frac{k}{2}\left(1-\beta+\ln\beta\right)\right)\le\exp\left(-\frac{k\epsilon^2}{4}\right),$

which is $\le\frac{1}{n^2}$ for the choice of $k$ in the Johnson-Lindenstrauss theorem that

$k\ge 4\left(\frac{\epsilon^2}{2}-\frac{\epsilon^3}{3}\right)^{-1}\ln n.$

So we have proved that

$\Pr\left[\|Z\|^2<(1-\epsilon)\frac{k}{d}\right]\le\frac{1}{n^2}.$

With the same argument, the other direction can be proved so that

$\Pr\left[\|Z\|^2>(1+\epsilon)\frac{k}{d}\right]\le\exp\left(\frac{k}{2}\left(1-(1+\epsilon)+\ln(1+\epsilon)\right)\right)\le\exp\left(-\frac{k}{2}\left(\frac{\epsilon^2}{2}-\frac{\epsilon^3}{3}\right)\right),$

which is also $\le\frac{1}{n^2}$ for $k\ge 4\left(\frac{\epsilon^2}{2}-\frac{\epsilon^3}{3}\right)^{-1}\ln n$.

Lemma 3.2 is proved. As we discussed in the previous sections, Lemma 3.2 implies Lemma 3.1, which implies the Johnson-Lindenstrauss theorem.