Randomized Algorithms (Fall 2011)/The Method of Bounded Differences


Generalizations

The notion of a martingale can be generalized to be defined with respect to another sequence of random variables.

Definition (martingale, general version)
A sequence of random variables $Y_0,Y_1,\ldots$ is a martingale with respect to the sequence $X_0,X_1,\ldots$ if, for all $i\ge 0$, the following conditions hold:
  • $Y_i$ is a function of $X_0,X_1,\ldots,X_i$;
  • $\mathbf{E}\left[|Y_i|\right]<\infty$;
  • $\mathbf{E}\left[Y_{i+1}\mid X_0,\ldots,X_i\right]=Y_i$.

Therefore, a sequence $X_0,X_1,\ldots$ is a martingale if it is a martingale with respect to itself.

The purpose of this generalization is that we are usually more interested in a function of a sequence of random variables, rather than the sequence itself.

The Doob martingales

The following definition describes a very general approach for constructing an important type of martingales.

Definition (The Doob sequence)
The Doob sequence of a function $f$ with respect to a sequence of random variables $X_1,\ldots,X_n$ is defined by
$Y_i=\mathbf{E}\left[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i\right],\quad 0\le i\le n.$
In particular, $Y_0=\mathbf{E}\left[f(X_1,\ldots,X_n)\right]$ and $Y_n=f(X_1,\ldots,X_n)$.

The Doob sequence of a function defines a martingale. That is,
$\mathbf{E}\left[Y_i\mid X_1,\ldots,X_{i-1}\right]=Y_{i-1},$
for any $1\le i\le n$.

To prove this claim, we recall the definition that $Y_i=\mathbf{E}\left[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i\right]$, thus,
\begin{align}
\mathbf{E}\left[Y_i\mid X_1,\ldots,X_{i-1}\right]
&=\mathbf{E}\big[\mathbf{E}\left[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i\right]\mid X_1,\ldots,X_{i-1}\big]\\
&=\mathbf{E}\left[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i-1}\right]\\
&=Y_{i-1},
\end{align}
where the second equation is due to the fundamental fact about conditional expectation introduced in the first section, namely the tower property $\mathbf{E}\left[\mathbf{E}[Y\mid X,Z]\mid Z\right]=\mathbf{E}[Y\mid Z]$.

The Doob martingale describes a very natural procedure to determine a function value of a sequence of random variables. Suppose that we want to predict the value of a function $f(X_1,\ldots,X_n)$ of random variables $X_1,\ldots,X_n$. The Doob sequence $Y_0,Y_1,\ldots,Y_n$ represents a sequence of refined estimates of the value of $f(X_1,\ldots,X_n)$, gradually using more information on the values of the random variables $X_1,\ldots,X_n$. The first element $Y_0$ is just the expectation of $f(X_1,\ldots,X_n)$. Element $Y_i$ is the expected value of $f(X_1,\ldots,X_n)$ when the values of $X_1,\ldots,X_i$ are known, and $Y_n=f(X_1,\ldots,X_n)$ when the function value is fully determined by $X_1,\ldots,X_n$.
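To make the refinement concrete, here is a minimal Python sketch (my own illustration, not part of the original notes) of the Doob sequence for the simple function $f(X_1,\ldots,X_n)=\sum_i X_i$ of independent fair coin flips, where each estimate has the closed form $Y_i=\sum_{j\le i}X_j+(n-i)/2$:

```python
import random

# A minimal sketch (not from the notes): the Doob sequence of
# f(X_1, ..., X_n) = X_1 + ... + X_n for independent fair coin flips in {0, 1}.
# Each refined estimate has the closed form
#   Y_i = E[f | X_1, ..., X_i] = (X_1 + ... + X_i) + (n - i) * E[X_1],
# so we can watch the estimates interpolate from Y_0 = E[f] to Y_n = f(X).

n = 10
x = [random.randint(0, 1) for _ in range(n)]  # one realization of X_1, ..., X_n
ex = 0.5                                      # E[X_i] for a fair coin flip

doob = [sum(x[:i]) + (n - i) * ex for i in range(n + 1)]  # Y_0, Y_1, ..., Y_n

print("realization:  ", x)
print("Doob sequence:", doob)  # starts at E[f] = 5.0, ends at f(x) = sum(x)
```

One can check the martingale property directly: conditioned on $X_1,\ldots,X_i$, the next estimate $Y_{i+1}$ averages to $Y_i$.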

The following two Doob martingales arise in evaluating the parameters of random graphs.

Example: edge exposure martingale
Let $G$ be a random graph on $n$ vertices. Let $f$ be a real-valued function of graphs, such as the chromatic number, the number of triangles, the size of the largest clique or independent set, etc. Denote $m=\binom{n}{2}$. Fix an arbitrary numbering of the potential edges between the $n$ vertices, and denote the edges as $e_1,\ldots,e_m$. Let
$X_i=\begin{cases}1 & \text{if } e_i\in G,\\ 0 & \text{otherwise}.\end{cases}$
Let $Y_0=\mathbf{E}[f(G)]$ and for $i=1,2,\ldots,m$, let $Y_i=\mathbf{E}\left[f(G)\mid X_1,\ldots,X_i\right]$.
The sequence $Y_0,Y_1,\ldots,Y_m$ gives a Doob martingale that is commonly called the edge exposure martingale.
Example: vertex exposure martingale
Instead of revealing edges one at a time, we could reveal the set of edges connected to a given vertex, one vertex at a time. Suppose that the vertex set is $[n]$. Let $G_i$ be the subgraph of $G$ induced by the vertex set $[i]$, i.e. the first $i$ vertices.
Let $Y_0=\mathbf{E}[f(G)]$ and for $i=1,2,\ldots,n$, let $Y_i=\mathbf{E}\left[f(G)\mid G_i\right]$.
The sequence $Y_0,Y_1,\ldots,Y_n$ gives a Doob martingale that is commonly called the vertex exposure martingale.
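For intuition, the following Monte Carlo sketch (my own, with illustrative parameters; not part of the original notes) estimates the vertex exposure martingale for $f(G)=$ the number of triangles in a $G(n,p)$ random graph: conditioned on the exposed induced subgraph $G_i$, it resamples every pair touching an unexposed vertex and averages $f$ over the samples.

```python
import itertools, random

# A hedged sketch: Monte Carlo estimates of the vertex exposure martingale
# Y_i = E[f(G) | G_i], where f counts triangles in G(n, p).

def triangles(adj, n):
    """Count triangles in the graph given by the symmetric 0/1 matrix adj."""
    return sum(1 for a, b, c in itertools.combinations(range(n), 3)
               if adj[a][b] and adj[b][c] and adj[a][c])

def exposure_martingale(n=8, p=0.5, samples=2000):
    # One realization of G(n, p).
    adj = [[0] * n for _ in range(n)]
    for u, v in itertools.combinations(range(n), 2):
        adj[u][v] = adj[v][u] = int(random.random() < p)

    mart = []
    for i in range(n + 1):            # expose the first i vertices
        total = 0.0
        for _ in range(samples):
            g = [row[:] for row in adj]
            for u, v in itertools.combinations(range(n), 2):
                if v >= i:            # pair touches an unexposed vertex: resample
                    g[u][v] = g[v][u] = int(random.random() < p)
            total += triangles(g, n)
        mart.append(total / samples)
    return mart, triangles(adj, n)

mart, f_val = exposure_martingale()
print("estimated Y_0..Y_n:", [round(y, 2) for y in mart])
print("f(G) =", f_val)   # Y_n should match f(G) up to sampling noise
```

The first estimate should be close to $\mathbf{E}[f(G)]=\binom{n}{3}p^3$, and the last one equals the realized triangle count.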

Azuma's inequality -- general version

Azuma's inequality can be generalized to a martingale with respect to another sequence.

Azuma's Inequality (general version)
Let $Y_0,Y_1,\ldots$ be a martingale with respect to the sequence $X_0,X_1,\ldots$ such that, for all $k\ge 1$,
$|Y_k-Y_{k-1}|\le c_k.$
Then
$\Pr\left[|Y_n-Y_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right).$

The proof is almost identical to the proof of the original Azuma's inequality. We also work on the sum of the martingale differences (this time the differences are $Y_k-Y_{k-1}$), yet conditioning on $X_0,\ldots,X_{k-1}$ instead of $Y_0,\ldots,Y_{k-1}$. The rest of the proof proceeds in the same way.
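As a worked special case (my own addition, not in the original notes): when all the differences share a single constant $c_k=c$, the bound reads
$\Pr\left[|Y_n-Y_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2nc^2}\right),$
and substituting $t\sqrt{n}\,c$ for $t$ gives
$\Pr\left[|Y_n-Y_0|\ge t\sqrt{n}\,c\right]\le 2e^{-t^2/2},$
which is exactly the $t\sqrt{n}$ form used in all the applications below.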

Application: Chromatic number

The random graph $G(n,p)$ is the graph on $n$ vertices $[n]$, obtained by selecting each pair of vertices to be an edge, randomly and independently, with probability $p$. We denote $G\sim G(n,p)$ if $G$ is generated in this way.

Theorem [Shamir and Spencer (1987)]
Let $G\sim G(n,p)$. Let $\chi(G)$ be the chromatic number of $G$. Then
$\Pr\left[\left|\chi(G)-\mathbf{E}[\chi(G)]\right|\ge t\sqrt{n}\right]\le 2e^{-t^2/2}.$
Proof.
Consider the vertex exposure martingale
$Y_i=\mathbf{E}\left[\chi(G)\mid G_i\right],$
where each $G_i$ exposes the induced subgraph of $G$ on the vertex set $[i]$. A single vertex can always be given a new color so that the graph is properly colored, thus the bounded difference condition
$|Y_i-Y_{i-1}|\le 1$
is satisfied. Now apply Azuma's inequality for the martingale $Y_0,Y_1,\ldots,Y_n$ with respect to $G_0,G_1,\ldots,G_n$, with $t$ replaced by $t\sqrt{n}$.

For $t=\omega(1)$, the theorem states that the chromatic number of a random graph is tightly concentrated around its mean. The proof gives no clue as to where the mean is. This actually shows how powerful the martingale inequalities are: we can prove that a distribution is concentrated to its expectation without actually knowing the expectation.
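The concentration can also be observed empirically. The following rough experiment (my own sketch; the brute-force coloring is exponential in $n$, so $n$ is kept tiny, and all parameters are illustrative) samples the exact chromatic number of $G(n,1/2)$ and checks how often it falls within $\sqrt{n}$ of the empirical mean.

```python
import itertools, random

# A rough illustrative experiment (not from the notes): sample the exact
# chromatic number of small G(n, 1/2) graphs and check how concentrated
# the empirical distribution is.

def chromatic_number(n, edges):
    """Smallest k admitting a proper k-coloring, by exhaustive search."""
    for k in range(1, n + 1):
        for coloring in itertools.product(range(k), repeat=n):
            if all(coloring[u] != coloring[v] for u, v in edges):
                return k
    return n

def sample_chi(n, p=0.5):
    edges = [(u, v) for u, v in itertools.combinations(range(n), 2)
             if random.random() < p]
    return chromatic_number(n, edges)

n, trials = 7, 100
samples = [sample_chi(n) for _ in range(trials)]
mean = sum(samples) / trials
within = sum(abs(s - mean) <= n ** 0.5 for s in samples) / trials
print("empirical mean of chi(G):", round(mean, 2))
print("fraction within sqrt(n) of the mean:", within)  # expect close to 1
```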

Application: Hoeffding's Inequality

The following theorem states the so-called Hoeffding's inequality. It is a generalized version of the Chernoff bounds. Recall that the Chernoff bounds hold for the sum of independent trials. When the random variables are not trials, Hoeffding's inequality is useful, since it holds for the sum of any independent random variables whose ranges are bounded.

Hoeffding's inequality
Let $X=\sum_{i=1}^n X_i$, where $X_1,\ldots,X_n$ are independent random variables with $a_i\le X_i\le b_i$ for each $1\le i\le n$. Let $\mu=\mathbf{E}[X]$. Then
$\Pr\left[|X-\mu|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^n(b_i-a_i)^2}\right).$
Proof.
Define the Doob martingale sequence $Y_i=\mathbf{E}\left[\sum_{j=1}^n X_j\mid X_1,\ldots,X_i\right]$. Obviously $Y_0=\mu$ and $Y_n=X$.

For each $1\le i\le n$, by the independence of the $X_j$,
\begin{align}
|Y_i-Y_{i-1}|
&=\left|\left(\sum_{j\le i}X_j+\sum_{j>i}\mathbf{E}[X_j]\right)-\left(\sum_{j<i}X_j+\sum_{j\ge i}\mathbf{E}[X_j]\right)\right|\\
&=\left|X_i-\mathbf{E}[X_i]\right|\\
&\le b_i-a_i,
\end{align}
where the last inequality holds because both $X_i$ and $\mathbf{E}[X_i]$ lie in $[a_i,b_i]$.

Apply Azuma's inequality for the martingale $Y_0,Y_1,\ldots,Y_n$ with respect to $X_1,\ldots,X_n$, and Hoeffding's inequality is proved.
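A quick numeric sanity check (my own sketch; the parameters are arbitrary): compare the empirical tail of a sum of independent $\mathrm{Uniform}[0,1]$ variables, for which $a_i=0$, $b_i=1$ and $\sum_i(b_i-a_i)^2=n$, against the bound.

```python
import math, random

# Empirical tail of X = sum of n independent Uniform[0, 1] variables
# versus the Hoeffding bound 2 * exp(-t^2 / (2 * n)).

n, trials, t = 50, 100_000, 10.0
mu = n * 0.5

hits = sum(abs(sum(random.random() for _ in range(n)) - mu) >= t
           for _ in range(trials))
print("empirical Pr[|X - mu| >= t]:", hits / trials)
print("Hoeffding 2*exp(-t^2/(2n)):", 2 * math.exp(-t ** 2 / (2 * n)))
```

The empirical tail is far below the bound here; Hoeffding's inequality is conservative, since it uses only the ranges of the summands and not their variances.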

For arbitrary random variables

Given a sequence of random variables $X_1,\ldots,X_n$ and a function $f$, the Doob sequence constructs a martingale from them. Combining this construction with Azuma's inequality, we can get a very powerful theorem called "the method of averaged bounded differences", which bounds the concentration of an arbitrary function of arbitrary random variables (not necessarily a martingale).

Theorem (Method of averaged bounded differences)
Let $X_1,\ldots,X_n$ be arbitrary random variables and let $f$ be a function of $X_1,\ldots,X_n$ satisfying that, for all $1\le i\le n$,
$\left|\mathbf{E}\left[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i\right]-\mathbf{E}\left[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i-1}\right]\right|\le c_i.$
Then
$\Pr\left[\left|f(X_1,\ldots,X_n)-\mathbf{E}\left[f(X_1,\ldots,X_n)\right]\right|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^n c_i^2}\right).$
Proof.
Define the Doob martingale sequence $Y_0,Y_1,\ldots,Y_n$ by setting $Y_0=\mathbf{E}\left[f(X_1,\ldots,X_n)\right]$ and, for $1\le i\le n$, $Y_i=\mathbf{E}\left[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i\right]$. Then the above theorem is a restatement of Azuma's inequality applied to the martingale $Y_0,Y_1,\ldots,Y_n$.

For independent random variables

The condition of averaged bounded differences is usually hard to check. This severely limits the usefulness of the method. To overcome this, we introduce a property which is much easier to check, called the Lipschitz condition.

Definition (Lipschitz condition)
A function $f(x_1,\ldots,x_n)$ satisfies the Lipschitz condition, if for any $x_1,\ldots,x_n$ and any $y_i$,
$\left|f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)\right|\le 1.$

In other words, the function satisfies the Lipschitz condition if an arbitrary change in the value of any one argument does not change the value of the function by more than 1.

The difference of 1 can be replaced by arbitrary constants, which gives a generalized version of the Lipschitz condition.

Definition (Lipschitz condition, general version)
A function $f(x_1,\ldots,x_n)$ satisfies the Lipschitz condition with constants $c_i$, $1\le i\le n$, if for any $x_1,\ldots,x_n$ and any $y_i$,
$\left|f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)\right|\le c_i.$
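On a small finite domain, the Lipschitz constants can simply be computed by brute force. Here is a toy checker (my own sketch, not part of the notes):

```python
import itertools

# Brute-force the Lipschitz constants c_i of a function f over a small finite
# domain by perturbing one coordinate at a time.

def lipschitz_constants(f, domain, n):
    """c_i = max |f(..., x_i, ...) - f(..., y_i, ...)| over all x and y_i."""
    cs = [0] * n
    for x in itertools.product(domain, repeat=n):
        fx = f(x)
        for i in range(n):
            for y in domain:
                z = x[:i] + (y,) + x[i + 1:]
                cs[i] = max(cs[i], abs(fx - f(z)))
    return cs

# Example: the number of empty bins when 3 balls land in 3 bins is
# 1-Lipschitz in each ball's position (cf. the occupancy problem below).
bins = range(3)
empty = lambda balls: sum(1 for b in bins if b not in balls)
print(lipschitz_constants(empty, bins, 3))  # expect [1, 1, 1]
```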

The following "method of bounded differences" can be developed for functions satisfying the Lipschitz condition. Unfortunately, in order to imply the condition of averaged bounded differences from the Lipschitz condition, we have to restrict the method to independent random variables.

Corollary (Method of bounded differences)
Let $X_1,\ldots,X_n$ be $n$ independent random variables and let $f$ be a function satisfying the Lipschitz condition with constants $c_i$, $1\le i\le n$. Then
$\Pr\left[\left|f(X_1,\ldots,X_n)-\mathbf{E}\left[f(X_1,\ldots,X_n)\right]\right|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^n c_i^2}\right).$
Proof.
For convenience, we denote $X_{i..j}=(X_i,X_{i+1},\ldots,X_j)$ for any $1\le i\le j\le n$.

We first show that the Lipschitz condition with constants $c_i$, $1\le i\le n$, implies another condition called the averaged Lipschitz condition (ALC): for any $a_i,b_i$, $1\le i\le n$,
$\left|\mathbf{E}\left[f(X_{1..n})\mid X_{1..i-1},X_i=a_i\right]-\mathbf{E}\left[f(X_{1..n})\mid X_{1..i-1},X_i=b_i\right]\right|\le c_i.$
And this condition implies the averaged bounded difference condition: for all $1\le i\le n$,
$\left|\mathbf{E}\left[f(X_{1..n})\mid X_{1..i}\right]-\mathbf{E}\left[f(X_{1..n})\mid X_{1..i-1}\right]\right|\le c_i.$
Then by applying the method of averaged bounded differences, the corollary can be proved.

For any $a$, by the law of total expectation,
\begin{align}
\mathbf{E}\left[f(X_{1..n})\mid X_{1..i-1},X_i=a\right]
&=\sum_{a_{i+1},\ldots,a_n}\mathbf{E}\left[f(X_{1..n})\mid X_{1..i-1},X_i=a,X_{i+1..n}=a_{i+1..n}\right]\cdot\Pr\left[X_{i+1..n}=a_{i+1..n}\mid X_{1..i-1},X_i=a\right]\\
&=\sum_{a_{i+1},\ldots,a_n}\mathbf{E}\left[f(X_{1..n})\mid X_{1..i-1},X_i=a,X_{i+1..n}=a_{i+1..n}\right]\cdot\Pr\left[X_{i+1..n}=a_{i+1..n}\right] && \text{(by independence)}\\
&=\sum_{a_{i+1},\ldots,a_n}f(X_1,\ldots,X_{i-1},a,a_{i+1},\ldots,a_n)\cdot\Pr\left[X_{i+1}=a_{i+1},\ldots,X_n=a_n\right].
\end{align}

Let $a=a_i$ and $a=b_i$, and take the difference. Then
\begin{align}
&\left|\mathbf{E}\left[f(X_{1..n})\mid X_{1..i-1},X_i=a_i\right]-\mathbf{E}\left[f(X_{1..n})\mid X_{1..i-1},X_i=b_i\right]\right|\\
&\quad\le\sum_{a_{i+1},\ldots,a_n}\left|f(X_1,\ldots,X_{i-1},a_i,a_{i+1},\ldots,a_n)-f(X_1,\ldots,X_{i-1},b_i,a_{i+1},\ldots,a_n)\right|\cdot\Pr\left[X_{i+1}=a_{i+1},\ldots,X_n=a_n\right]\\
&\quad\le c_i, && \text{(by the Lipschitz condition)}
\end{align}

Thus, the Lipschitz condition is transformed to the ALC. We then deduce the averaged bounded difference condition from the ALC.

By the law of total expectation,
$\mathbf{E}\left[f(X_{1..n})\mid X_{1..i-1}\right]=\sum_{a}\mathbf{E}\left[f(X_{1..n})\mid X_{1..i-1},X_i=a\right]\cdot\Pr\left[X_i=a\mid X_{1..i-1}\right]=\sum_{a}\mathbf{E}\left[f(X_{1..n})\mid X_{1..i-1},X_i=a\right]\cdot\Pr\left[X_i=a\right],$
where the last equation is due to the independence of $X_i$ from $X_{1..i-1}$.
We can trivially write $\mathbf{E}\left[f(X_{1..n})\mid X_{1..i}\right]$ as
$\mathbf{E}\left[f(X_{1..n})\mid X_{1..i}\right]=\sum_{a}\mathbf{E}\left[f(X_{1..n})\mid X_{1..i}\right]\cdot\Pr\left[X_i=a\right].$
Hence, the difference is
\begin{align}
\left|\mathbf{E}\left[f(X_{1..n})\mid X_{1..i}\right]-\mathbf{E}\left[f(X_{1..n})\mid X_{1..i-1}\right]\right|
&\le\sum_{a}\Pr\left[X_i=a\right]\cdot\left|\mathbf{E}\left[f(X_{1..n})\mid X_{1..i}\right]-\mathbf{E}\left[f(X_{1..n})\mid X_{1..i-1},X_i=a\right]\right|\\
&\le\sum_{a}\Pr\left[X_i=a\right]\cdot c_i && \text{(by the ALC)}\\
&=c_i.
\end{align}

The averaged bounded difference condition is implied. Applying the method of averaged bounded differences, the corollary follows.

Applications

Occupancy problem

Throwing $m$ balls uniformly and independently at random into $n$ bins, we ask for the occupancies of the bins by the balls. In particular, we are interested in the number of empty bins.

This problem can be described equivalently as follows. Let $f:[m]\to[n]$ be a uniform random function. We ask for the number of $i\in[n]$ such that $f^{-1}(i)$ is empty.

For any $i\in[n]$, let $X_i$ indicate the emptiness of bin $i$:
$X_i=\begin{cases}1 & \text{if bin } i \text{ is empty},\\ 0 & \text{otherwise}.\end{cases}$
Let $X=\sum_{i=1}^n X_i$ be the number of empty bins.

By the linearity of expectation,
$\mathbf{E}[X]=\sum_{i=1}^n\mathbf{E}[X_i]=\sum_{i=1}^n\Pr\left[\text{bin } i \text{ is empty}\right]=n\left(1-\frac{1}{n}\right)^m.$

We want to know how $X$ deviates from this expectation. The complication here is that the $X_i$ are not independent. So we alternatively look at a sequence of $m$ independent random variables $Y_1,Y_2,\ldots,Y_m$, where $Y_j\in[n]$ represents the bin into which the $j$-th ball falls. Clearly $X$ is a function of $Y_1,Y_2,\ldots,Y_m$.

We then observe that changing the value of any $Y_j$ can change the value of $X$ by at most 1, because moving one ball can empty at most one bin (the one it leaves) and fill at most one bin (the one it enters), so the number of empty bins changes by at most 1. Thus, as a function of the independent random variables $Y_1,Y_2,\ldots,Y_m$, $X$ satisfies the Lipschitz condition. Applying the method of bounded differences, it holds that
$\Pr\left[\left|X-n\left(1-\frac{1}{n}\right)^m\right|\ge t\sqrt{m}\right]\le 2e^{-t^2/2}.$

Thus, for sufficiently large $n$ and $m$, the number of empty bins is tightly concentrated around $n\left(1-\frac{1}{n}\right)^m\approx\frac{n}{e^{m/n}}$.
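A quick simulation (my own sketch; the parameters are illustrative) confirms the concentration:

```python
import math, random

# Simulate m balls thrown into n bins and compare the number of empty bins
# with its expectation n * (1 - 1/n)^m and with the tail bound above.

n, m, trials, t = 100, 100, 10_000, 2.0
expected = n * (1 - 1 / n) ** m

def empty_bins():
    occupied = set(random.randrange(n) for _ in range(m))
    return n - len(occupied)

deviations = [abs(empty_bins() - expected) for _ in range(trials)]
print("expected number of empty bins:", round(expected, 2))
print("empirical Pr[|X - E[X]| >= t*sqrt(m)]:",
      sum(d >= t * math.sqrt(m) for d in deviations) / trials)
print("bound 2*exp(-t^2/2):", round(2 * math.exp(-t ** 2 / 2), 4))
```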

Pattern Matching

Let $X=(X_1,\ldots,X_n)$ be a sequence of characters chosen independently and uniformly at random from an alphabet $\Sigma$, where $m=|\Sigma|$. Let $\pi\in\Sigma^k$ be an arbitrarily fixed string of $k$ characters from $\Sigma$, called a pattern. Let $Y$ be the number of occurrences of the pattern $\pi$ as a substring of the random string $X$.

By the linearity of expectation, it is obvious that
$\mathbf{E}[Y]=(n-k+1)\left(\frac{1}{m}\right)^k.$

We now look at the concentration of $Y$. The complication again lies in the dependencies between the matches. Yet we will see that $Y$ is tightly concentrated around its expectation if $k$ is relatively small compared to $n$.

For a fixed pattern $\pi$, the random variable $Y$ is a function of the independent random variables $(X_1,\ldots,X_n)$. Any character $X_i$ participates in no more than $k$ matches, thus changing the value of any $X_i$ can affect the value of $Y$ by at most $k$, so $Y$ satisfies the Lipschitz condition with constant $k$. Applying the method of bounded differences,
$\Pr\left[\left|Y-\mathbf{E}[Y]\right|\ge tk\sqrt{n}\right]\le 2e^{-t^2/2}.$
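An illustrative simulation (my own sketch, with arbitrary parameter choices):

```python
import random, string

# Count occurrences of a fixed pattern in a uniform random string and compare
# with E[Y] = (n - k + 1) / m^k.

n, trials = 2_000, 500
sigma = string.ascii_lowercase[:4]   # alphabet of size m = 4
m = len(sigma)
pattern = "abab"                     # an arbitrary fixed pattern, k = 4
k = len(pattern)

def occurrences():
    s = "".join(random.choice(sigma) for _ in range(n))
    return sum(1 for i in range(n - k + 1) if s[i:i + k] == pattern)

samples = [occurrences() for _ in range(trials)]
print("E[Y] =", (n - k + 1) / m ** k)          # about 7.8
print("empirical mean:", sum(samples) / trials)
```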

Combining unit vectors

Let $u_1,u_2,\ldots,u_n$ be $n$ unit vectors from some normed space. That is, $\|u_i\|=1$ for any $1\le i\le n$, where $\|\cdot\|$ denotes the vector norm (e.g. $\ell_1$, $\ell_2$, $\ell_\infty$) of the space.

Let $\epsilon_1,\epsilon_2,\ldots,\epsilon_n\in\{-1,+1\}$ be independently chosen, each with $\Pr[\epsilon_i=-1]=\Pr[\epsilon_i=1]=1/2$.

Let
$v=\epsilon_1u_1+\epsilon_2u_2+\cdots+\epsilon_nu_n,$
and
$X=\|v\|.$

This kind of construction is very useful in combinatorial proofs of metric problems. We will show that, by this construction, the random variable $X$ is well concentrated around its mean.

$X$ is a function of the independent random variables $\epsilon_1,\epsilon_2,\ldots,\epsilon_n$. By the triangle inequality for norms, it is easy to verify that changing the sign of a unit vector $u_i$ can change the value of $X$ by at most $2\|u_i\|=2$, thus $X$ satisfies the Lipschitz condition with constant 2. The concentration result follows by applying the method of bounded differences:
$\Pr\left[\left|X-\mathbf{E}[X]\right|\ge 2t\sqrt{n}\right]\le 2e^{-t^2/2}.$
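A final empirical sketch (my own, with illustrative parameters): sample $X=\|\epsilon_1u_1+\cdots+\epsilon_nu_n\|$ for random signs and fixed $\ell_2$ unit vectors, and look at the spread of $X$.

```python
import math, random

# Sample X = ||eps_1 u_1 + ... + eps_n u_n|| for fixed random l2 unit vectors
# and random signs; the deviations of X from its mean should be O(sqrt(n)).

n, dim, trials = 50, 10, 5_000

def rand_unit():
    v = [random.gauss(0, 1) for _ in range(dim)]
    norm = math.sqrt(sum(c * c for c in v))
    return [c / norm for c in v]

us = [rand_unit() for _ in range(n)]   # the fixed unit vectors u_1, ..., u_n

def sample_x():
    eps = [random.choice((-1, 1)) for _ in range(n)]
    v = [sum(e * u[d] for e, u in zip(eps, us)) for d in range(dim)]
    return math.sqrt(sum(c * c for c in v))

xs = [sample_x() for _ in range(trials)]
mean = sum(xs) / trials
print("mean of X:", round(mean, 2))
print("max |X - mean| over all samples:",
      round(max(abs(x - mean) for x in xs), 2))
```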