Randomized Algorithms (Spring 2010)/Martingales
Martingales
Review of conditional expectations
The conditional expectation of a random variable $Y$ with respect to an event $\mathcal{E}$ is defined by
$$\mathbf{E}[Y\mid\mathcal{E}]=\sum_{y}y\Pr[Y=y\mid\mathcal{E}].$$
In particular, if the event $\mathcal{E}$ is $X=a$, the conditional expectation
$$\mathbf{E}[Y\mid X=a]$$
defines a function
$$f(a)=\mathbf{E}[Y\mid X=a].$$
Thus, $\mathbf{E}[Y\mid X]$ can be regarded as a random variable $f(X)$.
- Example
- Suppose that we uniformly sample a human from all human beings. Let $Y$ be his/her height, and let $X$ be the country where he/she is from. For any country $a$, $\mathbf{E}[Y\mid X=a]$ gives the average height of that country. And $\mathbf{E}[Y\mid X]$ is the random variable which can be defined in either of the following two ways:
  - We choose a human uniformly at random from all human beings, and $\mathbf{E}[Y\mid X]$ is the average height of the country where he/she comes from.
  - We choose a country at random with a probability proportional to its population, and $\mathbf{E}[Y\mid X]$ is the average height of the chosen country.
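As a small numerical illustration of this example, the following Python sketch uses made-up toy data (the country names, sizes, and heights are hypothetical, purely for illustration): it computes the average height $\mathbf{E}[Y]$ directly and via the per-country averages $\mathbf{E}[Y\mid X]$ weighted by population, and the two values coincide.

```python
import random

random.seed(0)

# Hypothetical toy data: three "countries" with different population sizes.
countries = {"A": 1000, "B": 3000, "C": 6000}
people = [(c, random.gauss(170 + 5 * i, 6))        # (country, height) pairs
          for i, (c, size) in enumerate(countries.items())
          for _ in range(size)]

# E[Y]: the average height of a uniformly sampled human.
overall = sum(h for _, h in people) / len(people)

# E[Y | X = c]: the average height within each country c.
cond = {c: sum(h for cc, h in people if cc == c) / size
        for c, size in countries.items()}

# E[E[Y | X]]: choose a country with probability proportional to its population,
# and take that country's average height.
tower = sum(size / len(people) * cond[c] for c, size in countries.items())

print(round(overall, 6), round(tower, 6))   # the two values coincide
```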
The following proposition states some fundamental facts about conditional expectation.
Proposition (fundamental facts about conditional expectation)
Let $X$, $Y$ and $Z$ be arbitrary random variables. Let $f$ and $g$ be arbitrary functions. Then
1. $\mathbf{E}[X]=\mathbf{E}\big[\mathbf{E}[X\mid Y]\big]$;
2. $\mathbf{E}[X\mid Z]=\mathbf{E}\big[\mathbf{E}[X\mid Y,Z]\mid Z\big]$;
3. $\mathbf{E}\big[\mathbf{E}[f(X)g(X,Y)\mid X]\big]=\mathbf{E}\big[f(X)\cdot\mathbf{E}[g(X,Y)\mid X]\big]$.
The proposition can be formally verified by computing these expectations. Although these equations look formal, their intuitive interpretations are very clear.
The first equation:
$$\mathbf{E}[X]=\mathbf{E}\big[\mathbf{E}[X\mid Y]\big]$$
says that there are two ways to compute an average. Suppose again that $X$ is the height of a uniformly sampled human and $Y$ is the country he/she is from. Then the average human height can be computed directly, or by first computing the average height of each country and then averaging over the countries, weighted by their populations.
The second equation:
$$\mathbf{E}[X\mid Z]=\mathbf{E}\big[\mathbf{E}[X\mid Y,Z]\mid Z\big]$$
is the same as the first one, restricted to a particular subspace. As in the previous example, in addition to the height $X$ and the country $Y$, let $Z$ be the gender of the sampled human. Conditioning on $Z$ (say, restricting to women), the average height can again be computed either directly or country by country.
The third equation:
$$\mathbf{E}\big[\mathbf{E}[f(X)g(X,Y)\mid X]\big]=\mathbf{E}\big[f(X)\cdot\mathbf{E}[g(X,Y)\mid X]\big]$$
looks obscure at first glance, especially when considering that $X$ and $Y$ are not necessarily independent. Nevertheless, it follows naturally once we note that conditioning on $X$ fixes the value of $f(X)$, so $f(X)$ can be moved out of the inner conditional expectation.
The proposition holds in more general cases, for example when $X$, $Y$ and $Z$ are vectors of random variables, i.e. when the conditioning is on a sequence of random variables.
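For concreteness, here is the direct computation (for discrete random variables) that verifies the first equation:
$$\mathbf{E}\big[\mathbf{E}[X\mid Y]\big]=\sum_{y}\Pr[Y=y]\,\mathbf{E}[X\mid Y=y]=\sum_{y}\Pr[Y=y]\sum_{x}x\Pr[X=x\mid Y=y]=\sum_{x}x\sum_{y}\Pr[X=x,Y=y]=\sum_{x}x\Pr[X=x]=\mathbf{E}[X].$$
The other two equations can be verified by similar calculations.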
Martingales
"Martingale" originally refers to a betting strategy in which the gambler doubles his bet after every loss. Assuming unlimited wealth, this strategy is guaranteed to eventually have a positive net profit. For example, starting from an initial stake 1, after
$n$ losses, if the $(n+1)$-th betting wins, then it gives a net profit of
$$2^n-\sum_{i=1}^{n}2^{i-1}=1,$$
which is a positive number.
However, the assumption of unlimited wealth is unrealistic. With limited wealth and a geometrically increasing bet, the gambler is very likely to end up bankrupt. You should never try this strategy in real life. And remember: gambling is bad!
Suppose that the gambler is allowed to use any strategy. His stake on the next betting is decided based on the results of all the bettings so far. This gives us a highly dependent sequence of random variables $X_0,X_1,\ldots$, where $X_0$ is his initial capital, and $X_i$ represents his capital after the $i$-th betting. Up to different betting strategies, $X_i$ can be arbitrarily dependent on $X_0,\ldots,X_{i-1}$. However, as long as the games are fair, in expectation the capital does not change after each betting:
$$\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]=X_{i-1}.$$
This property is formalized by the following definition.
Definition (martingale):
A sequence of random variables $X_0,X_1,\ldots$ is a martingale if for all $i>0$,
$$\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]=X_{i-1}.$$
- Example (coin flips)
- A fair coin is flipped for a number of times. Let $Z_j\in\{0,1\}$ denote the outcome of the $j$-th flip (1 for HEAD, 0 for TAIL). Let $X_0=0$, and for $i\ge 1$,
$$X_i=\sum_{j\le i}Z_j-\frac{i}{2}.$$
- The random variables $X_0,X_1,\ldots$ define a martingale.
- Proof
- We first observe that $\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]=\mathbf{E}[X_i\mid X_{i-1}]$, which intuitively says that the next number of HEADs depends only on the current number of HEADs. This property is also called the Markov property in stochastic processes. Therefore,
$$\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]=\mathbf{E}[X_i\mid X_{i-1}]=\mathbf{E}\left[X_{i-1}+Z_i-\frac{1}{2}\;\Big|\;X_{i-1}\right]=X_{i-1}+\mathbf{E}[Z_i]-\frac{1}{2}=X_{i-1}.$$
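As a quick sanity check (a simulation sketch, not part of the original notes), the following Python snippet simulates the martingale $X_i=\sum_{j\le i}Z_j-i/2$ and estimates $\mathbf{E}[X_n\mid X_{n-1}]$ empirically by grouping independent runs according to the value of $X_{n-1}$; each estimate is close to the conditioned value, as the martingale property predicts.

```python
import random
from collections import defaultdict

random.seed(1)
n, runs = 20, 200_000

# Group X_n by the value of X_{n-1} over many independent runs,
# to estimate E[X_n | X_{n-1}] empirically.
sums = defaultdict(float)
counts = defaultdict(int)
for _ in range(runs):
    heads = sum(random.randint(0, 1) for _ in range(n - 1))
    x_prev = heads - (n - 1) / 2                        # X_{n-1}
    x_next = heads + random.randint(0, 1) - n / 2       # X_n
    sums[x_prev] += x_next
    counts[x_prev] += 1

for x_prev in sorted(counts):
    if counts[x_prev] >= 1000:                          # skip rarely-seen values
        print(f"X_(n-1) = {x_prev:+.1f}   estimated E[X_n | X_(n-1)] = {sums[x_prev] / counts[x_prev]:+.3f}")
```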
- Example (random walk)
- Consider an infinite grid. A random walk starts from the origin, and at each step moves to one of the four directions with equal probability. Let $X_i$ be the distance of the position from the origin after the $i$-th step, measured by the $\ell_1$-distance (the length of the shortest path on the grid). The sequence $X_0,X_1,\ldots$ does NOT directly define a martingale. However, we can fix it by changing the rule of the random walk a little bit: when the current position is on the same horizontal or vertical line as the origin, with probability 1/2 the random walk moves towards the origin, and with probability 1/2 it moves to one of the three other directions; at all other positions, the rule remains the same (moving to one of the four directions uniformly at random).
- For the fixed random walk, the sequence $X_0,X_1,\ldots$ is a martingale, due to the fact that conditioning on any previous walk, the expected change of the current distance from the origin is zero.
- Example (Polya's urn scheme)
- Consider an urn (just a container) that initially contains $b$ black balls and $w$ white balls. At each step, we uniformly select a ball from the urn, and replace the ball with $c$ balls of the same color. Let $X_0=b/(b+w)$, and let $X_n$ be the fraction of black balls in the urn after the $n$-th step. The sequence $X_0,X_1,\ldots$ is a martingale.
- Example (edge exposure in a random graph)
- Consider a random graph $G$ generated as follows. Let $[n]$ be the set of vertices, and let $[m]$, where $m=\binom{n}{2}$, enumerate all possible edges. For convenience, we denote these potential edges by $e_1,e_2,\ldots,e_m$. For each potential edge $e_j$, we independently flip a fair coin to decide whether the edge $e_j$ appears in $G$. Let $I_j$ be the random variable that indicates whether $e_j\in G$. We are interested in some graph-theoretical parameter, say chromatic number, of the random graph $G$. Let $\chi(G)$ be the chromatic number of $G$. Let $X_0=\mathbf{E}[\chi(G)]$, and for each $i\ge 1$, let $X_i=\mathbf{E}[\chi(G)\mid I_1,\ldots,I_i]$, namely, the expected chromatic number of the random graph after fixing the first $i$ edges. This process is called edge exposure of a random graph, as we are "exposing" the edges one by one in a random graph.
- The sequence $X_0,X_1,\ldots,X_m$ is a martingale. In particular, $X_0=\mathbf{E}[\chi(G)]$ and $X_m=\chi(G)$. The martingale moves from no information to full information (of the random graph $G$) in small steps.
It is nontrivial to formally verify that the edge exposure sequence for a random graph is a martingale. However, we will later see that this construction can be put into a more general context.
Azuma's Inequality
We then introduce a martingale tail inequality, called Azuma's inequality.
Azuma's Inequality:
Let $X_0,X_1,\ldots$ be a martingale such that, for all $k\ge 1$,
$$|X_k-X_{k-1}|\le c_k.$$
Then for all $n\ge 1$ and $t>0$,
$$\Pr\big[|X_n-X_0|\ge t\big]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^{n}c_k^2}\right).$$
Before formally proving this theorem, some comments are in order. First, unlike the Chernoff bounds, there is no assumption of independence. This shows the power of martingale inequalities.
Second, the condition that
$$|X_k-X_{k-1}|\le c_k$$
is central to the proof. This condition is sometimes called the bounded difference condition. If we think of the martingale $X_0,X_1,\ldots$ as a process evolving through time, where $X_i$ gives some measurement at time $i$, the bounded difference condition states that the process does not make big jumps in any single step.
A special case is when the differences are bounded by a constant. The following corollary is directly implied by Azuma's inequality.
Corollary:
Let $X_0,X_1,\ldots$ be a martingale such that, for all $k\ge 1$,
$$|X_k-X_{k-1}|\le c.$$
Then for all $n\ge 1$ and $t>0$,
$$\Pr\big[|X_n-X_0|\ge ct\sqrt{n}\big]\le 2e^{-t^2/2}.$$
This corollary states that for any martingale sequence whose differences are bounded by a constant, the probability that it deviates $\omega(\sqrt{n})$ far away from the starting point after $n$ steps is bounded by $o(1)$.
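To get a feel for the corollary, here is an illustrative Python sketch (not part of the notes): it simulates the coin-flip martingale from the earlier example, whose differences are bounded by $c=1/2$, and compares the empirical tail $\Pr[|X_n-X_0|\ge ct\sqrt{n}]$ with the bound $2e^{-t^2/2}$.

```python
import math
import random

random.seed(2)
n, runs, c = 100, 100_000, 0.5

# X_n = (#HEADs in n flips) - n/2 is a martingale with |X_k - X_{k-1}| = 1/2.
finals = [sum(random.randint(0, 1) for _ in range(n)) - n / 2 for _ in range(runs)]

for t in (1.0, 2.0, 3.0):
    threshold = c * t * math.sqrt(n)
    empirical = sum(abs(x) >= threshold for x in finals) / runs
    bound = 2 * math.exp(-t * t / 2)
    print(f"t = {t}:  Pr[|X_n| >= {threshold:.1f}] ~ {empirical:.4f}   Azuma bound = {bound:.4f}")
# The empirical tail probabilities stay below the corresponding bounds.
```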
The proof of Azuma's Inequality uses several ideas which are used in the proof of the Chernoff bounds. We first observe that the total deviation of the martingale sequence can be represented as the sum of differences in every step. Thus, as with the Chernoff bounds, we are looking for a bound on the deviation of a sum of random variables. The strategy of the proof is almost the same as the proof of the Chernoff bounds: we first apply Markov's inequality to the moment generating function, then we bound the moment generating function, and at last we optimize the parameter of the moment generating function. However, unlike the Chernoff bounds, the martingale differences are not independent any more, so we replace the use of independence in the Chernoff bound by the martingale property. The proof is detailed as follows.
In order to bound the probability of $|X_n-X_0|\ge t$, we first bound the upper tail $\Pr[X_n-X_0\ge t]$. The bound on the lower tail $\Pr[X_n-X_0\le -t]$ follows by symmetry.
Represent the deviation as the sum of differences
We define the martingale difference sequence: for $i\ge 1$, let
$$Y_i=X_i-X_{i-1}.$$
It holds that
$$\mathbf{E}[Y_i\mid X_0,\ldots,X_{i-1}]=\mathbf{E}[X_i-X_{i-1}\mid X_0,\ldots,X_{i-1}]=\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]-\mathbf{E}[X_{i-1}\mid X_0,\ldots,X_{i-1}]=X_{i-1}-X_{i-1}=0.$$
The second to the last equation is due to the fact that $\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]=X_{i-1}$ for any martingale, and that $X_{i-1}$ is determined by the conditioning, hence $\mathbf{E}[X_{i-1}\mid X_0,\ldots,X_{i-1}]=X_{i-1}$.
Let $Z_n$ be the accumulated differences
$$Z_n=\sum_{k=1}^{n}Y_k.$$
The deviation $(X_n-X_0)$ can be computed by the accumulated differences:
$$X_n-X_0=\sum_{k=1}^{n}(X_k-X_{k-1})=\sum_{k=1}^{n}Y_k=Z_n.$$
We then only need to upper bound the probability of the event $Z_n\ge t$.
Apply Markov's inequality to the moment generating function
The event $Z_n\ge t$ is equivalent to the event $e^{\lambda Z_n}\ge e^{\lambda t}$ for any $\lambda>0$. Applying Markov's inequality,
$$\Pr[Z_n\ge t]=\Pr\big[e^{\lambda Z_n}\ge e^{\lambda t}\big]\le\frac{\mathbf{E}\big[e^{\lambda Z_n}\big]}{e^{\lambda t}}.$$
This is exactly the same as what we did to prove the Chernoff bound. Next, we need to bound the moment generating function $\mathbf{E}\big[e^{\lambda Z_n}\big]$.
Bound the moment generating functions
The moment generating function can be bounded as follows:
$$\mathbf{E}\big[e^{\lambda Z_n}\big]=\mathbf{E}\Big[\mathbf{E}\big[e^{\lambda Z_n}\mid X_0,\ldots,X_{n-1}\big]\Big]=\mathbf{E}\Big[\mathbf{E}\big[e^{\lambda(Z_{n-1}+Y_n)}\mid X_0,\ldots,X_{n-1}\big]\Big]=\mathbf{E}\Big[e^{\lambda Z_{n-1}}\cdot\mathbf{E}\big[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\big]\Big].$$
The first and the last equations are due to the fundamental facts about conditional expectation stated in the first section (note that $Z_{n-1}$ is a function of $X_0,\ldots,X_{n-1}$).
We then upper bound $\mathbf{E}\big[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\big]$ by a constant. To do so, we need the following technical lemma, which is proved by exploiting the convexity of the function $e^{\lambda x}$.
Lemma
Let $X$ be a random variable such that $\mathbf{E}[X]=0$ and $|X|\le c$. Then for $\lambda>0$,
$$\mathbf{E}\big[e^{\lambda X}\big]\le e^{\lambda^2c^2/2}.$$
Proof: Observe that for $\lambda>0$, the function $e^{\lambda x}$ of the variable $x$ is convex in the interval $[-c,c]$. We draw a line through the two endpoints $(-c,e^{-\lambda c})$ and $(c,e^{\lambda c})$. By convexity, the curve lies below this line for $x\in[-c,c]$, thus
$$e^{\lambda X}\le\frac{c-X}{2c}e^{-\lambda c}+\frac{c+X}{2c}e^{\lambda c}.$$
Since $\mathbf{E}[X]=0$, we have
$$\mathbf{E}\big[e^{\lambda X}\big]\le\mathbf{E}\left[\frac{c-X}{2c}e^{-\lambda c}+\frac{c+X}{2c}e^{\lambda c}\right]=\frac{e^{-\lambda c}+e^{\lambda c}}{2}.$$
By expanding both sides as Taylor's series, it can be verified that
$$\frac{e^{-\lambda c}+e^{\lambda c}}{2}=\sum_{k\ge 0}\frac{(\lambda c)^{2k}}{(2k)!}\le\sum_{k\ge 0}\frac{(\lambda c)^{2k}}{2^kk!}=e^{\lambda^2c^2/2}.$$
Apply the above lemma to the random variable $(Y_n\mid X_0,\ldots,X_{n-1})$.
We have already shown that its expectation $\mathbf{E}[Y_n\mid X_0,\ldots,X_{n-1}]=0$, and by the bounded difference condition of Azuma's inequality, $|Y_n|=|X_n-X_{n-1}|\le c_n$. Thus, due to the lemma,
$$\mathbf{E}\big[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\big]\le e^{\lambda^2c_n^2/2}.$$
Back to our analysis of the expectation $\mathbf{E}\big[e^{\lambda Z_n}\big]$, we have
$$\mathbf{E}\big[e^{\lambda Z_n}\big]=\mathbf{E}\Big[e^{\lambda Z_{n-1}}\cdot\mathbf{E}\big[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\big]\Big]\le e^{\lambda^2c_n^2/2}\cdot\mathbf{E}\big[e^{\lambda Z_{n-1}}\big].$$
Apply the same analysis to $\mathbf{E}\big[e^{\lambda Z_{n-1}}\big]$, and repeat it for all $\mathbf{E}\big[e^{\lambda Z_k}\big]$, $k=n-1,n-2,\ldots,1$. We obtain
$$\mathbf{E}\big[e^{\lambda Z_n}\big]\le\prod_{k=1}^{n}e^{\lambda^2c_k^2/2}=\exp\left(\frac{\lambda^2\sum_{k=1}^{n}c_k^2}{2}\right).$$
Go back to the Markov's inequality,
$$\Pr[Z_n\ge t]\le\frac{\mathbf{E}\big[e^{\lambda Z_n}\big]}{e^{\lambda t}}\le\exp\left(\frac{\lambda^2\sum_{k=1}^{n}c_k^2}{2}-\lambda t\right).$$
We then only need to choose a proper $\lambda>0$ to make the exponent as small as possible.
Optimization
By choosing $\lambda=\frac{t}{\sum_{k=1}^{n}c_k^2}$, which minimizes the exponent, we have
$$\frac{\lambda^2\sum_{k=1}^{n}c_k^2}{2}-\lambda t=-\frac{t^2}{2\sum_{k=1}^{n}c_k^2}.$$
Thus, the probability
$$\Pr[X_n-X_0\ge t]=\Pr[Z_n\ge t]\le\exp\left(-\frac{t^2}{2\sum_{k=1}^{n}c_k^2}\right).$$
The upper tail of Azuma's inequality is proved. By replacing $X_i$ with $-X_i$, the lower tail $\Pr[X_n-X_0\le -t]$ can be bounded in exactly the same way, and a union bound over the two tails completes the proof.
Applications
- Coin flips
A fair coin is flipped for a number of times. Let $Z_j\in\{0,1\}$ denote the outcome of the $j$-th flip. Let $X_0=0$, and
$$X_i=\sum_{j\le i}Z_j-\frac{i}{2}.$$
As we proved, the random variables $X_0,X_1,\ldots$ define a martingale. Moreover, the differences are bounded: $|X_k-X_{k-1}|=\left|Z_k-\frac{1}{2}\right|=\frac{1}{2}$.
Due to Azuma's inequality:
$$\Pr\big[|X_n-X_0|\ge t\big]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^{n}(1/2)^2}\right)=2e^{-2t^2/n}.$$
- Random walk on a two-dimensional grid
Consider a problem we defined earlier: a random walk on an infinite grid, where at each step the walk moves to one of the four directions chosen uniformly at random. Let $X_n$ be the $\ell_1$-distance of the position from the origin after the $n$-th step. Recall that $X_0,X_1,\ldots$ is not itself a martingale, but for the fixed random walk described earlier it is, and every step changes the distance by exactly 1, so
$$|X_k-X_{k-1}|\le 1$$
for any $k\ge 1$. Due to Azuma's inequality, for the fixed random walk and any $t>0$,
$$\Pr\big[X_n\ge t\sqrt{n}\big]\le\Pr\big[|X_n-X_0|\ge t\sqrt{n}\big]\le 2e^{-t^2/2}.$$
Note that $X_0=0$, so the bound says that with high probability, after $n$ steps the fixed random walk stays within distance $O(\sqrt{n\log n})$ from the origin.
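The following Python sketch (illustrative; in particular, the behaviour at the origin, which the rule above leaves unspecified, is chosen here as a uniform step) simulates the fixed random walk and reports how often its $\ell_1$-distance after $n$ steps exceeds $t\sqrt{n}$, next to the bound $2e^{-t^2/2}$.

```python
import math
import random

random.seed(3)
n, runs = 400, 10_000
DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def fixed_walk_distance(n):
    """l1-distance from the origin after n steps of the 'fixed' random walk."""
    x, y = 0, 0
    for _ in range(n):
        if x == 0 and y == 0:
            dx, dy = random.choice(DIRS)        # at the origin: assumed uniform step
        elif x == 0 or y == 0:
            # On an axis: with prob. 1/2 step towards the origin,
            # otherwise one of the three remaining directions uniformly.
            towards = (-1 if x > 0 else 1, 0) if x != 0 else (0, -1 if y > 0 else 1)
            others = [d for d in DIRS if d != towards]
            dx, dy = towards if random.random() < 0.5 else random.choice(others)
        else:
            dx, dy = random.choice(DIRS)
        x, y = x + dx, y + dy
    return abs(x) + abs(y)

dists = [fixed_walk_distance(n) for _ in range(runs)]
for t in (1.0, 2.0, 3.0):
    empirical = sum(d >= t * math.sqrt(n) for d in dists) / runs
    print(f"t = {t}:  Pr[X_n >= t*sqrt(n)] ~ {empirical:.4f}   bound = {2 * math.exp(-t * t / 2):.4f}")
```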
The Method of Bounded Differences
Generalizations
The martingale can be generalized to be with respect to another sequence of random variables.
Definition (martingale, general version):
A sequence of random variables $Y_0,Y_1,\ldots$ is a martingale with respect to the sequence $X_0,X_1,\ldots$ if, for all $i\ge 0$, the following conditions hold:
- $Y_i$ is a function of $X_0,X_1,\ldots,X_i$;
- $\mathbf{E}[Y_{i+1}\mid X_0,\ldots,X_i]=Y_i$.
Therefore, a sequence $X_0,X_1,\ldots$ is a martingale if it is a martingale with respect to itself.
The purpose of this generalization is that we are usually more interested in a function of a sequence of random variables, rather than the sequence itself.
The Doob martingales
The following definition describes a very general approach for constructing an important type of martingales.
Definition (The Doob sequence):
The Doob sequence of a function $f$ with respect to a sequence of random variables $X_1,\ldots,X_n$ is defined by
$$Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i],\quad 0\le i\le n.$$
In particular, $Y_0=\mathbf{E}[f(X_1,\ldots,X_n)]$ and $Y_n=f(X_1,\ldots,X_n)$.
The Doob sequence of a function defines a martingale. That is,
$$\mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}]=Y_{i-1}$$
for any $0<i\le n$.
To prove this claim, we recall the definition that $Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i]$, thus,
$$\mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}]=\mathbf{E}\big[\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i]\mid X_1,\ldots,X_{i-1}\big]=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i-1}]=Y_{i-1},$$
where the second equation is due to the fundamental fact about conditional expectation introduced in the first section.
The Doob martingale describes a very natural procedure to determine a function value of a sequence of random variables. Suppose that we want to predict the value of a function $f(X_1,\ldots,X_n)$ of random variables $X_1,\ldots,X_n$. The Doob sequence $Y_0,Y_1,\ldots,Y_n$ represents a sequence of refined estimates of the value of $f(X_1,\ldots,X_n)$, gradually using more information on the values of the random variables $X_1,\ldots,X_n$. The first element $Y_0$ is just the expectation of $f(X_1,\ldots,X_n)$. Element $Y_i$ is the expected value of $f(X_1,\ldots,X_n)$ once the values of $X_1,\ldots,X_i$ are known, and $Y_n$ is $f(X_1,\ldots,X_n)$ itself.
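To make the construction concrete, the following Python sketch (an illustration with a toy function, not taken from the notes) builds the Doob sequence for $f(X_1,\ldots,X_n)=\max_iX_i$ of $n$ independent dice rolls, computes each $Y_i=\mathbf{E}[f\mid X_1,\ldots,X_i]$ exactly with rational arithmetic, and checks the martingale property on every prefix.

```python
from fractions import Fraction

FACES = range(1, 7)
n = 4

def doob(prefix):
    """Y_i = E[ max(X_1..X_n) | X_1..X_i = prefix ], computed exactly."""
    m = max(prefix, default=0)
    k = n - len(prefix)                          # number of still-unrevealed dice
    if k == 0:
        return Fraction(m)
    p_le = lambda v: Fraction(v, 6) ** k         # Pr[max of k unrevealed dice <= v]
    exp = Fraction(m) * p_le(m)
    for v in range(m + 1, 7):
        exp += v * (p_le(v) - p_le(v - 1))
    return exp

def check(prefix):
    """Averaging Y_{i+1} over the 6 values of the next die must give Y_i exactly."""
    if len(prefix) == n:
        return
    assert sum(doob(prefix + [x]) for x in FACES) / 6 == doob(prefix)
    for x in FACES:
        check(prefix + [x])

check([])
print("martingale property verified;  Y_0 = E[max of", n, "dice] =", doob([]), "=", float(doob([])))
```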
The following two Doob martingales arise in evaluating the parameters of random graphs.
- Example: edge exposure martingale
- Let $G$ be a random graph on $n$ vertices. Let $f$ be a real-valued function of graphs, such as chromatic number, number of triangles, the size of the largest clique or independent set, etc. Denote that $m=\binom{n}{2}$. Fix an arbitrary numbering of potential edges between the $n$ vertices, and denote the edges as $e_1,\ldots,e_m$. Let
$$X_i=\begin{cases}1 & \text{if }e_i\in G,\\ 0 & \text{otherwise.}\end{cases}$$
- Let $Y_0=\mathbf{E}[f(G)]$, and for $i=1,2,\ldots,m$, let $Y_i=\mathbf{E}[f(G)\mid X_1,\ldots,X_i]$.
- The sequence $Y_0,Y_1,\ldots,Y_m$ gives a Doob martingale that is commonly called the edge exposure martingale.
- Example: vertex exposure martingale
- Instead of revealing edges one at a time, we could reveal the set of edges connected to a given vertex, one vertex at a time. Suppose that the vertex set is $[n]$. Let $G_i$ be the subgraph of $G$ induced by the vertex set $[i]$, i.e. the first $i$ vertices.
- Let $Y_0=\mathbf{E}[f(G)]$, and for $i=1,2,\ldots,n$, let $Y_i=\mathbf{E}[f(G)\mid G_i]$.
- The sequence $Y_0,Y_1,\ldots,Y_n$ gives a Doob martingale that is commonly called the vertex exposure martingale.
Azuma's inequality -- general version
Azuma's inequality can be generalized to a martingale with respect to another sequence.
Azuma's Inequality (general version):
Let $Y_0,Y_1,\ldots$ be a martingale with respect to the sequence $X_0,X_1,\ldots$ such that, for all $k\ge 1$,
$$|Y_k-Y_{k-1}|\le c_k.$$
Then for all $n\ge 1$ and $t>0$,
$$\Pr\big[|Y_n-Y_0|\ge t\big]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^{n}c_k^2}\right).$$
The proof is almost identical to the proof of the original Azuma's inequality. We also work on the sum of the martingale differences (this time the differences are $Y_i-Y_{i-1}$), and the same analysis goes through with the conditioning on $X_0,\ldots,X_{i-1}$ instead of on $Y_0,\ldots,Y_{i-1}$.
- Application: Chromatic number
The random graph $G(n,p)$ is the graph on $n$ vertices $[n]$ in which every pair of distinct vertices is connected by an edge independently with probability $p$. We write $G\sim G(n,p)$ for a graph generated in this way. The chromatic number $\chi(G)$ is the minimum number of colors needed to properly color the vertices of $G$, so that no two adjacent vertices receive the same color. The following theorem shows that the chromatic number of a random graph is tightly concentrated around its expectation.
Theorem [Shamir and Spencer (1987)]
Let $G\sim G(n,p)$. Then
$$\Pr\big[|\chi(G)-\mathbf{E}[\chi(G)]|\ge t\sqrt{n}\big]\le 2e^{-t^2/2}.$$
Proof: Consider the vertex exposure martingale
$$Y_i=\mathbf{E}[\chi(G)\mid G_i],$$
where each $G_i$ is the subgraph of $G$ induced by the first $i$ vertices. Note that the chromatic number is vertex-Lipschitz: changing the edges incident to a single vertex changes $\chi(G)$ by at most 1, since in the worst case the affected vertex can always be given a fresh color. Therefore, the bounded difference condition
$$|Y_i-Y_{i-1}|\le 1$$
is satisfied. Now apply Azuma's inequality (general version) for the martingale $Y_0,Y_1,\ldots,Y_n$ with respect to the vertex exposure:
$$\Pr\big[|Y_n-Y_0|\ge t\sqrt{n}\big]\le 2e^{-t^2/2}.$$
For this Doob martingale, $Y_n=\chi(G)$ and $Y_0=\mathbf{E}[\chi(G)]$, which proves the theorem.
- Application: Hoeffding's Inequality
The following theorem states the so-called Hoeffding's inequality. It is a generalized version of the Chernoff bounds. Recall that the Chernoff bounds hold for the sum of independent trials. When the random variables are not trials, Hoeffding's inequality is useful, since it holds for the sum of any independent random variables whose ranges are bounded.
Hoeffding's inequality
Let $X=\sum_{i=1}^{n}X_i$, where $X_1,\ldots,X_n$ are independent random variables with $a_i\le X_i\le b_i$ for each $1\le i\le n$. Let $\mu=\mathbf{E}[X]$. Then
$$\Pr\big[|X-\mu|\ge t\big]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^{n}(b_i-a_i)^2}\right).$$
Proof: Define the Doob martingale sequence $Y_i=\mathbf{E}\left[\sum_{j=1}^{n}X_j\;\Big|\;X_1,\ldots,X_i\right]$. Obviously $Y_0=\mu$ and $Y_n=X$. Due to the independence of the $X_j$,
$$|Y_i-Y_{i-1}|=\left|\sum_{j\le i}X_j+\sum_{j>i}\mathbf{E}[X_j]-\sum_{j\le i-1}X_j-\sum_{j>i-1}\mathbf{E}[X_j]\right|=|X_i-\mathbf{E}[X_i]|\le b_i-a_i.$$
Apply Azuma's inequality (general version) for the martingale $Y_0,Y_1,\ldots,Y_n$ with respect to $X_1,\ldots,X_n$; Hoeffding's inequality is proved.
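As a numerical illustration (a sketch, not part of the notes), the following Python snippet sums $n$ independent $\mathrm{Uniform}[0,1]$ random variables (so $a_i=0$, $b_i=1$) and compares the empirical tail with the bound $2\exp\big(-t^2/(2n)\big)$; the bound is loose here, but it is never violated.

```python
import math
import random

random.seed(4)
n, runs = 100, 200_000
mu = n * 0.5                                    # E[X] for X = sum of n Uniform[0,1]

totals = [sum(random.random() for _ in range(n)) for _ in range(runs)]

for t in (6.0, 12.0, 20.0):
    empirical = sum(abs(x - mu) >= t for x in totals) / runs
    bound = 2 * math.exp(-t * t / (2 * n))      # 2 exp(-t^2 / (2 sum (b_i - a_i)^2))
    print(f"t = {t}:  Pr[|X - mu| >= t] ~ {empirical:.5f}   Hoeffding bound = {bound:.5f}")
```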
For arbitrary random variables
Given a sequence of random variables $X_1,\ldots,X_n$ and a function $f$, the Doob sequence constructs a martingale from them. Combining this construction with Azuma's inequality, we can get a very powerful theorem called the method of averaged bounded differences.
Theorem (Method of averaged bounded differences):
Let $\boldsymbol{X}=(X_1,\ldots,X_n)$ be arbitrary random variables and let $f$ be a function of $X_1,\ldots,X_n$ satisfying, for all $1\le i\le n$, that
$$\big|\mathbf{E}[f(\boldsymbol{X})\mid X_1,\ldots,X_i]-\mathbf{E}[f(\boldsymbol{X})\mid X_1,\ldots,X_{i-1}]\big|\le c_i.$$
Then
$$\Pr\big[|f(\boldsymbol{X})-\mathbf{E}[f(\boldsymbol{X})]|\ge t\big]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^{n}c_i^2}\right).$$
Proof
Define the Doob martingale sequence $Y_0,Y_1,\ldots,Y_n$ by setting $Y_0=\mathbf{E}[f(X_1,\ldots,X_n)]$ and, for $1\le i\le n$, $Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i]$. Then the above theorem is a restatement of Azuma's inequality (general version) applied to the martingale $Y_0,Y_1,\ldots,Y_n$ with respect to $X_1,\ldots,X_n$.
For independent random variables
The condition of bounded averaged differences is usually hard to check. This severely limits the usefulness of the method. To overcome this, we introduce a property which is much easier to check, called the Lipschitz condition.
Definition (Lipschitz condition):
A function $f(x_1,\ldots,x_n)$ satisfies the Lipschitz condition if, for any $x_1,\ldots,x_n$, any $1\le i\le n$, and any $y_i$,
$$\big|f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)\big|\le 1.$$
In other words, the function satisfies the Lipschitz condition if an arbitrary change in the value of any one argument does not change the value of the function by more than 1.
The difference of 1 can be replaced by arbitrary constants, which gives a generalized version of the Lipschitz condition.
Definition (Lipschitz condition, general version):
A function $f(x_1,\ldots,x_n)$ satisfies the Lipschitz condition with constants $c_i$, $1\le i\le n$, if for any $x_1,\ldots,x_n$, any $1\le i\le n$, and any $y_i$,
$$\big|f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)\big|\le c_i.$$
The following "method of bounded differences" can be developed for functions satisfying the Lipschitz condition. Unfortunately, in order to imply the condition of averaged bounded differences from the Lipschitz condition, we have to restrict the method to independent random variables.
Corollary (Method of bounded differences):
Let $\boldsymbol{X}=(X_1,\ldots,X_n)$ be $n$ independent random variables and let $f$ be a function satisfying the Lipschitz condition with constants $c_i$, $1\le i\le n$. Then
$$\Pr\big[|f(\boldsymbol{X})-\mathbf{E}[f(\boldsymbol{X})]|\ge t\big]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^{n}c_i^2}\right).$$
Proof: For convenience, we denote $\boldsymbol{X}_{[i,j]}=(X_i,X_{i+1},\ldots,X_j)$ for any $1\le i\le j\le n$.
We first show that the Lipschitz condition with constants $c_i$, $1\le i\le n$, implies another condition called the averaged Lipschitz condition (ALC): for any $a_i,b_i$ and any $1\le i\le n$,
$$\big|\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a_i]-\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=b_i]\big|\le c_i.$$
And this condition implies the averaged bounded difference condition: for all $1\le i\le n$,
$$\big|\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}]-\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}]\big|\le c_i.$$
Then by applying the method of averaged bounded differences, the corollary can be proved.
For any $a$, by the law of total expectation and the independence of the $X_j$,
$$\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a]=\sum_{a_{i+1},\ldots,a_n}f(\boldsymbol{X}_{[1,i-1]},a,a_{i+1},\ldots,a_n)\cdot\Pr\big[\boldsymbol{X}_{[i+1,n]}=(a_{i+1},\ldots,a_n)\big].$$
Let $a=a_i$ and $a=b_i$ respectively, and subtract the two resulting expressions term by term. By the Lipschitz condition, the corresponding terms differ by at most $c_i$ times the probability, and the probabilities sum to 1, so
$$\big|\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a_i]-\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=b_i]\big|\le c_i.$$
Thus, the Lipschitz condition is transformed to the ALC. We then deduce the averaged bounded difference condition from the ALC.
By the law of total expectation,
$$\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}]=\sum_{a}\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a]\cdot\Pr[X_i=a\mid \boldsymbol{X}_{[1,i-1]}].$$
We can trivially write $\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}]$ as
$$\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}]=\sum_{a}\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}]\cdot\Pr[X_i=a\mid \boldsymbol{X}_{[1,i-1]}].$$
Hence, the difference is
$$\big|\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}]-\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}]\big|\le\sum_{a}\big|\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}]-\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a]\big|\cdot\Pr[X_i=a\mid \boldsymbol{X}_{[1,i-1]}]\le c_i,$$
where the last inequality is due to the ALC, since $\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}]=\mathbf{E}[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i]$.
The averaged bounded difference condition is implied. Applying the method of averaged bounded differences, the corollary follows.
Applications
Occupancy problem
Throwing $m$ balls uniformly and independently into $n$ bins, we ask for the occupancy of bins by the balls. In particular, we are interested in the number of empty bins.
This problem can be described equivalently as follows. Let $f:[m]\to[n]$ be a uniform random function from $[m]$ to $[n]$. We ask for the number of $i\in[n]$ such that $f^{-1}(i)=\emptyset$.
For any $i\in[n]$, let $X_i$ indicate whether bin $i$ is empty, i.e.
$$X_i=\begin{cases}1 & \text{if bin }i\text{ is empty},\\ 0 & \text{otherwise.}\end{cases}$$
Then $\mathbf{E}[X_i]=\Pr[\text{bin }i\text{ is empty}]=\left(1-\frac{1}{n}\right)^m$.
By the linearity of expectation, the number of empty bins $X=\sum_{i=1}^{n}X_i$ has expectation
$$\mathbf{E}[X]=\sum_{i=1}^{n}\mathbf{E}[X_i]=n\left(1-\frac{1}{n}\right)^m.$$
We want to know how $X$ deviates from this expectation. The complication here is that the $X_i$ are not independent. So we alternatively look at a sequence of independent random variables $Y_1,\ldots,Y_m$, where $Y_j\in[n]$ represents the bin into which the $j$-th ball falls, so that $X$ is a function of $Y_1,\ldots,Y_m$.
We then observe that changing the value of any single $Y_j$ (i.e. moving one ball from one bin to another) can change the value of $X$ by at most 1, so $X$, viewed as a function of $Y_1,\ldots,Y_m$, satisfies the Lipschitz condition. Applying the method of bounded differences,
$$\Pr\left[\left|X-n\left(1-\frac{1}{n}\right)^m\right|\ge t\sqrt{m}\right]\le 2e^{-t^2/2}.$$
Thus, for sufficiently large $n$ and $m$, the number of empty bins is tightly concentrated around its mean $n\left(1-\frac{1}{n}\right)^m\approx\frac{n}{e^{m/n}}$.
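A quick simulation (illustrative Python, not from the notes) of the occupancy problem: throw $m$ balls into $n$ bins, count the empty bins, and compare the empirical mean and fluctuations of the count with $n(1-1/n)^m$ and with the deviation scale $\sqrt{m}$ appearing in the bound.

```python
import math
import random
import statistics

random.seed(5)
n, m, runs = 1000, 1000, 10_000
predicted = n * (1 - 1 / n) ** m                # E[#empty bins] = n (1 - 1/n)^m

def empty_bins(n, m):
    """Throw m balls into n bins uniformly at random; return the number of empty bins."""
    occupied = {random.randrange(n) for _ in range(m)}
    return n - len(occupied)

samples = [empty_bins(n, m) for _ in range(runs)]
print(f"predicted mean {predicted:.1f},  empirical mean {statistics.mean(samples):.1f}")
print(f"empirical std {statistics.stdev(samples):.1f}  (the bound only promises fluctuations of order sqrt(m) = {math.sqrt(m):.1f})")

for t in (0.5, 1.0, 2.0):
    empirical = sum(abs(x - predicted) >= t * math.sqrt(m) for x in samples) / runs
    print(f"t = {t}:  Pr[|X - E[X]| >= t*sqrt(m)] ~ {empirical:.4f}   bound = {2 * math.exp(-t * t / 2):.4f}")
```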
Pattern Matching
Let $X=(X_1,\ldots,X_n)$ be a random string of length $n$, where each character $X_i$ is uniformly and independently chosen from a finite alphabet $\Sigma$. Let $\pi\in\Sigma^k$ be an arbitrarily fixed string of length $k$ over $\Sigma$, called a pattern. Let $Y$ be the number of occurrences of the pattern $\pi$ as a substring of the random string $X$.
By the linearity of expectation, it is obvious that
$$\mathbf{E}[Y]=(n-k+1)\left(\frac{1}{|\Sigma|}\right)^k.$$
We now look at the concentration of $Y$. The complication is that the occurrences of the pattern at overlapping positions are not independent. Nevertheless, $Y$ is well concentrated around its mean.
For a fixed pattern $\pi$, the random variable $Y$ is a function of the independent random variables $(X_1,\ldots,X_n)$. Each character $X_i$ participates in at most $k$ of the possible occurrences, so changing the value of a single $X_i$ can change the value of $Y$ by at most $k$. Thus $Y$ satisfies the Lipschitz condition with constant $k$, and by the method of bounded differences,
$$\Pr\big[|Y-\mathbf{E}[Y]|\ge tk\sqrt{n}\big]\le 2e^{-t^2/2}.$$
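The following Python sketch (illustrative; the alphabet and pattern are arbitrary choices) counts occurrences of a fixed pattern in a random string and compares the empirical mean and fluctuations of the count with $\mathbf{E}[Y]=(n-k+1)|\Sigma|^{-k}$ and the deviation scale $k\sqrt{n}$ from the bound.

```python
import math
import random
import statistics

random.seed(6)
sigma = "ab"                       # alphabet (arbitrary choice)
pattern = "abab"                   # an arbitrary fixed pattern of length k
n, k, runs = 2000, len(pattern), 5_000

def occurrences(s, p):
    """Number of (possibly overlapping) occurrences of p as a substring of s."""
    return sum(s[i:i + len(p)] == p for i in range(len(s) - len(p) + 1))

samples = [occurrences("".join(random.choice(sigma) for _ in range(n)), pattern)
           for _ in range(runs)]

predicted = (n - k + 1) / len(sigma) ** k
print(f"predicted mean {predicted:.1f},  empirical mean {statistics.mean(samples):.1f}")
print(f"empirical std {statistics.stdev(samples):.1f}  (deviation scale in the bound: k*sqrt(n) = {k * math.sqrt(n):.1f})")
```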
Combining unit vectors
Let $u_1,\ldots,u_n\in\mathbb{R}^m$ be $n$ unit vectors in an $m$-dimensional Euclidean space, i.e. $\|u_i\|=1$ for every $1\le i\le n$.
Let $\epsilon_1,\ldots,\epsilon_n\in\{-1,+1\}$ be independent random signs, each of which is uniformly chosen from $\{-1,+1\}$.
Let
$$v=\epsilon_1u_1+\epsilon_2u_2+\cdots+\epsilon_nu_n,$$
and
$$X=\|v\|.$$
This kind of construction is very useful in combinatorial proofs of metric problems. We will show that by this construction, the random variable $X$ is well concentrated around its mean. The value of $X$ is a function of the independent random signs $\epsilon_1,\ldots,\epsilon_n$. Flipping a single sign $\epsilon_i$ changes $v$ by $2\epsilon_iu_i$, hence, by the triangle inequality, it changes $X=\|v\|$ by at most $\|2u_i\|=2$. Thus $X$ satisfies the Lipschitz condition with constant 2, and by the method of bounded differences,
$$\Pr\big[|X-\mathbf{E}[X]|\ge 2t\sqrt{n}\big]\le 2e^{-t^2/2}.$$
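A small Python experiment (illustrative; the unit vectors are sampled at random here, although the statement holds for any fixed unit vectors) samples the signs $\epsilon_i$, computes $X=\|\epsilon_1u_1+\cdots+\epsilon_nu_n\|$, and shows that its fluctuations are much smaller than its typical value, in line with the bound above.

```python
import math
import random
import statistics

random.seed(7)
n, m, runs = 100, 20, 2_000

def random_unit_vector(m):
    v = [random.gauss(0, 1) for _ in range(m)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

units = [random_unit_vector(m) for _ in range(n)]    # fixed unit vectors u_1, ..., u_n

def combined_norm(units):
    """X = || sum_i eps_i u_i || for fresh uniform random signs eps_i."""
    total = [0.0] * m
    for u in units:
        eps = random.choice((-1.0, 1.0))
        for j in range(m):
            total[j] += eps * u[j]
    return math.sqrt(sum(x * x for x in total))

samples = [combined_norm(units) for _ in range(runs)]
print(f"empirical mean of X: {statistics.mean(samples):.2f}   (compare sqrt(n) = {math.sqrt(n):.2f})")
print(f"empirical std  of X: {statistics.stdev(samples):.2f}   (deviation scale in the bound: 2*sqrt(n) = {2 * math.sqrt(n):.2f})")
```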