Randomized Algorithms (Spring 2014)/Chernoff Bound


The Chernoff Bound

Suppose that we have a fair coin. If we toss it once, then the outcome is completely unpredictable. But if we toss it, say, 1000 times, then the number of HEADs is very likely to be around 500. This striking phenomenon, illustrated in the figure, is called concentration. The Chernoff bound captures the concentration of independent trials.

[Figure: Coinflip.png]

The Chernoff bound is also a tail bound for the sum of independent random variables, and it can give us exponentially sharp bounds.
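
As a quick numerical illustration, here is a minimal simulation sketch (parameters chosen arbitrarily) estimating how often the number of HEADs in 1000 tosses falls within 5% of 500.

  import random

  def count_heads(n_tosses):
      """Number of HEADs in n_tosses independent fair coin flips."""
      return sum(random.random() < 0.5 for _ in range(n_tosses))

  trials = 10000
  within = sum(abs(count_heads(1000) - 500) <= 25 for _ in range(trials))
  print(f"fraction of trials with #HEADs in [475, 525]: {within / trials:.3f}")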

Before proving the Chernoff bound, we should talk about the moment generating functions.

Moment generating functions

The more we know about the moments of a random variable $X$, the more information we have about $X$. There is a so-called moment generating function, which "packs" all the information about the moments of $X$ into one function.

Definition
The moment generating function of a random variable $X$ is defined as $\mathbf{E}\left[e^{\lambda X}\right]$, where $\lambda$ is the parameter of the function.

By Taylor's expansion and the linearity of expectations,

\[\mathbf{E}\left[e^{\lambda X}\right]=\mathbf{E}\left[\sum_{k=0}^{\infty}\frac{\lambda^k}{k!}X^k\right]=\sum_{k=0}^{\infty}\frac{\lambda^k}{k!}\mathbf{E}\left[X^k\right].\]

The moment generating function $\mathbf{E}\left[e^{\lambda X}\right]$ is a function of $\lambda$.
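
To see the "packing" concretely, here is a small sketch (assuming the sympy library is available) that recovers the moments of a Bernoulli($p$) variable by differentiating its moment generating function at $\lambda=0$.

  import sympy as sp

  lam, p = sp.symbols('lambda p', positive=True)

  # MGF of a Bernoulli(p) random variable: E[e^(lambda*X)] = (1 - p) + p*e^lambda.
  M = (1 - p) + p * sp.exp(lam)

  # The k-th moment E[X^k] is the k-th derivative of the MGF evaluated at lambda = 0.
  for k in range(1, 4):
      print(f"E[X^{k}] =", sp.simplify(sp.diff(M, lam, k).subs(lam, 0)))  # each equals p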

The Chernoff bound

The Chernoff bounds are exponentially sharp tail inequalities for the sum of independent trials. The bounds are obtained by applying Markov's inequality to the moment generating function of the sum of independent trials, with an appropriate choice of the parameter $\lambda$.

Chernoff bound (the upper tail)
Let $X=\sum_{i=1}^n X_i$, where $X_1,X_2,\ldots,X_n$ are independent Poisson trials. Let $\mu=\mathbf{E}[X]$.
Then for any $\delta>0$,
\[\Pr[X\ge(1+\delta)\mu]\le\left(\frac{e^{\delta}}{(1+\delta)^{(1+\delta)}}\right)^{\mu}.\]
Proof.
For any $\lambda>0$, $X\ge(1+\delta)\mu$ is equivalent to $e^{\lambda X}\ge e^{\lambda(1+\delta)\mu}$, thus

\[\Pr[X\ge(1+\delta)\mu]=\Pr\left[e^{\lambda X}\ge e^{\lambda(1+\delta)\mu}\right]\le\frac{\mathbf{E}\left[e^{\lambda X}\right]}{e^{\lambda(1+\delta)\mu}},\]

where the last step follows by Markov's inequality.

Computing the moment generating function $\mathbf{E}\left[e^{\lambda X}\right]$:

Let $p_i=\Pr[X_i=1]$ for $i=1,2,\ldots,n$. Then,

\[\mu=\mathbf{E}[X]=\sum_{i=1}^n\mathbf{E}[X_i]=\sum_{i=1}^n p_i.\]

We bound the moment generating function for each individual $X_i$ as follows.

\[\mathbf{E}\left[e^{\lambda X_i}\right]=p_i\cdot e^{\lambda\cdot 1}+(1-p_i)\cdot e^{\lambda\cdot 0}=1+p_i\left(e^{\lambda}-1\right)\le e^{p_i(e^{\lambda}-1)},\]

where in the last step we apply the Taylor expansion $e^{y}\ge 1+y$ with $y=p_i(e^{\lambda}-1)\ge 0$. (By doing this, we can transform the product of the individual bounds into a sum of the $p_i$'s, which is $\mu$.)

Therefore, by the independence of $X_1,X_2,\ldots,X_n$,

\[\mathbf{E}\left[e^{\lambda X}\right]=\mathbf{E}\left[\prod_{i=1}^n e^{\lambda X_i}\right]=\prod_{i=1}^n\mathbf{E}\left[e^{\lambda X_i}\right]\le\prod_{i=1}^n e^{p_i(e^{\lambda}-1)}=e^{(e^{\lambda}-1)\sum_{i=1}^n p_i}=e^{(e^{\lambda}-1)\mu}.\]

Thus, we have shown that for any $\lambda>0$,

\[\Pr[X\ge(1+\delta)\mu]\le\frac{\mathbf{E}\left[e^{\lambda X}\right]}{e^{\lambda(1+\delta)\mu}}\le\frac{e^{(e^{\lambda}-1)\mu}}{e^{\lambda(1+\delta)\mu}}=\left(\frac{e^{e^{\lambda}-1}}{e^{\lambda(1+\delta)}}\right)^{\mu}.\]

For any $\delta>0$, we can let $\lambda=\ln(1+\delta)>0$ to get

\[\Pr[X\ge(1+\delta)\mu]\le\left(\frac{e^{\delta}}{(1+\delta)^{(1+\delta)}}\right)^{\mu}.\]

The idea of the proof is actually quite clear: we apply Markov's inequality to $e^{\lambda X}$ and, for the rest, we just estimate the moment generating function $\mathbf{E}\left[e^{\lambda X}\right]$. To make the bound as tight as possible, we minimize $\frac{e^{(e^{\lambda}-1)\mu}}{e^{\lambda(1+\delta)\mu}}$ by setting $\lambda=\ln(1+\delta)$, which can be justified by taking derivatives of $\frac{e^{(e^{\lambda}-1)\mu}}{e^{\lambda(1+\delta)\mu}}$.
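
For completeness, the choice $\lambda=\ln(1+\delta)$ follows from a short calculation: taking the logarithm of the bound,
\[g(\lambda)=\ln\frac{e^{(e^{\lambda}-1)\mu}}{e^{\lambda(1+\delta)\mu}}=\mu\left(e^{\lambda}-1\right)-\lambda(1+\delta)\mu,\qquad g'(\lambda)=\mu e^{\lambda}-(1+\delta)\mu.\]
Setting $g'(\lambda)=0$ gives $e^{\lambda}=1+\delta$, i.e. $\lambda=\ln(1+\delta)$; since $g''(\lambda)=\mu e^{\lambda}>0$, this is indeed the minimizer.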


We then proceed to the lower tail, the probability that the random variable $X$ deviates below the mean value $\mu$:

Chernoff bound (the lower tail)
Let $X=\sum_{i=1}^n X_i$, where $X_1,X_2,\ldots,X_n$ are independent Poisson trials. Let $\mu=\mathbf{E}[X]$.
Then for any $0<\delta<1$,
\[\Pr[X\le(1-\delta)\mu]\le\left(\frac{e^{-\delta}}{(1-\delta)^{(1-\delta)}}\right)^{\mu}.\]
Proof.
For any $\lambda<0$, by the same analysis as in the upper tail version,

\[\Pr[X\le(1-\delta)\mu]=\Pr\left[e^{\lambda X}\ge e^{\lambda(1-\delta)\mu}\right]\le\frac{\mathbf{E}\left[e^{\lambda X}\right]}{e^{\lambda(1-\delta)\mu}}\le\left(\frac{e^{e^{\lambda}-1}}{e^{\lambda(1-\delta)}}\right)^{\mu}.\]

For any $0<\delta<1$, we can let $\lambda=\ln(1-\delta)<0$ to get

\[\Pr[X\le(1-\delta)\mu]\le\left(\frac{e^{-\delta}}{(1-\delta)^{(1-\delta)}}\right)^{\mu}.\]


Some useful special forms of the bounds can be derived directly from the above general forms. We now see more clearly why the bounds are said to be exponentially sharp.

Useful forms of the Chernoff bound
Let $X=\sum_{i=1}^n X_i$, where $X_1,X_2,\ldots,X_n$ are independent Poisson trials. Let $\mu=\mathbf{E}[X]$. Then
1. for $0<\delta\le 1$,
\[\Pr[X\ge(1+\delta)\mu]\le e^{-\mu\delta^2/3};\qquad\Pr[X\le(1-\delta)\mu]\le e^{-\mu\delta^2/2};\]
2. for $t\ge 2e\mu$,
\[\Pr[X\ge t]\le 2^{-t}.\]
Proof.
To obtain the bounds in (1), we need to show that for $0<\delta\le 1$, $\frac{e^{\delta}}{(1+\delta)^{(1+\delta)}}\le e^{-\delta^2/3}$ and $\frac{e^{-\delta}}{(1-\delta)^{(1-\delta)}}\le e^{-\delta^2/2}$. We can verify both inequalities by standard analysis techniques.

To obtain the bound in (2), let $t=(1+\delta)\mu$. Then $\delta=t/\mu-1\ge 2e-1$. Hence,

\[\Pr[X\ge(1+\delta)\mu]\le\left(\frac{e^{\delta}}{(1+\delta)^{(1+\delta)}}\right)^{\mu}\le\left(\frac{e}{1+\delta}\right)^{(1+\delta)\mu}\le\left(\frac{e}{2e}\right)^{(1+\delta)\mu}=2^{-t}.\]
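
As a numerical sanity check, the following small sketch (arbitrary parameters) compares the exact upper tail of a fair-coin sum with the bound $e^{-\mu\delta^2/3}$ from (1).

  from math import comb, exp

  def binomial_upper_tail(n, p, t):
      """Exact Pr[X >= t] for X ~ Binomial(n, p)."""
      return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t, n + 1))

  n, p = 1000, 0.5
  mu = n * p                        # mu = 500
  delta = 0.1                       # threshold (1 + delta) * mu = 550
  exact = binomial_upper_tail(n, p, round((1 + delta) * mu))
  bound = exp(-mu * delta**2 / 3)   # useful form (1) of the Chernoff bound
  print(f"exact tail Pr[X >= 550] = {exact:.3e}")
  print(f"Chernoff bound          = {bound:.3e}")
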

Balls into bins, revisited

Throwing $m$ balls uniformly and independently into $n$ bins, what is the maximum load of all bins with high probability? In the last class, we gave an analysis of this problem by using a counting argument.

Now we give a more "advanced" analysis by using Chernoff bounds.


For any $i\in[n]$ and $j\in[m]$, let $X_{ij}$ be the indicator variable for the event that ball $j$ is thrown to bin $i$. Obviously

\[\mathbf{E}[X_{ij}]=\Pr[\text{ball }j\text{ is thrown to bin }i]=\frac{1}{n}.\]

Let $Y_i=\sum_{j\in[m]}X_{ij}$ be the load of bin $i$.


Then the expected load of bin $i$ is

\[\mu=\mathbf{E}[Y_i]=\sum_{j\in[m]}\mathbf{E}[X_{ij}]=\frac{m}{n}.\]

For the case $m=n$, it holds that $\mu=1$.

Note that $Y_i$ is a sum of $m$ mutually independent indicator variables. Applying the Chernoff bound, for any particular bin $i\in[n]$,

\[\Pr[Y_i>(1+\delta)\mu]\le\left(\frac{e^{\delta}}{(1+\delta)^{(1+\delta)}}\right)^{\mu}.\]

When $m=n$

When $m=n$, $\mu=1$. Write $c=1+\delta$. The above bound can be written as

\[\Pr[Y_i>c]\le\frac{e^{c-1}}{c^c}.\]

Let $c=\frac{e\ln n}{\ln\ln n}$; we evaluate $\frac{e^{c-1}}{c^c}$ by taking the logarithm of its reciprocal:

\[\ln\frac{c^c}{e^{c-1}}=c\ln c-c+1\ge c(\ln c-1)=e\ln n\left(1-\frac{\ln\ln\ln n}{\ln\ln n}\right)\ge 2\ln n\]

for sufficiently large $n$. Thus,

\[\Pr\left[Y_i>\frac{e\ln n}{\ln\ln n}\right]\le\frac{e^{c-1}}{c^c}\le\frac{1}{n^2}.\]

Applying the union bound, the probability that there exists a bin with load $>\frac{e\ln n}{\ln\ln n}$ is

\[n\cdot\Pr\left[Y_1>\frac{e\ln n}{\ln\ln n}\right]\le\frac{1}{n}.\]

Therefore, for $m=n$, with high probability, the maximum load is $O\left(\frac{\ln n}{\ln\ln n}\right)$.

For larger $m$

When $m\ge n\ln n$, then according to $\mu=\frac{m}{n}$, $\mu\ge\ln n$.

We can apply an easier form of the Chernoff bound,

\[\Pr\left[Y_i>2e\frac{m}{n}\right]=\Pr[Y_i>2e\mu]\le 2^{-2e\mu}\le 2^{-2e\ln n}<\frac{1}{n^2}.\]

By the union bound, the probability that there exists a bin with load $>2e\frac{m}{n}$ is

\[n\cdot\Pr\left[Y_1>2e\frac{m}{n}\right]\le\frac{1}{n}.\]

Therefore, for $m\ge n\ln n$, with high probability, the maximum load is $O\left(\frac{m}{n}\right)$.
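
Both regimes are easy to observe experimentally. The following small simulation sketch (arbitrary parameters) throws $m$ balls into $n$ bins and reports the maximum load.

  import random
  from collections import Counter

  def max_load(m, n):
      """Throw m balls into n bins uniformly and independently; return the maximum load."""
      loads = Counter(random.randrange(n) for _ in range(m))
      return max(loads.values())

  n = 10000
  print("m = n      :", max_load(n, n))              # typically about ln n / ln ln n
  print("m = n ln n :", max_load(int(n * 9.21), n))  # 9.21 ~ ln(10000); typically O(m/n)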

Packet Routing

The problem arises from parallel computing. Suppose that we have $N$ processors, connected by a communication network. The processors communicate with each other by sending and receiving packets through the network. We consider the following packet routing problem:

  • Every processor is sending a packet to a unique destination. Therefore for the set $[N]$ of processors, the destinations are given by a permutation $\pi$ of $[N]$, such that for every processor $i$, the processor $i$ is sending a packet to processor $\pi(i)$.
  • The communication is synchronized, such that for each round, every link (an edge of the graph) can forward at most one packet.

With a complete graph as the network, for any permutation of $[N]$, all packets can be routed to their destinations in parallel within one round of communication. However, such ideal connectivity is usually not available in reality, either because it is too expensive or because it is physically impossible. We are interested in the case where the graph is sparse, such that the number of edges is significantly smaller than in the complete graph, yet the distance between any pair of vertices is small, so that packets can be efficiently routed between any pair of vertices.

Routing on a hypercube

A hypercube (sometimes called a Boolean cube, Hamming cube, or just cube) is defined over $N$ nodes, for $N$ a power of 2. We assume that $N=2^d$. A hypercube of $d$ dimensions, or a $d$-cube, is an undirected graph with the vertex set $\{0,1\}^d$, such that for any $u,v\in\{0,1\}^d$, $u$ and $v$ are adjacent if and only if $h(u,v)=1$, where $h(u,v)$ is the Hamming distance between $u$ and $v$.

A $d$-cube is a $d$-regular graph over $N=2^d$ vertices. For any pair $(u,v)$ of vertices, the distance between $u$ and $v$ is at most $d$. (How do we know this? Because it takes at most $d$ steps to fix any binary string of length $d$ bit-by-bit into any other.) This directly gives us the following very natural routing algorithm.

Bit-Fixing Routing Algorithm

For each packet:

  1. Let $u,v\in\{0,1\}^d$ be the origin and destination of the packet respectively.
  2. For $i=1$ to $d$, do:
     if $u_i\neq v_i$ then traverse the edge $(v_1,\ldots,v_{i-1},u_i,u_{i+1},\ldots,u_d)\rightarrow(v_1,\ldots,v_{i-1},v_i,u_{i+1},\ldots,u_d)$.
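
The bit-fixing rule translates directly into code. Below is a minimal Python sketch (node labels are assumed to be $d$-bit integers, with bit $i$ taken from the lowest-order position).

  def bit_fixing_route(u, v, d):
      """Bit-fixing route from node u to node v in the d-cube, as a list of
      directed edges (current_node, next_node); nodes are d-bit integers."""
      route, cur = [], u
      for i in range(d):                     # scan the bits from position 1 to d
          if ((cur >> i) & 1) != ((v >> i) & 1):
              nxt = cur ^ (1 << i)           # traverse the edge that fixes the i-th bit
              route.append((cur, nxt))
              cur = nxt
      return route

  # Example: the route from 0011 to 0101 in the 4-cube.
  print([(format(a, '04b'), format(b, '04b')) for a, b in bit_fixing_route(0b0011, 0b0101, 4)])
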
Oblivious routing algorithms
This algorithm is blessed with a desirable property: at each routing step, the choice of link depends only on the current node and the destination. We call algorithms with this property oblivious routing algorithms. (Actually, the standard definition of obliviousness allows the choice to also depend on the origin. The bit-fixing algorithm is even more oblivious than this standard definition requires.) Compared to routing algorithms which adapt to the path that the packet has traversed, oblivious routing is simpler and thus can be implemented with smaller routing tables (or simple devices called switches).
Queuing policies
When routing packets in parallel, it is possible that more than one packet wants to use the same edge at the same time. We assume that a queue is associated with each edge, such that the packets to be delivered through an edge are put into the queue associated with that edge. With some queuing policy (e.g. FIFO, or farthest to go), the queued packets are delivered through the edge at a rate of at most one packet per round.

For the bit-fixing routing algorithm defined above, regardless of the queuing policy, there always exists a bad permutation specifying the destinations, such that it takes $\Omega\left(\sqrt{N}/d\right)$ steps for the bit-fixing algorithm to route all packets to their destinations. (You can prove this by yourself.)

This is pretty bad, because we expect that the routing time is comparable to the diameter of the network, which is only $d=\log N$ for the hypercube.

The lower bound actually applies generally for any deterministic oblivious routing algorithms:

Theorem [Kaklamanis, Krizanc, Tsantilas, 1991]
In any $N$-node communication network with maximum degree $d$, any deterministic oblivious algorithm for routing an arbitrary permutation requires $\Omega\left(\sqrt{N}/d\right)$ parallel communication steps in the worst case.

The proof of the lower bound is rather technical and complicated. However, the intuition is quite clear: for any oblivious routing rule, there always exists a permutation which causes very high congestion, so that many packets have to be delivered through the same edge; thus, no matter what queuing policy is used, the maximum delay must be very high.

Average-case analysis for independent destinations

We analyze the average-case performance of the bit-fixing routing algorithm. We relax the problem to non-permutation destinations. That is, instead of restricting that every processor has a distinct destination, we now allow each processor to choose an arbitrary destination in $[N]$.

For the average case, for each node $v\in[N]$, its destination is a uniformly and independently random node from $[N]$.

For each node $v\in[N]$, let $P_v$ denote the route for $v$ to its random destination; $P_v$ is the sequence of edges along the bit-fixing route from $v$ to that destination.

Reduce the delay of a route to the number of packets that pass through the route

We consider the delay incurred by each node $v$, which is the total time that its packet spends waiting in queues. The total running time of the algorithm is bounded by the maximum delay plus $d$ (the maximum length of a route).

We assume that the queueing policy satisfies a very natural requirement:

Natural queuing assumption
If a queue is not empty at the beginning of a time step, some packet is sent along the edge associated with that queue during that time step.
Lemma 2.1
With the above assumption on the queuing policy, the delay incurred by $v$ is at most the number of packets whose routes pass through at least one edge in $P_v$.
Proof.
See Lemma 4.5 in the textbook [MR].

Represent the delay as the sum of independent trials

Let the random variable $H_{uv}\in\{0,1\}$ indicate whether $P_u$ and $P_v$ share at least one edge. That is,

\[H_{uv}=\begin{cases}1&\text{if }P_u\text{ and }P_v\text{ share at least one edge},\\0&\text{otherwise}.\end{cases}\]

Fix a node $v$ and the corresponding route $P_v$. The random variable $H_v=\sum_{u\neq v}H_{uv}$ gives the total number of packets whose routes pass through $P_v$. Due to Lemma 2.1, $H_v$ gives an upper bound on the delay incurred by $v$.

We will then bound $H_v$. Note that for fixed $v$, the $H_{uv}$'s with $u\neq v$ are independent trials (because the random destinations of the nodes $u$ are chosen independently), thus we can apply the Chernoff bound. To do so, we must estimate the expectation $\mathbf{E}[H_v]$.

Estimate the expectation of the sum

For any edge $e$ in the hypercube, let the random variable $T(e)$ denote the number of routes that pass through $e$. As argued above, $H_v$ is the number of packets that pass through the route $P_v$; then obviously

\[H_v\le\sum_{e\in P_v}T(e),\]

where we abuse the notation $e\in P_v$ to denote that the edge $e$ appears in the route $P_v$.

Therefore,

\[\mathbf{E}[H_v]\le\sum_{e\in P_v}\mathbf{E}[T(e)].\]

For every node $u$, the length of the route $P_u$, denoted $|P_u|$, is the number of different bits between $u$ and the last node in the route (because of the "bit-fixing"). For the uniformly random destination, $\mathbf{E}[|P_u|]=d/2$ (a uniformly random node in $\{0,1\}^d$ differs from any fixed $u$ in $d/2$ bits in expectation). Thus,

\[\mathbf{E}\left[\sum_{u\in\{0,1\}^d}|P_u|\right]=\frac{dN}{2}.\]

It is obvious that we can count the sum of lengths of a set of routes by accumulating their passes through edges, that is,

\[\sum_{u\in\{0,1\}^d}|P_u|=\sum_{e}T(e).\]

Therefore,

\[\mathbf{E}\left[\sum_{e}T(e)\right]=\frac{dN}{2},\]

where the sum is taken over all edges in the hypercube.

An important observation is that the distributions of the $T(e)$'s are all symmetric, thus all the $\mathbf{E}[T(e)]$'s are equal. The number of edges in the hypercube is $\frac{dN}{2}$. Therefore, for every edge $e$ in the hypercube,

\[\mathbf{E}[T(e)]=\frac{dN/2}{dN/2}=1.\]

The length of $P_v$ is at most $d$. Due to $\mathbf{E}[H_v]\le\sum_{e\in P_v}\mathbf{E}[T(e)]$, the expectation of $H_v$ is at most $\mathbf{E}[H_v]\le d$.

Apply the Chernoff bound

We apply the following form of the Chernoff bound:

Chernoff bound
Let $X=\sum_{i=1}^n X_i$, where $X_1,X_2,\ldots,X_n$ are independent Poisson trials. Let $\mu=\mathbf{E}[X]$. Then for $t\ge 2e\mu$,
\[\Pr[X\ge t]\le 2^{-t}.\]

It holds that $6d>2ed\ge 2e\mathbf{E}[H_v]$. By applying the Chernoff bound,

\[\Pr[H_v\ge 6d]\le 2^{-6d}.\]

Note that $H_v$ only bounds the delay incurred by a particular node $v$. By the union bound,

\[\Pr[\text{the maximum delay}\ge 6d]\le N\cdot\Pr[H_v\ge 6d]\le N\cdot 2^{-6d}=2^{-5d}.\]

The running time is the maximum delay plus the length of a route, thus it is at most $7d=O(\log N)$ with probability at least $1-2^{-5d}$.
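
For intuition about the quantity $H_v$, here is a small experiment (a sketch that reuses bit_fixing_route from the earlier sketch; the parameters are arbitrary) estimating how many other routes share an edge with $P_v$ when all destinations are independent and uniform.

  import random

  def congestion_of_route(d, v=0):
      """H_v: the number of other packets whose bit-fixing routes (to independent,
      uniformly random destinations) share at least one edge with the route of v.
      Reuses bit_fixing_route from the earlier sketch."""
      N = 2 ** d
      P_v = set(bit_fixing_route(v, random.randrange(N), d))
      return sum(1 for u in range(N)
                 if u != v and P_v & set(bit_fixing_route(u, random.randrange(N), d)))

  d = 8
  samples = [congestion_of_route(d) for _ in range(50)]
  print(f"d = {d}: average H_v = {sum(samples) / len(samples):.1f}, max = {max(samples)}")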

A two-phase randomized routing algorithm

The above analysis of the performance of bit-fixing for independent random destinations hints that we can first route the packets to random "relays" to avoid high congestion. This idea was first discovered by Leslie Valiant, who used it to give a simple and elegant randomized routing algorithm for permutation routing.

The algorithm works in two phases.

Two-Phase Routing Algorithm

For each packet:

Phase I: Route the packet to a random destination using the bit-fixing algorithm.

Phase II: Route the packet from the random location to its final destination using the bit-fixing algorithm.
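
To make the scheme concrete, here is a minimal Python sketch of the two phases (it assumes the bit_fixing_route function from the earlier sketch and only generates the two routes of a packet; queuing is not simulated).

  import random

  def two_phase_routes(origin, destination, d):
      """Valiant's two-phase trick: origin -> random relay (Phase I),
      then relay -> destination (Phase II), both by bit-fixing."""
      relay = random.randrange(2 ** d)   # uniformly random intermediate node
      return bit_fixing_route(origin, relay, d), bit_fixing_route(relay, destination, d)

  # Example: the two routes of the packet from node 3 to node 5 in the 4-cube.
  phase1, phase2 = two_phase_routes(3, 5, 4)
  print("Phase I :", phase1)
  print("Phase II:", phase2)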

It looks counter-intuitive that first routing the packets to irrelevant intermediate nodes actually improves the overall performance.

To simplify the analysis, we assume that no packet is sent in Phase II before all packets have finished Phase I.

Phase I is exactly the bit-fixing routing for uniformly and independently random destinations, which, as we analyzed in the last section, finishes within $7d$ steps with probability at least $1-2^{-5d}$.

Phase II is a "backward" run of Phase I, so all the analysis of Phase I applies directly to Phase II. Thus, the running time of Phase II is also within $7d$ with probability at least $1-2^{-5d}$. By the union bound, the total running time of the randomized routing algorithm is no more than $14d=O(\log N)$ with high probability.

Set Balancing

Suppose that we have an $n\times m$ matrix $A$ with 0-1 entries. We are looking for a $b\in\{-1,+1\}^m$ that minimizes $\|Ab\|_\infty$.

Recall that $\|\cdot\|_\infty$ is the infinity norm (also called the $L_\infty$ norm) of a vector, and for the $n$-dimensional vector $c=Ab$,

\[\|Ab\|_\infty=\max_{i=1,2,\ldots,n}|c_i|.\]

We can also describe this problem as an optimization:

\[\begin{aligned}\text{minimize}\quad&\|Ab\|_\infty\\ \text{subject to}\quad&b\in\{-1,+1\}^m.\end{aligned}\]
This problem is called set balancing for a reason.

The problem arises in designing statistical experiments. Suppose that we have $m$ subjects, each of which may have up to $n$ features. This gives us an $n\times m$ matrix $A$:

\[A=\begin{pmatrix}a_{11}&a_{12}&\cdots&a_{1m}\\a_{21}&a_{22}&\cdots&a_{2m}\\\vdots&\vdots&\ddots&\vdots\\a_{n1}&a_{n2}&\cdots&a_{nm}\end{pmatrix},\]

where each column represents a subject and each row represents a feature. An entry $a_{ij}\in\{0,1\}$ indicates whether subject $j$ has feature $i$.

By multiplying a vector $b\in\{-1,+1\}^m$,

\[Ab=\begin{pmatrix}a_{11}&a_{12}&\cdots&a_{1m}\\\vdots&\vdots&&\vdots\\a_{n1}&a_{n2}&\cdots&a_{nm}\end{pmatrix}\begin{pmatrix}b_1\\\vdots\\b_m\end{pmatrix}=\begin{pmatrix}c_1\\\vdots\\c_n\end{pmatrix},\]

the subjects are partitioned into two disjoint groups: one for $-1$ and the other for $+1$. Each $c_i$ gives the difference between the numbers of subjects with feature $i$ in the two groups. By minimizing $\|Ab\|_\infty$, we ask for an optimal partition so that each feature is as balanced as possible between the two groups.

In a scientific experiment, one of the groups serves as a control group. Ideally, we want the two groups to be statistically identical, which is usually impossible to achieve in practice. The requirement of minimizing $\|Ab\|_\infty$ means that the statistical difference between the two groups is minimized.


We propose an extremely simple "randomized algorithm" for computing a $b\in\{-1,+1\}^m$: for each $j=1,2,\ldots,m$, let $b_j$ be chosen independently such that

\[b_j=\begin{cases}-1&\text{with probability }\frac{1}{2},\\+1&\text{with probability }\frac{1}{2}.\end{cases}\]

This procedure can hardly be called an "algorithm", because its decision is made regardless of the input $A$. We then show that, despite this obliviousness, the algorithm chooses a good enough $b$, such that for any $A$, $\|Ab\|_\infty=O(\sqrt{m\ln n})$ with high probability.
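
The oblivious procedure is literally a few lines of code. Below is a minimal Python sketch (the random 0-1 test matrix is only for illustration).

  import math
  import random

  def random_set_balancing(A):
      """Choose each b_j uniformly from {-1, +1}, independently of A; return b and ||Ab||_inf."""
      n, m = len(A), len(A[0])
      b = [random.choice((-1, 1)) for _ in range(m)]
      Ab = [sum(A[i][j] * b[j] for j in range(m)) for i in range(n)]
      return b, max(abs(c) for c in Ab)

  # Example on a random 0-1 matrix; the theorem below gives ||Ab||_inf <= 2*sqrt(2*m*ln n) w.h.p.
  n, m = 100, 100
  A = [[random.randint(0, 1) for _ in range(m)] for _ in range(n)]
  b, norm = random_set_balancing(A)
  print("||Ab||_inf =", norm, "  vs bound", round(2 * math.sqrt(2 * m * math.log(n)), 1))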

Theorem
Let $A$ be an $n\times m$ matrix with 0-1 entries. For a random vector $b$ with $m$ entries chosen independently and with equal probability from $\{-1,+1\}$,
\[\Pr\left[\|Ab\|_\infty>2\sqrt{2m\ln n}\right]\le\frac{2}{n}.\]
Proof.

Consider particularly the $i$-th row of $A$. The entry of $Ab$ contributed by row $i$ is $Z_i=\sum_{j=1}^m a_{ij}b_j$.

Let $k$ be the number of non-zero entries in the row. If $k\le 2\sqrt{2m\ln n}$, then clearly $|Z_i|$ is no greater than $2\sqrt{2m\ln n}$. On the other hand, if $k>2\sqrt{2m\ln n}$, then the $k$ nonzero terms in the sum

\[Z_i=\sum_{j=1}^m a_{ij}b_j\]

are independent, each with probability 1/2 of being either $+1$ or $-1$.

Thus, for these $k$ nonzero terms, each $b_j$ is either positive or negative independently with equal probability. There are expectedly $\mu=\frac{k}{2}$ positive $b_j$'s among these terms, and $Z_i<-2\sqrt{2m\ln n}$ only occurs when there are fewer than $\frac{k}{2}-\sqrt{2m\ln n}=(1-\delta)\mu$ positive $b_j$'s, where $\delta=\frac{2\sqrt{2m\ln n}}{k}$. Applying the Chernoff bound, this event occurs with probability at most

\[\exp\left(-\frac{\mu\delta^2}{2}\right)=\exp\left(-\frac{k}{2}\cdot\frac{8m\ln n}{2k^2}\right)=\exp\left(-\frac{2m\ln n}{k}\right)\le e^{-2\ln n}=\frac{1}{n^2}.\]

The same argument can be applied to the negative $b_j$'s, so that the probability that $Z_i>2\sqrt{2m\ln n}$ is also at most $\frac{1}{n^2}$. Therefore, by the union bound,

\[\Pr\left[|Z_i|>2\sqrt{2m\ln n}\right]\le\frac{2}{n^2}.\]

Apply the union bound to all $n$ rows:

\[\Pr\left[\|Ab\|_\infty>2\sqrt{2m\ln n}\right]\le n\cdot\Pr\left[|Z_i|>2\sqrt{2m\ln n}\right]\le\frac{2}{n}.\]


How good is this randomized algorithm? In fact, when $m=n$ there exists a matrix $A$ such that $\|Ab\|_\infty=\Omega(\sqrt{n})$ for any choice of $b\in\{-1,+1\}^n$.