随机算法 (Fall 2011)/Problem set 2
Revision as of 08:12, 9 October 2011
Problem 1
Some known facts:
- Balls-into-bins: [math]\displaystyle{ m }[/math] balls are thrown into [math]\displaystyle{ n }[/math] bins uniformly and independently. If [math]\displaystyle{ m=\Theta(n) }[/math], then the maximum load is [math]\displaystyle{ \Theta\left(\frac{\ln n}{\ln\ln n}\right) }[/math] with high probability.
- Power of two choices: [math]\displaystyle{ m }[/math] balls are thrown sequentially into [math]\displaystyle{ n }[/math] bins. For each ball, choose two bins [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math] uniformly and independently at random (they might be the same bin), and throw the ball into the currently less loaded of bin [math]\displaystyle{ i }[/math] and bin [math]\displaystyle{ j }[/math]. Assuming that [math]\displaystyle{ m=\Theta(n) }[/math], after all [math]\displaystyle{ m }[/math] balls are thrown, the maximum load is [math]\displaystyle{ \Theta\left(\ln \ln n\right) }[/math] with high probability.
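As a sanity check, both baseline processes are easy to simulate. The following Python sketch (the helper names are mine, not part of the problem) measures the maximum load under each scheme:

```python
import random

def max_load_one_choice(n, m):
    """Throw m balls into n bins uniformly at random; return the maximum load."""
    loads = [0] * n
    for _ in range(m):
        loads[random.randrange(n)] += 1
    return max(loads)

def max_load_two_choices(n, m):
    """Each ball picks two uniform bins (possibly equal) and joins the lighter one."""
    loads = [0] * n
    for _ in range(m):
        i, j = random.randrange(n), random.randrange(n)
        loads[i if loads[i] <= loads[j] else j] += 1
    return max(loads)
```

For [math]\displaystyle{ n=m }[/math] around [math]\displaystyle{ 10^4 }[/math], a few runs already show the gap between the two regimes.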
Questions: Assume [math]\displaystyle{ n }[/math] balls and [math]\displaystyle{ n }[/math] bins. We throw the [math]\displaystyle{ n }[/math] balls sequentially.
- If we throw the first [math]\displaystyle{ \frac{n}{2} }[/math] balls uniformly and then throw the remaining balls according to the power-of-two-choices scheme, what is the asymptotic maximum load with high probability?
- For the [math]\displaystyle{ k }[/math]th ball, if [math]\displaystyle{ k }[/math] is even, we throw the ball into a uniformly random bin; and if [math]\displaystyle{ k }[/math] is odd, we throw the ball according to the power-of-two-choices scheme. What is the asymptotic maximum load with high probability?
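Before proving anything, it can help to form a conjecture by simulating the two hybrid processes. This is purely illustrative and not part of the required solution (the function names are mine):

```python
import random

def max_load_hybrid_prefix(n):
    """Question 1: first n/2 balls thrown uniformly, the rest by two choices."""
    loads = [0] * n
    for t in range(n):
        if t < n // 2:
            loads[random.randrange(n)] += 1
        else:
            i, j = random.randrange(n), random.randrange(n)
            loads[i if loads[i] <= loads[j] else j] += 1
    return max(loads)

def max_load_alternating(n):
    """Question 2: even-numbered balls thrown uniformly, odd-numbered by two choices."""
    loads = [0] * n
    for t in range(1, n + 1):
        if t % 2 == 0:
            loads[random.randrange(n)] += 1
        else:
            i, j = random.randrange(n), random.randrange(n)
            loads[i if loads[i] <= loads[j] else j] += 1
    return max(loads)
```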
Problem 2
- Generalize the LazySelect algorithm for the general selection problem: finding the [math]\displaystyle{ k }[/math]th smallest element in an array. Choose appropriate parameters to do the job.
- Use the Chernoff bounds instead of Chebyshev's inequality in the analysis of the above algorithm and try to use a smaller number of random samples.
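One possible parameter choice is sketched below, using the standard LazySelect settings (sample size [math]\displaystyle{ s=n^{3/4} }[/math] with replacement, a [math]\displaystyle{ \sqrt{s} }[/math] safety margin on the sample ranks). This is a sketch, not the required analysis; the verification step guards correctness, so a bad sample only triggers a retry:

```python
import math
import random

def lazy_select(arr, k):
    """Return the k-th smallest (1-indexed) element of arr.

    LazySelect generalized to arbitrary rank k: sample s = n^(3/4) elements
    with replacement, bracket rank k in the sorted sample with a sqrt(s)
    safety margin, keep only the elements falling in the bracket, and verify.
    A failed check (low probability) just triggers a retry, so the returned
    answer is always exact.
    """
    n = len(arr)
    if n <= 100:                       # small inputs: sort directly
        return sorted(arr)[k - 1]
    s = int(n ** 0.75)
    while True:
        sample = sorted(random.choices(arr, k=s))
        x = k * s / n                  # rank k rescaled to the sample
        lo, hi = x - math.sqrt(s), x + math.sqrt(s)
        d = sample[int(lo)] if lo >= 0 else float('-inf')
        u = sample[int(hi)] if hi < s else float('inf')
        middle = sorted(a for a in arr if d <= a <= u)
        rank_d = sum(1 for a in arr if a < d)  # elements strictly below d
        # success iff the k-th smallest lies in [d, u] and middle is small
        if rank_d < k <= rank_d + len(middle) and len(middle) <= 4 * s + 2:
            return middle[k - rank_d - 1]
```

The candidate set `middle` has expected size [math]\displaystyle{ O(n^{3/4}) }[/math], so the final sort is cheap; the analysis question is to bound the retry probability, first with Chebyshev and then with Chernoff.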
Problem 3
- The Monte Carlo method for computing [math]\displaystyle{ \mathrm{e} }[/math]
Suppose there are [math]\displaystyle{ n }[/math] people, each wearing a hat. We collect all their hats and randomly distribute the hats back to them, with each person receiving one hat. We ask for the probability that no one receives his/her own hat. We denote this probability by [math]\displaystyle{ p(n) }[/math].
Formally, a permutation is a bijection [math]\displaystyle{ \pi:[n]\xrightarrow[\text{1-1}]{\text{onto}}[n] }[/math]. A fixed point for permutation [math]\displaystyle{ \pi }[/math] is an [math]\displaystyle{ i\in[n] }[/math] such that [math]\displaystyle{ \pi(i)=i }[/math]. Then [math]\displaystyle{ p(n) }[/math] is the probability that a uniformly random permutation has no fixed point. Such permutations are called derangements.
The derangement problem is taught in the undergraduate Combinatorics class at Nanjing University. See the lecture notes for details. By the Principle of Inclusion-Exclusion (容斥原则), we know that [math]\displaystyle{ p(n)=\sum_{k=0}^n\frac{(-1)^k}{k!} }[/math].
Recall that by the Taylor expansion of [math]\displaystyle{ \mathrm{e}^{-x} }[/math], [math]\displaystyle{ \mathrm{e}^{-1}=\sum_{k=0}^\infty\frac{(-1)^k}{k!}=p(\infty) }[/math].
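A quick numeric check of this convergence (Python, purely illustrative): the alternating series converges to [math]\displaystyle{ 1/\mathrm{e} }[/math] with error at most [math]\displaystyle{ \frac{1}{(n+1)!} }[/math], so even a small [math]\displaystyle{ n }[/math] is already very close.

```python
import math

def p(n):
    """Partial sum p(n) = sum_{k=0}^n (-1)^k / k!  (derangement probability)."""
    return sum((-1) ** k / math.factorial(k) for k in range(n + 1))

# Alternating-series bound: |p(n) - 1/e| <= 1/(n+1)!
print(abs(p(8) - 1 / math.e))    # already below 1e-5
print(abs(p(12) - 1 / math.e))   # below 1e-9
```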
Now suppose you are given access to a black box [math]\displaystyle{ derange(n) }[/math] which, given as input a number [math]\displaystyle{ n }[/math], samples a uniform and independent permutation [math]\displaystyle{ \pi }[/math] of [math]\displaystyle{ [n] }[/math], and returns true if [math]\displaystyle{ \pi }[/math] has no fixed point and false otherwise.
Give a randomized algorithm [math]\displaystyle{ randE(\epsilon,\delta) }[/math], whose inputs are [math]\displaystyle{ 0\lt \epsilon\lt 0.1 }[/math] and [math]\displaystyle{ 0\lt \delta\lt \frac{1}{2} }[/math]. The returned value of the algorithm should satisfy:
- [math]\displaystyle{ \Pr[\mathrm{e}-\epsilon\le randE(\epsilon,\delta)\le\mathrm{e}+\epsilon]\ge1-\delta }[/math].
- Your algorithm should call [math]\displaystyle{ derange(n) }[/math] as a subroutine. Try to make both [math]\displaystyle{ n }[/math] and the number of calls to [math]\displaystyle{ derange(n) }[/math] as small as possible.
- Please give a detailed analysis of your algorithm.
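One concrete instantiation is sketched below. This is an illustration, not the required analysis: `derange` is re-implemented here since the black box is only hypothetical, and the sample size is a crude Chernoff-style choice rather than an optimized one. The idea is that [math]\displaystyle{ |p(n)-\mathrm{e}^{-1}|\le\frac{1}{(n+1)!} }[/math], so a small fixed [math]\displaystyle{ n }[/math] makes the systematic error negligible compared with [math]\displaystyle{ \epsilon }[/math]; the algorithm then inverts the empirical mean of i.i.d. calls.

```python
import math
import random

def derange(n):
    """Stand-in for the black box: sample a uniform permutation of [n]
    and report whether it has no fixed point."""
    pi = list(range(n))
    random.shuffle(pi)
    return all(pi[i] != i for i in range(n))

def rand_e(eps, delta):
    """Estimate e within +-eps with probability >= 1 - delta (sketch).

    Since |p(n) - 1/e| <= 1/(n+1)!, a modest fixed n suffices; then average
    t i.i.d. calls to derange(n) and invert the empirical mean. The sample
    size t is a crude Chernoff-style choice, not an optimized one.
    """
    n = 12                  # |p(12) - 1/e| <= 1/13!, negligible against eps
    # e = 1/p and |d(1/p)| ~ e^2 |dp|, so estimate p to accuracy ~ eps / e^2
    t = math.ceil(2 * (math.e ** 2 / eps) ** 2 * math.log(2 / delta))
    hits = sum(derange(n) for _ in range(t))
    return t / hits         # invert the empirical estimate of p(n)
```

The analysis then has two parts: a deterministic bound on the truncation error [math]\displaystyle{ |p(n)-\mathrm{e}^{-1}| }[/math], and a Chernoff bound showing that the empirical mean of [math]\displaystyle{ t }[/math] calls is within [math]\displaystyle{ O(\epsilon/\mathrm{e}^2) }[/math] of [math]\displaystyle{ p(n) }[/math] with probability at least [math]\displaystyle{ 1-\delta }[/math].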