# Randomized Algorithms (Fall 2015)/Problem Set 1

## Problem 1

(Due to J. von Neumann.)

1. Suppose you are given a coin for which the probability of HEADS, say ${\displaystyle p}$, is unknown. How can you use this coin to generate unbiased (i.e., ${\displaystyle \Pr[{\mbox{HEADS}}]=\Pr[{\mbox{TAILS}}]=1/2}$) coin-flips? Give a scheme for which the expected number of flips of the biased coin for extracting one unbiased coin-flip is no more than ${\displaystyle 1/(p(1-p))}$.
2. (bonus question) Devise an extension of the scheme that extracts the largest possible number of independent, unbiased coin-flips from a given number of flips of the biased coin.
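One well-known scheme for part 1 (a sketch, not the only possible answer): flip the biased coin in pairs and output the first flip of the first discordant pair, since HT and TH each occur with probability ${\displaystyle p(1-p)}$. Function names below are illustrative:

```python
import random

def von_neumann_flip(biased_flip):
    """Extract one unbiased bit from a biased coin.

    biased_flip: a zero-argument function returning True (HEADS)
    with some unknown probability p.  Flip the coin in pairs and
    return the first bit of the first unequal pair; each round
    succeeds with probability 2p(1-p), so the expected number of
    biased flips is 2 / (2p(1-p)) = 1/(p(1-p)).
    """
    while True:
        a, b = biased_flip(), biased_flip()
        if a != b:
            return a  # HT -> HEADS, TH -> TAILS, each w.p. p(1-p)

# usage: a coin with p = 0.3 still yields roughly fair bits
coin = lambda: random.random() < 0.3
samples = [von_neumann_flip(coin) for _ in range(100000)]
print(sum(samples) / len(samples))  # close to 1/2
```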

## Problem 2

(Due to D.E. Knuth and A. C-C. Yao.)

1. Suppose you are provided with a source of unbiased random bits. Explain how you will use this to generate uniform samples from the set ${\displaystyle S=\{0,\dots ,n-1\}}$. Determine the expected number of random bits required by your sampling algorithm.
2. What is the worst-case number of random bits required by your sampling algorithm (where the worst case is taken over all random choices)? Consider the case when ${\displaystyle n}$ is a power of ${\displaystyle 2}$, as well as the case when it is not.
3. Solve (1) and (2) when the source of randomness is uniform random samples from the set ${\displaystyle \{0,\dots ,p-1\}}$ rather than unbiased random bits; consider the case when ${\displaystyle n}$ is a power of ${\displaystyle p}$, as well as the case when it is not.
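One standard approach to part 1 is rejection sampling: draw ${\displaystyle k=\lceil \log _{2}n\rceil }$ unbiased bits to form an integer in ${\displaystyle \{0,\dots ,2^{k}-1\}}$, and retry whenever the result is at least ${\displaystyle n}$. A sketch (function names are illustrative):

```python
import math
import random

def uniform_sample(n, random_bit=lambda: random.getrandbits(1)):
    """Uniform sample from {0, ..., n-1} using unbiased bits.

    Draw k = ceil(log2 n) bits to build an integer in {0, ..., 2^k - 1}
    and reject values >= n.  Each attempt succeeds with probability
    n / 2^k > 1/2, so the expected number of bits is less than 2k.
    When n is a power of 2 no rejection ever occurs and exactly
    log2(n) bits suffice; otherwise the worst case is unbounded.
    """
    k = max(1, math.ceil(math.log2(n)))
    while True:
        x = 0
        for _ in range(k):
            x = (x << 1) | random_bit()
        if x < n:
            return x

# usage: samples from {0, ..., 4} should be roughly uniform
counts = [0] * 5
for _ in range(50000):
    counts[uniform_sample(5)] += 1
print(counts)
```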

## Problem 3

We play the following game:

Start with ${\displaystyle n}$ people, each with two hands; initially no hand holds another. In each round, pick two free hands uniformly at random and join them. Repeat until no free hands are left.

• What is the expected number of cycles formed by people holding hands at the end of the game? (A single person whose left hand holds their own right hand also counts as a cycle.)
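A candidate formula for this expectation can be sanity-checked by simulation. Repeatedly joining two uniformly random free hands produces a uniform random perfect matching on the ${\displaystyle 2n}$ hands, so we may shuffle the hands and pair consecutive entries. A sketch (names are illustrative):

```python
import random

def play_game(n):
    """Simulate the hand-holding game and return the number of cycles.

    Hands are numbered 0..2n-1; hands 2i and 2i+1 belong to person i.
    Shuffling the hands and pairing consecutive entries is equivalent
    to repeatedly joining two uniformly random free hands.
    """
    hands = list(range(2 * n))
    random.shuffle(hands)
    partner = {}
    for i in range(0, 2 * n, 2):
        a, b = hands[i], hands[i + 1]
        partner[a] = b
        partner[b] = a
    # Walk the cycles: from a hand, cross to its partner, then step to
    # the other hand of that partner's owner, until we return to start.
    seen = set()
    cycles = 0
    for start in range(2 * n):
        if start in seen:
            continue
        cycles += 1
        h = start
        while h not in seen:
            seen.add(h)
            p = partner[h]
            seen.add(p)
            h = p ^ 1  # the other hand of the same person
    return cycles

# usage: averaging over many trials estimates the expectation
n, trials = 10, 20000
avg = sum(play_game(n) for _ in range(trials)) / trials
print(avg)
```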

## Problem 4

In the balls-and-bins model, if we throw ${\displaystyle n}$ balls independently and uniformly at random into ${\displaystyle n}$ bins, then with high probability the maximum load is ${\displaystyle \Theta ({\frac {\ln n}{\ln \ln n}})}$.

The two-choice paradigm is another way to throw ${\displaystyle n}$ balls into ${\displaystyle n}$ bins: each ball is placed in the less loaded of two bins chosen independently and uniformly at random, breaking ties arbitrarily. With high probability the maximum load under the two-choice paradigm is ${\displaystyle \,\Theta (\ln \ln n)}$, which is exponentially smaller than with a single uniform choice. This phenomenon is called the power of two choices.
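The gap between the two paradigms is already visible empirically at moderate ${\displaystyle n}$. A minimal simulation sketch (function names are illustrative):

```python
import random

def max_load_one_choice(n):
    """Throw n balls into n bins uniformly; return the maximum load."""
    loads = [0] * n
    for _ in range(n):
        loads[random.randrange(n)] += 1
    return max(loads)

def max_load_two_choice(n):
    """Each ball goes to the less loaded of two uniform random bins."""
    loads = [0] * n
    for _ in range(n):
        i, j = random.randrange(n), random.randrange(n)
        loads[i if loads[i] <= loads[j] else j] += 1
    return max(loads)

# usage: at n = 100000 the one-choice maximum is noticeably larger
n = 100000
one = max_load_one_choice(n)
two = max_load_two_choice(n)
print(one, two)
```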

Now consider the following three paradigms:

1. The first ${\displaystyle n/2}$ balls are thrown into bins independently and uniformly at random. The remaining ${\displaystyle n/2}$ balls are thrown into bins using two-choice paradigm.
2. The first ${\displaystyle n/2}$ balls are thrown into bins using two-choice paradigm. The remaining ${\displaystyle n/2}$ balls are thrown into bins independently and uniformly at random.
3. Assume the ${\displaystyle n}$ balls arrive in a sequence. For every ${\displaystyle 1\leq i\leq n}$: if ${\displaystyle i}$ is odd, the ${\displaystyle i}$-th ball is thrown into a bin chosen independently and uniformly at random; otherwise, it is thrown using the two-choice paradigm.

What is the maximum load with high probability in each of the three paradigms? Give an asymptotically tight bound (i.e., ${\displaystyle \Theta (\cdot )}$).
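Before proving bounds, it can help to simulate the three paradigms. The sketch below (names are illustrative) only builds intuition and does not replace the asymptotic analysis:

```python
import random

def max_load_mixed(n, use_two_choice):
    """Throw n balls into n bins; ball i uses the two-choice rule
    iff use_two_choice(i) is true, otherwise one uniform random bin."""
    loads = [0] * n
    for i in range(1, n + 1):  # balls are 1-indexed as in the problem
        if use_two_choice(i):
            a, b = random.randrange(n), random.randrange(n)
            loads[a if loads[a] <= loads[b] else b] += 1
        else:
            loads[random.randrange(n)] += 1
    return max(loads)

n = 100000
p1 = max_load_mixed(n, lambda i: i > n // 2)   # paradigm 1
p2 = max_load_mixed(n, lambda i: i <= n // 2)  # paradigm 2
p3 = max_load_mixed(n, lambda i: i % 2 == 0)   # paradigm 3: even i
print(p1, p2, p3)
```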

## Problem 5

(Due to D.R. Karger and R. Motwani.)

1. Let ${\displaystyle S,T}$ be two disjoint subsets of a universe ${\displaystyle U}$ such that ${\displaystyle |S|=|T|=n}$. Suppose we select a random set ${\displaystyle R\subseteq U}$ by independently sampling each element of ${\displaystyle U}$ with probability ${\displaystyle p}$. We say that the random sample ${\displaystyle R}$ is good if the following two conditions hold: ${\displaystyle R\cap S=\emptyset }$ and ${\displaystyle R\cap T\neq \emptyset }$. Show that for ${\displaystyle p=1/n}$, the probability that ${\displaystyle R}$ is good is larger than some positive constant.
2. Suppose now that the random set ${\displaystyle R}$ is chosen by sampling the elements of ${\displaystyle U}$ with only pairwise independence. Show that for a suitable choice of the value of ${\displaystyle p}$, the probability that ${\displaystyle R}$ is good is larger than some positive constant.
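For intuition on part 1: since ${\displaystyle S}$ and ${\displaystyle T}$ are disjoint and sampling is fully independent, the events ${\displaystyle R\cap S=\emptyset }$ and ${\displaystyle R\cap T\neq \emptyset }$ are independent, giving ${\displaystyle \Pr[R{\mbox{ is good}}]=(1-p)^{n}\left(1-(1-p)^{n}\right)}$. A small numeric sketch of this quantity (part 2, with only pairwise independence, requires a different argument):

```python
def prob_good(n, p):
    """Pr[R ∩ S = ∅ and R ∩ T ≠ ∅] under fully independent sampling.

    Each of the n elements of S is missed with probability (1-p), and
    S, T being disjoint makes the two events independent, so
    Pr[good] = (1-p)^n * (1 - (1-p)^n).
    """
    miss_S = (1 - p) ** n
    return miss_S * (1 - miss_S)

# usage: for p = 1/n the probability approaches (1/e)(1 - 1/e) ≈ 0.23
for n in (10, 100, 1000):
    print(n, prob_good(n, 1 / n))
```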