# Erdős–Rényi Random Graphs

Consider a graph ${\displaystyle G(V,E)}$ that is generated randomly as follows:

• ${\displaystyle |V|=n}$;
• ${\displaystyle \forall \{u,v\}\in {V \choose 2}}$, ${\displaystyle uv\in E}$ independently with probability ${\displaystyle p}$.

Such a graph is denoted by ${\displaystyle G(n,p)}$. This is called the Erdős–Rényi model or the ${\displaystyle G(n,p)}$ model for random graphs.

Informally, the presence of every edge of ${\displaystyle G(n,p)}$ is determined by an independent coin flip (with probability ${\displaystyle p}$ of HEADs).
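For illustration, here is a minimal Python sketch of this sampling procedure: it flips an independent coin for each of the ${\displaystyle {n \choose 2}}$ potential edges (the function name `sample_gnp` is ours, chosen just for this example).

```python
import random
from itertools import combinations

def sample_gnp(n, p, seed=None):
    """Sample an Erdos-Renyi random graph G(n, p).

    The graph is returned as a set of edges, each edge a frozenset {u, v};
    every potential edge is included independently with probability p.
    """
    rng = random.Random(seed)
    edges = set()
    for u, v in combinations(range(n), 2):  # all n-choose-2 vertex pairs
        if rng.random() < p:                # independent coin flip with Pr[HEADs] = p
            edges.add(frozenset((u, v)))
    return edges

# Example: a random graph on 10 vertices with edge probability 0.3
G = sample_gnp(10, 0.3, seed=1)
print(len(G), "edges sampled out of", 10 * 9 // 2, "possible")
```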

## Monotone properties

A graph property is a predicate on graphs that depends only on the structure of the graph.

 Definition Let ${\displaystyle {\mathcal {G}}_{n}=2^{V \choose 2}}$, where ${\displaystyle |V|=n}$, be the set of all possible graphs on ${\displaystyle n}$ vertices. A graph property is a boolean function ${\displaystyle P:{\mathcal {G}}_{n}\rightarrow \{0,1\}}$ which is invariant under permutation of vertices, i.e. ${\displaystyle P(G)=P(H)}$ whenever ${\displaystyle G}$ is isomorphic to ${\displaystyle H}$.

We are interested in monotone properties, i.e., those properties such that adding edges cannot change a graph from having the property to not having it.

 Definition A graph property ${\displaystyle P}$ is monotone if for any ${\displaystyle G\subseteq H}$, both on ${\displaystyle n}$ vertices, ${\displaystyle G}$ having property ${\displaystyle P}$ implies ${\displaystyle H}$ having property ${\displaystyle P}$.

By viewing a property as a function mapping a set of edges to a numerical value in ${\displaystyle \{0,1\}}$, a monotone property is just a monotonically increasing set function.
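For a concrete illustration, the property "contains a triangle" is monotone: once a triangle is present, adding edges cannot destroy it. A minimal Python check (the graphs and the helper `has_triangle` are made up for this example):

```python
from itertools import combinations

def has_triangle(edges, n):
    """Monotone property: does the graph on vertices 0..n-1 contain a triangle?"""
    E = set(frozenset(e) for e in edges)
    return any(frozenset((a, b)) in E and frozenset((b, c)) in E and frozenset((a, c)) in E
               for a, b, c in combinations(range(n), 3))

G = [(0, 1), (1, 2), (0, 2)]          # a triangle on vertices {0, 1, 2}
H = G + [(2, 3), (3, 4)]              # a supergraph of G: only edges were added
assert has_triangle(G, 5) and has_triangle(H, 5)   # the property is preserved
```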

Some examples of monotone graph properties:

• Hamiltonian;
• contains a ${\displaystyle k}$-clique;
• contains a subgraph isomorphic to some ${\displaystyle H}$;
• non-planar;
• chromatic number ${\displaystyle >k}$ (i.e., not ${\displaystyle k}$-colorable);
• girth ${\displaystyle <\ell }$.

From the last two properties, you can see another reason that the Erdős theorem is unintuitive.

Some examples of non-monotone graph properties:

• Eulerian;
• contains an induced subgraph isomorphic to some ${\displaystyle H}$.

For all monotone graph properties, we have the following theorem.

 Theorem Let ${\displaystyle P}$ be a monotone graph property. Suppose ${\displaystyle G_{1}=G(n,p_{1})}$, ${\displaystyle G_{2}=G(n,p_{2})}$, and ${\displaystyle 0\leq p_{1}\leq p_{2}\leq 1}$. Then ${\displaystyle \Pr[P(G_{1})]\leq \Pr[P(G_{2})]}$.

Although the statement of the theorem looks very natural, it is usually difficult to evaluate the probability that a random graph has a given property. However, the theorem can be proved very easily by using the idea of coupling, a proof technique in probability theory which compares two unrelated random variables by forcing them to be related.

Proof.
 For any ${\displaystyle \{u,v\}\in {[n] \choose 2}}$, let ${\displaystyle X_{\{u,v\}}}$ be independently and uniformly distributed over the continuous interval ${\displaystyle [0,1]}$. Let ${\displaystyle uv\in G_{1}}$ if and only if ${\displaystyle X_{\{u,v\}}\in [0,p_{1}]}$ and let ${\displaystyle uv\in G_{2}}$ if and only if ${\displaystyle X_{\{u,v\}}\in [0,p_{2}]}$. It is obvious that ${\displaystyle G_{1}\sim G(n,p_{1})\,}$ and ${\displaystyle G_{2}\sim G(n,p_{2})\,}$. For any ${\displaystyle \{u,v\}}$, ${\displaystyle uv\in G_{1}}$ means that ${\displaystyle X_{\{u,v\}}\in [0,p_{1}]\subseteq [0,p_{2}]}$, which implies that ${\displaystyle uv\in G_{2}}$. Thus, ${\displaystyle G_{1}\subseteq G_{2}}$. Since ${\displaystyle P}$ is monotone, ${\displaystyle P(G_{1})=1}$ implies ${\displaystyle P(G_{2})=1}$. Thus, ${\displaystyle \Pr[P(G_{1})=1]\leq \Pr[P(G_{2})=1]}$.
${\displaystyle \square }$
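The coupling in the proof is easy to simulate: draw one uniform label per vertex pair and threshold it at ${\displaystyle p_{1}}$ and at ${\displaystyle p_{2}}$. A small Python sketch along these lines (function name ours):

```python
import random
from itertools import combinations

def coupled_gnp_pair(n, p1, p2, seed=None):
    """Couple G(n, p1) and G(n, p2), p1 <= p2, on one family of uniform labels."""
    rng = random.Random(seed)
    G1, G2 = set(), set()
    for e in combinations(range(n), 2):
        x = rng.random()        # X_{u,v} ~ Uniform[0,1], shared by both graphs
        if x <= p1:
            G1.add(e)           # uv in G1  iff  X_{u,v} <= p1
        if x <= p2:
            G2.add(e)           # uv in G2  iff  X_{u,v} <= p2
    return G1, G2

# G1 is always a subgraph of G2, so any monotone property of G1 carries over to G2.
for trial in range(100):
    G1, G2 = coupled_gnp_pair(20, 0.1, 0.3, seed=trial)
    assert G1 <= G2
```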

## Threshold phenomenon

One of the most fascinating phenomena of random graphs is that for many natural graph properties, the random graph ${\displaystyle G(n,p)}$ suddenly changes from almost always not having the property to almost always having the property as ${\displaystyle p}$ grows within a very small range.

A monotone graph property ${\displaystyle P}$ is said to have the threshold ${\displaystyle p(n)}$ if

• when ${\displaystyle p\ll p(n)}$, ${\displaystyle \Pr[P(G(n,p))]\rightarrow 0}$ as ${\displaystyle n\rightarrow \infty }$ (we also say that ${\displaystyle G(n,p)}$ almost always does not have ${\displaystyle P}$); and
• when ${\displaystyle p\gg p(n)}$, ${\displaystyle \Pr[P(G(n,p))]\rightarrow 1}$ as ${\displaystyle n\rightarrow \infty }$ (we also say that ${\displaystyle G(n,p)}$ almost always has ${\displaystyle P}$).

The classic method for proving the threshold is the so-called second moment method (Chebyshev's inequality).

### Threshold for 4-clique

 Theorem The threshold for a random graph ${\displaystyle G(n,p)}$ to contain a 4-clique is ${\displaystyle p=n^{-2/3}}$.

We formulate the problem as follows. For any ${\displaystyle 4}$-subset of vertices ${\displaystyle S\in {V \choose 4}}$, let ${\displaystyle X_{S}}$ be the indicator random variable such that

${\displaystyle X_{S}={\begin{cases}1&S{\mbox{ is a clique}},\\0&{\mbox{otherwise}}.\end{cases}}}$

Let ${\displaystyle X=\sum _{S\in {V \choose 4}}X_{S}}$ be the total number of 4-cliques in ${\displaystyle G}$.

It is sufficient to prove the following lemma.

 Lemma If ${\displaystyle p=o(n^{-2/3})}$, then ${\displaystyle \Pr[X\geq 1]\rightarrow 0}$ as ${\displaystyle n\rightarrow \infty }$. If ${\displaystyle p=\omega (n^{-2/3})}$, then ${\displaystyle \Pr[X\geq 1]\rightarrow 1}$ as ${\displaystyle n\rightarrow \infty }$.
Proof.
 The first claim is proved by the first moment method (expectation and Markov's inequality) and the second claim is proved by the second moment method (Chebyshev's inequality).

Every 4-clique has 6 edges, thus for any ${\displaystyle S\in {V \choose 4}}$, ${\displaystyle \mathbf {E} [X_{S}]=\Pr[X_{S}=1]=p^{6}}$. By the linearity of expectation, ${\displaystyle \mathbf {E} [X]=\sum _{S\in {V \choose 4}}\mathbf {E} [X_{S}]={n \choose 4}p^{6}}$. Applying Markov's inequality, ${\displaystyle \Pr[X\geq 1]\leq \mathbf {E} [X]=O(n^{4}p^{6})=o(1)}$ if ${\displaystyle p=o(n^{-2/3})}$. The first claim is proved.

To prove the second claim, it is equivalent to show that ${\displaystyle \Pr[X=0]=o(1)}$ if ${\displaystyle p=\omega (n^{-2/3})}$. By Chebyshev's inequality, ${\displaystyle \Pr[X=0]\leq \Pr[|X-\mathbf {E} [X]|\geq \mathbf {E} [X]]\leq {\frac {\mathbf {Var} [X]}{(\mathbf {E} [X])^{2}}}}$, where the variance is computed as ${\displaystyle \mathbf {Var} [X]=\mathbf {Var} \left[\sum _{S\in {V \choose 4}}X_{S}\right]=\sum _{S\in {V \choose 4}}\mathbf {Var} [X_{S}]+\sum _{S,T\in {V \choose 4},S\neq T}\mathbf {Cov} (X_{S},X_{T})}$.

For any ${\displaystyle S\in {V \choose 4}}$, ${\displaystyle \mathbf {Var} [X_{S}]=\mathbf {E} [X_{S}^{2}]-\mathbf {E} [X_{S}]^{2}\leq \mathbf {E} [X_{S}^{2}]=\mathbf {E} [X_{S}]=p^{6}}$. Thus the first term above is ${\displaystyle \sum _{S\in {V \choose 4}}\mathbf {Var} [X_{S}]=O(n^{4}p^{6})}$.

We now compute the covariances. For any ${\displaystyle S,T\in {V \choose 4}}$ with ${\displaystyle S\neq T}$:

• Case 1: ${\displaystyle |S\cap T|\leq 1}$, so ${\displaystyle S}$ and ${\displaystyle T}$ do not share any edges. Then ${\displaystyle X_{S}}$ and ${\displaystyle X_{T}}$ are independent, thus ${\displaystyle \mathbf {Cov} (X_{S},X_{T})=0}$.
• Case 2: ${\displaystyle |S\cap T|=2}$, so ${\displaystyle S}$ and ${\displaystyle T}$ share an edge. Since ${\displaystyle |S\cup T|=6}$, there are ${\displaystyle {n \choose 6}=O(n^{6})}$ pairs of such ${\displaystyle S}$ and ${\displaystyle T}$. Here ${\displaystyle \mathbf {Cov} (X_{S},X_{T})=\mathbf {E} [X_{S}X_{T}]-\mathbf {E} [X_{S}]\mathbf {E} [X_{T}]\leq \mathbf {E} [X_{S}X_{T}]=\Pr[X_{S}=1\wedge X_{T}=1]=p^{11}}$, since there are 11 edges in the union of two 4-cliques that share a common edge. The contribution of these pairs is ${\displaystyle O(n^{6}p^{11})}$.
• Case 3: ${\displaystyle |S\cap T|=3}$, so ${\displaystyle S}$ and ${\displaystyle T}$ share a triangle. Since ${\displaystyle |S\cup T|=5}$, there are ${\displaystyle {n \choose 5}=O(n^{5})}$ pairs of such ${\displaystyle S}$ and ${\displaystyle T}$. By the same argument, ${\displaystyle \mathbf {Cov} (X_{S},X_{T})\leq \Pr[X_{S}=1\wedge X_{T}=1]=p^{9}}$, since there are 9 edges in the union of two 4-cliques that share a triangle. The contribution of these pairs is ${\displaystyle O(n^{5}p^{9})}$.

Putting all these together, ${\displaystyle \mathbf {Var} [X]=O(n^{4}p^{6}+n^{6}p^{11}+n^{5}p^{9})}$, and ${\displaystyle \Pr[X=0]\leq {\frac {\mathbf {Var} [X]}{(\mathbf {E} [X])^{2}}}=O(n^{-4}p^{-6}+n^{-2}p^{-1}+n^{-3}p^{-3})}$, which is ${\displaystyle o(1)}$ if ${\displaystyle p=\omega (n^{-2/3})}$. The second claim is also proved.
${\displaystyle \square }$
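A small Monte Carlo experiment can make the threshold visible, although at small ${\displaystyle n}$ the transition is of course only rough. The sketch below samples ${\displaystyle G(n,p)}$ for ${\displaystyle p=c\cdot n^{-2/3}}$ with a few constants ${\displaystyle c}$ and estimates ${\displaystyle \Pr[X\geq 1]}$ (all parameter choices here are just for illustration):

```python
import random
from itertools import combinations

def contains_4_clique(n, p, rng):
    """Sample G(n, p) and report whether it contains a 4-clique (i.e. X >= 1)."""
    adj = [[False] * n for _ in range(n)]
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u][v] = adj[v][u] = True
    return any(all(adj[a][b] for a, b in combinations(S, 2))
               for S in combinations(range(n), 4))

n, trials, rng = 30, 100, random.Random(0)
for c in (0.3, 1.0, 3.0):                      # p = c * n^(-2/3): below, around, and above the threshold
    p = c * n ** (-2 / 3)
    freq = sum(contains_4_clique(n, p, rng) for _ in range(trials)) / trials
    print(f"p = {c} * n^(-2/3): empirical Pr[X >= 1] = {freq:.2f}")
```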

### Threshold for balanced subgraphs

The above theorem can be generalized to any "balanced" subgraphs.

 Definition The density of a graph ${\displaystyle G(V,E)}$, denoted ${\displaystyle \rho (G)\,}$, is defined as ${\displaystyle \rho (G)={\frac {|E|}{|V|}}}$. A graph ${\displaystyle G(V,E)}$ is balanced if ${\displaystyle \rho (H)\leq \rho (G)}$ for all subgraphs ${\displaystyle H}$ of ${\displaystyle G}$.
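This definition can be checked by brute force for very small graphs; since deleting edges only lowers the density, it suffices to examine induced subgraphs. A minimal Python sketch (helper names ours):

```python
from itertools import combinations

def density(vertices, edges):
    """rho(G) = |E| / |V| for a graph given by its vertex and edge collections."""
    return len(edges) / len(vertices)

def is_balanced(vertices, edges):
    """Check rho(H) <= rho(G) for every induced subgraph H on at least one vertex."""
    rho = density(vertices, edges)
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            induced = [e for e in edges if set(e) <= set(S)]
            if density(S, induced) > rho + 1e-12:   # tolerance for floating-point division
                return False
    return True

K4 = list(combinations(range(4), 2))   # the complete graph on 4 vertices
assert is_balanced(range(4), K4)       # cliques are balanced
```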

Cliques are balanced, because ${\displaystyle {\frac {k \choose 2}{k}}\leq {\frac {n \choose 2}{n}}}$ for any ${\displaystyle k\leq n}$. The threshold for 4-clique is a direct corollary of the following general theorem.

 Theorem (Erdős–Rényi 1960) Let ${\displaystyle H}$ be a balanced graph with ${\displaystyle k}$ vertices and ${\displaystyle \ell }$ edges. The threshold for the property that a random graph ${\displaystyle G(n,p)}$ contains a (not necessarily induced) subgraph isomorphic to ${\displaystyle H}$ is ${\displaystyle p=n^{-k/\ell }\,}$.
Sketch of proof.
 For any ${\displaystyle S\in {V \choose k}}$, let ${\displaystyle X_{S}}$ indicate whether ${\displaystyle G_{S}}$ (the subgraph of ${\displaystyle G}$ induced by ${\displaystyle S}$) contains a subgraph isomorphic to ${\displaystyle H}$. Then ${\displaystyle p^{\ell }\leq \mathbf {E} [X_{S}]\leq k!p^{\ell }}$, since there are at most ${\displaystyle k!}$ ways to match the substructure. Note that ${\displaystyle k}$ does not depend on ${\displaystyle n}$. Thus, ${\displaystyle \mathbf {E} [X_{S}]=\Theta (p^{\ell })}$.

Let ${\displaystyle X=\sum _{S\in {V \choose k}}X_{S}}$ be the number of ${\displaystyle H}$-subgraphs. Then ${\displaystyle \mathbf {E} [X]=\Theta (n^{k}p^{\ell })}$. By Markov's inequality, ${\displaystyle \Pr[X\geq 1]\leq \mathbf {E} [X]=\Theta (n^{k}p^{\ell })}$, which is ${\displaystyle o(1)}$ when ${\displaystyle p\ll n^{-k/\ell }}$.

By Chebyshev's inequality, ${\displaystyle \Pr[X=0]\leq {\frac {\mathbf {Var} [X]}{\mathbf {E} [X]^{2}}}}$, where ${\displaystyle \mathbf {Var} [X]=\sum _{S\in {V \choose k}}\mathbf {Var} [X_{S}]+\sum _{S\neq T}\mathbf {Cov} (X_{S},X_{T})}$. The first term is ${\displaystyle \sum _{S\in {V \choose k}}\mathbf {Var} [X_{S}]\leq \sum _{S\in {V \choose k}}\mathbf {E} [X_{S}^{2}]=\sum _{S\in {V \choose k}}\mathbf {E} [X_{S}]=\mathbf {E} [X]=\Theta (n^{k}p^{\ell })}$.

For the covariances, ${\displaystyle \mathbf {Cov} (X_{S},X_{T})\neq 0}$ only if ${\displaystyle |S\cap T|=i}$ for some ${\displaystyle 2\leq i\leq k-1}$. Note that ${\displaystyle |S\cap T|=i}$ implies that ${\displaystyle |S\cup T|=2k-i}$. And for balanced ${\displaystyle H}$, the number of edges of interest in ${\displaystyle S}$ and ${\displaystyle T}$ is ${\displaystyle 2\ell -i\rho (H_{S\cap T})\geq 2\ell -i\rho (H)=2\ell -i\ell /k}$. Thus, ${\displaystyle \mathbf {Cov} (X_{S},X_{T})\leq \mathbf {E} [X_{S}X_{T}]\leq p^{2\ell -i\ell /k}}$, and ${\displaystyle \sum _{S\neq T}\mathbf {Cov} (X_{S},X_{T})=\sum _{i=2}^{k-1}O(n^{2k-i}p^{2\ell -i\ell /k})}$.

Therefore, when ${\displaystyle p\gg n^{-k/\ell }}$, ${\displaystyle \Pr[X=0]\leq {\frac {\mathbf {Var} [X]}{\mathbf {E} [X]^{2}}}\leq {\frac {\Theta (n^{k}p^{\ell })+\sum _{i=2}^{k-1}O(n^{2k-i}p^{2\ell -i\ell /k})}{\Theta (n^{2k}p^{2\ell })}}=\Theta (n^{-k}p^{-\ell })+\sum _{i=2}^{k-1}O(n^{-i}p^{-i\ell /k})=o(1)}$.
${\displaystyle \square }$
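For example, instantiating the theorem with ${\displaystyle H=K_{4}}$, which is balanced with ${\displaystyle k=4}$ vertices and ${\displaystyle \ell =6}$ edges, gives the threshold ${\displaystyle p=n^{-k/\ell }=n^{-4/6}=n^{-2/3}}$, recovering the threshold for 4-cliques proved above.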

# The Chernoff Bound

Suppose that we have a fair coin. If we toss it once, then the outcome is completely unpredictable. But if we toss it, say, 1000 times, then the number of HEADs is very likely to be around 500. This striking phenomenon is called concentration. The Chernoff bound captures the concentration of independent trials.
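A quick simulation conveys the same picture: toss a fair coin 1000 times, repeat the experiment, and observe how rarely the number of HEADs strays far from 500 (the parameters below are arbitrary choices for the demonstration).

```python
import random

# Toss a fair coin n = 1000 times, repeat the experiment `trials` times,
# and count how often the number of HEADs falls within 5% of n/2 = 500.
rng = random.Random(0)
n, trials = 1000, 2000
close = sum(abs(sum(rng.random() < 0.5 for _ in range(n)) - n // 2) <= n // 20
            for _ in range(trials))
print(f"fraction of runs with |#HEADs - 500| <= 50: {close / trials:.3f}")
```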

The Chernoff bound is also a tail bound for the sum of independent random variables, but it can give exponentially sharp bounds.

Before proving the Chernoff bound, we first introduce the moment generating function.

## Moment generating functions

The more we know about the moments of a random variable ${\displaystyle X}$, the more information we would have about ${\displaystyle X}$. There is a so-called moment generating function, which "packs" all the information about the moments of ${\displaystyle X}$ into one function.

 Definition The moment generating function of a random variable ${\displaystyle X}$ is defined as ${\displaystyle \mathbf {E} \left[\mathrm {e} ^{\lambda X}\right]}$ where ${\displaystyle \lambda }$ is the parameter of the function.

By Taylor's expansion and the linearity of expectation,

{\displaystyle {\begin{aligned}\mathbf {E} \left[\mathrm {e} ^{\lambda X}\right]&=\mathbf {E} \left[\sum _{k=0}^{\infty }{\frac {\lambda ^{k}}{k!}}X^{k}\right]\\&=\sum _{k=0}^{\infty }{\frac {\lambda ^{k}}{k!}}\mathbf {E} \left[X^{k}\right]\end{aligned}}}

The moment generating function ${\displaystyle \mathbf {E} \left[\mathrm {e} ^{\lambda X}\right]}$ is a function of ${\displaystyle \lambda }$.
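For example, if ${\displaystyle X}$ is an indicator (Bernoulli) random variable with ${\displaystyle \Pr[X=1]=p}$, then ${\displaystyle \mathbf {E} \left[\mathrm {e} ^{\lambda X}\right]=(1-p)+p\mathrm {e} ^{\lambda }=1+p(\mathrm {e} ^{\lambda }-1)}$. Differentiating ${\displaystyle k}$ times with respect to ${\displaystyle \lambda }$ and evaluating at ${\displaystyle \lambda =0}$ recovers the ${\displaystyle k}$-th moment ${\displaystyle \mathbf {E} [X^{k}]=p}$; this particular moment generating function reappears in the proof of the Chernoff bound below.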

## The Chernoff bound

The Chernoff bounds are exponentially sharp tail inequalities for the sum of independent trials. The bounds are obtained by applying Markov's inequality to the moment generating function of the sum of independent trials, with some appropriate choice of the parameter ${\displaystyle \lambda }$.

 Chernoff bound (the upper tail) Let ${\displaystyle X=\sum _{i=1}^{n}X_{i}}$, where ${\displaystyle X_{1},X_{2},\ldots ,X_{n}}$ are independent Poisson trials. Let ${\displaystyle \mu =\mathbf {E} [X]}$. Then for any ${\displaystyle \delta >0}$, ${\displaystyle \Pr[X\geq (1+\delta )\mu ]\leq \left({\frac {e^{\delta }}{(1+\delta )^{(1+\delta )}}}\right)^{\mu }.}$
Proof.
 For any ${\displaystyle \lambda >0}$, ${\displaystyle X\geq (1+\delta )\mu }$ is equivalent to ${\displaystyle e^{\lambda X}\geq e^{\lambda (1+\delta )\mu }}$, thus

{\displaystyle {\begin{aligned}\Pr[X\geq (1+\delta )\mu ]&=\Pr \left[e^{\lambda X}\geq e^{\lambda (1+\delta )\mu }\right]\\&\leq {\frac {\mathbf {E} \left[e^{\lambda X}\right]}{e^{\lambda (1+\delta )\mu }}},\end{aligned}}}

where the last step follows by Markov's inequality.

We compute the moment generating function ${\displaystyle \mathbf {E} [e^{\lambda X}]}$:

{\displaystyle {\begin{aligned}\mathbf {E} \left[e^{\lambda X}\right]&=\mathbf {E} \left[e^{\lambda \sum _{i=1}^{n}X_{i}}\right]\\&=\mathbf {E} \left[\prod _{i=1}^{n}e^{\lambda X_{i}}\right]\\&=\prod _{i=1}^{n}\mathbf {E} \left[e^{\lambda X_{i}}\right].&({\mbox{for independent random variables}})\end{aligned}}}

Let ${\displaystyle p_{i}=\Pr[X_{i}=1]}$ for ${\displaystyle i=1,2,\ldots ,n}$. Then ${\displaystyle \mu =\mathbf {E} [X]=\mathbf {E} \left[\sum _{i=1}^{n}X_{i}\right]=\sum _{i=1}^{n}\mathbf {E} [X_{i}]=\sum _{i=1}^{n}p_{i}}$.

We bound the moment generating function of each individual ${\displaystyle X_{i}}$ as follows:

{\displaystyle {\begin{aligned}\mathbf {E} \left[e^{\lambda X_{i}}\right]&=p_{i}\cdot e^{\lambda \cdot 1}+(1-p_{i})\cdot e^{\lambda \cdot 0}\\&=1+p_{i}(e^{\lambda }-1)\\&\leq e^{p_{i}(e^{\lambda }-1)},\end{aligned}}}

where in the last step we use the inequality ${\displaystyle e^{y}\geq 1+y}$ (which follows from the Taylor expansion of ${\displaystyle e^{y}}$) with ${\displaystyle y=p_{i}(e^{\lambda }-1)\geq 0}$. (By doing this, we can turn the product into a sum of the ${\displaystyle p_{i}}$, which is ${\displaystyle \mu }$.) Therefore,

{\displaystyle {\begin{aligned}\mathbf {E} \left[e^{\lambda X}\right]&=\prod _{i=1}^{n}\mathbf {E} \left[e^{\lambda X_{i}}\right]\\&\leq \prod _{i=1}^{n}e^{p_{i}(e^{\lambda }-1)}\\&=\exp \left(\sum _{i=1}^{n}p_{i}(e^{\lambda }-1)\right)\\&=e^{(e^{\lambda }-1)\mu }.\end{aligned}}}

Thus, we have shown that for any ${\displaystyle \lambda >0}$,

{\displaystyle {\begin{aligned}\Pr[X\geq (1+\delta )\mu ]&\leq {\frac {\mathbf {E} \left[e^{\lambda X}\right]}{e^{\lambda (1+\delta )\mu }}}\\&\leq {\frac {e^{(e^{\lambda }-1)\mu }}{e^{\lambda (1+\delta )\mu }}}\\&=\left({\frac {e^{(e^{\lambda }-1)}}{e^{\lambda (1+\delta )}}}\right)^{\mu }.\end{aligned}}}

For any ${\displaystyle \delta >0}$, we can let ${\displaystyle \lambda =\ln(1+\delta )>0}$ to get ${\displaystyle \Pr[X\geq (1+\delta )\mu ]\leq \left({\frac {e^{\delta }}{(1+\delta )^{(1+\delta )}}}\right)^{\mu }.}$
${\displaystyle \square }$

The idea of the proof is actually quite clear: we apply Markov's inequality to ${\displaystyle e^{\lambda X}}$, and for the rest we just estimate the moment generating function ${\displaystyle \mathbf {E} [e^{\lambda X}]}$. To make the bound as tight as possible, we minimize ${\displaystyle {\frac {e^{(e^{\lambda }-1)}}{e^{\lambda (1+\delta )}}}}$ by setting ${\displaystyle \lambda =\ln(1+\delta )}$, which can be justified by taking derivatives of ${\displaystyle {\frac {e^{(e^{\lambda }-1)}}{e^{\lambda (1+\delta )}}}}$.
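Explicitly, minimizing ${\displaystyle {\frac {e^{(e^{\lambda }-1)}}{e^{\lambda (1+\delta )}}}}$ amounts to minimizing the exponent ${\displaystyle (e^{\lambda }-1)-\lambda (1+\delta )}$ over ${\displaystyle \lambda >0}$: its derivative ${\displaystyle e^{\lambda }-(1+\delta )}$ vanishes exactly when ${\displaystyle e^{\lambda }=1+\delta }$, i.e. ${\displaystyle \lambda =\ln(1+\delta )}$, and the second derivative ${\displaystyle e^{\lambda }>0}$ confirms that this is a minimum.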

We then proceed to the lower tail, the probability that the random variable deviates below the mean value:

 Chernoff bound (the lower tail) Let ${\displaystyle X=\sum _{i=1}^{n}X_{i}}$, where ${\displaystyle X_{1},X_{2},\ldots ,X_{n}}$ are independent Poisson trials. Let ${\displaystyle \mu =\mathbf {E} [X]}$. Then for any ${\displaystyle 0<\delta <1}$, ${\displaystyle \Pr[X\leq (1-\delta )\mu ]\leq \left({\frac {e^{-\delta }}{(1-\delta )^{(1-\delta )}}}\right)^{\mu }.}$
Proof.
 For any ${\displaystyle \lambda <0}$, the event ${\displaystyle X\leq (1-\delta )\mu }$ is equivalent to ${\displaystyle e^{\lambda X}\geq e^{\lambda (1-\delta )\mu }}$ (the inequality flips because ${\displaystyle \lambda }$ is negative). By the same analysis as in the upper tail version,

{\displaystyle {\begin{aligned}\Pr[X\leq (1-\delta )\mu ]&=\Pr \left[e^{\lambda X}\geq e^{\lambda (1-\delta )\mu }\right]\\&\leq {\frac {\mathbf {E} \left[e^{\lambda X}\right]}{e^{\lambda (1-\delta )\mu }}}\\&\leq \left({\frac {e^{(e^{\lambda }-1)}}{e^{\lambda (1-\delta )}}}\right)^{\mu }.\end{aligned}}}

For any ${\displaystyle 0<\delta <1}$, we can let ${\displaystyle \lambda =\ln(1-\delta )<0}$ to get ${\displaystyle \Pr[X\leq (1-\delta )\mu ]\leq \left({\frac {e^{-\delta }}{(1-\delta )^{(1-\delta )}}}\right)^{\mu }.}$
${\displaystyle \square }$

Some useful special forms of the bounds can be derived directly from the above general forms. They also make it clear why we say that the bounds are exponentially sharp.

 Useful forms of the Chernoff bound Let ${\displaystyle X=\sum _{i=1}^{n}X_{i}}$, where ${\displaystyle X_{1},X_{2},\ldots ,X_{n}}$ are independent Poisson trials. Let ${\displaystyle \mu =\mathbf {E} [X]}$. Then

1. for ${\displaystyle 0<\delta \leq 1}$, ${\displaystyle \Pr[X\geq (1+\delta )\mu ]<\exp \left(-{\frac {\mu \delta ^{2}}{3}}\right)}$ and ${\displaystyle \Pr[X\leq (1-\delta )\mu ]<\exp \left(-{\frac {\mu \delta ^{2}}{2}}\right)}$;
2. for ${\displaystyle t\geq 2e\mu }$, ${\displaystyle \Pr[X\geq t]\leq 2^{-t}.}$
Proof.
 To obtain the bounds in (1), we need to show that for ${\displaystyle 0<\delta <1}$, ${\displaystyle {\frac {e^{\delta }}{(1+\delta )^{(1+\delta )}}}\leq e^{-\delta ^{2}/3}}$ and ${\displaystyle {\frac {e^{-\delta }}{(1-\delta )^{(1-\delta )}}}\leq e^{-\delta ^{2}/2}}$. We can verify both inequalities by standard analysis techniques.

To obtain the bound in (2), let ${\displaystyle t=(1+\delta )\mu }$. Then ${\displaystyle \delta =t/\mu -1\geq 2e-1}$. Hence,

{\displaystyle {\begin{aligned}\Pr[X\geq (1+\delta )\mu ]&\leq \left({\frac {e^{\delta }}{(1+\delta )^{(1+\delta )}}}\right)^{\mu }\\&\leq \left({\frac {e}{1+\delta }}\right)^{(1+\delta )\mu }\\&\leq \left({\frac {e}{2e}}\right)^{t}\\&\leq 2^{-t}.\end{aligned}}}
${\displaystyle \square }$
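The two inequalities used in (1) are also easy to spot-check numerically on a grid of ${\displaystyle \delta }$ values; a small Python sketch:

```python
import math

# Numerically spot-check  e^d / (1+d)^(1+d) <= e^(-d^2/3)  and
#                         e^(-d) / (1-d)^(1-d) <= e^(-d^2/2)  for 0 < d < 1.
for i in range(1, 100):
    d = i / 100
    upper = math.exp(d) / (1 + d) ** (1 + d)
    lower = math.exp(-d) / (1 - d) ** (1 - d)
    assert upper <= math.exp(-d * d / 3)
    assert lower <= math.exp(-d * d / 2)
print("both inequalities hold on the grid d = 0.01, 0.02, ..., 0.99")
```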

# Balls into bins, revisited

Throwing ${\displaystyle m}$ balls uniformly and independently into ${\displaystyle n}$ bins, what is the maximum load of all bins with high probability? In the last class, we analyzed this problem by using a counting argument.

Now we give a more "advanced" analysis by using Chernoff bounds.

For any ${\displaystyle i\in [n]}$ and ${\displaystyle j\in [m]}$, let ${\displaystyle X_{ij}}$ be the indicator variable for the event that ball ${\displaystyle j}$ is thrown to bin ${\displaystyle i}$. Obviously

${\displaystyle \mathbf {E} [X_{ij}]=\Pr[{\mbox{ball }}j{\mbox{ is thrown to bin }}i]={\frac {1}{n}}}$

Let ${\displaystyle Y_{i}=\sum _{j\in [m]}X_{ij}}$ be the load of bin ${\displaystyle i}$.

Then the expected load of bin ${\displaystyle i}$ is

${\displaystyle (*)\qquad \mu =\mathbf {E} [Y_{i}]=\mathbf {E} \left[\sum _{j\in [m]}X_{ij}\right]=\sum _{j\in [m]}\mathbf {E} [X_{ij}]=m/n.}$

For the case ${\displaystyle m=n}$, it holds that ${\displaystyle \mu =1}$.

Note that ${\displaystyle Y_{i}}$ is a sum of ${\displaystyle m}$ mutually independent indicator variables. Applying the Chernoff bound, for any particular bin ${\displaystyle i\in [n]}$,

${\displaystyle \Pr[Y_{i}>(1+\delta )\mu ]\leq \left({\frac {e^{\delta }}{(1+\delta )^{1+\delta }}}\right)^{\mu }.}$

## When ${\displaystyle m=n}$

When ${\displaystyle m=n}$, ${\displaystyle \mu =1}$. Write ${\displaystyle c=1+\delta }$. The above bound can be written as

${\displaystyle \Pr[Y_{i}>c]\leq {\frac {e^{c-1}}{c^{c}}}.}$

Setting ${\displaystyle c={\frac {e\ln n}{\ln \ln n}}}$, we evaluate ${\displaystyle {\frac {e^{c-1}}{c^{c}}}}$ by taking the logarithm of its reciprocal. For sufficiently large ${\displaystyle n}$,

{\displaystyle {\begin{aligned}\ln \left({\frac {c^{c}}{e^{c-1}}}\right)&=c\ln c-c+1\\&=c(\ln c-1)+1\\&={\frac {e\ln n}{\ln \ln n}}\left(\ln \ln n-\ln \ln \ln n\right)+1\\&\geq {\frac {e\ln n}{\ln \ln n}}\cdot {\frac {2}{e}}\ln \ln n+1\\&\geq 2\ln n.\end{aligned}}}

Thus,

${\displaystyle \Pr \left[Y_{i}>{\frac {e\ln n}{\ln \ln n}}\right]\leq {\frac {1}{n^{2}}}.}$

Applying the union bound, the probability that there exists a bin with load ${\displaystyle >{\frac {e\ln n}{\ln \ln n}}}$ is at most

${\displaystyle n\cdot \Pr \left[Y_{1}>{\frac {e\ln n}{\ln \ln n}}\right]\leq {\frac {1}{n}}}$.

Therefore, for ${\displaystyle m=n}$, with high probability, the maximum load is ${\displaystyle O\left({\frac {e\ln n}{\ln \ln n}}\right)}$.

## For larger ${\displaystyle m}$

When ${\displaystyle m\geq n\ln n}$, according to ${\displaystyle (*)}$ we have ${\displaystyle \mu ={\frac {m}{n}}\geq \ln n}$.

We can apply the easier form (2) of the Chernoff bound:

${\displaystyle \Pr[Y_{i}\geq 2e\mu ]\leq 2^{-2e\mu }\leq 2^{-2e\ln n}<{\frac {1}{n^{2}}},}$

where the last inequality holds because ${\displaystyle 2^{-2e\ln n}=n^{-2e\ln 2}}$ and ${\displaystyle 2e\ln 2>2}$.

By the union bound, the probability that there exists a bin with load ${\displaystyle \geq 2e{\frac {m}{n}}}$ is,

${\displaystyle n\cdot \Pr \left[Y_{1}>2e{\frac {m}{n}}\right]=n\cdot \Pr \left[Y_{1}>2e\mu \right]\leq {\frac {1}{n}}}$.

Therefore, for ${\displaystyle m\geq n\ln n}$, with high probability, the maximum load is ${\displaystyle O\left({\frac {m}{n}}\right)}$.
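A small simulation (with parameters chosen only for illustration) compares the observed maximum load against both high-probability bounds: ${\displaystyle {\frac {e\ln n}{\ln \ln n}}}$ for ${\displaystyle m=n}$, and ${\displaystyle 2e{\frac {m}{n}}}$ for ${\displaystyle m\geq n\ln n}$.

```python
import math
import random

def max_load(m, n, rng):
    """Throw m balls uniformly and independently into n bins; return the maximum load."""
    loads = [0] * n
    for _ in range(m):
        loads[rng.randrange(n)] += 1
    return max(loads)

rng = random.Random(0)
n = 10_000
cases = [
    (n, math.e * math.log(n) / math.log(math.log(n))),       # m = n:        bound e ln n / ln ln n
    (n * int(math.log(n)), 2 * math.e * int(math.log(n))),   # m >= n ln n:  bound 2e m / n
]
for m, bound in cases:
    observed = max(max_load(m, n, rng) for _ in range(5))
    print(f"m = {m}: max load over 5 runs = {observed}, high-probability bound ~ {bound:.1f}")
```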