Advanced Algorithms (Fall 2019)/Problem Set 2

*Submit the electronic version of your homework to <font color=blue>njuadvalg@163.com</font> by 23:59 on 2019/11/5.
*Every solution must include a <font color="red" size=5>complete derivation</font>. You may write in either Chinese or English.
== Problem 1 ==
Let <math>X</math> be a real-valued random variable with finite <math>\mathbb{E}[X]</math> and finite <math>\mathbb{E}\left[\mathrm{e}^{\lambda X}\right]</math> for all <math>\lambda\ge 0</math>. We define the '''log-moment-generating function''' as
:<math>\Psi_X(\lambda):=\ln\mathbb{E}[\mathrm{e}^{\lambda X}] \quad\text{ for all }\lambda\ge 0</math>,
and its ''dual function'':
:<math>\Psi_X^*(t):=\sup_{\lambda\ge 0}(\lambda t-\Psi_X(\lambda))</math>.
Assume that <math>X</math> is NOT almost surely constant. Then due to the convexity of <math>\mathrm{e}^{\lambda X}</math> with respect to <math>\lambda</math>, the function <math>\Psi_X(\lambda)</math> is ''strictly'' convex over <math>\lambda\ge 0</math>.
*Prove the following Chernoff bound:
::<math>\Pr[X\ge t]\le\exp(-\Psi_X^*(t))</math>.
:In particular, if <math>\Psi_X(\lambda)</math> is continuously differentiable, prove that the supremum in <math>\Psi_X^*(t)</math> is achieved at the unique <math>\lambda\ge 0</math> satisfying
::<math>\Psi_X'(\lambda)=t</math>,
:where <math>\Psi_X'(\lambda)</math> denotes the derivative of <math>\Psi_X(\lambda)</math> with respect to <math>\lambda</math>.
 
*'''Normal random variables.''' Let <math>X\sim \mathrm{N}(\mu,\sigma)</math> be a Gaussian random variable with mean <math>\mu</math> and standard deviation <math>\sigma</math>. What are <math>\Psi_X(\lambda)</math> and <math>\Psi_X^*(t)</math>? Give a tail inequality that upper bounds the probability <math>\Pr[X\ge t]</math>.
 
*'''Poisson random variables.''' Let <math>X\sim \mathrm{Pois}(\nu)</math> be a Poisson random variable with parameter <math>\nu</math>, that is, <math>\Pr[X=k]=\mathrm{e}^{-\nu}\nu^k/k!</math> for all <math>k=0,1,2,\ldots</math>. What are <math>\Psi_X(\lambda)</math> and <math>\Psi_X^*(t)</math>? Give a tail inequality that upper bounds the probability <math>\Pr[X\ge t]</math>.
 
*'''Bernoulli random variables.''' Let <math>X\in\{0,1\}</math> be a single Bernoulli trial with probability of success <math>p</math>, that is, <math>\Pr[X=1]=1-\Pr[X=0]=p</math>. Show that for any <math>t\in(p,1)</math>, we have <math>\Psi_X^*(t)=D(Y \| X)</math>, where <math>Y\in\{0,1\}</math> is a Bernoulli random variable with parameter <math>t</math> and <math>D(Y \| X)=(1-t)\ln\frac{1-t}{1-p}+t\ln\frac{t}{p}</math> is the [https://en.wikipedia.org/wiki/Kullback–Leibler_divergence '''Kullback-Leibler divergence'''] between <math>Y</math> and <math>X</math>. (A numerical check of this identity is sketched at the end of this problem.)
 
*'''Sum of independent random variables.''' Let <math>X=\sum_{i=1}^nX_i</math> be the sum of <math>n</math> independent and identically distributed random variables <math>X_1,X_2,\ldots, X_n</math>. Show that <math>\Psi_X(\lambda)=\sum_{i=1}^n\Psi_{X_i}(\lambda)</math> and <math>\Psi_X^*(t)=n\Psi^*_{X_i}(\frac{t}{n})</math>. Also, for a binomial random variable <math>X\sim \mathrm{Bin}(n,p)</math>, give an upper bound on the tail probability <math>\Pr[X\ge t]</math> in terms of the KL-divergence.
 
:Give an upper bound on <math>\Pr[X\ge t]</math> when every <math>X_i</math> follows the geometric distribution with probability <math>p</math> of success.
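Before proving the Bernoulli identity <math>\Psi_X^*(t)=D(Y\|X)</math> above, it can be sanity-checked numerically. The Python sketch below is an illustration only (the parameter <math>p=0.3</math> and the values of <math>t</math> are arbitrary choices): it maximizes <math>\lambda t-\Psi_X(\lambda)</math> by ternary search, which is valid because the objective is concave in <math>\lambda</math>, and compares the maximum against the KL-divergence.
<pre>
import math

def psi(lam, p):
    # Log-MGF of a Bernoulli(p): Psi(lam) = ln E[e^{lam*X}] = ln(1 - p + p*e^lam)
    return math.log(1 - p + p * math.exp(lam))

def psi_star(t, p, lo=0.0, hi=50.0, iters=200):
    # Numerically maximize lam*t - Psi(lam) over lam >= 0 by ternary search;
    # this is valid because the objective is concave in lam.
    g = lambda lam: lam * t - psi(lam, p)
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if g(m1) < g(m2):
            lo = m1
        else:
            hi = m2
    return g((lo + hi) / 2)

def kl(t, p):
    # D(Bernoulli(t) || Bernoulli(p))
    return (1 - t) * math.log((1 - t) / (1 - p)) + t * math.log(t / p)

p = 0.3
for t in [0.4, 0.5, 0.7, 0.9]:
    print(t, psi_star(t, p), kl(t, p))   # the last two columns should agree
</pre>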
 
== Problem 2 ==
An <math>n</math>-dimensional hypercube <math>Q_n</math> is a graph with <math>2^n</math> vertices, where each vertex is represented by an <math>n</math>-bit vector, and there is an edge between two vertices if and only if their bit-vectors differ in exactly one bit.
 
Given an <math>n</math>-dimensional hypercube and a non-empty subset of vertices <math>S</math>, whose members are said to be marked black, let <math>f(u)</math> denote the shortest distance from vertex <math>u</math> to any black vertex. Formally,
 
<math>f(u) = \min_{v \in S}\mathrm{dist}_{Q_n}(u,v),</math>
 
where <math>\mathrm{dist}_{Q_n}(u,v)</math> denotes the length of the shortest path between <math>u</math> and <math>v</math> in graph <math>Q_n</math>.
 
Prove that if we choose <math>u</math> from all <math>2^n</math> vertices uniformly at random, then, with high probability, <math>f(u)</math> does not deviate from its expectation by too much:
 
<math>\mathrm{Pr}[|f(u) - \mathbb{E}[f(u)]| \geq t\sqrt{n \log n}] \leq n^{-c}.</math>
 
Give the relation between <math>c</math> and <math>t</math>.
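For intuition, the concentration phenomenon above can be observed in simulation. The Python sketch below is an illustration only (the dimension <math>n=12</math> and the black set of 5 random vertices are arbitrary choices): it computes <math>f(u)</math> exactly for every vertex via a multi-source BFS from <math>S</math> and reports the largest deviation of <math>f(u)</math> from its mean, to be compared with <math>\sqrt{n\log n}</math>.
<pre>
import math
import random
from collections import deque

def distances_to_black(n, S):
    # Multi-source BFS on the hypercube Q_n, starting from the black set S.
    f = [None] * (2 ** n)
    q = deque(S)
    for v in S:
        f[v] = 0
    while q:
        u = q.popleft()
        for i in range(n):           # neighbors differ in exactly one bit
            w = u ^ (1 << i)
            if f[w] is None:
                f[w] = f[u] + 1
                q.append(w)
    return f

n = 12
random.seed(0)
S = random.sample(range(2 ** n), 5)   # an arbitrary small black set
f = distances_to_black(n, S)
mean = sum(f) / 2 ** n                # = E[f(u)] for a uniform random vertex u
spread = max(abs(x - mean) for x in f)
print(mean, spread, math.sqrt(n * math.log(n)))
</pre>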
 
== Problem 3 ==
Let <math>Y_1,Y_2,Y_3,\ldots,Y_n</math> be a set of <math>n</math> random variables where each <math>Y_i \in \{0,1\}</math>. The variables <math>Y_i</math> follow some joint distribution over <math>\{0,1\}^n</math> and may NOT be mutually independent. Assume that the following property holds for <math>(Y_i)_{1\leq i \leq n}</math>: for any <math>1\leq i \leq n</math>, it holds that
 
<math>\forall c_1\in \{0,1\},c_2\in \{0,1\},\ldots,c_{i-1}\in \{0,1\} , \quad \Pr[Y_i = 1 \mid \forall 1\leq j<i,  Y_j = c_j  ] \leq p.</math>
 
Let <math>X_1,X_2,X_3,\ldots,X_n</math> be a set of <math>n</math> mutually independent random variables where each <math>X_i \in \{0,1\}</math>. Assume
 
<math>\forall 1\leq i \leq n:\quad \Pr[X_i = 1] = p.</math>
 
Prove that for any <math>a \geq 0</math>, it holds that
 
<math>\Pr\left[\sum_{i= 1}^n Y_i \geq a\right] \leq \Pr\left[\sum_{i= 1}^n X_i \geq a\right].</math>
 
Prove that for any <math>t \geq 0</math>, it holds that
 
<math>\Pr\left[\sum_{i= 1}^n Y_i \geq np + t\right] \leq \exp\left( -\frac{2t^2}{n}\right).</math>
 
 
<strong>Hint</strong>: Although the random variables <math>Y_1,Y_2,Y_3,\ldots,Y_n</math> may not be mutually independent, we can still bound the tail probability for <math>\sum_{i=1}^nY_i</math>. The tool for doing this is called stochastic dominance.
 
To prove the first inequality, you only need to construct a coupling <math>\mathcal{C}</math> between <math>(X_i)_{1\leq i \leq n}</math>  and <math> (Y_i)_{ 1\leq i\leq n }</math> such that
 
<math> \Pr_{\mathcal{C}}[\,\forall 1\leq i \leq n, Y_i \leq X_i\,] = 1.</math>
 
This implies that the random sequence <math>(Y_i)_{1\leq i \leq n}</math> is stochastically dominated by the random sequence <math>(X_i)_{1\leq i \leq n}</math>. A simulation of such a coupling is sketched below.
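The following Python sketch simulates one such coupling, for illustration only. The conditional law <code>cond</code> is hypothetical (made up for this sketch); any law satisfying the assumed bound <math>\Pr[Y_i=1\mid\cdots]\le p</math> would do. A single shared uniform draw drives both <math>X_i</math> and <math>Y_i</math> in each round, which forces <math>Y_i\le X_i</math> pointwise.
<pre>
import random

def sample_coupled(n, p, cond_prob):
    # One standard coupling: a shared uniform U_i drives both variables.
    # cond_prob(history) returns Pr[Y_i = 1 | Y_1..Y_{i-1}]; it must be <= p.
    xs, ys = [], []
    for i in range(n):
        q = cond_prob(ys)              # by assumption q <= p
        u = random.random()
        ys.append(1 if u < q else 0)
        xs.append(1 if u < p else 0)   # q <= p forces Y_i <= X_i pointwise
    return xs, ys

# A toy, hypothetical conditional law: success gets less likely after successes.
p = 0.5
cond = lambda hist: p / (1 + sum(hist))
random.seed(0)
for _ in range(5):
    xs, ys = sample_coupled(20, p, cond)
    assert all(y <= x for x, y in zip(xs, ys))
    print(sum(ys), "<=", sum(xs))
</pre>
Note that each <math>X_i</math> is still marginally an independent Bernoulli trial with parameter <math>p</math>, since it is determined by a fresh uniform draw in each round.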
 
== Problem 4 ==
Let <math>U</math> be a universal set.
We use <math>2^U</math> to denote the collection of all subsets of <math>U</math>.
Let <math>\mathcal{F}</math> be a family of hash functions, in which each hash function <math>h:2^U \rightarrow \{0,1\}^m</math> maps subsets of <math>U</math> to 0-1 vectors of length <math>m</math>.
A locality sensitive hashing scheme is a distribution on a family <math>\mathcal{F}</math> of hash functions operating on <math>2^U</math>, such that for any two subsets <math>A,B \in 2^U</math>,
 
<math>(1) \qquad \Pr_{h\in\mathcal{F}}[h(A)=h(B)]=sim(A,B).</math>
 
Here <math>sim:2^U \times 2^U \rightarrow [0,1]</math> is called the similarity function. Given a hash function family <math>\mathcal{F}</math> that satisfies Equation (1), we say that <math>\mathcal{F}</math> is a locality sensitive hash function family corresponding to the similarity function <math>sim(\cdot,\cdot)</math>.
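As a concrete instance of Equation (1), the classic min-hash family of Broder is locality sensitive for the Jaccard similarity <math>sim(A,B)=|A\cap B|/|A\cup B|</math>: draw a uniformly random permutation <math>\pi</math> of <math>U</math> and let <math>h_\pi(A)</math> be the element of <math>A</math> that comes first under <math>\pi</math> (its binary encoding gives a value in <math>\{0,1\}^m</math>). The Python sketch below, an illustration only with arbitrarily chosen test sets, estimates the collision probability empirically.
<pre>
import random

def minhash(perm, A):
    # h_pi(A) = the element of A ranked first by the random permutation pi
    return min(A, key=perm.__getitem__)

def jaccard(A, B):
    return len(A & B) / len(A | B)

U = list(range(100))                      # a small toy universe
A, B = set(range(0, 60)), set(range(40, 100))
random.seed(0)
trials, hits = 5000, 0
for _ in range(trials):
    order = U[:]
    random.shuffle(order)
    perm = {x: i for i, x in enumerate(order)}   # element -> rank
    hits += minhash(perm, A) == minhash(perm, B)
print(hits / trials, jaccard(A, B))       # the two values should be close
</pre>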
 
*For any similarity function <math>sim(A,B)</math> that admits a locality sensitive hash function family as defined in Equation (1), prove that the distance function <math>d(A,B) \triangleq 1-sim(A,B)</math> satisfies the triangle inequality; formally,
 
<math> \forall A,B,C \in 2^U :\quad d(A,B) + d(B,C) \geq d(A,C).</math>
 
*Show that there is no locality sensitive hash function family corresponding to the Dice coefficient or the Overlap coefficient. The Dice coefficient is defined as:
 
<math>sim_{Dice}(A,B)=\frac{2|A\cap B|}{|A| + |B|}.</math>
 
The Overlap coefficient is defined as:
 
<math>sim_{Ovl}(A,B) = \frac{|A\cap B|}{\min(|A|, |B|)}.</math>
 
<font color = "blue">Hint: use the triangle inequality result. </font>
 
* Construct a collection of hash functions <math>\mathcal{B}</math>, where <math>f : \{0,1\}^m \rightarrow \{0,1\}</math> for each <math>f \in \mathcal{B}</math>, together with a probability distribution on <math>\mathcal{B}</math>, such that
 
<math>\forall x, y \in \{0,1\}^m:\quad
  \Pr_{f \in \mathcal{B}}[f(x) = f(y)] = \begin{cases}
1 &\text{ if } x= y;\\
\frac{1}{2} &\text{ if } x \neq y.
\end{cases}
</math>
 
Then use the hash function family <math>\mathcal{B}</math> to prove the following result:
given a locality sensitive hash function family <math>\mathcal{F}</math> (<math>h:2^U \rightarrow \{0,1\}^m</math> for each <math>h \in \mathcal{F}</math>) corresponding to a similarity function <math>sim(A,B)</math>, we can obtain a locality sensitive hash function family <math>\mathcal{F}'</math> (<math>h':2^U \rightarrow \{0,1\}</math> for each <math>h' \in \mathcal{F}'</math>) corresponding to the similarity function <math>\frac{1+sim(A,B)}{2}</math>.
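One natural candidate for <math>\mathcal{B}</math>, offered here only as a sketch and not necessarily the intended construction, is the family of parity functions <math>f_r(x)=\langle r,x\rangle \bmod 2</math> with the mask <math>r</math> drawn uniformly from <math>\{0,1\}^m</math>: we have <math>f_r(x)=f_r(y)</math> if and only if <math>\langle r, x\oplus y\rangle</math> is even, which holds for exactly half of all masks whenever <math>x\neq y</math>. The Python sketch below verifies the collision probabilities exhaustively for a small, arbitrarily chosen <math>m=8</math>.
<pre>
import random

m = 8

def parity_hash(r, x):
    # f_r(x) = <r, x> mod 2: parity of the bits of x selected by the mask r
    return bin(r & x).count("1") & 1

random.seed(0)
for _ in range(5):
    x, y = random.getrandbits(m), random.getrandbits(m)
    collisions = sum(parity_hash(r, x) == parity_hash(r, y) for r in range(2 ** m))
    print(x == y, collisions / 2 ** m)   # 1.0 when x == y, exactly 0.5 otherwise
</pre>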

Combinatorics (Fall 2019)/Homework 2 submission list

The list of submissions received by email is as follows:

161180076
161220017
161220010
161220020
161220070
161240004
161240031
161240059
161290005
171240536
171240540
171860558
171860573
mf1833054
mg1833029
mg1833039
mg1833045
mg1833048
mg1933059