高级算法 (Fall 2020) and 高级算法 (Fall 2020)/Problem Set 1
{{Infobox
|name         = Infobox
|bodystyle    = 
|title        = <font size=3>高级算法<br>Advanced Algorithms</font>
|titlestyle   = 
|image        = 
|imagestyle   = 
|caption      = 
|captionstyle = 
|headerstyle  = background:#ccf;
|labelstyle   = background:#ddf;
|datastyle    = 
|header1 = Instructor
|label1  = 
|data1   = 
|header2 = 
|label2  = 
|data2   = 尹一通
|header3 = 
|label3  = Email
|data3   = yinyt@nju.edu.cn chaodong@nju.edu.cn
|header4 = 
|label4  = Office
|data4   = 计算机系 804
|header5 = Class
|label5  = 
|data5   = 
|header6 = 
|label6  = Class meetings
|data6   = Wednesday, 10am-12pm <br> Monday (biweekly), 8am-10am <br> 仙I-319
|header7 = 
|label7  = Place
|data7   = 
|header8 = 
|label8  = Office hours
|data8   = Wednesday, 4pm-6pm <br> 804
|header9 = Textbooks
|label9  = 
|data9   = 
|header10 = 
|label10  = 
|data10   = [[File:MR-randomized-algorithms.png|border|100px]]
|header11 = 
|label11  = 
|data11   = Motwani and Raghavan. <br>''Randomized Algorithms''.<br> Cambridge Univ Press, 1995.
|header12 = 
|label12  = 
|data12   = [[File:Approximation_Algorithms.jpg|border|100px]]
|header13 = 
|label13  = 
|data13   = Vazirani. <br>''Approximation Algorithms''. <br> Springer-Verlag, 2001.
|belowstyle = background:#ddf;
|below = 
}}

This is the webpage for the ''Advanced Algorithms'' class of Fall 2020. Students who take this class should check this page periodically for content updates and new announcements.

= Announcement =
*TBA

= Course info =
* '''Instructor''': 尹一通 ([http://tcs.nju.edu.cn/yinyt/ homepage])
:*'''email''': yinyt@nju.edu.cn
* '''Teaching Assistant''': TBA
* '''Class meeting''': Wednesday 10am-12pm, 仙I-319; Monday (biweekly) 8am-10am, 仙I-319.
* '''Office hour''': Wednesday 4pm-6pm, 计算机系 804.

= Syllabus =
With the continuing development of the theory of computer algorithms, the design and analysis of modern algorithms make heavy use of non-elementary mathematical tools and non-traditional algorithmic ideas. The course ''Advanced Algorithms'' is set up in response to this trend. It systematically teaches advanced ideas of algorithm design and tools of algorithm analysis that are not systematically covered by traditional algorithms courses, yet play important roles in research and practice across the fields of computer science.

=== Prerequisites ===
* Required: discrete mathematics, probability theory, linear algebra.
* Recommended: design and analysis of algorithms.

=== Course materials ===
* [[高级算法 (Fall 2020) / Course materials|<font size=3>Textbooks and references</font>]]

=== Grades ===
* Grading: there will be several problem sets and one final exam. The final grade is a combination of the problem-set scores and the final-exam score.
* Late submissions: if for a special reason you cannot finish an assignment on time, contact the instructor in advance and give a valid justification; otherwise, late assignments will not be accepted.

=== <font color=red> Academic Integrity </font>===
Academic integrity is the most basic professional and ethical standard for every student and scholar engaged in academic activities. This course will spare no effort to uphold it, and violations of this bottom line will not be tolerated.

Principles for completing assignments: the work bearing your name must be your own contribution. Discussion is allowed while working on an assignment, provided all participants of the discussion are at a comparable stage of completion. The execution of key ideas and the writing of the submitted text must be done independently, and you must acknowledge everyone who participated in the discussion. No other form of collaboration is allowed, in particular "discussion" with classmates who have already finished the assignment.

This course takes a zero-tolerance attitude toward plagiarism. Both direct textual copying from others' work (publications, Internet materials, other students' assignments, etc.) and copying of key ideas or key elements count as plagiarism, following the interpretation of the [http://www.acm.org/publications/policies/plagiarism_policy ACM Policy on Plagiarism]. Plagiarists will have their grades cancelled. If mutual copying is discovered, <font color=red>both the copier and the student who was copied will have their grades cancelled</font>, so please actively prevent your assignments from being copied by others.

Academic integrity shapes each student's personal character and also concerns the proper functioning of the whole education system. Committing academic misconduct for a few points not only turns you into a cheater, but also renders the honest efforts of others meaningless. Let us work together to maintain an environment of integrity.

= Assignments =
*[[高级算法 (Fall 2020)/Problem Set 1|Problem Set 1]], due on 2020/10/14 (hand in before class).

= Lecture Notes =
# [[高级算法 (Fall 2020)/Min-Cut and Max-Cut|Min-Cut and Max-Cut]] ([http://tcs.nju.edu.cn/slides/aa2019/Cut.pdf slides])
#: [[高级算法 (Fall 2020)/Probability Basics|Probability basics]]
# [[高级算法 (Fall 2020)/Balls into bins|Balls into bins and Chernoff bound]] ([https://chaodong.me/teaching/advalg/2020/slides/chernoff.pptx slides])
#: [[高级算法 (Fall 2020)/Basic tail inequalities|Basic tail inequalities]]
# [[高级算法 (Fall 2020)/Concentration of measure|Concentration of measure]] ([https://chaodong.me/teaching/advalg/2020/slides/martingale.pptx slides])
#: [[高级算法 (Fall 2020)/Conditional expectations|Conditional expectations]]
# [[高级算法 (Fall 2020)/Fingerprinting|Fingerprinting]] ([http://tcs.nju.edu.cn/slides/aa2019/Fingerprinting.pdf slides])
#: [[高级算法 (Fall 2020)/Finite Field Basics|Finite field basics]]
# [[高级算法 (Fall 2020)/Hashing and Sketching|Hashing and Sketching]] ([http://tcs.nju.edu.cn/slides/aa2019/Hashing.pdf slides])

= Related Online Courses =
* [http://people.csail.mit.edu/moitra/854.html Advanced Algorithms] by Ankur Moitra at MIT.
* [http://courses.csail.mit.edu/6.854/current/ Advanced Algorithms] by David Karger and Aleksander Mądry at MIT.
* [http://web.stanford.edu/class/cs168/index.html The Modern Algorithmic Toolbox] by Tim Roughgarden and Gregory Valiant at Stanford.
* [https://www.cs.princeton.edu/courses/archive/fall15/cos521/ Advanced Algorithm Design] by Sanjeev Arora at Princeton.
* [http://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15859-f11/www/ Linear and Semidefinite Programming (Advanced Algorithms)] by Anupam Gupta and Ryan O'Donnell at CMU.
* The [https://www.cs.cornell.edu/jeh/book.pdf "Foundations of Data Science" book] by Avrim Blum, John Hopcroft, and Ravindran Kannan.

= 高级算法 (Fall 2020)/Problem Set 1 =

*Each problem's solution must include <font color="red" size=5>the complete solving process</font>. Solutions may be written in either Chinese or English.

== Problem 1 ==
Modify Karger's contraction algorithm so that it works for the ''weighted min-cut problem''. Prove that the modified algorithm returns a weighted minimum cut with probability at least <math>\frac{2}{n(n-1)}</math>.

The weighted min-cut problem is defined as follows.
*'''Input''': an undirected weighted graph <math>G(V, E)</math>, where every edge <math>e \in E</math> is associated with a positive real weight <math>w_e</math>;
*'''Output''': a cut <math>C</math> in <math>G</math> such that <math>\sum_{e \in C} w_e</math> is minimized.
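For intuition, here is a minimal sketch (in Python, and not a substitute for the required proof) of one natural weighted modification of the contraction algorithm: repeatedly contract an edge sampled with probability proportional to its weight until only two super-vertices remain. The graph representation and helper names are our own illustration, and the input graph is assumed connected.

<syntaxhighlight lang="python">
import random

def weighted_karger_run(n, edges):
    """One contraction run on a weighted graph.

    n     -- number of vertices, labeled 0..n-1 (graph assumed connected)
    edges -- list of (u, v, w) with positive real weights w

    Returns the weight of the cut produced by this single run; as in the
    unweighted analysis, repeating the run and keeping the lightest cut
    found boosts the overall success probability.
    """
    parent = list(range(n))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    remaining = n
    while remaining > 2:
        # Edges whose endpoints lie in different super-vertices.
        alive = [(u, v, w) for (u, v, w) in edges if find(u) != find(v)]
        # Sample one of them with probability proportional to its weight.
        r = random.uniform(0, sum(w for _, _, w in alive))
        for u, v, w in alive:
            r -= w
            if r <= 0:
                parent[find(u)] = find(v)    # contract the sampled edge
                remaining -= 1
                break

    # Weight of the cut between the two surviving super-vertices.
    return sum(w for (u, v, w) in edges if find(u) != find(v))

# Example: a 4-cycle with weights 3, 1, 3, 1; the min cut has weight 2.
print(min(weighted_karger_run(4, [(0, 1, 3.0), (1, 2, 1.0),
                                  (2, 3, 3.0), (3, 0, 1.0)])
          for _ in range(100)))
</syntaxhighlight>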

== Problem 2 ==
Let <math>X</math> be a real-valued random variable with finite <math>\mathbb{E}[X]</math> and finite <math>\mathbb{E}\left[\mathrm{e}^{\lambda X}\right]</math> for all <math>\lambda\ge 0</math>. We define the '''log-moment-generating function''' as
:<math>\Psi_X(\lambda):=\ln\mathbb{E}[\mathrm{e}^{\lambda X}] \quad\text{ for all }\lambda\ge 0</math>,
and its ''dual function'':
:<math>\Psi_X^*(t):=\sup_{\lambda\ge 0}(\lambda t-\Psi_X(\lambda))</math>.
Assume that <math>X</math> is NOT almost surely constant. Then due to the convexity of <math>\mathrm{e}^{\lambda X}</math> with respect to <math>\lambda</math>, the function <math>\Psi_X(\lambda)</math> is ''strictly'' convex over <math>\lambda\ge 0</math>.
*Prove the following Chernoff bound:
::<math>\Pr[X\ge t]\le\exp(-\Psi_X^*(t))</math>.
:In particular, if <math>\Psi_X(\lambda)</math> is continuously differentiable, prove that the supremum in <math>\Psi_X^*(t)</math> is achieved at the unique <math>\lambda\ge 0</math> satisfying
::<math>\Psi_X'(\lambda)=t</math>,
:where <math>\Psi_X'(\lambda)</math> denotes the derivative of <math>\Psi_X(\lambda)</math> with respect to <math>\lambda</math>.
*'''Normal random variables.''' Let <math>X\sim \mathrm{N}(\mu,\sigma)</math> be a Gaussian random variable with mean <math>\mu</math> and standard deviation <math>\sigma</math>. What are <math>\Psi_X(\lambda)</math> and <math>\Psi_X^*(t)</math>? Give a tail inequality upper-bounding the probability <math>\Pr[X\ge t]</math>.
*'''Poisson random variables.''' Let <math>X\sim \mathrm{Pois}(\nu)</math> be a Poisson random variable with parameter <math>\nu</math>, that is, <math>\Pr[X=k]=\mathrm{e}^{-\nu}\nu^k/k!</math> for all <math>k=0,1,2,\ldots</math>. What are <math>\Psi_X(\lambda)</math> and <math>\Psi_X^*(t)</math>? Give a tail inequality upper-bounding the probability <math>\Pr[X\ge t]</math>.
*'''Bernoulli random variables.''' Let <math>X\in\{0,1\}</math> be a single Bernoulli trial with probability of success <math>p</math>, that is, <math>\Pr[X=1]=1-\Pr[X=0]=p</math>. Show that for any <math>t\in(p,1)</math>, we have <math>\Psi_X^*(t)=D(Y \| X)</math>, where <math>Y\in\{0,1\}</math> is a Bernoulli random variable with parameter <math>t</math> and <math>D(Y \| X)=(1-t)\ln\frac{1-t}{1-p}+t\ln\frac{t}{p}</math> is the [https://en.wikipedia.org/wiki/Kullback–Leibler_divergence '''Kullback-Leibler divergence'''] between <math>Y</math> and <math>X</math>.
*'''Sum of independent random variables.''' Let <math>X=\sum_{i=1}^nX_i</math> be the sum of <math>n</math> independently and identically distributed random variables <math>X_1,X_2,\ldots, X_n</math>. Show that <math>\Psi_X(\lambda)=\sum_{i=1}^n\Psi_{X_i}(\lambda)</math> and <math>\Psi_X^*(t)=n\Psi^*_{X_i}(\frac{t}{n})</math>. Also, for a binomial random variable <math>X\sim \mathrm{Bin}(n,p)</math>, give an upper bound on the tail probability <math>\Pr[X\ge t]</math> in terms of the KL-divergence.
:Give an upper bound on <math>\Pr[X\ge t]</math> when every <math>X_i</math> follows the geometric distribution with success probability <math>p</math>.
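As a numerical sanity check of these definitions (not a solution), the sketch below approximates <math>\Psi_X(\lambda)</math> and <math>\Psi_X^*(t)</math> for a single Bernoulli trial by grid search over <math>\lambda</math>, compares the result against the KL form asserted in the Bernoulli sub-question, and then tests the resulting bound on a binomial tail. The grid parameters and all names are our own illustration.

<syntaxhighlight lang="python">
import math
import random

def psi_bernoulli(lam, p):
    """Log-moment-generating function of Bernoulli(p): ln E[e^(lam*X)]."""
    return math.log(1 - p + p * math.exp(lam))

def psi_star(t, p, lam_max=50.0, steps=20000):
    """Crude grid-search approximation of sup_{lam>=0} (lam*t - Psi(lam))."""
    best = 0.0                                   # lam = 0 contributes 0
    for k in range(1, steps + 1):
        lam = lam_max * k / steps
        best = max(best, lam * t - psi_bernoulli(lam, p))
    return best

p, t, n = 0.3, 0.5, 200

# The Bernoulli sub-question asserts Psi*(t) = D(Bern(t) || Bern(p)):
kl = (1 - t) * math.log((1 - t) / (1 - p)) + t * math.log(t / p)
print(psi_star(t, p), kl)                        # should agree closely

# Chernoff bound for X ~ Bin(n, p): Pr[X >= n*t] <= exp(-n * Psi*(t)),
# using the identity Psi_X^*(n*t) = n * Psi^*_{X_i}(t) from the last item.
bound = math.exp(-n * psi_star(t, p))
trials = 20000
emp = sum(sum(random.random() < p for _ in range(n)) >= n * t
          for _ in range(trials)) / trials
print(bound, emp)                                # empirical tail <= bound
</syntaxhighlight>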

== Problem 3 ==
Let <math>X_1,X_2,\ldots,X_n</math> be <math>n</math> random variables, where each <math>X_i \in \{0, 1\}</math> follows the distribution <math>\mu_i</math>. For each <math>1\leq i \leq n</math>, let <math>\rho_i = \mathbb{E}[X_i]</math> and assume <math>\rho_i \geq \frac{1}{2}</math>. Consider the problem of estimating the value of
:<math>Z = \prod_{i = 1}^n \rho_i</math>.
For each <math>1\leq  i \leq n</math>, the algorithm draws <math>s</math> random samples <math>X_i^{(1)},X_i^{(2)},\ldots,X_i^{(s)}</math> independently from the distribution <math>\mu_i</math>, and computes
:<math>\widehat{\rho}_{i}=\frac{1}{s}\sum_{j=1}^s X_i^{(j)}</math>.
Finally, the algorithm outputs the product of all the <math>\widehat{\rho}_{i}</math>:
:<math>\widehat{Z}=\prod_{i= 1}^n\widehat{\rho}_i</math>.
Express <math>s</math> as a function of <math>n,\varepsilon,\delta</math> so that the output <math>\widehat{Z}</math> satisfies
:<math>\Pr\left[\mathrm{e}^{-\varepsilon}Z \leq \widehat{Z} \leq \mathrm{e}^{\varepsilon}Z\right] \geq 1- \delta</math>.
Try to make <math>s</math> as small as possible.
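The estimator itself is mechanical to implement; a direct transcription (with <math>s</math> left as a parameter, since choosing <math>s</math> is the point of the problem) might look as follows. The sampler interface is our own illustration.

<syntaxhighlight lang="python">
import random

def estimate_product(mus, s):
    """Estimate Z = prod_i E[X_i] using s independent samples per variable.

    mus -- list of samplers; mus[i]() returns one draw of X_i in {0, 1}
    s   -- samples per variable, to be chosen as a function of n, eps, delta
    """
    z_hat = 1.0
    for mu in mus:
        rho_hat = sum(mu() for _ in range(s)) / s   # empirical mean of X_i
        z_hat *= rho_hat
    return z_hat

# Example: every X_i is Bernoulli(0.9), so Z = 0.9 ** n.
n = 50
mus = [(lambda: int(random.random() < 0.9)) for _ in range(n)]
print(estimate_product(mus, s=1000), 0.9 ** n)
</syntaxhighlight>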

== Problem 4 ==
In the balls-and-bins model, we throw <math>m</math> balls independently and uniformly at random into <math>n</math> bins. We know that the maximum load is <math>\Theta\left(\frac{\log n}{\log\log n}\right)</math> with high probability when <math>m=\Theta(n)</math>.

The two-choice paradigm is another way to throw <math>m</math> balls into <math>n</math> bins: each ball is thrown into the less loaded of two bins chosen independently and uniformly at random (the two chosen bins may coincide, in which case the ball is thrown into that bin), breaking ties arbitrarily. When <math>m=\Theta(n)</math>, the maximum load of the two-choice paradigm is known to be <math>\Theta(\log\log n)</math> with high probability, which is exponentially smaller than the maximum load when there is only one random choice. This phenomenon is called '''''the power of two choices'''''.

Here are the questions:
*Consider the following paradigm: we throw <math>n</math> balls into <math>n</math> bins. The first <math>\frac{n}{2}</math> balls are thrown into bins independently and uniformly at random. The remaining <math>\frac{n}{2}</math> balls are thrown into bins using the two-choice paradigm. What is the maximum load with high probability? You need to give an asymptotically tight bound (in the form of <math>\Theta(\cdot)</math>).
*Replace the above paradigm with the following: the first <math>\frac{n}{2}</math> balls are thrown into bins using the two-choice paradigm, while the remaining <math>\frac{n}{2}</math> balls are thrown into bins independently and uniformly at random. What is the maximum load with high probability in this case? You need to give an asymptotically tight bound.
*Replace the above paradigm with the following: assume all <math>n</math> balls are thrown in a sequence. For every <math>1\le i\le n</math>, if <math>i</math> is odd, we throw the <math>i</math>-th ball into bins independently and uniformly at random; otherwise, we throw it into bins using the two-choice paradigm. What is the maximum load with high probability in this case? You need to give an asymptotically tight bound.
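Before attempting the proofs, it can help to simulate the one-choice and two-choice processes side by side and observe the maximum load; a minimal sketch, with all names our own illustration:

<syntaxhighlight lang="python">
import random

def max_load(n, m, choices):
    """Throw m balls into n bins; each ball goes to the least loaded of
    `choices` bins picked independently and uniformly at random
    (repetitions allowed), ties broken by position."""
    load = [0] * n
    for _ in range(m):
        picks = [random.randrange(n) for _ in range(choices)]
        best = min(picks, key=lambda b: load[b])
        load[best] += 1
    return max(load)

n = 100000
print(max_load(n, n, choices=1))   # typically on the order of log n / log log n
print(max_load(n, n, choices=2))   # typically on the order of log log n
</syntaxhighlight>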

== Problem 5 ==
Consider the function <math>f:\mathbb{R}^n\to\mathbb{R}</math> defined as
:<math>f(\vec x)=f(x_1,x_2,\dots,x_n)=\prod_{i=1}^{n}(a_ix_i+b_i)</math>,
where <math>\{a_i\}_{1\le i\le n}</math> and <math>\{b_i\}_{1\le i\le n}</math> are '''unknown''' coefficients satisfying <math>a_i, b_i\in \mathbb{Z}</math> and <math>0\le a_i, b_i \le n</math> for all <math>1\le i\le n</math>.

Let <math>p>n</math> be the smallest prime strictly greater than <math>n</math>, and let <math>g:\mathbb{Z}_p^n\to\mathbb{Z}_p</math> be defined as
:<math>g(\vec x)=g(x_1,x_2,\dots,x_n)=\prod_{i=1}^{n}(a_ix_i+b_i)</math>,
where <math>+</math> and <math>\cdot</math> are defined over the finite field <math>\mathbb{Z}_p</math>.

By the properties of finite fields, for any value <math>\vec r\in\mathbb{Z}_p^n</math>, it holds that <math>g(\vec r)=f(\vec r)\bmod p</math>.

Since the coefficients <math>\{a_i\}_{1\le i\le n}</math> and <math>\{b_i\}_{1\le i\le n}</math> are unknown, you cannot evaluate <math>f(\vec x)</math> directly. However, there exists an oracle <math>O</math>: each time <math>O</math> receives an input <math>\vec x</math>, it immediately outputs the value of <math>g(\vec x)</math>.

1. Prove that <math>f\not\equiv 0 \Rightarrow g\not\equiv 0</math>.

2. Use the oracle <math>O</math> to design an algorithm to determine whether <math>f\equiv 0</math>, with error probability at most <math>\epsilon</math>, where <math>\epsilon\in (0,1)</math> is a constant.
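For intuition only (the proofs are the content of the problem): assuming part 1, a natural algorithm for part 2 queries the oracle at independent uniform points of <math>\mathbb{Z}_p^n</math> and reports <math>f\equiv 0</math> only if every query returns zero. Below is a minimal sketch, with a simulated toy oracle standing in for <math>O</math>; the per-query failure bound <math>n/p</math> used in it (each nonzero factor vanishes on at most one value of its coordinate) is exactly the kind of claim a solution would have to justify.

<syntaxhighlight lang="python">
import math
import random

def is_identically_zero(oracle, n, p, eps):
    """Randomized zero-test for f, given only oracle access to g.

    Any nonzero reply certifies f != 0. If g is not identically zero,
    a uniform query returns 0 with probability at most n/p < 1, so k
    independent queries all return 0 with probability at most
    (n/p)^k <= eps for the k chosen below.
    """
    k = math.ceil(math.log(eps) / math.log(n / p))   # (n/p)^k <= eps
    for _ in range(k):
        r = [random.randrange(p) for _ in range(n)]
        if oracle(r) != 0:
            return False               # witnessed f != 0
    return True                        # f == 0, with error probability <= eps

# Toy oracle for illustration only: hidden coefficients, arithmetic mod p.
n, p, eps = 4, 5, 0.01
a, b = [1, 0, 2, 3], [0, 4, 1, 2]
def oracle(x):
    return math.prod((ai * xi + bi) % p for ai, xi, bi in zip(a, x, b)) % p

print(is_identically_zero(oracle, n, p, eps))
</syntaxhighlight>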

== Problem 6 ==