高级算法 (Advanced Algorithms, Fall 2019)/Problem Set 1

Under construction

* Every problem's solution must include the complete reasoning process. Answers may be written in either Chinese or English.

== Problem 1 ==
Modify Karger's Contraction algorithm so that it works for the weighted min-cut problem. Prove that the modified algorithm returns a weighted minimum cut with probability at least <math>\frac{2}{n(n-1)}</math>. The weighted min-cut problem is defined as follows.


*'''Input''': an undirected weighted graph <math>G(V, E)</math>, where every edge <math>e \in E</math> is associated with a positive real weight <math>w_e</math>;
*'''Output''': a cut <math>C</math> in <math>G</math> such that <math>\sum_{e \in C} w_e</math> is minimized.
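For reference, here is a minimal sketch (in Python; not part of the original problem statement) of the ''unweighted'' contraction algorithm from class, which the problem asks you to adapt to edge weights. The multigraph representation and helper names are illustrative assumptions, and the comment marks the random choice that the weighted variant would need to change.

<syntaxhighlight lang="python">
import random

def karger_contract(n, edges):
    """One run of Karger's contraction on an unweighted multigraph.

    n     -- number of vertices, labelled 0..n-1
    edges -- list of (u, v) pairs; parallel edges are allowed
    Returns the list of edges crossing the resulting 2-way cut."""
    # Union-find over vertices: contracting an edge merges two groups.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    remaining = n
    while remaining > 2:
        # Pick a uniformly random edge; skipping self-loops makes this a
        # uniform choice among the edges of the contracted multigraph.
        # (The weighted variant should instead pick an edge with probability
        # proportional to its weight.)
        u, v = random.choice(edges)
        ru, rv = find(u), find(v)
        if ru == rv:
            continue            # both endpoints already in one super-vertex
        parent[rv] = ru         # contract the chosen edge
        remaining -= 1

    # Edges whose endpoints lie in different super-vertices form the cut.
    return [(u, v) for (u, v) in edges if find(u) != find(v)]

# Example on a 4-cycle; repeating the run many times and keeping the smallest
# cut found boosts the success probability, as analyzed in class.
print(karger_contract(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
</syntaxhighlight>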




== Problem 2 ==
Let <math>G=(V,E)</math> be a graph, where <math>n = |V|</math> and <math>m = |E|</math>. In class, we generate a random <math>S-T</math> cut by sampling <math>X_v \in \{0,1\}</math> uniformly and independently for each <math>v \in V</math> and constructing <math>S = \{v \in V \mid X_v = 1 \}</math> and <math>T = \{v \in V \mid X_v = 0 \}</math>. Now, consider an alternative way to generate the random <math>S-T</math> cut. Suppose <math>n</math> is an even number. Define a collection of subsets as
:<math>\mathcal{F} = \{H \subseteq V: |H| = n /2 \}</math>
We sample a random subset <math>S \in \mathcal{F}</math> uniformly at random and construct <math>T = V \setminus S</math>.  
* Find the expected size of such a random <math>S-T</math> cut.
* Let <math>\mathcal{R}(\cdot)</math> be a random source. Given any <math>0\leq p\leq 1</math>, <math>\mathcal{R}(p)</math> returns an independent random sample <math>X \in \{0,1\}</math> such that <math>\Pr[X= 1] = p</math>. Give an algorithm that uses <math>\mathcal{R}(\cdot)</math> to generate a random subset <math>S \in \mathcal{F}</math> uniformly at random. Analyze the time complexity of your algorithm.
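To make the two sampling procedures above concrete, here is a small Python illustration (an assumption-laden sketch, not a solution to either task): it generates one cut with the independent-bit method from class and one with a uniform <math>n/2</math>-subset, and counts the crossing edges. <code>random.sample</code> is used only as a stand-in for "uniform over <math>\mathcal{F}</math>"; it is not the <math>\mathcal{R}(\cdot)</math>-based algorithm the second task asks for.

<syntaxhighlight lang="python">
import random

def cut_size(edges, S):
    """Number of edges with exactly one endpoint in S."""
    return sum((u in S) != (v in S) for u, v in edges)

def independent_bits_cut(vertices):
    """Method from class: each vertex joins S independently with prob. 1/2."""
    return {v for v in vertices if random.random() < 0.5}

def half_subset_cut(vertices):
    """Alternative method: S is a uniformly random subset of size n/2.
    (random.sample is only a placeholder for 'uniform over F'; the problem
    asks for an algorithm built from the random source R(p) instead.)"""
    n = len(vertices)
    return set(random.sample(sorted(vertices), n // 2))

# Tiny usage example on a 4-cycle.
V = {0, 1, 2, 3}
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(cut_size(E, independent_bits_cut(V)), cut_size(E, half_subset_cut(V)))
</syntaxhighlight>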
 
== Problem 3 ==
Two '''rooted''' trees <math>T_1</math> and <math>T_2</math> are said to be '''isomorphic''' if there exists a bijection <math>\phi</math> that maps vertices of <math>T_1</math> to those of <math>T_2</math> satisfying the following condition: for each ''internal'' vertex <math>v</math> of <math>T_1</math> with children <math>u_1, u_2, \ldots, u_k</math>, the set of children of vertex <math>\phi(v)</math> in <math>T_2</math> is precisely <math>\{\phi(u_1), \phi(u_2),\ldots,\phi(u_k)\}</math>, with no ordering among the children assumed.
:Give an efficient randomized algorithm with bounded one-sided error (false positives) for testing isomorphism between rooted trees with <math>n</math> vertices. Analyze your algorithm.
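As a sanity check on the definition, the following small Python snippet (the example trees and the helper name <code>is_isomorphism</code> are made up for illustration) builds two rooted trees whose children are listed in different orders and verifies that a particular bijection satisfies the child-set condition above.

<syntaxhighlight lang="python">
# Two small rooted trees given as {vertex: list of children}; roots are 1 and 'a'.
T1 = {1: [2, 3], 2: [4, 5], 3: [], 4: [], 5: []}
T2 = {'a': ['c', 'b'], 'b': [], 'c': ['e', 'd'], 'd': [], 'e': []}

# A candidate bijection from the vertices of T1 to those of T2.
phi = {1: 'a', 2: 'c', 3: 'b', 4: 'd', 5: 'e'}

def is_isomorphism(T1, T2, phi, root1, root2):
    """Check that phi maps root to root and that, for every vertex v, the
    children of phi(v) are exactly the images of the children of v; sets are
    compared because no ordering among children is assumed."""
    if phi[root1] != root2:
        return False
    return all({phi[u] for u in T1[v]} == set(T2[phi[v]]) for v in T1)

print(is_isomorphism(T1, T2, phi, 1, 'a'))   # True: the two trees are isomorphic
</syntaxhighlight>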
 
== Problem 4 ==
Design a randomized algorithm to decide whether an integer sequence <math>a_1,\ldots,a_n</math> is a permutation of another integer sequence <math>b_1,\ldots,b_n</math>. Bound the probability of error and analyze the time complexity.


== Problem 5 ==
Let <math>X_1,X_2,\ldots,X_n</math> be <math>n</math> random variables, where each <math>X_i \in \{0, 1\}</math> follows the distribution <math>\mu_i</math>. For each <math>1\leq i \leq n</math>, let <math>\rho_i = \mathbb{E}[X_i]</math> and assume <math>\rho_i \geq \frac{1}{2}</math>. Consider the problem of estimating the value of
:<math>Z = \prod_{i = 1}^n \rho_i</math>
For each <math>1\leq i \leq n</math>, the algorithm draws <math>s</math> random samples <math>X_i^{(1)},X_i^{(2)},\ldots,X_i^{(s)}</math> independently from the distribution <math>\mu_i</math>, and computes
:<math>\widehat{Z}_{i}=\frac{1}{s}\sum_{j=1}^s X_i^{(j)}</math>
Finally, the algorithm outputs the product of all <math>\widehat{Z}_{i}</math>:
:<math>\widehat{Z}=\prod_{i= 1}^n\widehat{Z}_i</math>
Express <math>s</math> as a function of <math>n,\varepsilon,\delta</math> so that the output <math>\widehat{Z}</math> satisfies
:<math>\Pr\left[\mathrm{e}^{-\varepsilon}Z \leq \widehat{Z} \leq \mathrm{e}^{\varepsilon}Z\right] \geq 1- \delta</math>
Try to make <math>s</math> as small as possible.
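For concreteness, here is a minimal Python sketch of the estimator described above. Since each <math>X_i\in\{0,1\}</math> with mean <math>\rho_i</math> is a Bernoulli variable, each <math>\mu_i</math> is instantiated as <math>\mathrm{Bernoulli}(\rho_i)</math>; the particular values of <math>\rho_i</math> and <math>s</math> in the example are arbitrary, since choosing <math>s</math> is exactly what the problem asks for.

<syntaxhighlight lang="python">
import random
from math import prod

def estimate_Z(rhos, s):
    """The estimator from the problem statement: for each i, draw s
    independent Bernoulli(rho_i) samples (X_i in {0,1} with mean rho_i),
    average them to get Z_hat_i, and return the product of the Z_hat_i."""
    z_hats = []
    for rho in rhos:
        samples = [1 if random.random() < rho else 0 for _ in range(s)]
        z_hats.append(sum(samples) / s)
    return prod(z_hats)

# Usage: n = 4 variables with means >= 1/2; s = 1000 is an arbitrary choice.
rhos = [0.9, 0.75, 0.6, 0.5]
print(estimate_Z(rhos, s=1000), prod(rhos))
</syntaxhighlight>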
