Randomized Algorithms (Spring 2014) / Problem Set 3

== Problem 1 ==
(Due to J. Naor.)

The <i>Chernoff bound</i> is an exponentially decreasing bound on tail distributions. Let <math>X_1,\dots,X_n</math> be independent random variables with <math>\mathbf{E}[X_i]=0</math> for all <math>1\le i\le n</math>. Define <math>X=X_1+X_2+\dots+X_n</math>. We can use the following two kinds of tail inequalities for <math>X</math>.
* '''Chernoff bound''':
:<math>\Pr[|X|\ge\delta]\le\min_{t\ge 0}\frac{\mathbf{E}[e^{t|X|}]}{e^{t\delta}}</math>.
* '''<math>k</math>th-moment bound''':
:<math>\Pr[|X|\ge\delta]\le\frac{\mathbf{E}[|X|^k]}{\delta^k}</math>.
# Show that for each <math>\delta</math>, there exists a choice of <math>k</math> such that the <math>k</math>th-moment bound is stronger than the Chernoff bound. (Hint: You may use the probabilistic method.)
# Why would we still prefer the Chernoff bound to the seemingly stronger <math>k</math>th-moment bound?
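The following is not part of the problem statement: it is a small numerical sketch in Python, where the choice of <math>X_i</math> uniform on <math>\{-1,+1\}</math> and the finite grid over <math>t</math> are assumptions made only for illustration. It evaluates both bounds exactly for a small <math>n</math>, which may help build intuition for the two questions above.
<syntaxhighlight lang="python">
# Numerical comparison of the Chernoff bound and the k-th moment bound for
# X = X_1 + ... + X_n with X_i uniform on {-1, +1} (a simple mean-zero choice
# made only for illustration).  The exact distribution of X is binomial, so
# both bounds can be evaluated directly.
from math import comb, exp

n = 20
# Pr[X = 2j - n] = C(n, j) / 2^n, since X = (#ones) - (#minus-ones)
pmf = {2 * j - n: comb(n, j) / 2 ** n for j in range(n + 1)}

def chernoff_bound(delta):
    """min over a grid of t >= 0 of E[exp(t|X|)] / exp(t*delta)."""
    best = 1.0
    for i in range(1, 500):
        t = i / 100.0
        mgf_abs = sum(p * exp(t * abs(x)) for x, p in pmf.items())
        best = min(best, mgf_abs / exp(t * delta))
    return best

def moment_bound(delta, k):
    """E[|X|^k] / delta^k."""
    return sum(p * abs(x) ** k for x, p in pmf.items()) / delta ** k

delta = 10
print("Chernoff bound:   ", chernoff_bound(delta))
print("best moment bound:", min(moment_bound(delta, k) for k in range(1, 41)))
</syntaxhighlight>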
== Problem 2 ==
Given a binary string, define a '''run''' as a <font color=red>maximal</font> sequence of contiguous 1s; for example, the following string
:<math>\underbrace{111}_{3}00\underbrace{11}_{2}00\underbrace{111111}_{6}0\underbrace{1}_{1}0\underbrace{11}_{2}</math>
contains 5 runs, of length 3, 2, 6, 1, and 2.
Let <math>S</math> be a binary string of length <math>n</math>, generated uniformly at random. Let <math>X_k</math> be the number of runs in <math>S</math> of length <math>k</math> or more.
*Compute the exact value of <math>\mathbb{E}[X_k]</math> as a function of <math>n</math> and <math>k</math>.
*Give the best concentration bound you can for <math>|X_k -\mathbb{E}[X_k]|</math>.
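As an illustration only (not part of the problem statement), here is a small Monte Carlo sketch in Python; the helper names and parameters are made up for this example. It counts the runs of length at least <math>k</math> in uniformly random strings, which can serve as a sanity check for a closed-form expression for <math>\mathbb{E}[X_k]</math>.
<syntaxhighlight lang="python">
# Monte Carlo sanity check for E[X_k]: generate uniformly random binary strings
# of length n and count the maximal runs of 1s whose length is at least k.
import random

def count_runs_at_least(s, k):
    """Number of maximal runs of 1s in the string s with length >= k."""
    count, run = 0, 0
    for bit in s + "0":      # trailing sentinel 0 closes a run at the end
        if bit == "1":
            run += 1
        else:
            if run >= k:
                count += 1
            run = 0
    return count

def estimate_mean(n, k, trials=100000):
    """Empirical average of X_k over independent uniform strings."""
    total = 0
    for _ in range(trials):
        s = "".join(random.choice("01") for _ in range(n))
        total += count_runs_at_least(s, k)
    return total / trials

print(estimate_mean(n=20, k=3))   # compare against the exact formula
</syntaxhighlight>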
== Problem 3 ==
;The maximum directed cut problem (MAX-DICUT).
We are given as input a directed graph <math>G=(V,E)</math>, with each directed edge <math>(u,v)\in E</math> having a nonnegative weight <math>w_{uv}\ge 0</math>. The goal is to partition <math>V</math> into two sets <math>S\,</math> and <math>\bar{S}=V\setminus S</math> so as to maximize the value of <math>\sum_{(u,v)\in E\atop u\in S,v\not\in S}w_{uv}</math>, that is, the total weight of the edges going from <math>S\,</math> to <math>\bar{S}</math>.
* Give a randomized <math>\frac{1}{4}</math>-approximation algorithm based on random sampling.
* Prove that the following is an integer programming formulation of the problem:
:<math>
\begin{align}
\text{maximize} && \sum_{(i,j)\in E}w_{ij}z_{ij}\\
\text{subject to} && z_{ij} &\le x_i, & \forall (i,j)&\in E,\\
&& z_{ij} &\le 1-x_j, & \forall (i,j)&\in E,\\
&& x_i &\in\{0,1\}, & \forall i&\in V,\\
&& 0 \le z_{ij}&\le 1, & \forall (i,j)&\in E.
\end{align}
</math>
* Consider a randomized rounding algorithm that solves an LP relaxation of the above integer program and puts vertex <math>i</math> in <math>S</math> with probability <math>f(x_i^*)</math>. We may assume that <math>f(x)</math> is a linear function of the form <math>f(x)=ax+b</math> with constants <math>a</math> and <math>b</math> to be fixed. Try to find good <math>a</math> and <math>b</math> so that the randomized rounding algorithm has a good approximation ratio.
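The first item above asks for a random-sampling algorithm; a minimal Python sketch of the natural candidate is given below (the instance data and function names are invented for this illustration, and the analysis of the <math>\frac{1}{4}</math> ratio is left to the problem). Each vertex joins <math>S</math> independently with probability <math>1/2</math>, so every directed edge crosses from <math>S</math> to <math>\bar{S}</math> with probability <math>1/4</math>.
<syntaxhighlight lang="python">
# Random-sampling sketch for MAX-DICUT: put each vertex in S independently with
# probability 1/2.  An edge (u, v) is cut iff u is in S and v is not, which
# happens with probability 1/4, so the expected cut weight is W/4, where W is
# the total edge weight.
import random

def random_dicut(vertices, weighted_edges):
    """weighted_edges: iterable of (u, v, w) triples for directed edges u -> v."""
    S = {v for v in vertices if random.random() < 0.5}
    cut_weight = sum(w for (u, v, w) in weighted_edges if u in S and v not in S)
    return S, cut_weight

# toy instance (made up, just to show the call)
V = [1, 2, 3, 4]
E = [(1, 2, 3.0), (2, 3, 1.0), (3, 4, 2.0), (4, 1, 5.0)]
print(random_dicut(V, E))
</syntaxhighlight>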
== Problem 4 ==
The set cover problem is defined as follows:
*Let <math>U=\{u_1,u_2,\ldots,u_n\}</math> be a set of <math>n</math> elements, and let <math>\mathcal{S}=\{S_1,S_2,\ldots,S_m\}</math> be a family of subsets of <math>U</math>. For each <math>u_i\in U</math>, let <math>w_i</math> be a nonnegative weight of <math>u_i</math>. The goal is to find a subset <math>V\subseteq U</math> of minimum total weight <math>\sum_{i\in V}w_i</math> that intersects every <math>S_j\in\mathcal{S}</math>.
This problem is '''NP-hard'''.
('''Remark''': There are two equivalent definitions of the set cover problem. We take the '''hitting set''' version.)
Questions:
* Prove that the following is an integer programming formulation of the problem:
:<math>
\begin{align}
\text{minimize} &&  \sum_{i=1}^{n}w_{i}x_{i}\\
\text{subject to} && \sum_{i:u_i\in S_j}x_i &\ge 1, &1\le j\le m,\\
&& x_i &\in\{0,1\}, & 1\le i\le n.
\end{align}
</math>
* Give a randomized rounding algorithm which returns an <math>O(\log m)</math>-approximate solution with probability at least <math>\frac{1}{2}</math>. (Hint: you may repeat the randomized rounding process if some subsets remain uncovered after one application of the rounding; a sketch of this scheme is given below.)
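The following sketch (in Python, with invented function names, and assuming a fractional optimum <math>x^*</math> of the LP relaxation is already available from an external solver) illustrates the rounding scheme suggested by the hint: include each element independently with probability <math>x_i^*</math>, and repeat, taking the union, until every subset is hit. It is an illustration of the idea, not a complete solution.
<syntaxhighlight lang="python">
# Randomized rounding for the hitting-set LP relaxation.  Each rounding pass
# picks element u_i independently with probability x*_i; since every subset S_j
# satisfies sum_{i: u_i in S_j} x*_i >= 1, a fixed subset stays unhit after one
# pass with probability at most 1/e, so O(log m) passes cover all m subsets
# with good probability, and the expected total weight grows by the same factor.
import random

def round_hitting_set(x_star, subsets):
    """x_star: dict mapping element -> fractional LP value in [0, 1];
       subsets: list of sets of elements.  Returns a set hitting every subset."""
    chosen = set()
    uncovered = list(subsets)
    while uncovered:                      # repeat rounding, as in the hint
        for elem, x in x_star.items():
            if random.random() < x:
                chosen.add(elem)
        uncovered = [s for s in uncovered if not (chosen & s)]
    return chosen

# toy fractional solution (made up) for the subsets {a, b} and {b, c}
print(round_hitting_set({"a": 0.5, "b": 0.5, "c": 0.5}, [{"a", "b"}, {"b", "c"}]))
</syntaxhighlight>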
