Advanced Algorithms (Fall 2020)/Problem Set 3

Problem 1

Given [math]\displaystyle{ m }[/math] subsets [math]\displaystyle{ S_1,S_2,\ldots, S_m\subseteq U }[/math] of a universe [math]\displaystyle{ U }[/math] of size [math]\displaystyle{ n }[/math], we want to find a [math]\displaystyle{ C\subseteq\{1,2,\ldots, {m}\} }[/math] of fixed size [math]\displaystyle{ k=|C| }[/math] with the maximum coverage [math]\displaystyle{ \left|\bigcup_{i\in C}S_i\right| }[/math].

  • Give a poly-time greedy algorithm for the problem. Prove that the approximation ratio is [math]\displaystyle{ 1-(1-1/k)^k\gt 1-1/e }[/math].
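
For illustration, one natural greedy rule is to repeatedly pick the set covering the largest number of still-uncovered elements. The following is a minimal sketch of that rule (not necessarily the intended solution), under the assumption that the input is given as a Python list of sets <code>sets</code> and a budget <code>k</code>:

<syntaxhighlight lang="python">
def greedy_max_coverage(sets, k):
    """Pick k indices greedily: at each step take the set that covers
    the most elements not yet covered."""
    covered, chosen = set(), []
    for _ in range(k):
        # index of the set with maximum marginal coverage
        i = max(range(len(sets)), key=lambda j: len(sets[j] - covered))
        chosen.append(i)
        covered |= sets[i]
    return chosen, covered
</syntaxhighlight>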


Problem 2

In the maximum directed cut (MAX-DICUT) problem, we are given as input a directed graph [math]\displaystyle{ G(V,E) }[/math]. The goal is to partition [math]\displaystyle{ V }[/math] into two disjoint sets [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math] so that the number of edges in [math]\displaystyle{ E(S,T)=\{(u,v)\in E\mid u\in S, v\in T\} }[/math] is maximized. The following is the integer program for MAX-DICUT:

[math]\displaystyle{ \begin{align} \text{maximize} &&& \sum_{(u,v)\in E}y_{u,v}\\ \text{subject to} && y_{u,v} &\le x_u, & \forall (u,v)&\in E,\\ && y_{u,v} &\le 1-x_v, & \forall (u,v)&\in E,\\ && x_v &\in\{0,1\}, & \forall v&\in V,\\ && y_{u,v} &\in\{0,1\}, & \forall (u,v)&\in E. \end{align} }[/math]

Let [math]\displaystyle{ x_v^*,y_{u,v}^* }[/math] denote the optimal solution to the LP-relaxation of the above integer program.

  • Apply the randomized rounding such that for every [math]\displaystyle{ v\in V }[/math], [math]\displaystyle{ \hat{x}_v=1 }[/math] independently with probability [math]\displaystyle{ x_v^* }[/math]. Analyze the approximation ratio (between the expected size of the random cut and OPT).
  • Apply another randomized rounding such that for every [math]\displaystyle{ v\in V }[/math], [math]\displaystyle{ \hat{x}_v=1 }[/math] independently with probability [math]\displaystyle{ 1/4+x_v^*/2 }[/math]. Analyze the approximation ratio for this algorithm.
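
Both rounding schemes above share the same shape. The following is a minimal sketch (an illustration under assumed names, not the required analysis), where <code>x_star</code> is taken to be a dictionary mapping each vertex to its fractional value [math]\displaystyle{ x_v^* }[/math] and <code>edges</code> a list of directed edges:

<syntaxhighlight lang="python">
import random

def round_dicut(x_star, edges, shift=False):
    """Put v in S with probability x*_v (first scheme),
    or with probability 1/4 + x*_v/2 when shift=True (second scheme)."""
    prob = {v: (0.25 + x / 2) if shift else x for v, x in x_star.items()}
    in_S = {v: random.random() < prob[v] for v in x_star}
    # size of the directed cut: edges (u, v) with u in S and v not in S
    return sum(1 for (u, v) in edges if in_S[u] and not in_S[v])
</syntaxhighlight>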

Problem 3

Recall the MAX-SAT problem and its integer program:

[math]\displaystyle{ \begin{align} \text{maximize} &&& \sum_{j=1}^my_j\\ \text{subject to} &&& \sum_{i\in S_j^+}x_i+\sum_{i\in S_j^-}(1-x_i)\ge y_j, && 1\le j\le m,\\ &&& x_i\in\{0,1\}, && 1\le i\le n,\\ &&& y_j\in\{0,1\}, && 1\le j\le m. \end{align} }[/math]

Recall that [math]\displaystyle{ S_j^+,S_j^-\subseteq\{1,2,\ldots,n\} }[/math] are the respective sets of variables appearing positively and negatively in clause [math]\displaystyle{ j }[/math].

Let [math]\displaystyle{ x_i^*,y_j^* }[/math] denote the optimal solution to the LP-relaxation of the above integer program. In class we learned that if each [math]\displaystyle{ \hat{x}_i }[/math] is rounded to 1 independently with probability [math]\displaystyle{ x_i^* }[/math], the approximation ratio is [math]\displaystyle{ 1-1/\mathrm{e} }[/math].

We consider a generalized rounding scheme in which each [math]\displaystyle{ \hat{x}_i }[/math] is rounded to 1 independently with probability [math]\displaystyle{ f(x_i^*) }[/math] for some function [math]\displaystyle{ f:[0,1]\to[0,1] }[/math] to be specified.

  • Suppose [math]\displaystyle{ f(x) }[/math] is an arbitrary function satisfying [math]\displaystyle{ 1-4^{-x}\le f(x)\le 4^{x-1} }[/math] for all [math]\displaystyle{ x\in[0,1] }[/math]. Show that with this rounding scheme, the approximation ratio (between the expected number of satisfied clauses and OPT) is at least [math]\displaystyle{ 3/4 }[/math].
  • Derandomize this algorithm using the method of conditional expectations and give a deterministic polynomial-time algorithm with approximation ratio [math]\displaystyle{ 3/4 }[/math].
  • Is it possible to do better with some cleverer choice of [math]\displaystyle{ f }[/math]? Justify your answer.
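
As a sanity check for the first part, [math]\displaystyle{ f(x)=4^{x-1} }[/math] is one admissible choice: by AM-GM, [math]\displaystyle{ 4^{x-1}+4^{-x}\ge 2\sqrt{4^{-1}}=1 }[/math], hence [math]\displaystyle{ 1-4^{-x}\le 4^{x-1} }[/math]. A minimal sketch of the rounding with this particular choice, assuming <code>x_star</code> holds the fractional values [math]\displaystyle{ x_i^* }[/math], is:

<syntaxhighlight lang="python">
import random

def f(x):
    # one admissible choice: the upper bound itself; any f with
    # 1 - 4**(-x) <= f(x) <= 4**(x - 1) is allowed by the problem
    return 4 ** (x - 1)

def round_maxsat(x_star):
    """Round each variable to True independently with probability f(x*_i)."""
    return [random.random() < f(x) for x in x_star]
</syntaxhighlight>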

Problem 4

The following is the weighted version of the set cover problem:

Given [math]\displaystyle{ m }[/math] subsets [math]\displaystyle{ S_1,S_2,\ldots,S_m\subseteq U }[/math], where [math]\displaystyle{ U }[/math] is a universe of size [math]\displaystyle{ n=|U| }[/math], and each subset [math]\displaystyle{ S_i }[/math] is assigned a positive weight [math]\displaystyle{ w_i\gt 0 }[/math], the goal is to find a [math]\displaystyle{ C\subseteq\{1,2,\ldots,m\} }[/math] such that [math]\displaystyle{ U=\bigcup_{i\in C}S_i }[/math] and the total weight [math]\displaystyle{ \sum_{i\in C}w_i }[/math] is minimized.

  • Give an integer program for the problem and its LP relaxation.
  • Consider the following idea of randomized rounding: independently round each fractional value to 1 with probability equal to the fractional value itself (and to 0 otherwise); then repeatedly apply this process to the variables that were rounded to 0 in previous iterations, until [math]\displaystyle{ U }[/math] is fully covered. Show that this returns a set cover with [math]\displaystyle{ O(\log n) }[/math] approximation ratio with probability at least [math]\displaystyle{ 0.99 }[/math].
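
The repeated rounding in the second part can be sketched as follows (an illustration under assumed names: <code>x_star</code> holds the fractional values, <code>sets</code> the corresponding subsets, and <code>universe</code> the ground set):

<syntaxhighlight lang="python">
import random

def repeated_rounding(x_star, sets, universe):
    """Round each not-yet-chosen set to 1 with probability equal to its
    fractional value; repeat whole rounds until the universe is covered."""
    chosen, covered = set(), set()
    while covered != universe:
        for i, x in enumerate(x_star):
            if i not in chosen and random.random() < x:
                chosen.add(i)
                covered |= sets[i]
    return chosen
</syntaxhighlight>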

Problem 5

Consider the following greedy algorithm for the knapsack problem, which has been briefly mentioned in class:

Sort all items according to the ratio [math]\displaystyle{ r_i = v_i/w_i }[/math] so that [math]\displaystyle{ r_1 \ge r_2 \ge \cdots \ge r_n }[/math];

for [math]\displaystyle{ i=1,2,\ldots,n }[/math]
    item [math]\displaystyle{ i }[/math] joins [math]\displaystyle{ S }[/math] if the resulting total weight is at most [math]\displaystyle{ B }[/math];
  • Show that the approximation ratio of this algorithm can be arbitrarily bad. Formally, given an arbitrary constant [math]\displaystyle{ \epsilon \in (0,1) }[/math], construct an instance on which the solution [math]\displaystyle{ SOL }[/math] returned by this algorithm is less than [math]\displaystyle{ (1-\epsilon)\cdot OPT }[/math].
  • Consider the following modification to this algorithm. Let the sorted order of objects be [math]\displaystyle{ a_1,\ldots,a_n }[/math]. Find the lowest [math]\displaystyle{ k }[/math] such that the total size of the first [math]\displaystyle{ k }[/math] objects exceeds [math]\displaystyle{ B }[/math]. Now, pick the more valuable of [math]\displaystyle{ \{a_1,\ldots,a_{k-1}\} }[/math] and [math]\displaystyle{ \{a_k\} }[/math] (we have assumed that the size of each object is at most [math]\displaystyle{ B }[/math]). Show that this algorithm achieves an approximation ratio of [math]\displaystyle{ 1/2 }[/math].
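
A minimal, runnable rendering of the modified greedy from the second part (the function name and the <code>(value, weight)</code> item format are assumptions for illustration):

<syntaxhighlight lang="python">
def modified_greedy_knapsack(items, B):
    """items: list of (value, weight) pairs; every weight is assumed to be at most B."""
    # sort item indices by value/weight ratio, largest first
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1], reverse=True)
    prefix, prefix_value, prefix_weight = [], 0, 0
    for i in order:
        value, weight = items[i]
        if prefix_weight + weight > B:
            # i is the first item a_k whose addition would exceed the budget:
            # return the more valuable of {a_1,...,a_{k-1}} and {a_k}
            return [i] if value > prefix_value else prefix
        prefix.append(i)
        prefix_value += value
        prefix_weight += weight
    return prefix  # everything fits within the budget
</syntaxhighlight>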