高级算法 (Fall 2025)/Problem Set 3

From TCS Wiki
Revision as of 14:25, 25 December 2025 by Zhangyiyao
  • Every problem's solution must include the complete derivation; answers may be written in either Chinese or English.
  • We recommend typesetting your solutions with LaTeX, Markdown, or similar tools.

Problem 1

Let [math]\displaystyle{ \mathcal{M} = (E, \mathcal{I}) }[/math] be a matroid with rank function [math]\displaystyle{ r_{\mathcal{M}} : 2^E \to \mathbb{Z}_{\ge 0} }[/math]. Prove that [math]\displaystyle{ r_{\mathcal{M}} }[/math] is submodular.
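Recall that submodularity here means [math]\displaystyle{ r_{\mathcal{M}}(A) + r_{\mathcal{M}}(B) \ge r_{\mathcal{M}}(A \cup B) + r_{\mathcal{M}}(A \cap B) }[/math] for all [math]\displaystyle{ A, B \subseteq E }[/math]. As a sanity check (not part of the proof), one can verify the inequality exhaustively for the rank function of a small uniform matroid, [math]\displaystyle{ r(S) = \min(|S|, k) }[/math]:

```python
from itertools import combinations

def rank(S, k=2):
    # rank function of the uniform matroid U(k, n): r(S) = min(|S|, k)
    return min(len(S), k)

def is_submodular(rank, ground, k):
    # check r(A) + r(B) >= r(A | B) + r(A & B) for all pairs A, B
    subsets = []
    for size in range(len(ground) + 1):
        subsets.extend(frozenset(c) for c in combinations(ground, size))
    for A in subsets:
        for B in subsets:
            if rank(A, k) + rank(B, k) < rank(A | B, k) + rank(A & B, k):
                return False
    return True

print(is_submodular(rank, range(4), 2))  # True
```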

Problem 2

In class, you learned two greedy algorithms with the same framework:

  • For maximizing a non-negative additive function over a matroid constraint, the greedy algorithm is exact.
  • For maximizing a non-negative monotone submodular function under a cardinality constraint, the algorithm gives a [math]\displaystyle{ (1-1/e) }[/math]-approximation.

Now we consider a common generalization of both problems: given a matroid [math]\displaystyle{ \mathcal{M} = (E, \mathcal{I}) }[/math] and a non-negative monotone submodular function [math]\displaystyle{ f : 2^E \to \mathbb{Z}_{\ge 0} }[/math], find a set [math]\displaystyle{ A \in \mathcal{I} }[/math] so as to maximize [math]\displaystyle{ f(A) }[/math]. Consider the same greedy algorithm:

  • [math]\displaystyle{ A \gets \emptyset }[/math]
  • While [math]\displaystyle{ A }[/math] is not a base of [math]\displaystyle{ \mathcal{M} }[/math]:
    1. [math]\displaystyle{ i^* \gets \arg\max_{i \in E \setminus A : A \cup \{i\} \in \mathcal{I}} f(A \cup \{i\}) }[/math]
    2. [math]\displaystyle{ A \gets A \cup \{i^*\} }[/math]
  • Return [math]\displaystyle{ A }[/math]

  • (2a) Prove that this greedy algorithm is a [math]\displaystyle{ 1/2 }[/math]-approximation.
  • (2b) Show that the approximation ratio of [math]\displaystyle{ 1/2 }[/math] is tight for this algorithm.
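The loop above can be sketched directly in code (assuming oracle access to [math]\displaystyle{ f }[/math] and to an independence test; the toy instance below is invented for illustration):

```python
def greedy(E, f, is_independent):
    """Greedy for maximizing f(A) subject to A being independent in the matroid."""
    A = set()
    while True:
        # elements that keep A independent when added
        candidates = sorted(i for i in E - A if is_independent(A | {i}))
        if not candidates:            # A is a base of the matroid
            return A
        A.add(max(candidates, key=lambda i: f(A | {i})))

# Toy instance: a coverage function (monotone submodular) over a uniform
# matroid of rank 2, i.e. the cardinality constraint |A| <= 2.
sets = {0: {1, 2}, 1: {2, 3}, 2: {4}}

def coverage(A):
    return len(set().union(*(sets[i] for i in A))) if A else 0

A = greedy({0, 1, 2}, coverage, lambda A: len(A) <= 2)
print(A, coverage(A))  # a base of the matroid; here its coverage is 3
```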

Remark: For this problem, a [math]\displaystyle{ (1-1/e) }[/math]-approximation is also known, so the greedy algorithm is not optimal.

Problem 3

Consider the following course selection problem. We are given a directed acyclic graph [math]\displaystyle{ G = (V,E) }[/math], where each vertex [math]\displaystyle{ v \in V }[/math] represents a course. Each edge [math]\displaystyle{ (u,v) }[/math] indicates that course [math]\displaystyle{ u }[/math] is a prerequisite for course [math]\displaystyle{ v }[/math]. Each vertex [math]\displaystyle{ v \in V }[/math] has a weight [math]\displaystyle{ w_v \in \mathbb{Z} }[/math], which may be positive or negative. The goal is to choose a set [math]\displaystyle{ S \subseteq V }[/math] of courses of maximum total weight that respects the precedence constraints: if [math]\displaystyle{ v \in S }[/math], then all prerequisites of [math]\displaystyle{ v }[/math] are also in [math]\displaystyle{ S }[/math].

Formulate the problem as a linear program, and show that the optimal solution of the LP is integral.
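Before writing the LP, it may help to see the combinatorial problem on a tiny instance. The brute-force sketch below (an invented example, not part of the problem) enumerates all prerequisite-closed sets:

```python
from itertools import combinations

def is_closed(S, edges):
    # S is feasible iff v in S implies every prerequisite u of v is in S
    return all(u in S for (u, v) in edges if v in S)

def best_closed_set(V, edges, w):
    best, best_w = set(), 0          # the empty set is always feasible
    for r in range(1, len(V) + 1):
        for T in combinations(V, r):
            S = set(T)
            if is_closed(S, edges):
                tw = sum(w[v] for v in S)
                if tw > best_w:
                    best, best_w = S, tw
    return best, best_w

V = [0, 1, 2, 3]
edges = [(0, 2), (1, 2), (2, 3)]          # u -> v: u is a prerequisite of v
w = {0: -1, 1: -2, 2: 5, 3: -1}
print(best_closed_set(V, edges, w))       # ({0, 1, 2}, 2)
```

Note that taking the profitable course 2 forces the two negative-weight prerequisites 0 and 1; the LP should capture exactly this trade-off.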

Problem 4

Recall that in the maximum coverage problem, we are given a ground set [math]\displaystyle{ U }[/math] of [math]\displaystyle{ n }[/math] elements, [math]\displaystyle{ m }[/math] subsets [math]\displaystyle{ S_1, S_2, \ldots, S_m \subseteq U }[/math], and an integer [math]\displaystyle{ k \in [m] }[/math]. The goal is to choose a set [math]\displaystyle{ C \subseteq [m] }[/math] with [math]\displaystyle{ |C| = k }[/math] that maximizes [math]\displaystyle{ \left|\bigcup_{i \in C} S_i\right| }[/math].

Consider the following linear program relaxation for this problem:

[math]\displaystyle{ \max \sum_{j \in U} x_j }[/math]

subject to [math]\displaystyle{ x_j - \sum_{i \in [m] : j \in S_i} y_i \le 0 \quad \forall j \in U \\ \sum_{i \in [m]} y_i \le k \\ x_j \in [0,1] \quad \forall j \in U \\ y_i \in [0,1] \quad \forall i \in [m] }[/math]

  • (4a) Show that the value of the LP is at least the value of the original maximum coverage instance.
  • (4b) Design a randomized rounding algorithm that, given a solution [math]\displaystyle{ (x^*, y^*) }[/math] to the LP, produces a solution [math]\displaystyle{ C \subseteq [m] }[/math] with [math]\displaystyle{ |C| = k }[/math] such that [math]\displaystyle{ \mathbb{E}\!\left[\left|\bigcup_{i \in C} S_i\right|\right] \ge \left(1-\frac{1}{e}\right)\sum_{j \in U} x_j^* }[/math].

  • (4c) Derandomize the algorithm and obtain the same guarantee deterministically.
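One common rounding scheme for LPs of this shape (a sketch, not necessarily the intended solution; it assumes the relaxation also enforces the cardinality bound [math]\displaystyle{ \sum_{i \in [m]} y_i \le k }[/math], and the names below are illustrative) makes [math]\displaystyle{ k }[/math] independent draws, each picking set [math]\displaystyle{ i }[/math] with probability [math]\displaystyle{ y_i^*/k }[/math]:

```python
import random

def round_lp(y_star, k, rng=random):
    """k independent draws; each draw picks set i with probability y_star[i]/k.
    A draw may also pick nothing, with probability 1 - sum(y_star)/k."""
    C = set()
    for _ in range(k):
        r = rng.random() * k          # uniform in [0, k)
        acc = 0.0
        for i, y in enumerate(y_star):
            acc += y
            if r < acc:
                C.add(i)
                break
    # repeated picks can collide; pad with unused sets so |C| = k exactly
    for i in range(len(y_star)):
        if len(C) == k:
            break
        C.add(i)
    return C

C = round_lp([0.5, 0.5, 1.0], 2, random.Random(0))
print(len(C))  # 2
```

Under this scheme an element [math]\displaystyle{ j }[/math] stays uncovered with probability at most [math]\displaystyle{ (1 - x_j^*/k)^k }[/math], which is the kind of bound that leads to the [math]\displaystyle{ (1-1/e) }[/math] guarantee.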

Problem 5

Consider the following linear program:

[math]\displaystyle{ \min \; c^{\mathsf T} x }[/math]

subject to [math]\displaystyle{ Ax \ge b \\ x_2, x_3, \ldots, x_n \ge 0 }[/math]

Notice that the variable [math]\displaystyle{ x_1 }[/math] is free.

  • (5a) Convert this LP into standard form.
  • (5b) Write the dual of the standard-form LP.
  • (5c) Simplify the dual LP.
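For (5a), a standard first step (a sketch of one common convention for standard form, not the full conversion) is to split the free variable into two nonnegative parts and introduce surplus variables:

[math]\displaystyle{ x_1 = x_1^{+} - x_1^{-}, \quad x_1^{+}, x_1^{-} \ge 0, \qquad Ax \ge b \;\Longleftrightarrow\; Ax - s = b, \; s \ge 0. }[/math]

Substituting [math]\displaystyle{ x_1^{+} - x_1^{-} }[/math] for [math]\displaystyle{ x_1 }[/math] then yields an equivalent LP in which every variable is nonnegative.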

Problem 6

Let [math]\displaystyle{ G = (L,R,E) }[/math] be a bipartite graph. Write down the linear program for maximum matching in [math]\displaystyle{ G }[/math], and the linear program for minimum vertex cover in [math]\displaystyle{ G }[/math]. Prove that the size of the maximum matching equals the size of the minimum vertex cover.

Problem 7

In the maximum 2-SAT problem, we are given [math]\displaystyle{ n }[/math] boolean variables [math]\displaystyle{ x_1,\ldots,x_n }[/math] and [math]\displaystyle{ m }[/math] clauses, each clause being the disjunction of two literals. Design a [math]\displaystyle{ 0.878 }[/math]-approximation algorithm using SDP.

Problem 8

In binary classification, we are given a dataset [math]\displaystyle{ \mathcal{D} }[/math] of [math]\displaystyle{ n }[/math] points and an unknown labeling [math]\displaystyle{ h : \mathcal{D} \to \{-1,1\} }[/math]. There is a family of classifiers [math]\displaystyle{ \{f_1,\ldots,f_M\} }[/math] and a weak learner that, for any weighting of the data points, outputs a classifier from the family that is correct on at least [math]\displaystyle{ 51\% }[/math] of the total weight.

  • (8a) Show that there exists [math]\displaystyle{ \alpha \in \mathbb{R}_{\ge 0}^M }[/math] such that [math]\displaystyle{ h(j) = \mathrm{sgn}\!\left(\sum_{i=1}^M \alpha_i f_i(j)\right) }[/math] for every point [math]\displaystyle{ j \in \mathcal{D} }[/math].

  • (8b) Give a polynomial-time algorithm to compute such an [math]\displaystyle{ \alpha }[/math].

Problem 9

Let [math]\displaystyle{ \mathcal{P}_1, \mathcal{P}_2 \subseteq \mathbb{R}^n }[/math] be polytopes. Define [math]\displaystyle{ \mathrm{conv}(\mathcal{P}_1,\mathcal{P}_2) = \{\alpha_1 x^{(1)} + \alpha_2 x^{(2)} : x^{(1)} \in \mathcal{P}_1,\; x^{(2)} \in \mathcal{P}_2,\; \alpha_1,\alpha_2 \ge 0,\; \alpha_1+\alpha_2=1\}. }[/math] Prove that [math]\displaystyle{ \mathrm{xc}(\mathrm{conv}(\mathcal{P}_1,\mathcal{P}_2)) \le \mathrm{xc}(\mathcal{P}_1)+\mathrm{xc}(\mathcal{P}_2) }[/math].

Problem 10

We analyze the following algorithm for finding a large cut in a triangle-free graph of maximum degree [math]\displaystyle{ d }[/math].

  • (1) Sample a cut [math]\displaystyle{ (S,\bar S) }[/math] uniformly at random.
  • (2) Label a vertex happy if more than half of its incident edges are cut.
  • (3) If exactly half are cut, label it happy with probability [math]\displaystyle{ 1/2 }[/math].
  • (4) Otherwise label it unhappy.
  • (5) Keep happy vertices fixed; each unhappy vertex switches sides with probability [math]\displaystyle{ 1/2 }[/math].
  • (6) Output the resulting cut.
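The procedure above can be simulated directly. The sketch below (an illustration, not part of the problem) runs it on the 5-cycle, which is triangle-free with [math]\displaystyle{ d = 2 }[/math]:

```python
import random

def one_round(n, edges, rng):
    # (1) uniform random cut
    side = {v: rng.randrange(2) for v in range(n)}
    deg = {v: 0 for v in range(n)}
    cut_deg = {v: 0 for v in range(n)}
    for u, v in edges:
        deg[u] += 1; deg[v] += 1
        if side[u] != side[v]:
            cut_deg[u] += 1; cut_deg[v] += 1
    # (2)-(5) happiness is decided from the initial cut; unhappy vertices
    # then switch sides simultaneously, each with probability 1/2
    for v in range(n):
        if 2 * cut_deg[v] > deg[v]:
            happy = True
        elif 2 * cut_deg[v] == deg[v]:
            happy = rng.random() < 0.5
        else:
            happy = False
        if not happy and rng.random() < 0.5:
            side[v] ^= 1
    # (6) size of the resulting cut
    return sum(side[u] != side[v] for u, v in edges)

rng = random.Random(0)
edges = [(i, (i + 1) % 5) for i in range(5)]
avg = sum(one_round(5, edges, rng) for _ in range(20000)) / 20000
print(avg)  # ~ 3.1, versus the 2.5 a uniform random cut averages
```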

Fix an edge [math]\displaystyle{ (u,v) }[/math].

  • (a) Define [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] as in the description and show [math]\displaystyle{ \Pr[(u,v)\text{ is cut}] = \frac12 + \frac{p-q}{2} }[/math].

  • (b) Define [math]\displaystyle{ q_u,q_v }[/math] and show [math]\displaystyle{ q=\frac12 q_u q_v }[/math].
  • (c) Define [math]\displaystyle{ p_u,p_v }[/math] and show [math]\displaystyle{ p=\frac12 p_u p_v }[/math].
  • (d) Show [math]\displaystyle{ p_u+q_u=1 }[/math], [math]\displaystyle{ p_v+q_v=1 }[/math] and derive the final expression.
  • (e) Compute [math]\displaystyle{ p_u }[/math] and estimate the improvement.

HINT: [math]\displaystyle{ {d \choose \lfloor d/2 \rfloor} = \Omega\!\left(\frac{2^d}{\sqrt d}\right) }[/math].