* Every solution must show the complete solution process; you may write in Chinese or English.

* We recommend typesetting your homework with LaTeX, Markdown, or similar tools.
== Problem 1 ==

Let <math>\mathcal{M} = (E, \mathcal{I})</math> be a matroid with rank function <math>r_{\mathcal{M}} : 2^E \to \mathbb{Z}_{\ge 0}</math>.
Prove that <math>r_{\mathcal{M}}</math> is submodular, i.e., for all <math>A, B \subseteq E</math>,

<math>r_{\mathcal{M}}(A) + r_{\mathcal{M}}(B) \ge r_{\mathcal{M}}(A \cup B) + r_{\mathcal{M}}(A \cap B).</math>
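Before proving the inequality in general, it can help to see it numerically. The following is a minimal sanity check (not a proof), assuming the uniform matroid <math>U_{4,2}</math>, whose rank function is <math>r(A) = \min(|A|, 2)</math>; the choice of matroid is an illustrative assumption, not part of the problem.

```python
# Brute-force check of r(A) + r(B) >= r(A ∪ B) + r(A ∩ B) over all pairs
# of subsets of a 4-element ground set, for the rank function of the
# uniform matroid U_{4,2}.
from itertools import chain, combinations

E = frozenset(range(4))
rank = lambda A: min(len(A), 2)   # rank function of U_{4,2}

def subsets(S):
    """All subsets of S, as frozensets."""
    S = list(S)
    return map(frozenset,
               chain.from_iterable(combinations(S, r) for r in range(len(S) + 1)))

ok = all(rank(A) + rank(B) >= rank(A | B) + rank(A & B)
         for A in subsets(E) for B in subsets(E))
print(ok)
```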
== Problem 2 ==

In class, you learned two greedy algorithms with the same framework:

* For maximizing a non-negative additive function over a matroid constraint, the greedy algorithm is exact.
* For maximizing a non-negative monotone submodular function under a cardinality constraint, the greedy algorithm gives a <math>(1-1/e)</math>-approximation.

Now we consider a common generalization of both problems: given a matroid <math>\mathcal{M} = (E, \mathcal{I})</math> and a non-negative monotone submodular function <math>f : 2^E \to \mathbb{Z}_{\ge 0}</math>, find a set <math>A \in \mathcal{I}</math> that maximizes <math>f(A)</math>.

Consider the same greedy algorithm:

* <math>A \gets \emptyset</math>
* While <math>A</math> is not a base of <math>\mathcal{M}</math>:
*# <math>i^* \gets \arg\max_{i \in E \setminus A : A \cup \{i\} \in \mathcal{I}} f(A \cup \{i\})</math>
*# <math>A \gets A \cup \{i^*\}</math>
* Return <math>A</math>
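The greedy loop above can be sketched in code. This is an illustrative sketch only: the concrete matroid (a uniform matroid of rank 2) and the concrete submodular function (a coverage function over the dictionary <code>sets</code>) are assumptions chosen for the example, not part of the problem.

```python
# Greedy for monotone submodular maximization over a matroid constraint:
# repeatedly add the feasible element with the largest resulting f-value.

def greedy(E, is_independent, f):
    A = set()
    while True:
        candidates = [i for i in E - A if is_independent(A | {i})]
        if not candidates:          # A is a base: nothing can be added
            return A
        A.add(max(candidates, key=lambda i: f(A | {i})))

# Toy instance: coverage function over 4 sets, uniform matroid of rank 2.
sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 2, 3, 4}}
E = set(sets)
is_independent = lambda A: len(A) <= 2
f = lambda A: len(set().union(*(sets[i] for i in A)))

A = greedy(E, is_independent, f)
print(sorted(A), f(A))
```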
* (2a) Prove that the greedy algorithm is a <math>1/2</math>-approximation.
* (2b) Show that the approximation ratio of <math>1/2</math> is tight for this algorithm.

Remark: For this problem, a <math>(1-1/e)</math>-approximation is also known, so the greedy algorithm is not optimal.
== Problem 3 ==

Consider the following course selection problem.
We are given a directed acyclic graph <math>G = (V,E)</math>, where each vertex <math>v \in V</math> represents a course.
Each edge <math>(u,v)</math> indicates that course <math>u</math> is a prerequisite for course <math>v</math>.
Each vertex <math>v \in V</math> has a weight <math>w_v \in \mathbb{Z}</math>, which may be positive or negative.
The goal is to choose a maximum-total-weight set <math>S \subseteq V</math> of courses respecting the precedence constraints: if <math>v \in S</math>, then all prerequisites of <math>v</math> are also in <math>S</math>.

Formulate the problem as a linear program, and show that the optimal solution of the LP is integral.
== Problem 4 ==

Recall that in the maximum coverage problem, we are given a ground set <math>U</math> of <math>n</math> elements, <math>m</math> subsets <math>S_1, S_2, \ldots, S_m \subseteq U</math>, and an integer <math>k \in [m]</math>.
The goal is to choose a set <math>C \subseteq [m]</math> with <math>|C| = k</math> that maximizes <math>\left|\bigcup_{i \in C} S_i\right|</math>.

Consider the following linear program relaxation for this problem:

<math>\max \sum_{j \in U} x_j</math>

subject to

<math>x_j - \sum_{i \in [m] : j \in S_i} y_i \le 0 \quad \forall j \in U</math>

<math>\sum_{i \in [m]} y_i \le k</math>

<math>x_j \in [0,1] \quad \forall j \in U</math>

<math>y_i \in [0,1] \quad \forall i \in [m]</math>

* (4a) Show that the value of the LP is at least the value of the original maximum coverage instance.
* (4b) Design a randomized rounding algorithm that, given a solution <math>(x^*, y^*)</math> to the LP, produces a solution <math>C \subseteq [m]</math> with <math>|C| = k</math> such that <math>\mathbb{E}\!\left[\left|\bigcup_{i \in C} S_i\right|\right] \ge \left(1-\frac{1}{e}\right)\sum_{j \in U} x_j^*</math>.
* (4c) Derandomize the algorithm and obtain the same guarantee deterministically.
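For (4b), one natural scheme to consider is independent sampling from the fractional solution. The sketch below is a hypothetical illustration (whether it achieves the stated guarantee is exactly what the problem asks you to analyze); it assumes <math>\sum_i y_i^* > 0</math>, and the toy input is made up.

```python
# Illustrative randomized rounding for the coverage LP: make k independent
# draws, picking set i with probability proportional to y*_i. Duplicate
# draws can occur, so the returned C may have fewer than k distinct
# indices; padding C up to size exactly k with arbitrary unused sets can
# only help the objective.
import random

def round_lp(y_star, k, rng):
    total = sum(y_star)                      # assumed positive
    weights = [y / total for y in y_star]
    picks = rng.choices(range(len(y_star)), weights=weights, k=k)
    return set(picks)

C = round_lp([0.5, 0.5, 1.0], k=2, rng=random.Random(0))
print(sorted(C))
```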
== Problem 5 ==

Consider the following linear program:

<math>\min \; c^{\mathsf T} x</math>

subject to

<math>Ax \ge b</math>

<math>x_2, x_3, \ldots, x_n \ge 0</math>

Notice that the variable <math>x_1</math> is free.

* (5a) Convert this LP into standard form.
* (5b) Write the dual of the standard-form LP.
* (5c) Simplify the dual LP.
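A standard device that may be useful for (5a), stated here as a reminder rather than a solution: a free variable can be written as the difference of two non-negative variables,

<math>x_1 = x_1^{+} - x_1^{-}, \qquad x_1^{+},\, x_1^{-} \ge 0,</math>

after which every variable of the LP is sign-constrained.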
== Problem 6 ==

Let <math>G = (L,R,E)</math> be a bipartite graph.
Write down the linear program for maximum matching in <math>G</math> and the linear program for minimum vertex cover in <math>G</math>.
Prove that the size of the maximum matching equals the size of the minimum vertex cover.
== Problem 7 ==

In the maximum 2-SAT problem, we are given <math>n</math> Boolean variables <math>x_1,\ldots,x_n</math> and <math>m</math> clauses, each clause being the disjunction of two literals.
The goal is to find an assignment satisfying as many clauses as possible.
Design a <math>0.878</math>-approximation algorithm using SDP.
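A fact that may be useful in the rounding analysis (the standard random-hyperplane identity, stated here as a reminder): for unit vectors <math>v_i, v_j \in \mathbb{R}^n</math> and a uniformly random unit vector <math>g</math>,

<math>\Pr\!\left[\mathrm{sgn}(g^{\mathsf T} v_i) \ne \mathrm{sgn}(g^{\mathsf T} v_j)\right] = \frac{\arccos(v_i^{\mathsf T} v_j)}{\pi}.</math>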
== Problem 8 ==

In binary classification, we are given a dataset <math>\mathcal{D}</math> of <math>n</math> points and an unknown labeling <math>h : \mathcal{D} \to \{-1,1\}</math>.
There is a family of classifiers <math>\{f_1,\ldots,f_M\}</math> and a weak learner that, given any distribution over <math>\mathcal{D}</math>, outputs a classifier from the family that is correct on at least <math>51\%</math> of the data, weighted by that distribution.

* (8a) Show that there exists <math>\alpha \in \mathbb{R}_{\ge 0}^M</math> such that for every point <math>j \in \mathcal{D}</math>, <math>h(j) = \mathrm{sgn}\!\left(\sum_{i=1}^M \alpha_i f_i(j)\right)</math>.
* (8b) Give a polynomial-time algorithm to compute such an <math>\alpha</math>.
== Problem 9 ==

Let <math>\mathcal{P}_1, \mathcal{P}_2 \subseteq \mathbb{R}^n</math> be polytopes.
Define

<math>
\mathrm{conv}(\mathcal{P}_1,\mathcal{P}_2)
= \{\alpha_1 x^{(1)} + \alpha_2 x^{(2)} :
x^{(1)} \in \mathcal{P}_1,\;
x^{(2)} \in \mathcal{P}_2,\;
\alpha_1,\alpha_2 \ge 0,\;
\alpha_1+\alpha_2=1\}.
</math>

Prove that

<math>\mathrm{xc}(\mathrm{conv}(\mathcal{P}_1,\mathcal{P}_2)) \le \mathrm{xc}(\mathcal{P}_1)+\mathrm{xc}(\mathcal{P}_2),</math>

where <math>\mathrm{xc}(\cdot)</math> denotes the extension complexity.
== Problem 10 ==

We analyze the following algorithm for finding a cut in triangle-free graphs of maximum degree <math>d</math>.

* (1) Sample a cut <math>(S,\bar S)</math> uniformly at random.
* (2) Label a vertex happy if more than half of its incident edges are cut.
* (3) If exactly half of its incident edges are cut, label it happy with probability <math>1/2</math>.
* (4) Otherwise, label it unhappy.
* (5) Keep the happy vertices fixed; each unhappy vertex switches sides independently with probability <math>1/2</math>.
* (6) Output the resulting cut.

Fix an edge <math>(u,v)</math>.

* (a) Define <math>p</math> and <math>q</math> as in the description and show that <math>\Pr[(u,v)\text{ is cut}] = \frac12 + \frac{p-q}{2}</math>.
* (b) Define <math>q_u,q_v</math> and show that <math>q=\frac12 q_u q_v</math>.
* (c) Define <math>p_u,p_v</math> and show that <math>p=\frac12 p_u p_v</math>.
* (d) Show that <math>p_u+q_u=1</math> and <math>p_v+q_v=1</math>, and derive the final expression.
* (e) Compute <math>p_u</math> and estimate the improvement over <math>1/2</math>.
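The two-phase procedure in steps (1)–(6) can be simulated directly. The sketch below is illustrative only: the test graph (the 5-cycle, which is triangle-free with <math>d = 2</math>) and the trial count are arbitrary choices, and a Monte-Carlo estimate is no substitute for the analysis in (a)–(e).

```python
# Simulate the two-phase cut algorithm: uniform random cut, then happy /
# unhappy labels, then random switching of unhappy vertices.
import random

def two_phase_cut(V, E, rng):
    side = {v: rng.randrange(2) for v in V}          # (1) uniform random cut
    deg = {v: 0 for v in V}
    cut_deg = {v: 0 for v in V}
    for u, v in E:
        deg[u] += 1; deg[v] += 1
        if side[u] != side[v]:
            cut_deg[u] += 1; cut_deg[v] += 1
    happy = {}
    for v in V:                                      # (2)-(4) happiness labels
        if 2 * cut_deg[v] > deg[v]:
            happy[v] = True
        elif 2 * cut_deg[v] == deg[v]:
            happy[v] = (rng.random() < 0.5)
        else:
            happy[v] = False
    for v in V:                                      # (5) unhappy vertices switch
        if not happy[v] and rng.random() < 0.5:
            side[v] ^= 1
    return sum(side[u] != side[v] for u, v in E)     # (6) size of final cut

rng = random.Random(0)
V = range(5)
E = [(i, (i + 1) % 5) for i in range(5)]             # C_5: triangle-free, d = 2
trials = 20000
avg = sum(two_phase_cut(V, E, rng) for _ in range(trials)) / trials
print(avg / len(E))   # empirical fraction of cut edges
```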
Hint: <math>{d \choose \lfloor d/2 \rfloor} = \Omega\!\left(\frac{2^d}{\sqrt d}\right)</math>.