Advanced Algorithms (Fall 2024)/Problem Set 1
* Each problem must be answered with a complete solution process; you may write in either Chinese or English.
* We recommend typesetting your homework with LaTeX, Markdown, or similar tools.
== Problem 1 (Min-cut/Max-cut) ==
For any <math>\alpha \ge 1</math>, a cut is called an <math>\alpha</math>-approximate min-cut in a multigraph <math>G</math> if the number of edges in it is at most <math>\alpha</math> times that of the min-cut. Prove that the number of <math>\alpha</math>-approximate min-cuts in a multigraph <math>G</math> is at most <math>n^{2\alpha}/2</math>. ('''''Hint''''': Run Karger's algorithm until it has <math>\lceil 2\alpha \rceil</math> supernodes. What is the chance that a particular <math>\alpha</math>-approximate min-cut is still available? How many possible cuts does this collapsed graph have?)
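Illustrative only, not part of the problem: a minimal Python sketch of the contraction process the hint describes, run until <math>\lceil 2\alpha \rceil</math> supernodes remain. The multigraph representation (a plain list of vertex pairs, with parallel edges repeated) and the function name are assumptions made purely for illustration.

<syntaxhighlight lang="python">
import math
import random


def contract_until(edges, n, alpha):
    """Karger-style random contraction of a multigraph on vertices 0..n-1
    (edges given as a list of pairs, parallel edges repeated), stopped once
    only ceil(2*alpha) supernodes remain.  Returns the surviving edges."""
    target = max(2, math.ceil(2 * alpha))
    parent = list(range(n))                    # union-find labels of supernodes
    edges = [(a, b) for (a, b) in edges if a != b]   # drop any self-loops up front

    def find(v):                               # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    supernodes = n
    while supernodes > target and edges:
        u, v = random.choice(edges)            # uniform over remaining multi-edges
        ru, rv = find(u), find(v)
        parent[rv] = ru                        # contract u and v into one supernode
        supernodes -= 1
        # discard edges that became self-loops inside a supernode
        edges = [(a, b) for (a, b) in edges if find(a) != find(b)]
    return edges
</syntaxhighlight>

The quantity to analyze is the probability that no edge of one fixed <math>\alpha</math>-approximate min-cut is ever contracted, mirroring the standard analysis of Karger's min-cut algorithm.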
== Problem 2 (Fingerprinting) ==
Two rooted trees <math>T_1</math> and <math>T_2</math> are said to be isomorphic if there exists a one-to-one mapping <math>f</math> from the nodes of <math>T_1</math> to those of <math>T_2</math> satisfying the following condition: <math>v</math> is a child of <math>w</math> in <math>T_1</math> if and only if <math>f(v)</math> is a child of <math>f(w)</math> in <math>T_2</math>. Observe that no ordering is assumed on the children of any vertex. Devise an efficient randomized algorithm for testing the isomorphism of rooted trees and analyze its performance. ('''''Hint''''': Recursively associate a polynomial <math>P_v</math> with each vertex <math>v</math> in a tree <math>T</math>.)
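Illustrative only, not part of the problem: a minimal Python sketch of one way to turn the hint into a randomized test, evaluating recursively defined polynomials at random points modulo a large prime. The tree representation (child lists indexed by vertex), the choice of prime, and the function names are assumptions for illustration; bounding the error probability is the analysis the problem asks for.

<syntaxhighlight lang="python">
import random

PRIME = (1 << 61) - 1      # a large prime for modular arithmetic


def tree_fingerprint(children, root, xs):
    """Evaluate a recursively defined polynomial at random points:
    P_leaf = x_0 and, for v of height h with children c_1..c_k,
    P_v = (x_h - P_{c_1}) * ... * (x_h - P_{c_k}), all modulo PRIME.
    `children[v]` lists the children of v; `xs[h]` is the random value
    substituted for the indeterminate x_h (xs must cover every height)."""
    def rec(v):                                # recursion depth = tree height
        if not children[v]:
            return 0, xs[0]                    # a leaf has height 0, value x_0
        height, vals = 0, []
        for c in children[v]:
            h, val = rec(c)
            height = max(height, h + 1)
            vals.append(val)
        prod = 1
        for val in vals:
            prod = prod * ((xs[height] - val) % PRIME) % PRIME
        return height, prod
    return rec(root)[1]


def probably_isomorphic(children1, root1, children2, root2, max_height):
    """One random trial: equal fingerprints => report 'probably isomorphic'."""
    xs = [random.randrange(PRIME) for _ in range(max_height + 1)]
    return (tree_fingerprint(children1, root1, xs)
            == tree_fingerprint(children2, root2, xs))
</syntaxhighlight>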
== Problem 3 (Hashing) ==
== Problem 4 (Concentration of measure) ==
Consider the Erdős–Rényi random graph <math>G(n, p)</math>, where every two vertices in the graph are connected randomly and independently with probability <math>p</math>. We write <math>G \sim G(n, p)</math> if <math>G</math> is generated in this way. Recall that <math>\chi(G)</math> denotes the chromatic number of the graph <math>G</math>.
(a.) For <math>0 < p_1 < p_2 < 1</math>, let <math>G_1 \sim G(n, p_1)</math> and let <math>G_2 \sim G(n, p_2)</math>. Compare <math>\mathbf{E}[\chi(G_1)]</math> with <math>\mathbf{E}[\chi(G_2)]</math> and prove your comparison.
(b.) For <math>G \sim G(n, n^{-\alpha})</math> with <math>\alpha > 5/6</math>, prove that there exists a constant <math>C</math> such that, with probability <math>1 - o(1)</math>, every subgraph of <math>G</math> on <math>C\sqrt{n}</math> vertices is <math>3</math>-colorable.
(c.) For <math>G \sim G(n, n^{-\alpha})</math> with <math>\alpha > 5/6</math>, show that <math>\chi(G)</math> is concentrated on four values with probability at least <math>1 - o(1)</math>.
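Illustrative only, not part of the problem: one natural way to look at part (a.) empirically is a monotone coupling, building both graphs from the same uniform random draws so that <math>G_1</math> is always a subgraph of <math>G_2</math>. The sketch below (function names and parameters are assumptions for illustration) just demonstrates that coupling; whether and how it yields the comparison of expectations is what the problem asks you to argue.

<syntaxhighlight lang="python">
import random
from itertools import combinations


def coupled_gnp_pair(n, p1, p2, seed=None):
    """Sample G1 ~ G(n, p1) and G2 ~ G(n, p2) from the SAME uniform draws,
    so that whenever p1 <= p2 every edge of G1 is also an edge of G2."""
    rng = random.Random(seed)
    e1, e2 = set(), set()
    for u, v in combinations(range(n), 2):
        t = rng.random()            # one uniform per potential edge
        if t < p1:
            e1.add((u, v))
        if t < p2:
            e2.add((u, v))
    return e1, e2


if __name__ == "__main__":
    e1, e2 = coupled_gnp_pair(200, 0.1, 0.3, seed=0)
    # Under this coupling G1 is a subgraph of G2, so any proper colouring
    # of G2 also properly colours G1, giving chi(G1) <= chi(G2) pointwise.
    assert e1 <= e2
    print(len(e1), len(e2), e1 <= e2)
</syntaxhighlight>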
== Problem 5 (Dimension reduction) ==
In machine learning, the goal of many classification methods is to separate data into classes using a hyperplane. A hyperplane in <math>\mathbb{R}^d</math> is characterized by a unit vector <math>a\in \mathbb{R}^d</math> (<math>\|a\|_2 = 1</math>) and a scalar <math>c\in \mathbb{R}</math>. It contains all <math>z\in \mathbb{R}^d</math> such that <math>a^\top z = c</math>. Suppose our dataset consists of <math>n</math> '''unit''' vectors in <math>\mathbb{R}^d</math>. These points can be separated into two linearly separable sets <math>X,Y</math> with <math>|X|+|Y| = n</math>. That is, for all <math>x\in X</math>, <math>a^\top x > c</math>, and for all <math>y\in Y</math>, <math>a^\top y < c</math> (or vice versa). Furthermore, suppose that the <math>\ell_2</math> distance of each point in <math>X</math> and <math>Y</math> to this separating hyperplane is at least <math>\epsilon</math>. When this is the case, the hyperplane is said to have margin <math>\epsilon</math>.
# Show that the condition that <math>X,Y</math> can be separated with margin <math>\epsilon</math> by the hyperplane characterized by <math>a\in \mathbb{R}^d</math> (<math>\|a\|_2 = 1</math>) and <math>c\in \mathbb{R}</math> is equivalent to the following: for all <math>x\in X</math>, <math>a^\top x > c+\epsilon</math>, and for all <math>y\in Y</math>, <math>a^\top y < c-\epsilon</math> (or vice versa).
# Show that if we use a Johnson–Lindenstrauss map <math>A\in \mathbb{R}^{k\times d}</math> (the scaled Gaussian matrix given in the lecture) to reduce our data points to <math>O(\log n/\epsilon^2)</math> dimensions, then with probability at least <math>9/10</math>, the dimension-reduced data can still be separated by a hyperplane with margin <math>\epsilon/4</math>. ('''''Hint''''': use the fact that the JLT preserves inner products.)
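Illustrative only, not part of the problem: the short Python sketch below applies a scaled Gaussian map of the kind the hint refers to, to synthetic unit vectors that are separable with margin <math>\epsilon</math>, and prints the smallest signed margin after projection. The data generator, the constant in the choice of <math>k</math>, and all names are assumptions made purely for illustration.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# --- synthetic linearly separable unit vectors with margin eps ------------
d, n, eps = 2000, 500, 0.5
a = rng.normal(size=d)
a /= np.linalg.norm(a)                     # unit normal of the hyperplane a^T z = 0
signs = rng.choice([-1.0, 1.0], size=n)    # which side each point lies on
u = rng.normal(size=(n, d))
u -= np.outer(u @ a, a)                    # component orthogonal to a
u /= np.linalg.norm(u, axis=1, keepdims=True)
pts = np.sqrt(1 - eps**2) * u + eps * signs[:, None] * a   # unit vectors, |a^T x| = eps

# --- scaled Gaussian JL map: k x d with i.i.d. N(0, 1/k) entries ----------
k = int(np.ceil(8 * np.log(n) / eps**2))   # illustrative constant, here k << d
A = rng.normal(scale=1.0 / np.sqrt(k), size=(k, d))

# The JL map approximately preserves inner products, so A*a still separates A*x:
reduced_margins = signs * ((pts @ A.T) @ (A @ a))
print(k, reduced_margins.min())
</syntaxhighlight>

In runs of this sketch the projected margin typically stays well above <math>\epsilon/4</math>, which is the behaviour the problem asks you to establish with probability at least <math>9/10</math>.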