高级算法 (Fall 2019)/Dimension Reduction
Metric Embedding
A metric space is a pair [math]\displaystyle{ (X,d) }[/math], where [math]\displaystyle{ X }[/math] is a set and [math]\displaystyle{ d }[/math] is a metric (or distance) on [math]\displaystyle{ X }[/math], i.e., a function
- [math]\displaystyle{ d:X^2\to\mathbb{R}_{\ge 0} }[/math]
such that for any [math]\displaystyle{ x,y,z\in X }[/math], the following axioms hold:
- (identity of indiscernibles) [math]\displaystyle{ d(x,y)=0\Leftrightarrow x=y }[/math]
- (symmetry) [math]\displaystyle{ d(x,y)=d(y,x) }[/math]
- (triangle inequality) [math]\displaystyle{ d(x,z)\le d(x,y)+d(y,z) }[/math]
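As a simple example, the Euclidean distance on [math]\displaystyle{ \mathbf{R}^d }[/math] is a metric. Below is a minimal Python/NumPy sketch spot-checking the three axioms on a few sample points; the helper euclidean and the chosen points are illustrative only, not part of the notes.

 import numpy as np
 
 def euclidean(x, y):
     # the Euclidean (l2) distance between two points of R^d
     return np.linalg.norm(x - y)
 
 # spot-check the three metric axioms on a few random points (illustrative only)
 rng = np.random.default_rng(0)
 x, y, z = rng.standard_normal((3, 5))                        # three points in R^5
 assert euclidean(x, x) == 0 and euclidean(x, y) > 0          # identity of indiscernibles
 assert np.isclose(euclidean(x, y), euclidean(y, x))          # symmetry
 assert euclidean(x, z) <= euclidean(x, y) + euclidean(y, z)  # triangle inequality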
Let [math]\displaystyle{ (X,d_X) }[/math] and [math]\displaystyle{ (Y,d_Y) }[/math] be two metric spaces. A mapping
- [math]\displaystyle{ \phi:X\to Y }[/math]
is called an embedding of the metric space [math]\displaystyle{ (X,d_X) }[/math] into [math]\displaystyle{ (Y,d_Y) }[/math]. The embedding is said to have distortion [math]\displaystyle{ \alpha\ge1 }[/math] if for any [math]\displaystyle{ x,y\in X }[/math] it holds that
- [math]\displaystyle{ \frac{1}{\alpha}\cdot d_X(x,y)\le d_Y(\phi(x),\phi(y))\le \alpha\cdot d_X(x,y) }[/math].
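For a finite point set, the distortion of a concrete embedding can be checked directly by taking the worst ratio over all pairs of points. The following is a minimal sketch, specialized to Euclidean source and target metrics; the helper distortion and the example map are illustrative assumptions.

 import numpy as np
 from itertools import combinations
 
 def distortion(points, phi):
     # smallest alpha >= 1 with d(x,y)/alpha <= d(phi(x),phi(y)) <= alpha*d(x,y)
     # for all pairs of distinct points, where d is the Euclidean distance
     alpha = 1.0
     for x, y in combinations(points, 2):
         ratio = np.linalg.norm(phi(x) - phi(y)) / np.linalg.norm(x - y)
         alpha = max(alpha, ratio, 1.0 / ratio)
     return alpha
 
 # example: scaling every point by 2 is an embedding of R^4 into itself with distortion 2
 rng = np.random.default_rng(1)
 pts = rng.standard_normal((10, 4))                 # 10 points in R^4
 print(distortion(pts, lambda x: 2.0 * x))          # prints (approximately) 2.0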
In Computer Science, a typical scenario for metric embedding is as follows. We want to solve some difficult computational problem on a metric space [math]\displaystyle{ (X,d) }[/math]. Instead of solving the problem directly on the original metric space, we embed the metric into a new metric space [math]\displaystyle{ (Y,d_Y) }[/math] with low distortion, where the problem is much easier to solve.
One particularly important case of metric embedding is to embed a high-dimensional metric space into a new metric space of much lower dimension. This is called dimension reduction. It can be very helpful because many common computational tasks are hard to solve in high-dimensional spaces due to the curse of dimensionality.
The Johnson-Lindenstrauss Theorem
The Johnson-Lindenstrauss Theorem or Johnson-Lindenstrauss Transformation (both abbreviated as JLT) is a fundamental result for dimension reduction in Euclidean space.
Recall that in Euclidean space [math]\displaystyle{ \mathbf{R}^d }[/math], for any two points [math]\displaystyle{ x,y\in\mathbf{R}^d }[/math], the Euclidean distance between them is given by [math]\displaystyle{ \|x-y\|=\sqrt{(x_1-y_1)^2+(x_2-y_2)^2+\cdots +(x_d-y_d)^2} }[/math], where [math]\displaystyle{ \|\cdot\|=\|\cdot\|_2 }[/math] denotes the Euclidean norm (a.k.a. the [math]\displaystyle{ \ell_2 }[/math]-norm).
The JLT says that in Euclidean space, it is always possible to embed a set of [math]\displaystyle{ n }[/math] points in arbitrary dimension into [math]\displaystyle{ O(\log n) }[/math] dimensions with constant distortion. The theorem itself is stated formally as follows.
Johnson-Lindenstrauss Theorem, 1984 - For any [math]\displaystyle{ 0\lt \epsilon\lt 1/2 }[/math] and any positive integer [math]\displaystyle{ n }[/math], there is a positive integer [math]\displaystyle{ k=O(\epsilon^{-2}\log n) }[/math] such that the following holds:
- For any set [math]\displaystyle{ S\subset\mathbf{R}^d }[/math] with [math]\displaystyle{ |S|=n }[/math], where [math]\displaystyle{ d }[/math] is arbitrary, there is an embedding [math]\displaystyle{ \phi:\mathbf{R}^d\rightarrow\mathbf{R}^k }[/math] such that
- [math]\displaystyle{ \forall x,y\in S,\quad (1-\epsilon)\|x-y\|^2\le\|\phi(x)-\phi(y)\|^2\le(1+\epsilon)\|x-y\|^2 }[/math].
The Johnson-Lindenstrauss Theorem is usually stated for the [math]\displaystyle{ \ell_2^2 }[/math]-norm [math]\displaystyle{ \|\cdot\|^2 }[/math] instead of the Euclidean norm [math]\displaystyle{ \|\cdot\| }[/math] itself. Note that this changes nothing other than the constant factor in [math]\displaystyle{ k=O(\epsilon^{-2}\log n) }[/math], because [math]\displaystyle{ (1\pm\epsilon)^{\frac{1}{2}}=1\pm\Theta(\epsilon) }[/math]. The reason for stating the theorem in the [math]\displaystyle{ \ell_2^2 }[/math]-norm is that [math]\displaystyle{ \|\cdot\|^2 }[/math] is a sum (rather than the square root of a sum), which is easier to analyze.
In fact, the embedding [math]\displaystyle{ \phi:\mathbf{R}^d\rightarrow\mathbf{R}^k }[/math] can be as simple as a linear transformation [math]\displaystyle{ A\in\mathbf{R}^{k\times d} }[/math] so that [math]\displaystyle{ \phi(x)=Ax }[/math] for any [math]\displaystyle{ x\in\mathbf{R}^{d} }[/math]. Therefore, the above theorem can be stated more precisely as follows.
Johnson-Lindenstrauss Theorem (linear embedding) - For any [math]\displaystyle{ 0\lt \epsilon\lt 1/2 }[/math] and any positive integer [math]\displaystyle{ n }[/math], there is a positive integer [math]\displaystyle{ k=O(\epsilon^{-2}\log n) }[/math] such that the following holds:
- For any set [math]\displaystyle{ S\subset\mathbf{R}^d }[/math] with [math]\displaystyle{ |S|=n }[/math], where [math]\displaystyle{ d }[/math] is arbitrary, there is a linear transformation [math]\displaystyle{ A\in\mathbf{R}^{k\times d} }[/math] such that
- [math]\displaystyle{ \forall x,y\in S,\quad (1-\epsilon)\|x-y\|^2\le\|Ax-Ay\|^2\le(1+\epsilon)\|x-y\|^2 }[/math].
The theorem is proved by the probabilistic method. Specifically, we construct a random matrix [math]\displaystyle{ A\in\mathbf{R}^{k\times d} }[/math] and show that with high probability ([math]\displaystyle{ 1-O(1/n) }[/math]) it is a good embedding satisfying:
- [math]\displaystyle{ \forall x,y\in S,\quad (1-\epsilon)\|x-y\|^2\le\|Ax-Ay\|^2\le(1+\epsilon)\|x-y\|^2 }[/math].
Therefore, if such a random matrix [math]\displaystyle{ A }[/math] can be constructed efficiently, it immediately gives us an efficient randomized algorithm for dimension reduction in Euclidean space.
There are several such constructions of the random matrix [math]\displaystyle{ A\in\mathbf{R}^{k\times d} }[/math], including:
- projection onto uniform random [math]\displaystyle{ k }[/math]-dimensional subspace of [math]\displaystyle{ \mathbf{R}^{d} }[/math]; (the original construction of Johnson and Lindenstrauss in 1984; a simplified analysis due to Dasgupta and Gupta in 1999)
- random matrix with i.i.d. Gaussian entries; (due to Indyk and Motwani in 1998)
- random matrix with i.i.d. -1/+1 entries; (due to Achlioptas in 2003)
JLT via Gaussian matrix
Here we prove the Johnson-Lindenstrauss theorem with the second construction, by the random matrix [math]\displaystyle{ A\in\mathbf{R}^{k\times d} }[/math] with i.i.d. Gaussian entries. The construction of [math]\displaystyle{ A }[/math] is very simple:
- Each entry of [math]\displaystyle{ A\in\mathbf{R}^{k\times d} }[/math] is drawn independently from the Gaussian distribution [math]\displaystyle{ \mathcal{N}(0,1/k) }[/math].
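Below is a minimal NumPy sketch of this construction and of the distance preservation it is meant to achieve. The sizes [math]\displaystyle{ n,d }[/math], the value of [math]\displaystyle{ \epsilon }[/math], and the constant hidden in [math]\displaystyle{ k=O(\epsilon^{-2}\log n) }[/math] are arbitrary illustrative choices, not prescribed by the analysis below.

 import numpy as np
 from itertools import combinations
 
 rng = np.random.default_rng(42)
 
 # illustrative sizes: n points in dimension d, reduced to dimension k
 n, d, eps = 50, 1000, 0.4
 k = int(np.ceil(24 * np.log(n) / eps ** 2))      # k = O(eps^-2 log n); the constant 24 is an arbitrary choice
 
 S = rng.standard_normal((n, d))                  # an arbitrary set of n points in R^d
 A = rng.normal(0.0, np.sqrt(1.0 / k), (k, d))    # each entry drawn independently from N(0, 1/k)
 T = S @ A.T                                      # the embedded points: T[i] = A S[i]
 
 # count pairs whose squared distance is not preserved within a (1 +- eps) factor
 violations = 0
 for i, j in combinations(range(n), 2):
     ratio = np.sum((T[i] - T[j]) ** 2) / np.sum((S[i] - S[j]) ** 2)
     violations += not (1 - eps <= ratio <= 1 + eps)
 print(violations, "of", n * (n - 1) // 2, "pairs violated")   # typically prints 0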
Recall that a Gaussian distribution [math]\displaystyle{ \mathcal{N}(\mu,\sigma^2) }[/math] is specified by its mean [math]\displaystyle{ \mu }[/math] and standard deviation [math]\displaystyle{ \sigma }[/math] such that for a random variable [math]\displaystyle{ X }[/math] distributed as [math]\displaystyle{ \mathcal{N}(\mu,\sigma^2) }[/math], we have
- [math]\displaystyle{ \mathbf{E}[X]=\mu }[/math] and [math]\displaystyle{ \mathbf{Var}[X]=\sigma^2 }[/math],
and the probability density function is given by [math]\displaystyle{ p(x)=\frac{1}{\sqrt{2\pi\sigma^2}}\mathrm{e}^{-\frac{(x-\mu)^2}{2\sigma^2}} }[/math], therefore
- [math]\displaystyle{ \Pr[X\le t]=\int_{-\infty}^t\frac{1}{\sqrt{2\pi\sigma^2}}\mathrm{e}^{-\frac{(x-\mu)^2}{2\sigma^2}}\,\mathrm{d}x }[/math].
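Since this c.d.f. has no elementary closed form, it is commonly evaluated via the error function. A brief numerical check against a Monte Carlo estimate follows; the parameter values are illustrative only.

 import math
 import numpy as np
 
 mu, sigma, t = 0.0, 1.0 / math.sqrt(50), 0.1   # illustrative parameters: N(0, 1/k) with k = 50
 
 # Pr[X <= t] for X ~ N(mu, sigma^2), expressed via the error function
 cdf = 0.5 * (1 + math.erf((t - mu) / (sigma * math.sqrt(2))))
 
 # Monte Carlo estimate of the same probability
 samples = np.random.default_rng(0).normal(mu, sigma, 100000)
 print(cdf, np.mean(samples <= t))              # the two values should nearly agree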
Fix any two points [math]\displaystyle{ x,y }[/math] out of the [math]\displaystyle{ n }[/math] points in [math]\displaystyle{ S\subset \mathbf{R}^d }[/math]. If we can show that
- [math]\displaystyle{ \Pr\left[(1-\epsilon)\|x-y\|^2\le\|Ax-Ay\|^2\le(1+\epsilon)\|x-y\|^2\right]\ge 1-\frac{1}{n^3} }[/math],
then by the union bound over all [math]\displaystyle{ {n\choose 2}<\frac{n^2}{2} }[/math] pairs of points in [math]\displaystyle{ S }[/math], the following event
- [math]\displaystyle{ \forall x,y\in S,\quad (1-\epsilon)\|x-y\|^2\le\|Ax-Ay\|^2\le(1+\epsilon)\|x-y\|^2 }[/math]
holds with probability at least [math]\displaystyle{ 1-\frac{n^2}{2}\cdot\frac{1}{n^3}=1-\frac{1}{2n}=1-O(1/n) }[/math]. The Johnson-Lindenstrauss theorem follows.
Furthermore, dividing both sides of the inequalities [math]\displaystyle{ (1-\epsilon)\|x-y\|^2\le\|Ax-Ay\|^2\le(1+\epsilon)\|x-y\|^2 }[/math] by the factor [math]\displaystyle{ \|x-y\|^2 }[/math] gives us
- [math]\displaystyle{ (1-\epsilon)\le\frac{\|Ax-Ay\|^2}{\|x-y\|^2}=\left\|A\frac{(x-y)}{\|x-y\|}\right\|^2\le(1+\epsilon) }[/math],
where [math]\displaystyle{ \frac{x-y}{\|x-y\|} }[/math] is a unit vector.
Therefore, the Johnson-Lindenstrauss theorem is proved once the following theorem on the unit vector is proved.
Theorem (JLT on unit vector) - For any positive [math]\displaystyle{ \epsilon,\delta\lt 1/2 }[/math] there is a positive integer [math]\displaystyle{ k=O\left(\epsilon^{-2}\log \frac{1}{\delta}\right) }[/math] such that for the random matrix [math]\displaystyle{ A\in\mathbf{R}^{k\times d} }[/math] with each entry drawn independently from the Gaussian distribution [math]\displaystyle{ \mathcal{N}(0,1/k) }[/math], for any unit vector [math]\displaystyle{ u\in\mathbf{R}^d }[/math] with [math]\displaystyle{ \|u\|=1 }[/math],
- [math]\displaystyle{ \Pr\left[\left|\|Au\|^2-1\right|\gt \epsilon\right]\lt \delta }[/math].
For any [math]\displaystyle{ u\in\mathbf{R}^d }[/math], we have [math]\displaystyle{ \|Au\|^2=\sum_{i=1}^k(Au)_i^2 }[/math], where the [math]\displaystyle{ (Au)_i }[/math]'s are independent of each other, because each
- [math]\displaystyle{ (Au)_i=\langle A_{i\cdot},u\rangle=\sum_{j=1}^dA_{ij}u_j }[/math]
is determined by a distinct row vector [math]\displaystyle{ A_{i\cdot} }[/math] of the matrix [math]\displaystyle{ A }[/math] with independent entries.
Moreover, each entry [math]\displaystyle{ A_{ij}\sim\mathcal{N}(0,1/k) }[/math] independently. Recall that for any two independent Gaussian random variables [math]\displaystyle{ X_1\sim \mathcal{N}(\mu_1,\sigma_1^2) }[/math] and [math]\displaystyle{ X_2\sim \mathcal{N}(\mu_2,\sigma_2^2) }[/math], their weighted sum [math]\displaystyle{ aX_1+bX_2 }[/math] follows the Gaussian distribution [math]\displaystyle{ \mathcal{N}(a\mu_1+b\mu_2,a^2\sigma_1^2+b^2\sigma_2^2) }[/math]. Therefore, [math]\displaystyle{ (Au)_i=\langle A_{i\cdot},u\rangle=\sum_{j=1}^dA_{ij}u_j }[/math] follows the Gaussian distribution
- [math]\displaystyle{ (Au)_i\sim\mathcal{N}\left(0,\sum_{j=1}^d\frac{u_j^2}{k}\right) }[/math],
which is precisely [math]\displaystyle{ \mathcal{N}\left(0,\frac{1}{k}\right) }[/math] if [math]\displaystyle{ u\in\mathbf{R}^d }[/math] is a unit vector, i.e. [math]\displaystyle{ \|u\|^2=\sum_{j=1}^du_j^2=1 }[/math].
In summary, the random variable [math]\displaystyle{ \|Au\|^2 }[/math] that we are interested in, can be represented as a sum-of-square form
- [math]\displaystyle{ \|Au\|^2=\sum_{i=1}^kY_i^2 }[/math],
where each [math]\displaystyle{ Y_i=(Au)_i }[/math] independently follows the Gaussian distribution [math]\displaystyle{ \mathcal{N}\left(0,\frac{1}{k}\right) }[/math]. The distribution of such a sum [math]\displaystyle{ \sum_{i=1}^kY_i^2 }[/math] is that of a [math]\displaystyle{ \chi^2 }[/math]-distributed random variable with [math]\displaystyle{ k }[/math] degrees of freedom, scaled by [math]\displaystyle{ \frac{1}{k} }[/math].
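This representation can be checked numerically: for a fixed unit vector [math]\displaystyle{ u }[/math], samples of [math]\displaystyle{ \|Au\|^2 }[/math] over independent draws of [math]\displaystyle{ A }[/math] behave like a [math]\displaystyle{ \chi^2 }[/math]-variable with [math]\displaystyle{ k }[/math] degrees of freedom scaled by [math]\displaystyle{ 1/k }[/math]. The sketch below uses illustrative sizes only.

 import numpy as np
 
 rng = np.random.default_rng(7)
 d, k, trials = 100, 50, 2000                       # illustrative sizes
 
 u = rng.standard_normal(d)
 u /= np.linalg.norm(u)                             # an arbitrary unit vector in R^d
 
 # samples of ||Au||^2 over independent Gaussian matrices A with N(0, 1/k) entries
 samples_Au = np.array([
     np.sum((rng.normal(0.0, np.sqrt(1.0 / k), (k, d)) @ u) ** 2)
     for _ in range(trials)
 ])
 # samples of sum_i Y_i^2 with Y_i ~ N(0, 1/k), i.e. a chi^2 with k degrees of freedom scaled by 1/k
 samples_chi2 = rng.chisquare(k, size=trials) / k
 
 # both concentrate around the common mean 1 with variance 2/k
 print(samples_Au.mean(), samples_Au.var())         # approx. 1 and 0.04
 print(samples_chi2.mean(), samples_chi2.var())     # approx. 1 and 0.04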
Concentration of [math]\displaystyle{ \chi^2 }[/math]-distribution
By the above argument, the Johnson-Lindenstrauss theorem is proved by the following concentration inequality for the sum of the squares of [math]\displaystyle{ k }[/math] Gaussian random variables.
Chernoff bound for sum-of-squares of Gaussian distributions - For any positive [math]\displaystyle{ \epsilon,\delta\lt 1/2 }[/math] there is a positive integer [math]\displaystyle{ k=O\left(\epsilon^{-2}\log \frac{1}{\delta}\right) }[/math] such that for i.i.d. Gaussian random variables [math]\displaystyle{ Y_1,Y_2,\ldots,Y_k\sim\mathcal{N}(0,1/k) }[/math],
- [math]\displaystyle{ \Pr\left[\left|\sum_{i=1}^kY_i^2-1\right|\gt \epsilon\right]\lt \delta }[/math].
First, we see that this is indeed a concentration inequality that bounds the deviation of a random variable from its mean.
For each [math]\displaystyle{ Y_i\sim\mathcal{N}\left(0,\frac{1}{k}\right) }[/math], we know that [math]\displaystyle{ \mathbf{E}[Y_i]=0 }[/math] and [math]\displaystyle{ \mathbf{Var}[Y_i]=\mathbf{E}[Y_i^2]-\mathbf{E}[Y_i]^2=\frac{1}{k} }[/math], thus
- [math]\displaystyle{ \mathbf{E}[Y_i^2]=\mathbf{Var}[Y_i]+\mathbf{E}[Y_i]^2=\frac{1}{k} }[/math].
By linearity of expectation, it holds that
- [math]\displaystyle{ \mathbf{E}\left[\sum_{i=1}^kY_i^2\right]=\sum_{i=1}^k\mathbf{E}\left[Y_i^2\right]=1 }[/math].
We prove an equivalent concentration bound stated for the [math]\displaystyle{ \chi^2 }[/math]-distribution (the sum of the squares of [math]\displaystyle{ k }[/math] standard Gaussian random variables). The two bounds are equivalent because if [math]\displaystyle{ X_i\sim\mathcal{N}(0,1) }[/math] then [math]\displaystyle{ Y_i=X_i/\sqrt{k}\sim\mathcal{N}(0,1/k) }[/math], and hence [math]\displaystyle{ \left|\sum_{i=1}^kY_i^2-1\right|>\epsilon }[/math] if and only if [math]\displaystyle{ \left|\sum_{i=1}^kX_i^2-k\right|>\epsilon k }[/math].
Chernoff bound for the [math]\displaystyle{ \chi^2 }[/math]-distribution - For any positive [math]\displaystyle{ \epsilon,\delta\lt 1/2 }[/math] there is a positive integer [math]\displaystyle{ k=O\left(\epsilon^{-2}\log \frac{1}{\delta}\right) }[/math] such that for i.i.d. standard Gaussian random variables [math]\displaystyle{ X_1,X_2,\ldots,X_k\sim\mathcal{N}(0,1) }[/math],
- [math]\displaystyle{ \Pr\left[\left|\sum_{i=1}^kX_i^2-k\right|\gt \epsilon k\right]\lt \delta }[/math].
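A Monte Carlo sanity check of this bound for one concrete choice of [math]\displaystyle{ \epsilon,\delta }[/math] follows; the constant used in [math]\displaystyle{ k=O\left(\epsilon^{-2}\log \frac{1}{\delta}\right) }[/math] is an arbitrary illustrative choice.

 import numpy as np
 
 rng = np.random.default_rng(3)
 eps, delta = 0.3, 0.05
 k = int(np.ceil(8 * np.log(1 / delta) / eps ** 2))   # k = O(eps^-2 log(1/delta)); the constant 8 is illustrative
 
 # Monte Carlo estimate of Pr[ |sum_i X_i^2 - k| > eps * k ] for i.i.d. X_i ~ N(0,1)
 trials = 20000
 X = rng.standard_normal((trials, k))
 deviation = np.abs(np.sum(X ** 2, axis=1) - k)
 print(np.mean(deviation > eps * k), "vs delta =", delta)   # the estimate typically falls well below delta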