高级算法 (Advanced Algorithms, Fall 2019)/Dimension Reduction


Metric Embedding

A metric space is a pair [math]\displaystyle{ (X,d) }[/math], where [math]\displaystyle{ X }[/math] is a set and [math]\displaystyle{ d }[/math] is a metric (or distance) on [math]\displaystyle{ X }[/math], i.e., a function

[math]\displaystyle{ d:X^2\to\mathbb{R}_{\ge 0} }[/math]

such that for any [math]\displaystyle{ x,y,z\in X }[/math], the following axioms hold:

  1. (identity of indiscernibles) [math]\displaystyle{ d(x,y)=0\Leftrightarrow x=y }[/math]
  2. (symmetry) [math]\displaystyle{ d(x,y)=d(y,x) }[/math]
  3. (triangle inequality) [math]\displaystyle{ d(x,z)\le d(x,y)+d(y,z) }[/math]

Let [math]\displaystyle{ (X,d_X) }[/math] and [math]\displaystyle{ (Y,d_Y) }[/math] be two metric spaces. A mapping

[math]\displaystyle{ \phi:X\to Y }[/math]

is called an embedding of the metric space [math]\displaystyle{ (X,d_X) }[/math] into [math]\displaystyle{ (Y,d_Y) }[/math]. The embedding is said to have distortion [math]\displaystyle{ \alpha\ge1 }[/math] if for any [math]\displaystyle{ x,y\in X }[/math] it holds that

[math]\displaystyle{ \frac{1}{\alpha}\cdot d_X(x,y)\le d_Y(\phi(x),\phi(y))\le \alpha\cdot d_X(x,y) }[/math].
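For a finite point set, the distortion of a concrete embedding [math]\displaystyle{ \phi }[/math] can be measured directly from the definition. Below is a minimal sketch (in Python with numpy, not part of the original notes) that computes the smallest such [math]\displaystyle{ \alpha }[/math] over all pairs, assuming Euclidean distance on both sides; the function name and setup are illustrative.

    import numpy as np

    def distortion(points, phi):
        # Smallest alpha >= 1 with (1/alpha)*d(x,y) <= d(phi(x),phi(y)) <= alpha*d(x,y)
        # over all pairs of distinct points, with Euclidean distance on both sides.
        alpha = 1.0
        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                d = np.linalg.norm(points[i] - points[j])
                d_phi = np.linalg.norm(phi(points[i]) - phi(points[j]))
                ratio = d_phi / d
                alpha = max(alpha, ratio, 1.0 / ratio)
        return alpha

    # e.g. distortion([np.array([0., 0.]), np.array([1., 0.])], lambda x: 2 * x) == 2.0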

In computer science, a typical scenario for metric embedding is as follows. We want to solve some difficult computational problem on a metric space [math]\displaystyle{ (X,d) }[/math]. Instead of solving the problem directly on the original metric space, we embed the metric into a new metric space [math]\displaystyle{ (Y,d_Y) }[/math] (with low distortion) where the computational problem is much easier to solve.

One particularly important case of metric embedding is to embed a high-dimensional metric space into a new metric space of much lower dimension. This is called dimension reduction. Dimension reduction can be very helpful because many common computational tasks are hard to solve in high-dimensional spaces due to the curse of dimensionality.

The Johnson-Lindenstrauss Theorem

The Johnson-Lindenstrauss Theorem or Johnson-Lindenstrauss Transformation (both abbreviated as JLT) is a fundamental result for dimension reduction in Euclidean space.

Recall that in Euclidean space [math]\displaystyle{ \mathbf{R}^d }[/math], for any two points [math]\displaystyle{ x,y\in\mathbf{R}^d }[/math], the Euclidean distance between them is given by [math]\displaystyle{ \|x-y\|=\sqrt{(x_1-y_1)^2+(x_2-y_2)^2+\cdots +(x_d-y_d)^2} }[/math], where [math]\displaystyle{ \|\cdot\|=\|\cdot\|_2 }[/math] denotes the Euclidean norm (a.k.a. the [math]\displaystyle{ \ell_2 }[/math]-norm).

The JLT says that in Euclidean space, it is always possible to embed a set of [math]\displaystyle{ n }[/math] points of arbitrary dimension into [math]\displaystyle{ O(\log n) }[/math] dimensions with constant distortion. The theorem itself is stated formally as follows.

Johnson-Lindenstrauss Theorem, 1984
For any [math]\displaystyle{ 0\lt \epsilon\lt 1/2 }[/math] and any positive integer [math]\displaystyle{ n }[/math], there is a positive integer [math]\displaystyle{ k=O(\epsilon^{-2}\log n) }[/math] such that the following holds:
For any set [math]\displaystyle{ S\subset\mathbf{R}^d }[/math] with [math]\displaystyle{ |S|=n }[/math], where [math]\displaystyle{ d }[/math] is arbitrary, there is an embedding [math]\displaystyle{ \phi:\mathbf{R}^d\rightarrow\mathbf{R}^k }[/math] such that
[math]\displaystyle{ \forall x,y\in S,\quad (1-\epsilon)\|x-y\|^2\le\|\phi(x)-\phi(y)\|^2\le(1+\epsilon)\|x-y\|^2 }[/math].

The Johnson-Lindenstrauss Theorem is usually stated for the [math]\displaystyle{ \ell_2^2 }[/math]-norm [math]\displaystyle{ \|\cdot\|^2 }[/math] instead of the Euclidean norm [math]\displaystyle{ \|\cdot\| }[/math] itself. Note that this does not change anything other than the constant factor in [math]\displaystyle{ k=O(\epsilon^{-2}\log n) }[/math] because [math]\displaystyle{ (1\pm\epsilon)^{\frac{1}{2}}=1\pm\Theta(\epsilon) }[/math]. The reason for stating the theorem in the [math]\displaystyle{ \ell_2^2 }[/math]-norm is that [math]\displaystyle{ \|\cdot\|^2 }[/math] is a sum (rather than the square root of a sum), which is easier to analyze.
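A quick numeric sanity check of this remark (an illustrative Python snippet, not part of the original notes): since [math]\displaystyle{ \sqrt{1-\epsilon}\ge 1-\epsilon }[/math] and [math]\displaystyle{ \sqrt{1+\epsilon}\le 1+\epsilon }[/math] for [math]\displaystyle{ 0\lt \epsilon\lt 1 }[/math], the [math]\displaystyle{ \ell_2^2 }[/math] guarantee with parameter [math]\displaystyle{ \epsilon }[/math] immediately implies the [math]\displaystyle{ \ell_2 }[/math] guarantee with the same [math]\displaystyle{ \epsilon }[/math].

    import numpy as np

    eps = np.linspace(1e-6, 0.5, 1000)
    # sqrt(1-eps) >= 1-eps and sqrt(1+eps) <= 1+eps, so a (1 +/- eps) guarantee
    # on squared distances yields a (1 +/- eps) guarantee on the distances themselves.
    assert np.all(np.sqrt(1 - eps) >= 1 - eps)
    assert np.all(np.sqrt(1 + eps) <= 1 + eps)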

In fact, the embedding [math]\displaystyle{ \phi:\mathbf{R}^d\rightarrow\mathbf{R}^k }[/math] can be as simple as a linear transformation [math]\displaystyle{ A\in\mathbf{R}^{k\times d} }[/math] so that [math]\displaystyle{ \phi(x)=Ax }[/math] for any [math]\displaystyle{ x\in\mathbf{R}^{d} }[/math]. Therefore, the above theorem can be stated more precisely as follows.

Johnson-Lindenstrauss Theorem (linear embedding)
For any [math]\displaystyle{ 0\lt \epsilon\lt 1/2 }[/math] and any positive integer [math]\displaystyle{ n }[/math], there is a positive integer [math]\displaystyle{ k=O(\epsilon^{-2}\log n) }[/math] such that the following holds:
For any set [math]\displaystyle{ S\subset\mathbf{R}^d }[/math] with [math]\displaystyle{ |S|=n }[/math], where [math]\displaystyle{ d }[/math] is arbitrary, there is a linear transformation [math]\displaystyle{ A\in\mathbf{R}^{k\times d} }[/math] such that
[math]\displaystyle{ \forall x,y\in S,\quad (1-\epsilon)\|x-y\|^2\le\|Ax-Ay\|^2\le(1+\epsilon)\|x-y\|^2 }[/math].

The theorem is proved by the probabilistic method. Specifically, we construct a random matrix [math]\displaystyle{ A\in\mathbf{R}^{k\times d} }[/math] and show that with high probability ([math]\displaystyle{ 1-O(1/n) }[/math]) it is a good embedding satisfying:

[math]\displaystyle{ \forall x,y\in S,\quad (1-\epsilon)\|x-y\|^2\le\|Ax-Ay\|^2\le(1+\epsilon)\|x-y\|^2 }[/math].

Therefore, if such a random matrix [math]\displaystyle{ A }[/math] can be constructed efficiently, it immediately gives us an efficient randomized algorithm for dimension reduction in Euclidean space.

There are several such constructions of the random matrix [math]\displaystyle{ A\in\mathbf{R}^{k\times d} }[/math], including:

  • projection onto uniform random [math]\displaystyle{ k }[/math]-dimensional subspace of [math]\displaystyle{ \mathbf{R}^{d} }[/math]; (the original construction of Johnson and Lindenstrauss in 1984; a simplified analysis due to Dasgupta and Gupta in 1999)
  • random matrix with i.i.d. Gaussian entries; (due to Indyk and Motwani in 1998)
  • random matrix with i.i.d. -1/+1 entries; (due to Achlioptas in 2003)

It was proved by Kasper Green Larsen and Jelani Nelson in 2016 that the JLT is optimal: there is a set of [math]\displaystyle{ n }[/math] points in [math]\displaystyle{ \mathbf{R}^d }[/math] such that any embedding [math]\displaystyle{ \phi:\mathbf{R}^d\rightarrow\mathbf{R}^k }[/math] achieving the distortion asserted in the JLT must have target dimension [math]\displaystyle{ k=\Omega(\epsilon^{-2}\log n) }[/math].

JLT via Gaussian matrix

Here we prove the Johnson-Lindenstrauss theorem using the second construction: a random matrix [math]\displaystyle{ A\in\mathbf{R}^{k\times d} }[/math] with i.i.d. Gaussian entries. The construction of [math]\displaystyle{ A }[/math] is very simple:

  • Each entry of [math]\displaystyle{ A\in\mathbf{R}^{k\times d} }[/math] is drawn independently from the Gaussian distribution [math]\displaystyle{ \mathcal{N}(0,1/k) }[/math].
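This construction is easy to experiment with. The following minimal sketch (Python with numpy, not part of the original notes) samples such a matrix and empirically checks the pairwise squared-distance guarantee on a random point set; the data, the value of [math]\displaystyle{ \epsilon }[/math], and the constant 8 in the choice of [math]\displaystyle{ k }[/math] are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, eps = 100, 2000, 0.2
    k = int(np.ceil(8 * np.log(n) / eps**2))             # k = O(eps^-2 log n); constant 8 chosen arbitrarily

    S = rng.standard_normal((n, d))                       # n data points in R^d (illustrative data)
    A = rng.normal(0.0, np.sqrt(1.0 / k), size=(k, d))    # i.i.d. N(0, 1/k) entries
    T = S @ A.T                                           # row i is the embedded point A x_i

    # check (1-eps)||x-y||^2 <= ||Ax-Ay||^2 <= (1+eps)||x-y||^2 for all pairs
    violated = 0
    for i in range(n):
        for j in range(i + 1, n):
            orig = np.sum((S[i] - S[j])**2)
            emb = np.sum((T[i] - T[j])**2)
            if not (1 - eps) * orig <= emb <= (1 + eps) * orig:
                violated += 1
    print(f"k = {k}, violated pairs: {violated} out of {n * (n - 1) // 2}")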

Recall that a Gaussian distribution [math]\displaystyle{ \mathcal{N}(\mu,\sigma^2) }[/math] is specified by its mean [math]\displaystyle{ \mu }[/math] and standard deviation [math]\displaystyle{ \sigma }[/math] such that for a random variable [math]\displaystyle{ X }[/math] distributed as [math]\displaystyle{ \mathcal{N}(\mu,\sigma^2) }[/math], we have

[math]\displaystyle{ \mathbf{E}[X]=\mu }[/math] and [math]\displaystyle{ \mathbf{Var}[X]=\sigma^2 }[/math],

and the probability density function is given by [math]\displaystyle{ p(x)=\frac{1}{\sqrt{2\pi\sigma^2}}\mathrm{e}^{-\frac{(x-\mu)^2}{2\sigma^2}} }[/math], therefore

[math]\displaystyle{ \Pr[X\le t]=\int_{-\infty}^t\frac{1}{\sqrt{2\pi\sigma^2}}\mathrm{e}^{-\frac{(x-\mu)^2}{2\sigma^2}}\,\mathrm{d}x }[/math].

Fix any two points [math]\displaystyle{ x,y }[/math] among the [math]\displaystyle{ n }[/math] points in [math]\displaystyle{ S\subset \mathbf{R}^d }[/math]. If we can show that

[math]\displaystyle{ \Pr\left[(1-\epsilon)\|x-y\|^2\le\|Ax-Ay\|^2\le(1+\epsilon)\|x-y\|^2\right]\ge 1-\frac{1}{n^3} }[/math],

then by the union bound, the following event

[math]\displaystyle{ \forall x,y\in S,\quad (1-\epsilon)\|x-y\|^2\le\|Ax-Ay\|^2\le(1+\epsilon)\|x-y\|^2 }[/math]

holds with probability at least [math]\displaystyle{ 1-\binom{n}{2}\cdot\frac{1}{n^3}\ge 1-\frac{1}{2n}=1-O(1/n) }[/math], since there are [math]\displaystyle{ \binom{n}{2}\lt n^2/2 }[/math] pairs of points to apply the union bound over. The Johnson-Lindenstrauss theorem follows.

Furthermore, dividing both sides of the inequalities [math]\displaystyle{ (1-\epsilon)\|x-y\|^2\le\|Ax-Ay\|^2\le(1+\epsilon)\|x-y\|^2 }[/math] by the factor [math]\displaystyle{ \|x-y\|^2 }[/math] gives us

[math]\displaystyle{ (1-\epsilon)\le\frac{\|Ax-Ay\|^2}{\|x-y\|^2}=\left\|A\frac{x-y}{\|x-y\|}\right\|^2\le(1+\epsilon) }[/math],

where [math]\displaystyle{ \frac{x-y}{\|x-y\|} }[/math] is a unit vector.

Therefore, the Johnson-Lindenstrauss theorem is proved once the following theorem on the unit vector is proved.

Theorem (JLT on unit vector)
For any positive [math]\displaystyle{ \epsilon,\delta\lt 1/2 }[/math] there is a positive integer [math]\displaystyle{ k=O\left(\epsilon^{-2}\log \frac{1}{\delta}\right) }[/math] such that for the random matrix [math]\displaystyle{ A\in\mathbf{R}^{k\times d} }[/math] with each entry drawn independently from the Gaussian distribution [math]\displaystyle{ \mathcal{N}(0,1/k) }[/math], for any unit vector [math]\displaystyle{ u\in\mathbf{R}^d }[/math] with [math]\displaystyle{ \|u\|=1 }[/math],
[math]\displaystyle{ \Pr\left[\left|\|Au\|^2-1\right|\gt \epsilon\right]\lt \delta }[/math].
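A Monte Carlo sanity check of this statement (an illustrative Python sketch, not part of the original notes; the parameters are arbitrary): fix a unit vector [math]\displaystyle{ u }[/math], sample many independent matrices [math]\displaystyle{ A }[/math], and estimate how often [math]\displaystyle{ \|Au\|^2 }[/math] deviates from 1 by more than [math]\displaystyle{ \epsilon }[/math].

    import numpy as np

    rng = np.random.default_rng(1)
    d, k, eps, trials = 500, 400, 0.2, 500

    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                                # a fixed unit vector

    deviations = 0
    for _ in range(trials):
        A = rng.normal(0.0, np.sqrt(1.0 / k), size=(k, d))
        if abs(np.sum((A @ u)**2) - 1.0) > eps:
            deviations += 1
    print("empirical Pr[ | ||Au||^2 - 1 | > eps ] =", deviations / trials)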

For any [math]\displaystyle{ u\in\mathbf{R}^d }[/math], we have [math]\displaystyle{ \|Au\|^2=\sum_{i=1}^k(Au)_i^2 }[/math], where the [math]\displaystyle{ (Au)_i }[/math]'s are independent of each other, because each

[math]\displaystyle{ (Au)_i=\langle A_{i\cdot},u\rangle=\sum_{j=1}^dA_{ij}u_j }[/math]

is determined by a distinct row vector [math]\displaystyle{ A_{i\cdot} }[/math] of the matrix [math]\displaystyle{ A }[/math] with independent entries.

Moreover, each entry [math]\displaystyle{ A_{ij}\sim\mathcal{N}(0,1/k) }[/math] independently. Recall that for any two independent Gaussian random variables [math]\displaystyle{ X_1\sim \mathcal{N}(\mu_1,\sigma_1^2) }[/math] and [math]\displaystyle{ X_2\sim \mathcal{N}(\mu_2,\sigma_2^2) }[/math], their weighted sum [math]\displaystyle{ aX_1+bX_2 }[/math] follows the Gaussian distribution [math]\displaystyle{ \mathcal{N}(a\mu_1+b\mu_2,a^2\sigma_1^2+b^2\sigma_2^2) }[/math]. Therefore, [math]\displaystyle{ (Au)_i=\langle A_{i\cdot},u\rangle=\sum_{j=1}^dA_{ij}u_j }[/math] follows the Gaussian distribution

[math]\displaystyle{ (Au)_i\sim\mathcal{N}\left(0,\sum_{j=1}^d\frac{u_j^2}{k}\right) }[/math],

which is precisely [math]\displaystyle{ \mathcal{N}\left(0,\frac{1}{k}\right) }[/math] if [math]\displaystyle{ u\in\mathbf{R}^d }[/math] is a unit vector, i.e. [math]\displaystyle{ \|u\|^2=\sum_{j=1}^du_j^2=1 }[/math].

In summary, the random variable [math]\displaystyle{ \|Au\|^2 }[/math] that we are interested in can be represented in the sum-of-squares form

[math]\displaystyle{ \|Au\|^2=\sum_{i=1}^kY_i^2 }[/math],

where each [math]\displaystyle{ Y_i=(Au)_i }[/math] independently follows the Gaussian distribution [math]\displaystyle{ \mathcal{N}\left(0,\frac{1}{k}\right) }[/math].
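This distributional claim is easy to verify empirically. The sketch below (Python, illustrative only, with arbitrarily chosen parameters) checks that for a unit vector [math]\displaystyle{ u }[/math], a coordinate [math]\displaystyle{ (Au)_i=\langle A_{i\cdot},u\rangle }[/math] indeed has mean about 0 and variance about [math]\displaystyle{ 1/k }[/math].

    import numpy as np

    rng = np.random.default_rng(2)
    d, k, trials = 300, 64, 5000

    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                                # unit vector

    coord = np.empty(trials)
    for t in range(trials):
        row = rng.normal(0.0, np.sqrt(1.0 / k), size=d)   # one row of A, entries ~ N(0, 1/k)
        coord[t] = row @ u                                # the coordinate (Au)_i
    print("mean of (Au)_i:", coord.mean(), "(expect about 0)")
    print("var  of (Au)_i:", coord.var(), "(expect about 1/k =", 1.0 / k, ")")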

Concentration of [math]\displaystyle{ \chi^2 }[/math]-distribution

By the above argument, the Johnson-Lindenstrauss theorem is proved by the following concentration inequality for the sum of the squares of [math]\displaystyle{ k }[/math] Gaussian random variables.

Chernoff bound for sum-of-squares of Gaussian distributions
For any positive [math]\displaystyle{ \epsilon,\delta\lt 1/2 }[/math] there is a positive integer [math]\displaystyle{ k=O\left(\epsilon^{-2}\log \frac{1}{\delta}\right) }[/math] such that for i.i.d. Gaussian random variables [math]\displaystyle{ Y_1,Y_2,\ldots,Y_k\sim\mathcal{N}(0,1/k) }[/math],
[math]\displaystyle{ \Pr\left[\left|\sum_{i=1}^kY_i^2-1\right|\gt \epsilon\right]\lt \delta }[/math].

First, we see that this is indeed a concentration inequality that bounds the deviation of a random variable from its mean.

For each [math]\displaystyle{ Y_i\sim\mathcal{N}\left(0,\frac{1}{k}\right) }[/math], we know that [math]\displaystyle{ \mathbf{E}[Y_i]=0 }[/math] and [math]\displaystyle{ \mathbf{Var}[Y_i]=\mathbf{E}[Y_i^2]-\mathbf{E}[Y_i]^2=\frac{1}{k} }[/math], thus

[math]\displaystyle{ \mathbf{E}[Y_i^2]=\mathbf{Var}[Y_i]+\mathbf{E}[Y_i]^2=\frac{1}{k} }[/math].

By linearity of expectation, it holds that

[math]\displaystyle{ \mathbf{E}\left[\sum_{i=1}^kY_i^2\right]=\sum_{i=1}^k\mathbf{E}\left[Y_i^2\right]=1 }[/math].

We prove an equivalent concentration bound stated for the [math]\displaystyle{ \chi^2 }[/math]-distribution (the distribution of the sum of the squares of [math]\displaystyle{ k }[/math] independent standard Gaussian random variables).

Chernoff bound for the [math]\displaystyle{ \chi^2 }[/math]-distribution
For i.i.d. standard Gaussian random variables [math]\displaystyle{ X_1,X_2,\ldots,X_k\sim\mathcal{N}(0,1) }[/math], for [math]\displaystyle{ 0\lt \epsilon\lt 1 }[/math],
  • [math]\displaystyle{ \Pr\left[\sum_{i=1}^kX_i^2\gt (1+\epsilon)k\right]\lt \mathrm{e}^{-\epsilon^2k/8} }[/math],
  • [math]\displaystyle{ \Pr\left[\sum_{i=1}^kX_i^2\lt (1-\epsilon)k\right]\lt \mathrm{e}^{-\epsilon^2k/8} }[/math].

Note that this indeed implies the above concentration bound: set [math]\displaystyle{ X_i=\sqrt{k}\cdot Y_i }[/math] so that [math]\displaystyle{ X_i\sim\mathcal{N}(0,1) }[/math], and choose [math]\displaystyle{ k }[/math] large enough that [math]\displaystyle{ 2\mathrm{e}^{-\epsilon^2k/8}\le\delta }[/math], i.e. [math]\displaystyle{ k=O\left(\epsilon^{-2}\log \frac{1}{\delta}\right) }[/math].
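Before the proof, here is a quick empirical check of these tail bounds (an illustrative Python sketch, not part of the original notes; parameters are arbitrary).

    import numpy as np

    rng = np.random.default_rng(3)
    k, eps, trials = 200, 0.3, 50000

    sums = np.sum(rng.standard_normal((trials, k))**2, axis=1)   # chi^2 samples with k degrees of freedom
    bound = np.exp(-eps**2 * k / 8)
    print("empirical upper tail:", np.mean(sums > (1 + eps) * k), " bound:", bound)
    print("empirical lower tail:", np.mean(sums < (1 - eps) * k), " bound:", bound)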

Proof.

We first prove the upper tail. Let [math]\displaystyle{ \lambda\gt 0 }[/math] be a parameter to be determined.

[math]\displaystyle{ \begin{align} \Pr\left[\sum_{i=1}^kX_i^2\gt (1+\epsilon)k\right] &=\Pr\left[\mathrm{e}^{\lambda \sum_{i=1}^kX_i^2}\gt \mathrm{e}^{(1+\epsilon)\lambda k}\right]\\ &\lt \mathrm{e}^{-(1+\epsilon)\lambda k}\cdot \mathbf{E}\left[\mathrm{e}^{\lambda \sum_{i=1}^kX_i^2}\right] && \text{(Markov's inequality)}\\ &= \mathrm{e}^{-(1+\epsilon)\lambda k}\cdot \prod_{i=1}^k \mathbf{E}\left[\mathrm{e}^{\lambda X_i^2}\right]. && \text{(independence)} \end{align} }[/math]
Proposition
For standard Gaussian [math]\displaystyle{ X\sim\mathcal{N}(0,1) }[/math] and any [math]\displaystyle{ \lambda\lt 1/2 }[/math], [math]\displaystyle{ \mathbf{E}\left[\mathrm{e}^{\lambda X^2}\right]=\frac{1}{\sqrt{1-2\lambda}} }[/math].
Proof.
[math]\displaystyle{ \begin{align} \mathbf{E}\left[\mathrm{e}^{\lambda X^2}\right] &= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\mathrm{e}^{\lambda x^2}\mathrm{e}^{-x^2/2}\,\mathrm{d}x\\ &= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\mathrm{e}^{-(1-2\lambda)x^2/2}\,\mathrm{d}x\\ &= \frac{1}{\sqrt{1-2\lambda}}\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\mathrm{e}^{-y^2/2}\,\mathrm{d}y &&(\text{set }y=\sqrt{1-2\lambda}x)\\ &=\frac{1}{\sqrt{1-2\lambda}}. \end{align} }[/math]

The last equation is because [math]\displaystyle{ \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\mathrm{e}^{-y^2/2}\,\mathrm{d}y }[/math] is the total probability mass of a standard Gaussian distribution (which is obviously 1).

[math]\displaystyle{ \square }[/math]
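The proposition can also be checked numerically (an illustrative Python sketch, not part of the original notes); any [math]\displaystyle{ \lambda\lt 1/2 }[/math] works.

    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.standard_normal(1_000_000)
    for lam in (0.05, 0.1, 0.2):
        empirical = np.mean(np.exp(lam * X**2))           # Monte Carlo estimate of E[e^{lambda X^2}]
        exact = 1.0 / np.sqrt(1.0 - 2.0 * lam)
        print(f"lambda = {lam}: empirical {empirical:.4f}, exact {exact:.4f}")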

Then continue the above calculation:

[math]\displaystyle{ \begin{align} \Pr\left[\sum_{i=1}^kX_i^2\gt (1+\epsilon)k\right] &\lt \mathrm{e}^{-(1+\epsilon)\lambda k}\cdot \left(\frac{1}{\sqrt{1-2\lambda}}\right)^k\\ &= \mathrm{e}^{-\epsilon\lambda k}\cdot \left(\frac{\mathrm{e}^{-\lambda}}{\sqrt{1-2\lambda}}\right)^k\\ &\le \mathrm{e}^{-\epsilon\lambda k + 2\lambda^2 k} && (\text{for }\lambda\lt 1/4)\\ &= \mathrm{e}^{-\epsilon^2 k/8}, && (\text{by setting the stationary point }\lambda=\epsilon/4) \end{align} }[/math]

where the last equality holds because at [math]\displaystyle{ \lambda=\epsilon/4 }[/math] the exponent is [math]\displaystyle{ -\epsilon\lambda k+2\lambda^2k=-\frac{\epsilon^2 k}{4}+\frac{\epsilon^2 k}{8}=-\frac{\epsilon^2 k}{8} }[/math].

This proves the upper tail. The lower tail can be proved symmetrically by optimizing over a parameter [math]\displaystyle{ \lambda\lt 0 }[/math].

[math]\displaystyle{ \square }[/math]

As we argued above, this finishes the proof of the Johnson-Lindenstrauss Theorem.

Nearest Neighbor Search (NNS)

Nearest Neighbor Search (NNS)
  • Data: a set [math]\displaystyle{ S }[/math] of [math]\displaystyle{ n }[/math] points [math]\displaystyle{ y_1,y_2,\ldots,y_n\in X }[/math] from a metric space [math]\displaystyle{ (X,\mathrm{dist}) }[/math];
  • Query: a point [math]\displaystyle{ x\in X }[/math];
  • Answer: a nearest neighbor of the query point [math]\displaystyle{ x }[/math] among all data points, i.e. a data point [math]\displaystyle{ y_{i^*}\in S }[/math] such that [math]\displaystyle{ \mathrm{dist}(x,y_{i^*})=\min_{y_i\in S}\mathrm{dist}(x,y_i) }[/math].
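The naive solution is a linear scan over all data points, which costs [math]\displaystyle{ O(n) }[/math] distance evaluations per query. A minimal sketch in Python (not part of the original notes), with Euclidean distance as an illustrative choice of metric:

    import numpy as np

    def nearest_neighbor(data, x):
        # Return a data point with the minimum distance to the query x (linear scan).
        best, best_dist = None, float("inf")
        for y in data:
            dist = np.linalg.norm(x - y)
            if dist < best_dist:
                best, best_dist = y, dist
        return best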


[math]\displaystyle{ c }[/math]-ANN (Approximate Nearest Neighbor)
[math]\displaystyle{ c\gt 1 }[/math] is the approximation ratio.
  • Data: a set [math]\displaystyle{ S }[/math] of [math]\displaystyle{ n }[/math] points [math]\displaystyle{ y_1,y_2,\ldots,y_n\in X }[/math] from a metric space [math]\displaystyle{ (X,\mathrm{dist}) }[/math];
  • Query: a point [math]\displaystyle{ x\in X }[/math];
  • Answer: a [math]\displaystyle{ c }[/math]-approximate nearest neighbor of the query point [math]\displaystyle{ x }[/math] among all data points, i.e. a data point [math]\displaystyle{ y_{i^*}\in S }[/math] such that [math]\displaystyle{ \mathrm{dist}(x,y_{i^*})\le c\, \min_{y_i\in S}\mathrm{dist}(x,y_i) }[/math].


[math]\displaystyle{ (c,r) }[/math]-ANN (Approximate Near Neighbor)
[math]\displaystyle{ c\gt 1 }[/math] is the approximation ratio; [math]\displaystyle{ r\gt 0 }[/math] is a radius;
  • Data: a set [math]\displaystyle{ S }[/math] of [math]\displaystyle{ n }[/math] points [math]\displaystyle{ y_1,y_2,\ldots,y_n\in X }[/math] from a metric space [math]\displaystyle{ (X,\mathrm{dist}) }[/math];
  • Query: a point [math]\displaystyle{ x\in X }[/math];
  • Answer: [math]\displaystyle{ \begin{cases} \text{a data point }y_{i^*}\in S\text{ such that }\mathrm{dist}(x,y_{i^*})\le c\cdot r & \text{if }\exists y_i\in S, \mathrm{dist}(x,y_{i})\le r,\\ \bot & \text{if }\forall y_i\in S, \mathrm{dist}(x,y_{i})\gt c\cdot r,\\ \text{arbitrary} & \text{otherwise}. \end{cases} }[/math]
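As a reference for these semantics only, a naive linear-scan implementation in Python (an illustrative sketch, not part of the original notes; real [math]\displaystyle{ (c,r) }[/math]-ANN data structures answer queries in sublinear time):

    import numpy as np

    def approx_near_neighbor(data, x, c, r):
        # A valid (c,r)-ANN answer by brute force: return some point within c*r if the
        # nearest data point is within c*r, and None (standing for the answer ⊥) otherwise.
        dists = [np.linalg.norm(x - y) for y in data]
        i = int(np.argmin(dists))
        return data[i] if dists[i] <= c * r else None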
Theorem

For any instance [math]\displaystyle{ S=\{y_1,y_2,\ldots,y_n\} }[/math] of the nearest neighbor problem, let [math]\displaystyle{ D_{\min}\triangleq\min_{y_i,y_j\in S}\mathrm{dist}(y_i,y_j) }[/math], [math]\displaystyle{ D_{\max}\triangleq\max_{y_i,y_j\in S}\mathrm{dist}(y_i,y_j) }[/math] and [math]\displaystyle{ R\triangleq \frac{D_{\max}}{D_{\min}} }[/math]. If for any [math]\displaystyle{ r\gt 0 }[/math] there always exists a data structure for the [math]\displaystyle{ (c,r) }[/math]-ANN problem such that:

  • the space cost of the data structure is at most [math]\displaystyle{ s }[/math];
  • each query is answered within time cost [math]\displaystyle{ t }[/math];
  • and the answer is correct with probability [math]\displaystyle{ 1-\delta }[/math];

then there exists a data structure for the [math]\displaystyle{ c }[/math]-ANN problem such that:

  • the space cost of the data structures is at most [math]\displaystyle{ O(s\log_c R) }[/math];
  • each query is answered within time cost [math]\displaystyle{ O(t\log\log_c R) }[/math];
  • and the answer is correct with probability [math]\displaystyle{ 1-O(\delta\log\log_c R) }[/math].
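The idea behind this reduction is to build one [math]\displaystyle{ (c,r) }[/math]-ANN data structure for each radius in a geometric sequence between [math]\displaystyle{ D_{\min} }[/math] and [math]\displaystyle{ D_{\max} }[/math] (about [math]\displaystyle{ \log_c R }[/math] levels, which accounts for the blow-up in space) and to answer a [math]\displaystyle{ c }[/math]-ANN query by binary search over these levels (which accounts for the [math]\displaystyle{ \log\log_c R }[/math] factors in time and error probability). Below is a simplified sketch in Python (illustrative only, not part of the original notes): it reuses the naive approx_near_neighbor above as a stand-in for a real [math]\displaystyle{ (c,r) }[/math]-ANN structure, and it glosses over the probabilistic error handling and the bookkeeping of the approximation ratio in the actual reduction.

    import numpy as np

    def build_radii(data, c):
        # Geometric sequence D_min, c*D_min, c^2*D_min, ... covering [D_min, D_max],
        # i.e. about log_c(R) + 1 levels where R = D_max / D_min.
        dists = [np.linalg.norm(y - z) for i, y in enumerate(data) for z in data[i + 1:]]
        d_min, d_max = min(dists), max(dists)
        radii = [d_min]
        while radii[-1] < d_max:
            radii.append(radii[-1] * c)
        return radii

    def c_ann_query(data, x, c, radii):
        # Binary search for the smallest radius level whose (c,r)-ANN query returns a
        # data point; that point is reported as the approximate nearest neighbor.
        lo, hi, answer = 0, len(radii) - 1, None
        while lo <= hi:
            mid = (lo + hi) // 2
            y = approx_near_neighbor(data, x, c, radii[mid])   # one (c,r)-ANN query
            if y is None:
                lo = mid + 1              # answer ⊥ certifies no data point within radii[mid]
            else:
                answer, hi = y, mid - 1   # found a point within c * radii[mid]; try smaller radii
        return answer                     # may be None if x is very far from every data point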

Locality-Sensitive Hashing (LSH)