Advanced Algorithms (高级算法, Fall 2021): Probability Basics and Problem Set 4

=Probability Space=
The axiomatic foundation of probability theory was laid by [http://en.wikipedia.org/wiki/Andrey_Kolmogorov Kolmogorov], one of the greatest mathematicians of the 20th century, who advanced several very different fields of mathematics.


{{Theorem|Definition (Probability Space)|
A '''probability space''' is a triple <math>(\Omega,\Sigma,\Pr)</math>.
*<math>\Omega</math> is a set, called the '''sample space'''.
*<math>\Sigma\subseteq 2^{\Omega}</math> is the set of all '''events''', satisfying:
*:(K1). <math>\Omega\in\Sigma</math> and <math>\emptyset\in\Sigma</math>. (Existence of the ''certain'' event and the ''impossible'' event)
*:(K2). If <math>A,B\in\Sigma</math>, then <math>A\cap B, A\cup B, A-B\in\Sigma</math>. (Intersection, union, and difference of two events are events).
* A '''probability measure''' <math>\Pr:\Sigma\rightarrow\mathbb{R}</math> is a function that maps each event to a nonnegative real number, satisfying
*:(K3). <math>\Pr(\Omega)=1</math>.
*:(K4). For any ''disjoint'' events  <math>A</math> and <math>B</math> (which means <math>A\cap B=\emptyset</math>), it holds that <math>\Pr(A\cup B)=\Pr(A)+\Pr(B)</math>.
*:(K5*). For any decreasing sequence of events <math>A_1\supset A_2\supset \cdots\supset A_n\supset\cdots</math> with <math>\bigcap_n A_n=\emptyset</math>, it holds that <math>\lim_{n\rightarrow \infty}\Pr(A_n)=0</math>.
}}


;Remark
* In general, the set <math>\Omega</math> may be continuous, but we only consider '''discrete''' probability in this lecture, thus we assume that <math>\Omega</math> is either finite or countably infinite.
* Sometimes it is convenient to assume <math>\Sigma=2^{\Omega}</math>, i.e. the set of events consists of all subsets of <math>\Omega</math>. But in general, a probability space is well-defined by any <math>\Sigma</math> satisfying (K1) and (K2). Such a <math>\Sigma</math> is called a <math>\sigma</math>-algebra defined on <math>\Omega</math>.
* The last axiom (K5*) is redundant if <math>\Sigma</math> is finite, thus it is only essential when there are infinitely many events. The role of axiom (K5*) in probability theory is like [http://en.wikipedia.org/wiki/Zorn's_lemma Zorn's Lemma] (or equivalently the [http://en.wikipedia.org/wiki/Axiom_of_choice Axiom of Choice]) in axiomatic set theory.
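;Example
: As a minimal illustration of the definition, take a fair six-sided die: <math>\Omega=\{1,2,\ldots,6\}</math>, <math>\Sigma=2^{\Omega}</math>, and <math>\Pr(A)=|A|/6</math> for every event <math>A\in\Sigma</math>. Axioms (K1)-(K4) are immediate to check, and (K5*) imposes nothing extra since <math>\Sigma</math> is finite. For instance, the event "the outcome is even" is <math>\{2,4,6\}</math> and has probability <math>1/2</math>.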


Useful laws for probability can be deduced from the ''axioms'' (K1)-(K5*).
{{Theorem|Proposition|
# Let <math>\bar{A}=\Omega\setminus A</math>. It holds that <math>\Pr(\bar{A})=1-\Pr(A)</math>.
# If <math>A\subseteq B</math> then <math>\Pr(A)\le\Pr(B)</math>.
}}
{{Proof|
# The events <math>\bar{A}</math> and <math>A</math> are disjoint and <math>\bar{A}\cup A=\Omega</math>. By Axioms (K4) and (K3), <math>\Pr(\bar{A})+\Pr(A)=\Pr(\Omega)=1</math>.
# The events <math>A</math> and <math>B\setminus A</math> are disjoint and <math>A\cup(B\setminus A)=B</math> since <math>A\subseteq B</math>. Due to Axiom (K4), <math>\Pr(A)+\Pr(B\setminus A)=\Pr(B)</math>, thus <math>\Pr(A)\le\Pr(B)</math>.
}}


;Notation
An event <math>A\subseteq\Omega</math> can be represented as <math>A=\{a\in\Omega\mid \mathcal{E}(a)\}</math> with a predicate <math>\mathcal{E}</math>.


The predicate notation of probability is
:<math>\Pr[\mathcal{E}]=\Pr(\{a\in\Omega\mid \mathcal{E}(a)\})</math>.
We use the two notations interchangeably.
 
==Union bound==
A very useful inequality in probability theory is '''Boole's inequality''', mostly known by its nickname, the '''union bound'''.
{{Theorem
|Theorem (union bound)|
:Let <math>A_1, A_2, \ldots, A_n</math> be <math>n</math> events. Then
::<math>\begin{align}
\Pr\left(\bigcup_{1\le i\le n}A_i\right)
&\le
\sum_{i=1}^n\Pr(A_i).
\end{align}</math>
}}
{{Proof|
Let <math>B_1=A_1</math> and  for <math>i>1</math>, let <math>B_i=A_i\setminus \left(\bigcup_{j<i}A_j\right)</math>.
We have <math>\bigcup_{1\le i\le n} A_i=\bigcup_{1\le i\le n} B_i</math>.
 
On the other hand, <math>B_1,B_2,\ldots,B_n</math> are disjoint, which, by the additivity of probability over disjoint events, implies that
:<math>\Pr\left(\bigcup_{1\le i\le n}A_i\right)=\Pr\left(\bigcup_{1\le i\le n}B_i\right)=\sum_{i=1}^n\Pr(B_i)</math>.
Also note that <math>B_i\subseteq A_i</math> for all <math>1\le i\le n</math>, thus <math>\Pr(B_i)\le \Pr(A_i)</math> for all <math>1\le i\le n</math>. The theorem follows.
}}
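;Example
: A typical application (a hypothetical scenario, just for illustration): suppose a randomized algorithm makes <math>100</math> internal random choices, and each choice causes a failure with probability at most <math>10^{-4}</math>. Letting <math>A_i</math> be the event that the <math>i</math>-th choice causes a failure, the union bound gives <math>\Pr\left(\bigcup_{i}A_i\right)\le\sum_{i=1}^{100}\Pr(A_i)\le 100\cdot 10^{-4}=0.01</math>, with no assumption on how the <math>A_i</math> depend on each other; this is exactly what makes the union bound so convenient.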
 
The union bound is a special case of the '''Boole-Bonferroni inequality'''.
{{Theorem
|Theorem (Boole-Bonferroni inequality)|
:Let <math>A_1, A_2, \ldots, A_n</math> be <math>n</math> events. For <math>1\le k\le n</math>, define <math>S_k=\sum_{i_1<i_2<\cdots<i_k}\Pr\left(\bigcap_{j=1}^k A_{i_j}\right)</math>.
 
:Then for '''''odd''''' <math>m</math> in <math>\{1,2,\ldots, n\}</math>:
::<math>\Pr\left(\bigcup_{1\le i\le n}A_i\right)\le \sum_{k=1}^m (-1)^{k-1} S_k</math>;
:and for '''''even''''' <math>m</math> in <math>\{1,2,\ldots, n\}</math>:
::<math>\Pr\left(\bigcup_{1\le i\le n}A_i\right)\ge \sum_{k=1}^m (-1)^{k-1} S_k</math>.
}}
The inequality follows from the well-known '''inclusion-exclusion principle''', stated as follows, as well as the fact that the quantity <math>S_k</math> is ''unimodal'' in <math>k</math>.
{{Theorem
|Principle of Inclusion-Exclusion|
:Let <math>A_1, A_2, \ldots, A_n</math> be <math>n</math> events. Then
::<math>\Pr\left(\bigcup_{1\le i\le n}A_i\right)=\sum_{k=1}^n (-1)^{k-1} S_k,</math>
:where <math>S_k=\sum_{i_1<i_2<\cdots<i_k}\Pr\left(\bigcap_{j=1}^k A_{i_j}\right)</math>.
}}
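;Example
: For two events the principle reads <math>\Pr(A_1\cup A_2)=\Pr(A_1)+\Pr(A_2)-\Pr(A_1\cap A_2)</math>. For instance, the probability that a uniformly random card from a standard 52-card deck is a spade or an ace is <math>\frac{13}{52}+\frac{4}{52}-\frac{1}{52}=\frac{16}{52}</math>, where the subtracted term corrects for the ace of spades being counted twice.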
 
= Conditional Probability =
In probability theory, the word "condition" is a verb. "Conditioning on the event ..." means that it is assumed that the event occurs.
 
{{Theorem
|Definition (conditional probability)|
:The '''conditional probability''' that event <math>A</math> occurs given that event <math>B</math> occurs is
::<math>
\Pr[A\mid B]=\frac{\Pr[A\wedge B]}{\Pr[B]}.
</math>
}}
 
The conditional probability is well-defined only if <math>\Pr[B]\neq0</math>.
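;Example
: Roll a fair six-sided die, and let <math>A</math> be the event "the outcome is even" and <math>B</math> the event "the outcome is at least <math>4</math>". Then <math>\Pr[A\mid B]=\frac{\Pr[A\wedge B]}{\Pr[B]}=\frac{2/6}{3/6}=\frac{2}{3}</math>: conditioning on <math>B</math> effectively shrinks the sample space to <math>\{4,5,6\}</math>, of which two outcomes are even.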
 
== Law of total probability ==
The following fact is known as the law of total probability. It computes the probability by averaging over all possible cases.
{{Theorem
|Theorem (law of total probability)|
:Let <math>B_1,B_2,\ldots,B_n</math> be ''mutually disjoint'' events whose union <math>\bigcup_{i=1}^n B_i=\Omega</math> is the entire sample space.
:Then for any event <math>A</math>,
::<math>
\Pr[A]=\sum_{i=1}^n\Pr[A\wedge B_i]=\sum_{i=1}^n\Pr[A\mid B_i]\cdot\Pr[B_i].
</math>
}}
{{Proof| Since <math>B_1,B_2,\ldots, B_n</math> are mutually disjoint and <math>\bigcup_{i=1}^n B_i=\Omega</math>, the events <math>A\wedge B_1, A\wedge B_2,\ldots, A\wedge B_n</math> are also mutually disjoint, and <math>A=\bigcup_{i=1}^n\left(A\cap B_i\right)</math>. Then, by the additivity of probability over disjoint events and the definition of conditional probability, we have
:<math>
\Pr[A]=\sum_{i=1}^n\Pr[A\wedge B_i]=\sum_{i=1}^n\Pr[A\mid B_i]\cdot\Pr[B_i].
</math>
}}
 
The law of total probability provides us with a standard tool for breaking a probability into sub-cases. Sometimes this will help the analysis.
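;Example
: Suppose a box contains two coins, one fair and one with HEADS on both sides, and we pick one uniformly at random and flip it. Let <math>B_1</math> (resp. <math>B_2</math>) be the event that the fair (resp. two-headed) coin is picked, and let <math>A</math> be the event of seeing HEADS. The law of total probability gives <math>\Pr[A]=\Pr[A\mid B_1]\Pr[B_1]+\Pr[A\mid B_2]\Pr[B_2]=\frac{1}{2}\cdot\frac{1}{2}+1\cdot\frac{1}{2}=\frac{3}{4}</math>.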
 
== "The Chain Rule" ==
By the definition of conditional probability, <math>\Pr[A\mid B]=\frac{\Pr[A\wedge B]}{\Pr[B]}</math>. Thus, <math>\Pr[A\wedge B] =\Pr[B]\cdot\Pr[A\mid B]</math>. This suggests that we can compute the probability of the AND of events via conditional probabilities. Formally, we have the following theorem:
{{Theorem|Theorem|
:Let <math>A_1, A_2, \ldots, A_n</math>  be any <math>n</math> events. Then
::<math>\begin{align}
\Pr\left[\bigwedge_{i=1}^n A_i\right]
&=
\prod_{k=1}^n\Pr\left[A_k \mid \bigwedge_{i<k} A_i\right].
\end{align}</math>
}}
{{Proof|It holds that <math>\Pr[A\wedge B] =\Pr[B]\cdot\Pr[A\mid B]</math>. Thus, let <math>A=A_n</math> and <math>B=A_1\wedge A_2\wedge\cdots\wedge A_{n-1}</math>, then
:<math>\begin{align}
\Pr[A_1\wedge A_2\wedge\cdots\wedge A_n]
&=
\Pr[A_1\wedge A_2\wedge\cdots\wedge A_{n-1}]\cdot\Pr\left[A_n\mid \bigwedge_{i<n}A_i\right].
\end{align}</math>
Recursively applying this equation to <math>\Pr[A_1\wedge A_2\wedge\cdots\wedge A_{n-1}]</math> until only <math>A_1</math> remains proves the theorem.
}}
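;Example
: Draw three cards from a well-shuffled standard deck without replacement, and let <math>A_i</math> be the event that the <math>i</math>-th card is an ace. The chain rule gives <math>\Pr[A_1\wedge A_2\wedge A_3]=\Pr[A_1]\cdot\Pr[A_2\mid A_1]\cdot\Pr[A_3\mid A_1\wedge A_2]=\frac{4}{52}\cdot\frac{3}{51}\cdot\frac{2}{50}</math>, where each conditional probability is easy to read off, even though the underlying sample space of ordered deals is large.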
=Random Variable=
{{Theorem|Definition (random variable)|
:A random variable <math>X</math> on a sample space <math>\Omega</math> is a real-valued function <math>X:\Omega\rightarrow\mathbb{R}</math>. A random variable <math>X</math> is called a '''discrete''' random variable if its range is finite or countably infinite.
}}
For a random variable <math>X</math> and a real value <math>x\in\mathbb{R}</math>, we write "<math>X=x</math>" for the event <math>\{a\in\Omega\mid X(a)=x\}</math>, and denote the probability of the event by
:<math>\Pr[X=x]=\Pr(\{a\in\Omega\mid X(a)=x\})</math>.
The independence can also be defined for variables:
{{Theorem
|Definition (Independent variables)|
:Two random variables <math>X</math> and <math>Y</math> are '''independent''' if and only if
::<math>
\Pr[(X=x)\wedge(Y=y)]=\Pr[X=x]\cdot\Pr[Y=y]
</math>
:for all values <math>x</math> and <math>y</math>. Random variables <math>X_1, X_2, \ldots, X_n</math> are '''mutually independent''' if and only if, for any subset <math>I\subseteq\{1,2,\ldots,n\}</math> and any values <math>x_i</math>, where <math>i\in I</math>,
::<math>\begin{align}
\Pr\left[\bigwedge_{i\in I}(X_i=x_i)\right]
&=
\prod_{i\in I}\Pr[X_i=x_i].
\end{align}</math>
}}
Note that in probability theory, "mutual independence" is <font color="red">not</font> equivalent to "pairwise independence", which we will learn about in the future.
= Linearity of Expectation =
Let <math>X</math> be a discrete '''random variable'''.  The expectation of <math>X</math> is defined as follows.
{{Theorem
|Definition (Expectation)|
:The '''expectation''' of a discrete random variable <math>X</math>, denoted by <math>\mathbf{E}[X]</math>, is given by
::<math>\begin{align}
\mathbf{E}[X] &= \sum_{x}x\Pr[X=x],
\end{align}</math>
:where the summation is over all values <math>x</math> in the range of <math>X</math>.
}}
Perhaps the most useful property of expectation is its '''linearity'''.
{{Theorem
|Theorem (Linearity of Expectations)|
:For any discrete random variables <math>X_1, X_2, \ldots, X_n</math>, and any real constants <math>a_1, a_2, \ldots, a_n</math>,
::<math>\begin{align}
\mathbf{E}\left[\sum_{i=1}^n a_iX_i\right] &= \sum_{i=1}^n a_i\cdot\mathbf{E}[X_i].
\end{align}</math>
}}
{{Proof| By the definition of expectation, it is easy to verify (try to prove it yourself) that
for any discrete random variables <math>X</math> and <math>Y</math>, and any real constant <math>c</math>,
* <math>\mathbf{E}[X+Y]=\mathbf{E}[X]+\mathbf{E}[Y]</math>;
* <math>\mathbf{E}[cX]=c\mathbf{E}[X]</math>.
The theorem follows by induction.
}}
The linearity of expectation gives an easy way to compute the expectation of a random variable if the variable can be written as a sum.
;Example
: Suppose we have a biased coin whose probability of HEADS is <math>p</math>. Flipping the coin <math>n</math> times, what is the expected number of HEADS?
: It seems obvious that the answer must be <math>np</math>, but how can we prove it? We could certainly apply the definition of expectation and compute it by brute force. A more convenient way is to use the linearity of expectations: let <math>X_i</math> indicate whether the <math>i</math>-th flip is HEADS. Then <math>\mathbf{E}[X_i]=1\cdot p+0\cdot(1-p)=p</math>, and the total number of HEADS after <math>n</math> flips is <math>X=\sum_{i=1}^{n}X_i</math>. Applying the linearity of expectation, the expected number of HEADS is:
::<math>\mathbf{E}[X]=\mathbf{E}\left[\sum_{i=1}^{n}X_i\right]=\sum_{i=1}^{n}\mathbf{E}[X_i]=np</math>.
The real power of the linearity of expectations is that it does not require the random variables to be independent, thus can be applied to any set of random variables. For example:
:<math>\mathbf{E}\left[\alpha X+\beta X^2+\gamma X^3\right] = \alpha\cdot\mathbf{E}[X]+\beta\cdot\mathbf{E}\left[X^2\right]+\gamma\cdot\mathbf{E}\left[X^3\right].</math>
However, do not exaggerate this power!
* For an arbitrary function <math>f</math> (not necessarily linear), the equation <math>\mathbf{E}[f(X)]=f(\mathbf{E}[X])</math> does <font color="red">not</font> hold generally.
* For variances, the equation <math>var(X+Y)=var(X)+var(Y)</math> does <font color="red">not</font> hold without further assumption of the independence of <math>X</math> and <math>Y</math>.
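;Example
: Let <math>X</math> be the indicator of HEADS in a single fair coin flip, and take <math>f(x)=x^2</math>. Since <math>X^2=X</math> for a 0/1-valued variable, <math>\mathbf{E}[X^2]=\mathbf{E}[X]=\frac{1}{2}</math>, while <math>f(\mathbf{E}[X])=\frac{1}{4}</math>, so <math>\mathbf{E}[f(X)]\neq f(\mathbf{E}[X])</math>. Similarly, taking <math>Y=X</math> gives <math>var(X+Y)=var(2X)=4\,var(X)</math>, which differs from <math>var(X)+var(Y)=2\,var(X)</math> whenever <math>var(X)>0</math>.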
==Conditional Expectation ==
Conditional expectation can be defined accordingly:
{{Theorem
|Definition (conditional expectation)|
:For random variables <math>X</math> and <math>Y</math>,
::<math>
\mathbf{E}[X\mid Y=y]=\sum_{x}x\Pr[X=x\mid Y=y],
</math>
:where the summation is taken over the range of <math>X</math>.
}}
There is also a '''law of total expectation'''.
{{Theorem
|Theorem (law of total expectation)|
:Let <math>X</math> and <math>Y</math> be two random variables. Then
::<math>
\mathbf{E}[X]=\sum_{y}\mathbf{E}[X\mid Y=y]\cdot\Pr[Y=y].
</math>
}}
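;Example
: Flip a fair coin twice. Let <math>Y</math> indicate whether the first flip is HEADS and let <math>X</math> be the total number of HEADS. Then <math>\mathbf{E}[X\mid Y=1]=\frac{3}{2}</math> and <math>\mathbf{E}[X\mid Y=0]=\frac{1}{2}</math>, so the law of total expectation gives <math>\mathbf{E}[X]=\frac{3}{2}\cdot\frac{1}{2}+\frac{1}{2}\cdot\frac{1}{2}=1</math>, matching the direct computation by linearity of expectation.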
= <math>k</math>-wise  independence =
Recall the definition of independence between events:
{{Theorem
|Definition (Independent events)|
:Events <math>\mathcal{E}_1, \mathcal{E}_2, \ldots, \mathcal{E}_n</math> are '''mutually independent''' if, for any subset <math>I\subseteq\{1,2,\ldots,n\}</math>,
::<math>\begin{align}
\Pr\left[\bigwedge_{i\in I}\mathcal{E}_i\right]
&=
\prod_{i\in I}\Pr[\mathcal{E}_i].
\end{align}</math>
}}
Similarly, we can define independence between random variables:
{{Theorem
|Definition (Independent variables)|
:Random variables <math>X_1, X_2, \ldots, X_n</math> are '''mutually independent''' if, for any subset <math>I\subseteq\{1,2,\ldots,n\}</math> and any values <math>x_i</math>, where <math>i\in I</math>,
::<math>\begin{align}
\Pr\left[\bigwedge_{i\in I}(X_i=x_i)\right]
&=
\prod_{i\in I}\Pr[X_i=x_i].
\end{align}</math>
}}
Mutual independence is an ideal (and strong) condition of independence. A more limited notion of independence that is often used is '''k-wise independence'''.
{{Theorem
|Definition (k-wise Independence)|
:1. Events <math>\mathcal{E}_1, \mathcal{E}_2, \ldots, \mathcal{E}_n</math> are '''k-wise independent''' if, for any subset <math>I\subseteq\{1,2,\ldots,n\}</math> with <math>|I|\le k</math>,
:::<math>\begin{align}
\Pr\left[\bigwedge_{i\in I}\mathcal{E}_i\right]
&=
\prod_{i\in I}\Pr[\mathcal{E}_i].
\end{align}</math>
:2. Random variables <math>X_1, X_2, \ldots, X_n</math> are '''k-wise independent''' if, for any subset <math>I\subseteq\{1,2,\ldots,n\}</math> with <math>|I|\le k</math> and any values <math>x_i</math>, where <math>i\in I</math>,
:::<math>\begin{align}
\Pr\left[\bigwedge_{i\in I}(X_i=x_i)\right]
&=
\prod_{i\in I}\Pr[X_i=x_i].
\end{align}</math>
}}
A very common case is pairwise independence, i.e. the 2-wise independence.
{{Theorem
|Definition (pairwise Independent random variables)|
:Random variables <math>X_1, X_2, \ldots, X_n</math> are '''pairwise independent''' if, for any <math>X_i,X_j</math> with <math>i\neq j</math> and any values <math>a,b</math>,
:::<math>\begin{align}
\Pr\left[X_i=a\wedge X_j=b\right]
&=
\Pr[X_i=a]\cdot\Pr[X_j=b].
\end{align}</math>
}}
Note that the definition of k-wise independence is hereditary:
* If <math>X_1, X_2, \ldots, X_n</math> are k-wise independent, then they are also <math>\ell</math>-wise independent for any <math>\ell<k</math>.
* If <math>X_1, X_2, \ldots, X_n</math> are NOT k-wise independent, then they cannot be <math>\ell</math>-wise independent for any <math>\ell>k</math>.
== Pairwise Independent Bits ==
Suppose we have <math>m</math> mutually independent and uniform random bits <math>X_1,\ldots, X_m</math>. We are going to extract <math>n=2^m-1</math> pairwise independent bits from these <math>m</math> mutually independent bits.
Enumerate all the nonempty subsets of <math>\{1,2,\ldots,m\}</math> in some order. Let <math>S_j</math>  be the <math>j</math>th subset. Let
:<math>
Y_j=\bigoplus_{i\in S_j} X_i,
</math>
where <math>\oplus</math> is the exclusive-or, whose truth table is as follows.
:{|cellpadding="4" border="1"
|-
|<math>a</math>
|<math>b</math>
|<math>a</math><math>\oplus</math><math>b</math>
|-
| 0 || 0 ||align="center"| 0
|-
| 0 || 1 ||align="center"| 1
|-
| 1 || 0 ||align="center"| 1
|-
| 1 || 1 ||align="center"| 0
|}
There are <math>n=2^m-1</math> such <math>Y_j</math>, because there are <math>2^m-1</math> nonempty subsets of <math>\{1,2,\ldots,m\}</math>. An equivalent definition of <math>Y_j</math> is
:<math>Y_j=\left(\sum_{i\in S_j}X_i\right)\bmod 2</math>.
Sometimes, <math>Y_j</math> is called the '''parity''' of the bits in <math>S_j</math>.
We claim that <math>Y_j</math> are pairwise independent and uniform.
{{Theorem
|Theorem|
:For any <math>Y_j</math> and any <math>b\in\{0,1\}</math>,
::<math>\begin{align}
\Pr\left[Y_j=b\right]
&=
\frac{1}{2}.
\end{align}</math>
:For any <math>Y_j,Y_\ell</math> with <math>j\neq\ell</math> and any <math>a,b\in\{0,1\}</math>,
::<math>\begin{align}
\Pr\left[Y_j=a\wedge Y_\ell=b\right]
&=
\frac{1}{4}.
\end{align}</math>
}}
The proof is left as an exercise.
Therefore, we can extract exponentially many pairwise independent uniform random bits from a sequence of <math>m</math> mutually independent uniform random bits.
Note that <math>Y_j</math> are not 3-wise independent. For example, consider the subsets <math>S_1=\{1\},S_2=\{2\},S_3=\{1,2\}</math> and the corresponding random bits <math>Y_1,Y_2,Y_3</math>. Any two of <math>Y_1,Y_2,Y_3</math> would decide the value of the third one.
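The construction is small enough to verify by brute force. The following is a minimal Python sketch (the helper names and structure are our own, purely illustrative): for a small <math>m</math> it enumerates all <math>2^m</math> equally likely assignments of the source bits <math>X_1,\ldots,X_m</math>, forms every parity <math>Y_j</math>, and checks uniformity, pairwise independence, and the failure of 3-wise independence noted above.
<syntaxhighlight lang="python">
from itertools import combinations, product

m = 3
indices = range(1, m + 1)
# enumerate the n = 2^m - 1 nonempty subsets S_1, ..., S_n of {1, ..., m}
subsets = [set(c) for k in indices for c in combinations(indices, k)]

def parity(bits, S):
    # parity (XOR) of the bits indexed by the subset S
    return sum(bits[i] for i in S) % 2

# one outcome (Y_1, ..., Y_n) for each of the 2^m equally likely assignments of (X_1, ..., X_m)
outcomes = []
for xs in product([0, 1], repeat=m):
    bits = dict(zip(indices, xs))
    outcomes.append(tuple(parity(bits, S) for S in subsets))

n, total = len(subsets), len(outcomes)

# each Y_j is uniform: it equals 1 in exactly half of the assignments
for j in range(n):
    assert sum(o[j] for o in outcomes) * 2 == total

# each pair (Y_j, Y_l) is independent: every value pair (a, b) occurs with probability 1/4
for j, l in combinations(range(n), 2):
    for a, b in product([0, 1], repeat=2):
        assert sum(1 for o in outcomes if o[j] == a and o[l] == b) * 4 == total

# but they are not 3-wise independent: Y_{1}, Y_{2}, Y_{1,2} never take the values (0, 0, 1)
j1, j2, j3 = subsets.index({1}), subsets.index({2}), subsets.index({1, 2})
assert not any(o[j1] == 0 and o[j2] == 0 and o[j3] == 1 for o in outcomes)

print(n, "parities: uniform, pairwise independent, not 3-wise independent")
</syntaxhighlight>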

= Problem Set 4 =

*The solution to each problem must include the <font color="red" size=5>complete solution process</font>. Both Chinese and English are acceptable.

== Problem 1 ==

== Problem 2 ==

A ''<math>k</math>-uniform hypergraph'' is an ordered pair <math>G=(V,E)</math>, where <math>V</math> denotes the set of vertices and <math>E</math> denotes the set of edges. Moreover, each edge in <math>E</math> now contains <math>k</math> distinct vertices, instead of <math>2</math> (so a <math>2</math>-uniform hypergraph is just what we normally call a graph). A hypergraph is <math>k</math>-regular if all vertices have degree <math>k</math>; that is, each vertex is contained in exactly <math>k</math> hypergraph edges.

Show that for sufficiently large <math>k</math>, the vertices of a <math>k</math>-uniform, <math>k</math>-regular hypergraph can be <math>2</math>-colored so that no edge is monochromatic. What's the smallest value of <math>k</math> you can achieve?

== Problem 3 ==

Suppose we have graphs <math>G=(V,E)</math> and <math>H=(V,F)</math> on the same vertex set. We wish to partition <math>V</math> into clusters <math>V_1,V_2,\cdots</math> so as to maximise:

:<math>(\#\text{ of edges in }E\text{ that lie within clusters})+(\#\text{ of edges in }F\text{ that lie between clusters}).</math>
* Show that the following SDP is an upper bound on this.
:::<math>\begin{align}
\text{maximize} &&& \sum_{(u,v)\in E}\langle x_u,x_v\rangle+\sum_{(u,v)\in F}(1-\langle x_u,x_v\rangle) \\
\text{subject to} && \langle x_u,x_u\rangle & =1, & \forall u & \in V, \\
&& \langle x_u,x_v\rangle & \ge0, & \forall u,v & \in V, \\
&& x_u & \in \mathbb{R}^n, & \forall u & \in V.
\end{align}</math>