随机算法 (Fall 2015)/Lovász Local Lemma and Standard error

= Lovász Local Lemma=
Suppose that we are given a set of "bad" events <math>A_1,A_2,\ldots,A_m</math>. We want to know whether it is possible that none of them occurs, that is:
:<math>
\Pr\left[\bigwedge_{i=1}^m\overline{A_i}\right]>0.
</math>
Obviously, a ''necessary'' condition for this is that no bad event occurs with certainty, i.e. <math>\Pr[A_i]<1</math> for all <math>i</math>. We are interested in ''sufficient'' conditions for the above. There are two easy cases:
;Case 1<nowiki>: mutual independence.</nowiki>
If all the bad events <math>A_1,A_2,\ldots,A_m</math> are mutually independent, then
:<math>
\Pr\left[\bigwedge_{i=1}^m\overline{A_i}\right]=\prod_{i=1}^m(1-\Pr[A_i])
</math>
and hence this probability is positive if <math>\Pr[A_i]<1</math> for all <math>i</math>.


;Case 2<nowiki>: arbitrary dependency.</nowiki>
On the other extreme, if we know nothing about the dependencies between these bad events, the best we can do is to apply the union bound:
:<math>
\Pr\left[\bigwedge_{i=1}^m\overline{A_i}\right]\ge 1-\sum_{i=1}^m\Pr\left[A_i\right],
</math>
which is positive if <math>\sum_{i=1}^m\Pr\left[A_i\right]<1</math>. This is a very loose bound; however, it cannot be improved further if no information regarding the dependencies between the events is assumed.
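The gap between these two extreme cases can be seen numerically. The following is a minimal sketch in Python; the probabilities are arbitrary illustration values, not taken from anything above.
<syntaxhighlight lang="python">
# Hypothetical example: five bad events, each of probability 0.15.
probs = [0.15] * 5

# Case 1 (mutual independence): Pr[no bad event] = prod(1 - p_i).
independent = 1.0
for p in probs:
    independent *= (1 - p)

# Case 2 (arbitrary dependency): union bound gives Pr[no bad event] >= 1 - sum(p_i).
union_bound = 1 - sum(probs)

print(independent)   # about 0.444, positive whenever every p_i < 1
print(union_bound)   # 0.25, positive only because sum(p_i) < 1
</syntaxhighlight>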


== Lovász Local Lemma (symmetric case) ==
In most situations, the dependencies between events are somewhere between these two extremal cases: the events are not independent of each other, but on the other hand the dependencies between them are not totally out of control. For these more general cases, we would like to exploit the tradeoff between the probabilities of bad events and the dependencies between them.


The Lovász local lemma is such a powerful tool for showing the possibility of rare events under ''limited dependencies''. The structure of dependencies between a set of events is described by a '''dependency graph''': a graph with the events as vertices, in which each event is adjacent to the events that are dependent with it.


{{Theorem
|Definition (dependency graph)|
:Let <math>A_1,A_2,\ldots,A_m</math> be a set of events. A graph <math>D=(V,E)</math> with set of vertices <math>V=\{A_1,A_2,\ldots,A_m\}</math> is called a '''dependency graph''' for the events <math>A_1,\ldots,A_m</math> if every event <math>A_i</math> is mutually independent of all the events in <math>\{A_j\mid A_j\neq A_i, (A_i,A_j)\not\in E\}</math>.
}}
The maximum degree <math>d</math> of the dependency graph <math>D</math> is a very useful piece of information, as it tells us that every event <math>A_i</math> among <math>A_1,A_2,\ldots,A_m</math> is dependent with at most <math>d</math> other events.


;Remark on the mutual independence
:In probability theory, an event <math>A</math> is said to be independent of events <math>B_1,B_2,\ldots,B_k</math> if for any ''disjoint'' <math>I,J\subseteq\{1,2,\ldots,k\}</math>, we have
:::<math>\Pr\left[A\mid \left(\bigwedge_{i\in I}B_i \right)\wedge \left(\bigwedge_{i\in J}\overline{B}_i\right) \right]=\Pr[A]</math>,
:that is, occurrences of events among <math>B_1,B_2,\ldots,B_k</math> have no influence on the occurrence of <math>A</math>.


;Example
:Let <math>X_1,X_2,\ldots,X_n</math> be a set of ''mutually independent'' random variables. Each event <math>A_i</math> is a predicate defined on a number of variables among <math>X_1,X_2,\ldots,X_n</math>. Let <math>\mathsf{vbl}(A_i)</math> be the unique smallest set of variables which determine <math>A_i</math>. The dependency graph <math>D=(V,E)</math> is defined so that any two events <math>A_i,A_j</math> are adjacent in <math>D</math> if and only if they share variables, i.e. <math>\mathsf{vbl}(A_i)\cap\mathsf{vbl}(A_j)\neq\emptyset</math>.
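For events of this form, the dependency graph can be computed mechanically from the sets <math>\mathsf{vbl}(A_i)</math>. Below is a minimal Python sketch; the event and variable names are hypothetical placeholders.
<syntaxhighlight lang="python">
# Hypothetical events, each identified by its vbl set of variables.
vbl = {
    'A1': {'x1', 'x2'},
    'A2': {'x2', 'x3'},
    'A3': {'x4'},
}

# Two events are adjacent in the dependency graph iff their vbl sets intersect.
edges = [(a, b) for a in vbl for b in vbl if a < b and vbl[a] & vbl[b]]
degree = {a: sum(1 for b in vbl if b != a and vbl[a] & vbl[b]) for a in vbl}

print(edges)                 # [('A1', 'A2')]
print(max(degree.values()))  # maximum degree d = 1
</syntaxhighlight>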


The following theorem was proved by Erdős and Lovász in 1975 and then later improved by Lovász in 1977. Now it is commonly referred to as the '''Lovász local lemma'''. It is a very powerful tool, especially when used with the probabilistic method, as it supplies a way for dealing with rare events.


{{Theorem
|Lovász Local Lemma (symmetric case)|
:Let <math>A_1,A_2,\ldots,A_m</math> be a set of events, and assume that the following hold:
#<math>\Pr[A_i]\le p</math> for every event <math>A_i</math>;
#every event <math>A_i</math> is mutually independent of all other events except at most <math>d</math> of them, and
:::<math>\mathrm{e}p(d+1)\le 1</math>.
:Then
::<math>\Pr\left[\bigwedge_{i=1}^m\overline{A_i}\right]>0</math>.
}}
Here <math>d</math> is the maximum degree of the dependency graph <math>D</math> for the events <math>A_1,\ldots,A_m</math>.


Intuitively, the Lovász Local Lemma says that if a rare (but hopefully possible) event is formulated as avoiding a series of bad events simultaneously, then the rare event is indeed possible if:
* none of these bad events is too probable;
* none of these bad events is dependent with too many other bad events.
The tradeoff between "too probable" and "too many" is precisely captured by the condition <math>\mathrm{e}p(d+1)\le 1</math>.


==Lovász Local Lemma (asymmetric case)==
Sometimes when applying the local lemma, a few bad events are much more probable than others or are dependent with more other bad events. In this case, using the same upper bounds <math>p</math> on the probability of bad events or <math>d</math> on the number of dependent events will be wasteful. To deal with such general cases more accurately, we need a more refined way to characterize the tradeoff between local dependencies and probabilities of bad events.


We need to introduce a few notations that will be frequently used onwards.
Let <math>\mathcal{A}=\{A_1,A_2,\ldots,A_m\}</math> be a set of events. For every event <math>A_i\in\mathcal{A}</math>, we define its neighborhood and inclusive neighborhood as follows:
*'''inclusive neighborhood''': <math>\Gamma^+(A_i)\,</math> denotes the set of events in <math>\mathcal{A}</math>, including <math>A_i</math> itself, that are dependent with <math>A_i</math>. More precisely, <math>A_i</math> is mutually independent of all events in <math>\mathcal{A}\setminus\Gamma^+(A_i)</math>.
*'''neighborhood''': <math>\Gamma(A_i)=\Gamma^+(A_i)\setminus \{A_i\}</math>, that is, <math>\Gamma(A_i)</math> contains the events in <math>\mathcal{A}</math> that are dependent with <math>A_i</math>, not including <math>A_i</math> itself.


The following is the asymmetric version of the Lovász Local Lemma. This generalization is due to Spencer.


{{Theorem
|Lovász Local Lemma (general case)|
:Let <math>\mathcal{A}=\{A_1,A_2,\ldots,A_m\}</math> be a set of events, where every event <math>A_i\in\mathcal{A}</math> is mutually independent of all other events except those in its neighborhood <math>\Gamma(A_i)\,</math> in the dependency graph. Suppose there exist real numbers <math>\alpha_1,\alpha_2,\ldots, \alpha_m\in[0,1)</math> such that for every <math>A_i\in\mathcal{A}</math>,
::<math>\Pr[A_i]\le \alpha_i\prod_{A_j\in\Gamma(A_i)}(1-\alpha_j)</math>.
:Then  
::<math>\Pr\left[\bigwedge_{A_i\in\mathcal{A}}\overline{A_i}\right]\ge\prod_{i=1}^m(1-\alpha_i)</math>.
}}
This generalized version of the local lemma immediately implies the symmetric version of the lemma. Namely, <math>\Pr\left[\bigwedge_{i}\overline{A_i}\right]>0</math> if the following are satisfied:
# <math>\Pr[A_i]\le p</math> for all <math>A_i\in\mathcal{A}</math>;
# <math>\mathrm{e}p(d+1)\le 1</math>, where <math>d=\max_{A_i\in\mathcal{A}}|\Gamma(A_i)|</math> is the maximum degree of the dependency graph.
To see this, for every <math>A_i\in\mathcal{A} </math> let <math>\alpha_i=\frac{1}{d+1}</math>. Note that <math>\prod_{A_j\in\Gamma(A_i)}(1-\alpha_j)\ge \left(1-\frac{1}{d+1}\right)^d>\frac{1}{\mathrm{e}}</math>.


With the above two conditions satisfied, for all <math>A_i\in\mathcal{A}</math>, it is easy to verify:
:<math>\Pr[A_i]\le p\le\frac{1}{\mathrm{e}(d+1)}<\frac{1}{d+1}\left(1-\frac{1}{d+1}\right)^d\le \alpha_i\prod_{A_j\in\Gamma(A_i)}(1-\alpha_j)</math>,
which according to the Lovász Local Lemma (general case), implies that
:<math>\Pr\left[\bigwedge_{i=1}^m\overline{A_i}\right]\ge\prod_{i=1}^m(1-\alpha_i)=\left(1-\frac{1}{d+1}\right)^m>0</math>.
This gives the symmetric version of the local lemma.
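As a quick numeric sanity check (not part of the proof), the inequality <math>\left(1-\frac{1}{d+1}\right)^d>\frac{1}{\mathrm{e}}</math> used above can be verified for small degrees:
<syntaxhighlight lang="python">
import math

# Check (1 - 1/(d+1))^d > 1/e for a few degrees d; the quantity decreases towards 1/e.
for d in [1, 2, 5, 10, 100, 1000]:
    lhs = (1 - 1 / (d + 1)) ** d
    print(d, round(lhs, 6), lhs > 1 / math.e)   # the comparison is always True
</syntaxhighlight>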


== A non-constructive proof of LLL ==
We then give the proof of the generalized Lovász Local Lemma. In particular, this proof is '''''non-constructive''''', in contrast to the '''''constructive''''' proofs that we are going to introduce later, which are basically algorithms.


Apply the chain rule. The probability that none of the bad events occurs can be expressed as:
:<math>
\begin{align}
\Pr\left[\bigwedge_{i=1}^m\overline{A_i}\right]
=\prod_{i=1}^m\Pr\left[\overline{A_i}\mid \bigwedge_{j=1}^{i-1}\overline{A_{j}}\right]
=\prod_{i=1}^m\left(1-\Pr\left[A_i\mid \bigwedge_{j=1}^{i-1}\overline{A_{j}}\right]\right).
\end{align}
</math>


It is then sufficient to show that
:<math>\Pr\left[A_i\mid \bigwedge_{j=1}^{i-1}\overline{A_{j}}\right]\le\alpha_i</math>,
which will prove the lemma.


We then prove a slightly more general statement:
:(induction hypothesis) <math>\Pr\left[A_{i_1}\mid \bigwedge_{j=2}^\ell\overline{A_{i_j}}\right]\le \alpha_{i_1}</math> for any distinct events <math>A_{i_1},A_{i_2}\ldots,A_{i_\ell}\in\mathcal{A}</math>.


The proof is by induction on <math>\ell</math>. For <math>\ell=1</math>, due to the assumption in the Lovász Local Lemma
:<math>\Pr[A_{i_1}]\le\alpha_{i_1}\prod_{A_j\in\Gamma(A_{i_1})}(1-\alpha_{j})\le\alpha_{i_1}</math>.


For general <math>\ell</math>, assume the hypothesis is true for all smaller <math>\ell</math>.  
Without loss of generality, assume that <math>A_{i_2},\ldots,A_{i_k}</math> are the events among <math>A_{i_2},\ldots,A_{i_\ell}</math> that are dependent with <math>A_{i_1}</math>, and <math>A_{i_1}</math> is mutually independent of the rest <math>A_{i_{k+1}},\ldots,A_{i_\ell}</math>.


Then applying the following basic conditional probability identity
:<math>\Pr[A\mid BC]=\frac{\Pr[AB\mid C]}{\Pr[B\mid C]}</math>,
we have
:<math>
\Pr\left[A_{i_1}\mid \bigwedge_{j=2}^\ell\overline{A_{i_j}}\right]
=\frac{\Pr\left[ A_{i_1}\wedge \bigwedge_{j=2}^k\overline{A_{i_j}}\mid \bigwedge_{j=k+1}^\ell\overline{A_{i_j}}\right]}
{\Pr\left[\bigwedge_{j=2}^k\overline{A_{i_j}}\mid \bigwedge_{j=k+1}^\ell\overline{A_{i_j}}\right]}
=\frac{\text{Numerator}}{\text{Denominator}}.
</math>
Due to the mutual independence between <math>A_{i_1}</math> and <math>A_{i_{k+1}},\ldots,A_{i_\ell}</math>, the <math>\text{Numerator}</math> can be bounded as
:<math>
\text{Numerator}
\le\Pr\left[ A_{i_1}\mid \bigwedge_{j=k+1}^\ell\overline{A_{i_j}}\right]=\Pr[A_{i_1}],
</math>
which according to the assumption in the Lovász Local Lemma, is bounded as
:<math>
\text{Numerator}\le \alpha_{i_1}\prod_{A_j\in \Gamma(A_{i_1})}(1-\alpha_j).
</math>
Applying the chain rule to the <math>\text{Denominator}</math> we have
:<math>
\text{Denominator}=\prod_{j=2}^k\Pr\left[\overline{A_{i_j}}\mid \bigwedge_{r=j+1}^\ell\overline{A_{i_r}}\right].
</math>
Note that there are always fewer than <math>\ell</math> events involved, so we can apply the induction hypothesis and have
:<math>
\text{Denominator}\ge \prod_{j=2}^k(1-\alpha_{i_j})\ge \prod_{A_j\in\Gamma(A_{i_1})}(1-\alpha_j),
</math>
where the last inequality is due to the fact that <math>A_{i_2},\ldots,A_{i_k}\in\Gamma(A_{i_1})</math>.


Combining everything together, we have
:<math>
\Pr\left[A_{i_1}\mid \bigwedge_{j=2}^\ell\overline{A_{i_j}}\right]
\le\alpha_{i_1}.
</math>
As we argued in the beginning, this proves the general Lovász Local Lemma.
 
=Random Search for Exact-<math>k</math>-SAT=
We start by giving the definition of <math>k</math>-CNF and <math>k</math>-SAT.
{{Theorem|Definition (exact-<math>k</math>-CNF)|
:A logic expression <math>\phi</math> defined on <math>n</math> Boolean variables <math>x_1,x_2,\ldots,x_n\in\{\mathrm{true},\mathrm{false}\}</math> is said to be a '''conjunctive normal form (CNF)''' if <math>\phi</math> can be written as a conjunction(AND) of '''clauses''' as <math>\phi=C_1\wedge C_2\wedge\cdots\wedge C_m</math>, where each clause <math>C_i=\ell_{i_1}\vee \ell_{i_2}\vee\cdots\vee\ell_{i_k}</math> is a disjunction(OR) of '''literals''', where every literal <math>\ell_j</math> is either a variable <math>x_i</math> or the negation <math>\neg x_i</math> of a variable.
:*We call a CNF formula an '''exact-<math>k</math>-CNF''' if every clause consists of ''exactly'' <math>k</math> ''distinct'' literals.
}}
For example:
:<math>
(x_1\vee \neg x_2 \vee \neg x_3)\wedge (\neg x_1\vee \neg x_3\vee x_4)\wedge (x_1\vee x_2\vee x_4)\wedge (x_2\vee x_3\vee \neg x_4)
</math>
is an exact-<math>3</math>-CNF formula.
 
;Remark
:The notion of exact-<math>k</math>-CNF is slightly more restrictive than the <math>k</math>-CNF, where each clause consists of ''at most'' <math>k</math> variables. The discussion of the subtle differences between these two definitions can be found [https://en.wikipedia.org/wiki/Boolean_satisfiability_problem#3-satisfiability here].
 
A logic expression <math>\phi</math> is said to be '''satisfiable''' if there is an assignment of values of true or false to the variables <math>\boldsymbol{x}=(x_1,x_2,\ldots,x_n)</math> so that <math>\phi(\boldsymbol{x})</math> is true. For a CNF <math>\phi</math>, this means that there is a truth assignment that satisfies all clauses in <math>\phi</math> simultaneously.
 
The '''exact-<math>k</math>-satisfiability (exact-<math>k</math>-SAT)''' problem is the problem of deciding whether a given exact-<math>k</math>-CNF formula <math>\phi</math> is satisfiable.
{{Theorem|exact-<math>k</math>-SAT|
:'''Input:''' an exact-<math>k</math>-CNF formula <math>\phi</math>.
:'''Output:''' whether <math>\phi</math> is satisfiable.
}}
It is well known that <math>k</math>-SAT is '''NP-complete''' for any <math>k\ge 3</math>, and this remains true for exact-<math>k</math>-SAT.
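To make the definitions concrete, the example exact-<math>3</math>-CNF above can be written down and tested for satisfiability by brute force. This is only an illustrative sketch; encoding a literal as a (variable, polarity) pair is one arbitrary choice.
<syntaxhighlight lang="python">
from itertools import product

# The example formula (x1 ∨ ¬x2 ∨ ¬x3) ∧ (¬x1 ∨ ¬x3 ∨ x4) ∧ (x1 ∨ x2 ∨ x4) ∧ (x2 ∨ x3 ∨ ¬x4),
# with each literal encoded as (variable index, polarity).
phi = [
    [(1, True), (2, False), (3, False)],
    [(1, False), (3, False), (4, True)],
    [(1, True), (2, True), (4, True)],
    [(2, True), (3, True), (4, False)],
]

def satisfies(assignment, cnf):
    # A CNF is satisfied iff every clause contains at least one true literal.
    return all(any(assignment[v] == polarity for v, polarity in clause) for clause in cnf)

# Brute force over all 2^4 assignments.
count = 0
for bits in product([False, True], repeat=4):
    assignment = dict(zip([1, 2, 3, 4], bits))
    if satisfies(assignment, phi):
        count += 1
print(count > 0)   # True: the formula is satisfiable
</syntaxhighlight>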
 
== Satisfiability of exact-<math>k</math>-CNF==
Inspired by the Lovasz local lemma, we now consider the dependencies between clauses in a CNF formula.
 
Given a CNF formula <math>\phi</math> defined over Boolean variables <math>\mathcal{X}=\{x_1,x_2,\ldots,x_n\}</math> and a clause <math>C</math> in <math>\phi</math>, we use <math>\mathsf{vbl}(C)\subseteq\mathcal{X}</math> to denote the set of variables that appear in <math>C</math>.
For a clause <math>C</math> in a CNF formula <math>\phi</math>, its '''degree''' <math>d(C)=|\{D\neq C\mid \mathsf{vbl}(D)\cap\mathsf{vbl}(C)\neq\emptyset\}|</math> is the number of other clauses in <math>\phi</math> that share variables with <math>C</math>. The '''maximum degree''' <math>d</math> of a CNF formula <math>\phi</math> is <math>d=\max_{C\text{ in }\phi}d(C)</math>.
 
By the Lovasz local lemma, we almost immediately have the following theorem for the satisfiability of exact-<math>k</math>-CNF with bounded degree.
{{Theorem|Theorem|
:Let <math>\phi</math> be an exact-<math>k</math>-CNF formula with maximum degree <math>d</math>. If <math>d\le 2^{k}/\mathrm{e}-1</math> then <math>\phi</math> is always satisfiable.
}}
{{Proof|
Let <math>X_1,X_2,\ldots,X_n</math> be Boolean random variables sampled uniformly and independently from <math>\{\text{true},\text{false}\}</math>. We are going to show that <math>\phi</math> is satisfied by this random assignment with positive probability. Due to the probabilistic method, this will prove the existence of a satisfying assignment for <math>\phi</math>.
 
Suppose there are <math>m</math> clauses <math>C_1,C_2,\ldots,C_m</math> in <math>\phi</math>. Let <math>A_i</math> denote the bad event that <math>C_i</math> is not satisfied by the random assignment <math>X_1,X_2,\ldots,X_n</math>. Clearly, each <math>A_i</math> is dependent with at most <math>d</math> other <math>A_j</math>'s. And our goal is to show that
:<math>\Pr\left[\bigwedge_{i=1}^m\overline{A_i}\right]>0</math>.
 
Recall that in an exact-<math>k</math>-CNF, every clause <math>C_i</math> consists of exactly <math>k</math> distinct variables, and is violated by precisely one assignment to these variables among all <math>2^k</math> possibilities. Thus the probability that <math>C_i</math> is not satisfied is <math>\Pr[A_i]=2^{-k}</math>.
 
Assuming that <math>d\le 2^{k}/\mathrm{e}-1</math>, i.e. <math>\mathrm{e}(d+1)2^{-k}\le 1</math>, by the Lovasz local lemma (symmetric case), we have
:<math>\Pr\left[\bigwedge_{i=1}^m\overline{A_i}\right]>0</math>.
}}
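The condition of the theorem is just the symmetric local lemma instantiated with <math>p=2^{-k}</math>, and it can be checked directly for given <math>k</math> and <math>d</math>. A small sketch:
<syntaxhighlight lang="python">
import math

def lll_guarantees_satisfiable(k, d):
    # Symmetric local lemma condition with p = 2^(-k): e * p * (d + 1) <= 1,
    # which is the same as d <= 2^k / e - 1.
    return math.e * 2 ** (-k) * (d + 1) <= 1

print(lll_guarantees_satisfiable(3, 1))     # True  (2^3/e - 1 is about 1.94)
print(lll_guarantees_satisfiable(3, 2))     # False
print(lll_guarantees_satisfiable(10, 300))  # True  (2^10/e - 1 is about 375.7)
</syntaxhighlight>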
 
== The random search algorithm ==
The above theorem basically says that for a CNF, if every individual clause is easy to satisfy and depends on few other clauses, then the CNF is always satisfiable. However, the theorem only states the existence of a satisfying assignment, without specifying a way to find it. Next we give a simple randomized algorithm and prove that it can find a satisfying assignment efficiently under a slightly stronger assumption than that of the Lovász local lemma.
 
Given as input a CNF formula <math>\phi</math> defined on Boolean variables <math>x_1,x_2,\ldots,x_n</math>, the following algorithm is due to Moser in 2009. Recall that for a clause <math>C</math> in a CNF <math>\phi</math>, we use <math>\mathsf{vbl}(C)</math> to denote the set of variables on which <math>C</math> is defined.
 
{{Theorem
|Solve(CNF <math>\phi</math>)|
:Pick values of <math>x_1,x_2\ldots,x_n</math> uniformly and independently at random;
:While there is an unsatisfied clause <math>C</math> in <math>\phi</math>
:: '''Fix'''(<math>C</math>);
}}
 
The sub-routine '''Fix''' is a recursive procedure:
{{Theorem
|Fix(Clause <math>C</math>)|
:Replace the values of variables in <math>\mathsf{vbl}(C)</math> with new uniform and independent random values;
:While there is an ''unsatisfied'' clause <math>D</math> (including <math>C</math> itself) such that <math>\mathsf{vbl}(C)\cap \mathsf{vbl}(D)\neq \emptyset</math>
:: '''Fix'''(<math>D</math>);
}}
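The following is a direct Python transcription of the two procedures above, given only as an illustrative sketch: a clause is represented as a list of (variable, polarity) pairs as in the earlier sketch, and the recursion is left unbounded exactly as in the pseudocode.
<syntaxhighlight lang="python">
import random

def vbl(clause):
    # Set of variables a clause is defined on.
    return {v for v, _ in clause}

def satisfied(clause, x):
    return any(x[v] == polarity for v, polarity in clause)

def solve(phi, variables):
    # Moser's random search: phi is a list of clauses over the given variables.
    x = {v: random.choice([False, True]) for v in variables}

    def fix(c):
        # Resample the variables of the violated clause c.
        for v in vbl(c):
            x[v] = random.choice([False, True])
        # Recursively fix any unsatisfied clause sharing variables with c (including c itself).
        while True:
            bad = [d for d in phi if vbl(c) & vbl(d) and not satisfied(d, x)]
            if not bad:
                break
            fix(bad[0])

    while True:
        unsatisfied = [c for c in phi if not satisfied(c, x)]
        if not unsatisfied:
            return x
        fix(unsatisfied[0])

# Example: the exact-3-CNF from the earlier section.
phi = [
    [(1, True), (2, False), (3, False)],
    [(1, False), (3, False), (4, True)],
    [(1, True), (2, True), (4, True)],
    [(2, True), (3, True), (4, False)],
]
assignment = solve(phi, [1, 2, 3, 4])
print(all(satisfied(c, assignment) for c in phi))   # True
</syntaxhighlight>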
 
 
----
We consider a restricted case.
 
Let <math>X_1,X_2,\ldots,X_m\in\{\mathrm{true},\mathrm{false}\}</math> be a set of ''mutually independent'' random variables which assume boolean values. Each event <math>A_i</math> is an AND of at most <math>k</math> literals (<math>X_i</math> or <math>\neg X_i</math>). Let <math>v(A_i)</math> be the set of the <math>k</math> variables that <math>A_i</math> depends on. The probability that none of the bad events occurs is
:<math>
\Pr\left[\bigwedge_{i=1}^n \overline{A_i}\right].
</math>
In this particular model, the dependency graph <math>D=(V,E)</math> is defined as that <math>(i,j)\in E</math> iff <math>v(A_i)\cap v(A_j)\neq \emptyset</math>.
 
Observe that <math>\overline{A_i}</math> is a clause (OR of literals). Thus, <math>\bigwedge_{i=1}^n \overline{A_i}</math> is a '''<math>k</math>-CNF''', a CNF in which each clause depends on <math>k</math> variables.
The probability being positive, i.e.
:<math>
\Pr\left[\bigwedge_{i=1}^n \overline{A_i}\right]>0,
</math>
means that the <math>k</math>-CNF <math>\bigwedge_{i=1}^n \overline{A_i}</math> is satisfiable.
 
The satisfiability of <math>k</math>-CNF is a hard problem. In particular, 3SAT (the satisfiability of 3-CNF) is the first '''NP-complete''' problem (the Cook-Levin theorem). Given the widely believed conjecture that '''NP'''<math>\neq</math>'''P''', we do not expect to solve this problem efficiently in general.
 
However, the condition of the Lovasz local lemma has an extra assumption on the degree of dependency graph. In our model, this means that each clause shares variables with at most <math>d</math> other clauses. We call a <math>k</math>-CNF with this property a <math>k</math>-CNF with bounded degree <math>d</math>.
 
Therefore, proving the Lovász local lemma for the restricted form of events described above can be reduced to the following problem:
;Problem
:Find a condition on <math>k</math> and <math>d</math>, such that any <math>k</math>-CNF with bounded degree <math>d</math> is satisfiable.
 
In 2009, Moser came up with the following procedure for solving the problem. He later generalized the procedure to general forms of events. This not only gives a beautiful constructive proof of the Lovász local lemma, but also provides an efficient randomized algorithm for finding a satisfying assignment for a number of events with bounded dependencies.
 
Let <math>\phi</math> be a <math>k</math>-CNF of <math>n</math> clauses with bounded degree <math>d</math>, defined on variables <math>X_1,\ldots,X_m</math>. The following procedure finds a satisfying assignment for <math>\phi</math>.
 
{{Theorem
|Solve(<math>\phi</math>)|
:Pick a random assignment of <math>X_1,\ldots,X_m</math>.
:While there is an unsatisfied clause <math>C</math> in <math>\phi</math>
:: '''Fix'''(<math>C</math>).
}}
 
The sub-routine '''Fix''' is defined as follows:
{{Theorem
|Fix(<math>C</math>)|
:Replace the variables in <math>v(C)</math> with new random values.
:While there is an unsatisfied clause <math>D</math> such that <math>v(C)\cap v(D)\neq \emptyset</math>
:: '''Fix'''(<math>D</math>).
}}
 
The procedure looks very simple. It just recursively fixes the unsatisfied clauses by randomly replacing the assignment to the variables.
 
We then prove it works.
 
===Number of top-level callings of Fix ===
In '''Solve'''(<math>\phi</math>), the subroutine '''Fix'''(<math>C</math>) is called. We now upper bound the number of times it is called (not including the recursive calls).
 
Assume '''Fix'''(<math>C</math>) always terminates.
:;Observation
::Every clause that was satisfied before '''Fix'''(<math>C</math>) was called will still remain satisfied and <math>C</math> will also be satisfied after '''Fix'''(<math>C</math>) returns.
 
The observation can be proved by induction on the structure of recursion.  Since there are <math>n</math> clauses, '''Solve'''(<math>\phi</math>) makes at most <math>n</math> calls to '''Fix'''.
 
We then prove that '''Fix'''(<math>C</math>) terminates.
 
=== Termination of Fix ===
The idea of the proof is to '''reconstruct''' a random string.
 
Suppose that during the running of '''Solve'''(<math>\phi</math>), the '''Fix''' subroutine is called <math>t</math> times (including all the recursive calls).
 
Let <math>s</math> be the sequence of the random bits used by '''Solve'''(<math>\phi</math>). It is easy to see that the length of <math>s</math> is <math>|s|=m+tk</math>, because the initial random assignment of <math>m</math> variables takes <math>m</math> bits, and each call of '''Fix''' takes <math>k</math> bits.
 
We then reconstruct <math>s</math> in an alternative way.
 
Recall that '''Solve'''(<math>\phi</math>) calls '''Fix'''(<math>C</math>) at top level at most <math>n</math> times. Each call of '''Fix'''(<math>C</math>) defines a recursion tree, rooted at clause <math>C</math>, in which each node corresponds to a clause (not necessarily distinct, since a clause might be fixed several times). Therefore, the entire running history of '''Solve'''(<math>\phi</math>) can be described by at most <math>n</math> recursion trees.
 
:;Observation 1
::Fix a <math>\phi</math>. The <math>n</math> recursion trees which capture the total running history of '''Solve'''(<math>\phi</math>) can be encoded in <math>n\log n+t(\log d+O(1))</math> bits.
Each root node corresponds to a clause. There are <math>n</math> clauses in <math>\phi</math>. The <math>n</math> root nodes can be represented in <math>n\log n</math> bits.
 
The smart part is how to encode the branches of the tree. Note that '''Fix'''(<math>C</math>) will call '''Fix'''(<math>D</math>) only for those <math>D</math> that share variables with <math>C</math>. For a <math>k</math>-CNF with bounded degree <math>d</math>, each clause <math>C</math> can share variables with at most <math>d</math> other clauses. Thus, each branch in the recursion tree can be represented in <math>\log d</math> bits. An extra <math>O(1)</math> bits are needed to denote whether the recursion ends. So in total <math>n\log n+t(\log d+O(1))</math> bits are sufficient to encode all <math>n</math> recursion trees.
 
:;Observation 2
::The random sequence <math>s</math> can be encoded in <math>m+n\log n+t(\log d+O(1))</math> bits.
 
With <math>n\log n+t(\log d+O(1))</math> bits, the structure of all the recursion trees can be encoded. With extra <math>m</math> bits, the final assignment of the <math>m</math>
variables is stored.
 
We then observe that with this information, the sequence of random bits <math>s</math> can be reconstructed backwards from the final assignment.
 
The key step is that a clause <math>C</math> is only fixed when it is unsatisfied (obvious), and an unsatisfied clause <math>C</math> determines the assignment of its variables uniquely (a clause is an OR of literals, and thus has exactly one unsatisfying assignment). Thus, each node in the recursion tree tells us the <math>k</math> random bits in the random sequence <math>s</math> used in the call of '''Fix''' corresponding to that node. Therefore, <math>s</math> can be reconstructed from the final assignment plus at most <math>n</math> recursion trees, which can be encoded in at most <math>m+n\log n+t(\log d+O(1))</math> bits.
 
The following theorem lies at the heart of '''Kolmogorov complexity'''. The theorem states that a random sequence is '''incompressible'''.
{{Theorem
|Theorem (Kolmogorov)|
:For any encoding scheme, with high probability, a random sequence <math>s</math> is encoded in at least <math>|s|</math> bits.
}}
 
Applying the theorem, we have that with high probability,
:<math>m+n\log n+t(\log d+O(1))\ge |s|=m+tk</math>.
Therefore,
:<math>
t(k-O(1)-\log d)\le n\log n.
</math>
In order to bound <math>t</math>, we need
:<math>k-O(1)-\log d>0</math>,
which holds for <math>d< 2^{k-\alpha}</math> for some constant <math>\alpha>0</math>. In fact, in this case <math>t=O(n\log n)</math>, so the running time of the procedure is bounded by a polynomial!
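Plugging concrete numbers into this inequality shows the bound on <math>t</math> quantitatively. A minimal sketch, where the constant hidden in the <math>O(1)</math> term is arbitrarily taken to be <math>1</math> and the parameter values are made up for illustration:
<syntaxhighlight lang="python">
import math

def max_fix_calls(n, k, d, c=1):
    # Solve n*log2(n) + t*(log2(d) + c) >= t*k for t (the m bits on both sides cancel):
    # t <= n*log2(n) / (k - c - log2(d)), provided the denominator is positive.
    slack = k - c - math.log2(d)
    return n * math.log2(n) / slack if slack > 0 else float('inf')

# Example: n = 1000 clauses, each of k = 20 literals, maximum degree d = 2^16.
print(max_fix_calls(n=1000, k=20, d=2 ** 16))   # about 3322, i.e. t = O(n log n)
</syntaxhighlight>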
 
=== Back to the local lemma ===
We showed that for <math>d<2^{k-O(1)}</math>, any <math>k</math>-CNF with bounded degree <math>d</math> is satisfiable, and a satisfying assignment can be found in polynomial time with high probability. Now we interpret this in the language of the local lemma.
 
Recall that the symmetric version of the local lemma:
{{Theorem
|Theorem (The local lemma: symmetric case)|
:Let <math>A_1,A_2,\ldots,A_n</math> be a set of events, and assume that the following hold:
:#for all <math>1\le i\le n</math>, <math>\Pr[A_i]\le p</math>;
:#the maximum degree of the dependency graph for the events <math>A_1,A_2,\ldots,A_n</math> is <math>d</math>, and
:::<math>ep(d+1)\le 1</math>.
:Then
::<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0</math>.
}}
Suppose the underlying probability space is a number of mutually independent uniform random Boolean variables, and the events <math>\overline{A_i}</math> are clauses defined on <math>k</math> variables. Then,
:<math>
p=2^{-k}
</math>
thus, the condition <math>ep(d+1)\le 1</math> means that
:<math>
d<2^{k}/e
</math>
which means that Moser's procedure is asymptotically optimal in terms of the degree of dependency.
 
= Algorithmic Lovász Local Lemma =

= Standard error =

[[Image:standard deviation diagram.svg|325px|thumb|For a value that is sampled with an unbiased normally distributed error, the above depicts the proportion of samples that would fall between 0, 1, 2, and 3 standard deviations above and below the actual value.]]

The standard error is the standard deviation of the sampling distribution of a statistic.[1] The term may also be used for an estimate (good guess) of that standard deviation taken from a sample of the whole group.

The average of some part of a group (called a sample) is the usual way to estimate the average for the whole group. It is often too hard or costs too much money to measure the whole group. But if a different sample is measured, it will have an average that is a little bit different from the first sample. The standard error of the mean is a way to know how close the average of the sample is to the average of the whole group. It is a way to know how sure you can be about the average from the sample.

In real measurements, the true value of the standard deviation of the mean for the whole group is usually not known. So the term standard error is often used to mean a close guess to the true number for the whole group. The more measurements there are in a sample, the closer the guess will be to the true number for the whole group.

== How to find standard error of the mean ==

One way to find the standard error of the mean is to have lots of samples. First, the average for each sample is found. Then the average and standard deviation of those sample averages is found. The standard deviation for all the sample averages is the standard error of the mean. This can be a lot of work. Sometimes it is too difficult or costs too much money to have lots of samples.
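This procedure can be simulated directly. The sketch below uses synthetic normally distributed data with made-up population parameters, only to illustrate the steps described above.
<syntaxhighlight lang="python">
import random
import statistics

random.seed(0)
# Hypothetical population: normally distributed with mean 10 and standard deviation 2.
sample_means = []
for _ in range(10000):                                  # take lots of samples...
    sample = [random.gauss(10, 2) for _ in range(25)]   # ...each with 25 measurements
    sample_means.append(statistics.mean(sample))        # record each sample's average

# The standard deviation of all the sample averages is the standard error of the mean.
print(statistics.stdev(sample_means))   # close to 2 / sqrt(25) = 0.4
</syntaxhighlight>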

Another way to find the standard error of the mean is to use an equation that needs only one sample. Standard error of the mean is usually estimated by the standard deviation for a sample from the whole group (sample standard deviation) divided by the square root of the sample size.

:<math>SE_\bar{x}\ = \frac{s}{\sqrt{n}}</math>

where

s is the sample standard deviation (i.e., the sample-based estimate of the standard deviation of the population), and
n is the number of measurements in the sample.
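With one sample in hand, the estimate is a single line of code. A minimal sketch with made-up measurement values:
<syntaxhighlight lang="python">
import math
import statistics

measurements = [9.8, 10.2, 10.1, 9.7, 10.4, 9.9, 10.0, 10.3]   # hypothetical sample
s = statistics.stdev(measurements)   # sample standard deviation
n = len(measurements)
print(s / math.sqrt(n))              # standard error of the mean, about 0.087 here
</syntaxhighlight>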

How big does the sample need to be so that the estimate of the standard error of the mean is close to the actual standard error of the mean for the whole group? There should be at least six measurements in a sample. Then the standard error of the mean for the sample will be within 5% of the standard error of the mean if the whole group were measured.[2]

== Corrections for some cases ==

There is another equation to use if the number of measurements is 5% or more of the whole group:[3]

There are special equations to use if a sample has less than 20 measurements.[4]

Sometimes a sample comes from one place even though the whole group may be spread out. Also, sometimes a sample may be made in a short time period when the whole group covers a longer time. In this case, the numbers in the sample are not independent. Then special equations are used to try to correct for this.[5]

== Usefulness ==

A practical result: One can become more sure of an average value by having more measurements in a sample. Then the standard error of the mean will be smaller because the standard deviation is divided by a bigger number. However, to make the uncertainty (standard error of the mean) in an average value half as big, the sample size (n) needs to be four times bigger. This is because the standard deviation is divided by the square root of the sample size. To make the uncertainty one-tenth as big, the sample size (n) needs to be one hundred times bigger!

Standard errors are easy to calculate and are used a lot because:

* If the standard error of several individual quantities is known then the standard error of some function of the quantities can be easily calculated in many cases;
* Where the probability distribution of the value is known, it can be used to calculate a good approximation to an exact confidence interval; and
* Where the probability distribution is not known, other equations can be used to estimate a confidence interval.
* As the sample size gets very large the principle of the central limit theorem shows that the numbers in the sample are very much like the numbers in the whole group (they have a normal distribution).

== Relative standard error ==

The relative standard error (RSE) is the standard error divided by the average. This number is smaller than one. Multiplying it by 100% gives it as a percentage of the average. This helps to show whether the uncertainty is important or not. For example, consider two surveys of household income that both result in a sample average of $50,000. If one survey has a standard error of $10,000 and the other has a standard error of $5,000, then the relative standard errors are 20% and 10% respectively. The survey with the lower relative standard error is better because it has a more precise measurement (the uncertainty is smaller).

In fact, people who need to know average values often decide how small the uncertainty should be before they decide to use the information. For example, the U.S. National Center for Health Statistics does not report an average if the relative standard error exceeds 30%. NCHS also requires at least 30 observations for an estimate to be reported.

== Example ==

[[File:Uncertainty Example.png|thumb|center|500px]]
[[File:Red Drum (Sciaenops ocellatus).jpg|230px|thumb|right|Example of a redfish (also known as red drum, ''Sciaenops ocellatus'') used in the example.]]

For example, there are many redfish in the water of the Gulf of Mexico. To find out how much a 42 cm long redfish weighs on average, it is not possible to measure all of the redfish that are 42 cm long. Instead, it is possible to measure some of them. The fish that are actually measured are called a sample. The table shows weights for two samples of redfish, all 42 cm long. The average (mean) weight of the first sample is 0.741 kg. The average (mean) weight of the second sample is 0.735 kg, a little bit different from the first sample. Each of these averages is a little bit different from the average that would come from measuring every 42 cm long redfish (which is not possible anyway).

The uncertainty in the mean can be used to know how close the average of the samples is to the average that would come from measuring the whole group. The uncertainty in the mean is estimated as the standard deviation for the sample, divided by the square root of the number of samples minus one. The table shows that the uncertainties in the means for the two samples are very close to each other. Also, the relative uncertainty is the uncertainty in the mean divided by the mean, times 100%. The relative uncertainty in this example is 2.38% and 2.50% for the two samples.

Knowing the uncertainty in the mean, one can know how close the sample average is to the average that would come from measuring the whole group. The average for the whole group is between a) the average for the sample plus the uncertainty in the mean, and b) the average for the sample minus the uncertainty in the mean. In this example, the average weight for all of the 42 cm long redfish in the Gulf of Mexico is expected to be 0.723–0.759 kg based on the first sample, and 0.717–0.753 kg based on the second sample.
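The table of weights is only available as an image here, so the numbers above cannot be reproduced exactly; the sketch below uses hypothetical weights only to illustrate the same steps (mean, uncertainty in the mean as described above, and relative uncertainty).
<syntaxhighlight lang="python">
import math
import statistics

# Hypothetical weights (kg) of 42 cm redfish, standing in for the table in the image.
weights = [0.70, 0.72, 0.75, 0.73, 0.76, 0.71, 0.74, 0.78, 0.72, 0.75]

mean = statistics.mean(weights)
uncertainty = statistics.stdev(weights) / math.sqrt(len(weights) - 1)   # as described above
relative = 100 * uncertainty / mean                                     # in percent

print(round(mean, 3), round(uncertainty, 3), round(relative, 2))
print(f"expected range: {mean - uncertainty:.3f} to {mean + uncertainty:.3f} kg")
</syntaxhighlight>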

== References ==


  1. Everitt B.S. 2003. The Cambridge Dictionary of Statistics, CUP. ISBN 0-521-81099-X
  2. Gurland J. and Tripathi R.C. 1971. A simple approximation for unbiased estimation of the standard deviation. American Statistician 25(4): 30–32.
  3. Isserlis L. 1918. On the value of a mean as calculated from a sample. Journal of the Royal Statistical Society 81(1): 75–81. (Equation 1)
  4. Sokal and Rohlf 1981. Biometry: principles and practice of statistics in biological research, 2nd ed. p53 ISBN 0716712547
  5. James R. Bence 1995. Analysis of short time series: correcting for autocorrelation. Ecology 76(2): 628–639.