# Upper bounds, lower bounds

Bounds are just inequalities (in a general sense, e.g. asymptotic inequalities). An inequality

${\displaystyle A\leq B}$

is read "${\displaystyle A}$ is a lower bound of ${\displaystyle B}$" or equivalently "${\displaystyle B}$ is an upper bound of ${\displaystyle A}$".

In Computer Science, when talking about upper or lower bounds, people really mean the upper or lower bounds of complexities.

In this lecture, we are focused on the time complexity, although there are other complexity measures in various computational models (e.g. space complexity, communication complexity, query complexity).

The complexity is represented as a function of ${\displaystyle n}$, where ${\displaystyle n}$ is the length of the input.

There are two kinds of complexities:

Complexity of algorithms
For an algorithm ${\displaystyle A}$, the (worst-case) time complexity of ${\displaystyle A}$ is the maximum running time over all inputs ${\displaystyle x}$ of length ${\displaystyle n}$.
Complexity of problems
For a computational problem, its time complexity is the time complexity of the optimal algorithm which solves the problem.

The complexity of an algorithm tells how good the algorithm is, while the complexity of a problem tells how hard the problem is. The former is what we mostly care about in practice; the latter is more about the fundamental truths of computation.

In Theoretical Computer Science, when talking about upper or lower bounds, people usually refer to the bounds of the complexities of problems, rather than those of algorithms. Therefore, an upper bound means an algorithm; and a lower bound means bad news such as impossibility results.

Today's lecture is devoted to lower bounds, i.e. the necessary prices which have to be paid by any algorithm that solves the given problem. Speaking of necessary prices, we have to be specific about the model: the mathematical rules which stipulate what a problem is and what an algorithm is.

## Decision problems

Computational problems are functions mapping inputs to outputs. Sometimes, an input is called an instance of the problem and an output is called a solution to that instance. The theory of complexity deals almost exclusively with decision problems, the computational problems with yes-or-no answers.

For a decision problem ${\displaystyle f}$, its positive instances are the inputs with "yes" answers. A decision problem can be equivalently represented as the set of all positive instances, denoted ${\displaystyle L}$. We call ${\displaystyle L}$ a formal language (not entirely the same thing as the C language). The task of computing ${\displaystyle f(x)}$ is equivalent to that of determining whether ${\displaystyle x\in L}$. Therefore the two formulations of "decision problems" and "formal languages" can be used interchangeably.
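The equivalence of the two formulations can be sketched with a toy decision problem; the palindrome property and the names `f` and `in_L` here are illustrative assumptions, not from the lecture.

```python
# A toy decision problem f: is the binary string x a palindrome?

def f(x: str) -> int:
    """Decision problem as a function: returns 1 ("yes") or 0 ("no")."""
    return 1 if x == x[::-1] else 0

def in_L(x: str) -> bool:
    """The same problem as language membership: x is in L iff f(x) = 1."""
    return f(x) == 1

# The two views agree on every input.
for x in ["0110", "010", "01"]:
    assert (f(x) == 1) == in_L(x)
```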

## Turing Machine

In order to study complexities, which deal with the limits of computations, we have to be clear about what computation is, that is, modeling computations.

This work was done by Alan Turing in 1937. The model is now referred to by the name Turing machine.

# Complexity Classes

Problems are organized into classes according to their complexity in respective computational models. There are nearly 500 classes collected by the Complexity Zoo.

## P, NP

A deterministic algorithm ${\displaystyle A}$ is polynomial time if its running time is within a polynomial of ${\displaystyle n}$ on any input of length ${\displaystyle n}$.

 Definition (P) The class P is the class of decision problems that can be computed by polynomial time algorithms.

We now introduce the infamous NP class.

 Definition (NP) The class NP consists of all decision problems ${\displaystyle f}$ that have a polynomial time algorithm ${\displaystyle A}$ such that for any input ${\displaystyle x}$, ${\displaystyle f(x)=1}$ if and only if ${\displaystyle \exists y}$, ${\displaystyle A(x,y)=1}$, where the size of ${\displaystyle y}$ is within a polynomial of the size of ${\displaystyle x}$.

Informally, NP is the class of decision problems whose "yes" instances can be verified in polynomial time. The string ${\displaystyle y}$ in the definition is called a certificate or a witness. Provided a polynomial-size certificate, the algorithm ${\displaystyle A}$ verifies (rather than computes) ${\displaystyle f(x)}$ for any positive instance ${\displaystyle x}$ in polynomial time.

Example
Both the problems of deciding whether an input array is sorted and deciding whether an input graph has Hamiltonian cycles are in NP. For the former one, the input array itself is a certificate. And for the latter one, a Hamiltonian cycle in the graph is a certificate (given a cycle, it is easy to verify whether it is Hamiltonian).
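The certificate verification for the Hamiltonian cycle example can be sketched as follows; the representation (adjacency sets) and the function name are illustrative assumptions, not from the lecture. The check runs in time polynomial in the size of the graph.

```python
# Polynomial-time verifier for a Hamiltonian cycle certificate.
# graph: vertex -> set of neighbors; cycle: sequence of vertices claimed
# to be a Hamiltonian cycle (the closing edge back to the start is implied).

def verify_hamiltonian(graph: dict, cycle: list) -> bool:
    n = len(graph)
    # The certificate must visit every vertex exactly once...
    if len(cycle) != n or set(cycle) != set(graph):
        return False
    # ...and consecutive vertices (wrapping around) must be adjacent.
    return all(cycle[(i + 1) % n] in graph[cycle[i]] for i in range(n))

# A 4-cycle: 0-1-2-3-0.
G = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
assert verify_hamiltonian(G, [0, 1, 2, 3])      # valid certificate
assert not verify_hamiltonian(G, [0, 2, 1, 3])  # 0 and 2 are not adjacent
```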

This definition is one of the equivalent definitions of the NP class. Another definition (also a classic one) is that NP is the class of decision problems that can be computed by polynomial time nondeterministic algorithms.

Common misuses of the terminology:

• "This algorithm is NP." --- NP is a class of decision problems, not algorithms.
• "This problem is an NP problem, so it must be very hard." --- By definition, a problem is in NP if its positive instances are poly-time verifiable, which implies nothing about the hardness. You probably mean the problem is NP-hard.
• "NP problems are the hardest problems." --- There are infinitely many harder problems outside NP. Actually, according to a widely believed conjecture, there are infinitely many classes of problems which are harder than NP (see [1]).

Note that unlike P, the definition of NP is asymmetric. It only requires the positive instances (the ${\displaystyle x}$ such that ${\displaystyle f(x)=1}$) to be poly-time verifiable, but does not say anything about the negative instances. The class for that case is co-NP.

 Definition (co-NP) The class co-NP consists of all decision problems ${\displaystyle f}$ that have a polynomial time algorithm ${\displaystyle A}$ such that for any input ${\displaystyle x}$, ${\displaystyle f(x)=0}$ if and only if ${\displaystyle \exists y}$, ${\displaystyle A(x,y)=0}$, where the size of ${\displaystyle y}$ is within a polynomial of the size of ${\displaystyle x}$.

Clearly, P ${\displaystyle \subseteq }$ NP ${\displaystyle \cap }$ co-NP. Does P = NP ${\displaystyle \cap }$ co-NP? It is an important open problem in complexity theory which is closely related to our understanding of the relation between NP and P.

## ZPP, RP, BPP

Now we proceed to define complexity classes of the problems that are efficiently computable by randomized algorithms, i.e. the randomized analogs of P.

In the last class we learned that there are two types of randomized algorithms: Monte Carlo algorithms (randomized algorithms with errors) and Las Vegas algorithms (randomized algorithms with random running time but with no errors). For Monte Carlo algorithms, there are two types of errors: one-sided errors where the algorithm errs only for positive instances, and two-sided errors where there are both false positives and false negatives. Therefore, there are three cases to deal with:

1. Las Vegas algorithms, the corresponding class is ZPP (for Zero-error Probabilistic Polynomial time).
2. Monte Carlo algorithms with one-sided error, the corresponding class is RP (for Randomized Polynomial time).
3. Monte Carlo algorithms with two-sided error, the corresponding class is BPP (for Bounded-error Probabilistic Polynomial time).

We first introduce the class ZPP of the problems which can be solved by polynomial time Las Vegas algorithms. For Las Vegas algorithms, the running time is a random variable, therefore we actually refer to the Las Vegas algorithms whose expected running time is within a polynomial of ${\displaystyle n}$ for any input of size ${\displaystyle n}$, where the expectation is taken over the internal randomness (coin flippings) of the algorithm.

 Definition (ZPP) The class ZPP consists of all decision problems ${\displaystyle f}$ that have a randomized algorithm ${\displaystyle A}$ running in expected polynomial time for any input such that for any input ${\displaystyle x}$, ${\displaystyle A(x)=f(x)}$.
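A minimal Las Vegas sketch fitting this definition (the problem and the name `find_one` are illustrative assumptions, not from the lecture): given an array in which at least half the entries are 1, find the position of some 1. The answer is always correct; only the number of coin flips is random, and the expected number of trials is at most 2, so the expected running time is polynomial.

```python
import random

def find_one(a: list) -> int:
    """Las Vegas search: always correct, random running time."""
    while True:
        i = random.randrange(len(a))  # flip coins to pick a position
        if a[i] == 1:
            return i                  # never returns a wrong answer

a = [0, 1, 0, 1, 1, 0, 1, 0]
assert a[find_one(a)] == 1
```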

Next we define the class RP of the problems which can be solved by polynomial time Monte Carlo algorithms with one-sided error.

 Definition (RP) The class RP consists of all decision problems ${\displaystyle f}$ that have a randomized algorithm ${\displaystyle A}$ running in worst-case polynomial time such that for any input ${\displaystyle x}$, if ${\displaystyle f(x)=1}$, then ${\displaystyle \Pr[A(x)=1]\geq 1-1/2}$; if ${\displaystyle f(x)=0}$, then ${\displaystyle \Pr[A(x)=0]=1}$.
Remark
The choice of the error probability is arbitrary. In fact, replacing the 1/2 with any constant ${\displaystyle 0<p<1}$ will not change the definition of RP.
Example
Define the decision version of the minimum cut problem as follows. For a graph ${\displaystyle G}$, ${\displaystyle f(G)=1}$ if and only if there exists any cut of size smaller than ${\displaystyle k}$, where ${\displaystyle k}$ is an arbitrary parameter. The problem ${\displaystyle f}$ can be solved probabilistically by Karger's min-cut algorithm in polynomial time. The error is one-sided, because if there does not exist any cut of size smaller than ${\displaystyle k}$, then obviously the algorithm cannot find any. Therefore ${\displaystyle f\in }$RP.
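The one-sided error mentioned in the remark above can be driven down by repetition. The sketch below (with a hypothetical stand-in subroutine `noisy_decide` playing the role of an arbitrary RP algorithm) shows the standard amplification trick: run the algorithm ${\displaystyle k}$ times and answer "yes" iff any run answers "yes". Since the algorithm never errs on "no" instances, a single "yes" is conclusive, and on a "yes" instance the failure probability drops from 1/2 to ${\displaystyle (1/2)^{k}}$.

```python
import random

def noisy_decide(x: int) -> int:
    # Toy RP-style subroutine: on a yes-instance (x == 1) it answers 1
    # with probability 1/2; on a no-instance it always answers 0.
    return random.randint(0, 1) if x == 1 else 0

def amplified(x: int, k: int = 40) -> int:
    # One-sided amplification: "yes" iff some run says "yes".
    return 1 if any(noisy_decide(x) for _ in range(k)) else 0

assert amplified(0) == 0  # no-instances never produce a false "yes"
```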

Like NP, the class RP is also asymmetrically defined, which leads us to define the co-RP class.

 Definition (co-RP) The class co-RP consists of all decision problems ${\displaystyle f}$ that have a randomized algorithm ${\displaystyle A}$ running in worst-case polynomial time such that for any input ${\displaystyle x}$, if ${\displaystyle f(x)=1}$, then ${\displaystyle \Pr[A(x)=1]=1}$; if ${\displaystyle f(x)=0}$, then ${\displaystyle \Pr[A(x)=0]\geq 1-1/2}$.

We then define the class BPP of the problems which can be solved by polynomial time Monte Carlo algorithms with two-sided error.

 Definition (BPP) The class BPP consists of all decision problems ${\displaystyle f}$ that have a randomized algorithm ${\displaystyle A}$ running in worst-case polynomial time such that for any input ${\displaystyle x}$, if ${\displaystyle f(x)=1}$, then ${\displaystyle \Pr[A(x)=1]\geq 1-1/4}$; if ${\displaystyle f(x)=0}$, then ${\displaystyle \Pr[A(x)=0]\geq 1-1/4}$.
Remark
Replacing the error probability ${\displaystyle {\frac {1}{4}}}$ with ${\displaystyle {\frac {1}{3}}}$, or with any constant ${\displaystyle 0<p<{\frac {1}{2}}}$, will not change the definition of the BPP class.
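For two-sided error, the amplification is by majority vote: repeat ${\displaystyle k}$ independent runs and output the majority answer; by a Chernoff bound the error probability decays exponentially in ${\displaystyle k}$, which is why the particular constant in the definition is not essential. The subroutine `noisy_decide` below is a hypothetical stand-in for an arbitrary BPP algorithm with error at most 1/4.

```python
import random
from collections import Counter

def noisy_decide(x: int) -> int:
    # Toy BPP-style subroutine: answers f(x) = x correctly w.p. 3/4.
    return x if random.random() < 0.75 else 1 - x

def majority_vote(x: int, k: int = 201) -> int:
    # Two-sided amplification: output the majority of k independent runs.
    votes = Counter(noisy_decide(x) for _ in range(k))
    return votes.most_common(1)[0][0]

# With k = 201 the majority is correct except with negligible probability.
assert majority_vote(0) == 0
assert majority_vote(1) == 1
```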

What is known (you can even prove by yourself):

• RP, BPP, ZPP all contain P
• RP ${\displaystyle \subseteq }$ NP
• ZPP = RP${\displaystyle \cap }$co-RP;
• RP ${\displaystyle \subseteq }$ BPP;
• co-RP ${\displaystyle \subseteq }$ BPP.

Open problem:

• BPP vs P (the second most important open problem in complexity theory, next to NP vs P).