组合数学 (Fall 2011)/Counting and existence and Graham's number: Difference between pages

== Counting arguments ==
=== Shannon's circuit lower bound===
Proving lower bounds on circuit complexity is a fundamental problem in Computer Science.


A '''boolean function''' is a function in the form <math>f:\{0,1\}^n\rightarrow \{0,1\}</math>.


[http://en.wikipedia.org/wiki/Boolean_circuit Boolean circuit] is a mathematical model of computation.
Formally, a boolean circuit is a directed acyclic graph. Nodes with indegree zero are input nodes, labeled <math>x_1, x_2, \ldots , x_n</math>. A circuit has a unique node with outdegree zero, called the output node. Every other node is a gate. There are three types of gates: AND, OR (both with indegree two), and NOT (with indegree one).


Computations in Turing machines can be simulated by circuits, and any boolean function in '''P''' can be computed by a circuit with polynomially many gates. Thus, if we can find a function in '''NP''' that cannot be computed by any circuit with polynomially many gates, then '''NP'''<math>\neq</math>'''P'''.
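To make the circuit model concrete, here is a small illustrative sketch in Python (our own addition, not part of the original notes): a hand-wired DAG of AND/OR/NOT gates computing XOR of two input variables. The function name <code>xor_circuit</code> is chosen only for illustration.

<syntaxhighlight lang="python">
# A toy illustration of the circuit model: gates are AND/OR with fan-in two
# and NOT with fan-in one, wired as a DAG. This circuit computes XOR(x1, x2).

def xor_circuit(x1: bool, x2: bool) -> bool:
    g1 = x1 and (not x2)   # AND gate fed by x1 and (NOT x2)
    g2 = (not x1) and x2   # AND gate fed by (NOT x1) and x2
    g3 = g1 or g2          # OR gate; this is the output node
    return g3

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), int(xor_circuit(a, b)))
</syntaxhighlight>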


The following theorem due to Shannon says that functions with exponentially large circuit complexity do exist.


{{Theorem
|Theorem (Shannon 1949)|
:There is a boolean function <math>f:\{0,1\}^n\rightarrow \{0,1\}</math> with circuit complexity greater than <math>\frac{2^n}{3n}</math>.
}}
{{Proof|
We first count the number of boolean functions: there are <math>2^{2^n}</math> boolean functions <math>f:\{0,1\}^n\rightarrow \{0,1\}</math>.


Then we count the number of boolean circuits with a fixed number of gates.
Fix an integer <math>t</math>; we count the number of circuits with <math>t</math> gates. By [http://en.wikipedia.org/wiki/De_Morgan's_laws De Morgan's laws], we can assume that all NOTs are pushed back to the inputs. Each gate is of one of two types (AND or OR), and has two inputs. Each input to a gate is either a constant 0 or 1, an input variable <math>x_i</math>, an inverted input variable <math>\neg x_i</math>, or the output of another gate; thus, there are at most <math>2+2n+t-1</math> possible inputs for each gate. It follows that the number of circuits with <math>t</math> gates is at most <math>2^t(t+2n+1)^{2t}</math>.


If <math>t=2^n/3n</math>, then
:<math>\frac{2^t(t+2n+1)^{2t}}{2^{2^n}}=o(1)<1</math>, thus <math>2^t(t+2n+1)^{2t} < 2^{2^n}</math>.


Each boolean circuit computes one boolean function. Therefore, there must exist a boolean function <math>f</math> which cannot be computed by any circuit with <math>2^n/3n</math> gates.
}}


Note that the counting argument in the proof of Shannon's theorem shows not only that there exists a boolean function with exponentially large circuit complexity, but also that ''almost all'' boolean functions have exponentially large circuit complexity.
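As a quick numerical sanity check of the counting above (our own sketch, not part of the notes), the following Python snippet compares the logarithms of the two counts for a few small <math>n</math>, taking <math>t=\lfloor 2^n/3n\rfloor</math>; the circuit count stays far below <math>2^{2^n}</math>.

<syntaxhighlight lang="python">
# Compare log2(#circuits with t gates) against log2(#boolean functions) = 2^n.
import math

for n in range(4, 21, 4):
    t = 2 ** n // (3 * n)                                  # t = floor(2^n / 3n) gates
    log2_circuits = t + 2 * t * math.log2(t + 2 * n + 1)   # log2 of 2^t (t+2n+1)^(2t)
    log2_functions = 2 ** n                                 # log2 of 2^(2^n)
    print(f"n={n:2d}  t={t:6d}  log2(#circuits) ~ {log2_circuits:.1f}  "
          f"log2(#functions) = {log2_functions}")
</syntaxhighlight>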


=== Double counting ===
The double counting principle states the following obvious fact: if the elements of a set are counted in two different ways, the answers are the same.
;Handshaking lemma
The following lemma is a standard demonstration of double counting.
{{Theorem|Handshaking Lemma|
:At a party, the number of guests who shake hands an odd number of times is even.
}}


We model this scenario as an undirected graph <math>G(V,E)</math> with <math>|V|=n</math> standing for the <math>n</math> guests. There is an edge <math>uv\in E</math> if <math>u</math> and <math>v</math> shake hands. Let <math>d(v)</math> be the degree of vertex <math>v</math>, which represents the number of times that <math>v</math> shakes hands. The handshaking lemma states that in any undirected graph, the number of vertices whose degrees are odd is even. It is sufficient to show that the sum of the odd degrees is even.


The handshaking lemma is a direct consequence of the following lemma, which was proved by Euler in his 1736 paper on the [http://en.wikipedia.org/wiki/Seven_Bridges_of_K%C3%B6nigsberg Seven Bridges of Königsberg], the paper that began the study of graph theory.


{{Theorem|Lemma (Euler 1736)|
:<math>\sum_{v\in V}d(v)=2|E|</math>
}}
{{Proof|
We count the number of '''directed''' edges. A directed edge is an ordered pair <math>(u,v)</math> such that <math>\{u,v\}\in E</math>. There are two ways to count the directed edges.


First, we can enumerate by edges. Pick every edge <math>uv\in E</math> and apply two directions <math>(u,v)</math> and <math>(v,u)</math> to the edge. This gives us <math>2|E|</math> directed edges.


On the other hand, we can enumerate by vertices. Pick every vertex <math>v\in V</math> and for each of its <math>d(v)</math> neighbors, say <math>u</math>, generate a directed edge <math>(v,u)</math>. This gives us <math>\sum_{v\in V}d(v)</math> directed edges.


It is obvious that the two terms are equal, since we just count the same thing twice with different methods. The lemma follows.
}}


The handshaking lemma follows directly from the above lemma: the total degree <math>\sum_{v\in V}d(v)=2|E|</math> is even and the sum of the even degrees is even, so the sum of the odd degrees is also even, which means the number of vertices of odd degree is even.
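The following small Python sketch (ours, for illustration only) checks Euler's lemma and the handshaking lemma on a random graph.

<syntaxhighlight lang="python">
# Build a random simple graph and verify: sum of degrees = 2|E|,
# and the number of odd-degree vertices is even.
import random

n = 10
vertices = range(n)
edges = [(u, v) for u in vertices for v in vertices
         if u < v and random.random() < 0.4]   # a random simple graph on n vertices

degree = {v: 0 for v in vertices}
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

assert sum(degree.values()) == 2 * len(edges)   # Euler's lemma
odd = [v for v in vertices if degree[v] % 2 == 1]
assert len(odd) % 2 == 0                        # handshaking lemma
print(len(edges), "edges;", len(odd), "vertices of odd degree")
</syntaxhighlight>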


;Cayley's formula
We now present a theorem on the number of labeled trees on a fixed number of vertices. It is due to [http://en.wikipedia.org/wiki/Arthur_Cayley Cayley] in 1889. The theorem is often referred to by the name [http://en.wikipedia.org/wiki/Cayley's_formula Cayley's formula].


{{Theorem|Cayley's formula for trees|
: There are <math>n^{n-2}</math> different trees on <math>n</math> distinct vertices.
}}


The theorem has several proofs. Classical methods include the bijection which encodes a tree by a [http://en.wikipedia.org/wiki/Pr%C3%BCfer_sequence Prüfer sequence], and a proof via [http://en.wikipedia.org/wiki/Kirchhoff's_matrix_tree_theorem Kirchhoff's matrix tree theorem]. Here we present a proof by double counting, which is considered by [http://en.wikipedia.org/wiki/Proofs_from_THE_BOOK Proofs from THE BOOK] to be "the most beautiful of them all".
{{Proof|(Due to Pitman 1999)


Let <math>T_n</math> be the number of different trees defined on <math>n</math> distinct vertices.
 
A '''rooted tree''' is a tree with a special vertex. That is, one of the <math>n</math> vertices is marked as the "root" of the tree. A rooted tree defines a natural direction of all edges, such that an edge <math>uv</math> of the tree is directed from <math>u</math> to <math>v</math> if <math>u</math> is before <math>v</math> along the unique path from the root.
 
We count the number of different ''sequences'' of directed edges that can be added to an empty graph on <math>n</math> vertices to form a ''rooted'' tree. Such a sequence can be formed in two ways:
# Starting with an unrooted tree, choose one of its vertices as the root, and fix a total order of the edges to specify the order in which the edges are added.
# Starting from an empty graph, add the edges one by one in steps.
 
In the first method, we pick one of the <math>T_n</math> unrooted trees, choose one of the <math>n</math> vertices as the root, and pick one of the <math>(n-1)!</math> total orders of the <math>n-1</math> edges. This gives us <math>T_nn(n-1)!=T_nn!</math> ways.
 
In the second method, we consider the number of choices in one step, and multiply the numbers of choices in all steps. This is done as follows.
 
Given a sequence of ''adding'' <math>n-1</math> edges to an empty graph to form a rooted tree, we reverse this sequence and get a sequence of ''removing'' edges one by one from the final rooted tree until no edge left. We observe that:
* At first, we remove an edge from the rooted tree. Suppose that the root of the tree is <math>r</math>, and the removed directed edge is <math>(u,v)</math>.  After removing <math>(u,v)</math>, the original rooted tree is disconnected into two rooted trees, one rooted at <math>r</math> and the other rooted at <math>v</math>.
* After removing <math>k-1</math> edges, there are <math>k</math> rooted trees. In the <math>k</math>th step, a directed edge <math>(u,v)</math> in the current forest is removed and the tree containing <math>(u,v)</math> is disconnected into two trees, one rooted at the old root of that tree, and the other rooted at <math>v</math>.
 
We now again reverse the above procedure, and consider the sequence of adding directed edges to an empty graph to form a rooted tree.
* At first, we have <math>n</math> rooted trees, each with 0 edges (<math>n</math> isolated vertices).
* After adding <math>n-k</math> edges, there are <math>k</math> rooted trees. Denote the directed edge added next by <math>(u,v)</math>. As observed above, <math>u</math> can be any one of the <math>n</math> vertices, but <math>v</math> must be the root of one of the <math>k</math> trees other than the tree which contains <math>u</math>. There are <math>n(k-1)</math> choices of such <math>(u,v)</math>.
Multiplying the numbers of choices in all steps, the number of sequences of adding directed edges to an empty graph to form a rooted tree is given by
:<math>\prod_{k=2}^nn(k-1)=n^{n-2}n!</math>.
 
By the principle of double counting, counting the same thing by different methods yields the same result:
:<math>T_nn!=n^{n-2}n!</math>,
which gives that <math>T_n=n^{n-2}</math>.
}}
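Cayley's formula can be checked by brute force for small <math>n</math>. The sketch below (our own addition, not part of the notes) enumerates all <math>(n-1)</math>-edge subsets of the complete graph on <math>n</math> vertices, counts those that form a tree, and compares the count with <math>n^{n-2}</math>.

<syntaxhighlight lang="python">
# Brute-force check of Cayley's formula for n = 2, ..., 6.
from itertools import combinations

def is_tree(n, edge_set):
    """An n-vertex graph with n-1 edges is a tree iff it is connected."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edge_set:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

for n in range(2, 7):
    all_edges = list(combinations(range(n), 2))
    trees = sum(is_tree(n, s) for s in combinations(all_edges, n - 1))
    print(n, trees, n ** (n - 2))   # the two counts agree: T_n = n^(n-2)
</syntaxhighlight>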
 
== Cayley's formula ==
 
== The Pigeonhole Principle ==
The '''pigeonhole principle''' states the following "obvious" fact:
:''<math>n+1</math> pigeons cannot sit in <math>n</math> holes so that every pigeon is alone in its hole.''
More generally, the pigeonhole principle can be stated as follows.
{{Theorem|Generalized pigeonhole principle|
:If a set consisting of more than <math>mn</math> objects is partitioned into <math>n</math> classes, then some class receives more than <math>m</math> objects.
}}
 
This is one of the oldest '''non-constructive''' principles: it states only the ''existence'' of a pigeonhole with more than <math>m</math> pigeons and says nothing about how to ''find'' such a pigeonhole.
 
=== Monotonic subsequences ===
Let <math>(a_1,a_2,\ldots,a_n)</math> be a sequence of <math>n</math> distinct real numbers. A '''subsequence''' is a sequence of distinct terms of <math>(a_1,a_2,\ldots,a_n)</math> appearing in the same order in which they appear in <math>(a_1,a_2,\ldots,a_n)</math>. Formally, a subsequence of <math>(a_1,a_2,\ldots,a_n)</math> is a sequence <math>(a_{i_1},a_{i_2},\ldots,a_{i_k})</math> with <math>i_1<i_2<\cdots<i_k</math>.
 
A sequence <math>(a_1,a_2,\ldots,a_n)</math> is '''increasing''' if <math>a_1<a_2<\cdots<a_n</math>, and '''decreasing''' if <math>a_1>a_2>\cdots>a_n</math>.
 
We are interested in the ''longest'' increasing and decreasing subsequences of a sequence <math>(a_1,a_2,\ldots,a_n)</math>. It is intuitive that the length of the longest increasing subsequence and the length of the longest decreasing subsequence cannot both be small. A famous result of Erdős and Szekeres formally justifies this intuition. This is one of the first results in extremal combinatorics, published in the influential 1935 paper of Erdős and Szekeres.
 
{{Theorem|Theorem (Erdős-Szekeres 1935)|
:A sequence of more than <math>mn</math> different real numbers must contain either an increasing subsequence of length <math>m+1</math>, or a decreasing subsequence of length <math>n+1</math>.
}}
{{Proof|(due to Seidenberg 1959)
Let <math>(a_1,a_2,\ldots,a_{N})</math> be the original sequence of <math>N>mn</math> distinct real numbers. Associate with each <math>a_i</math> a pair <math>(x_i,y_i)</math>, defined as:
*<math>x_i</math>: the length of the longest ''increasing'' subsequence ''ending'' at <math>a_i</math>;
*<math>y_i</math>: the length of the longest ''decreasing'' subsequence ''starting'' at <math>a_i</math>.
A key observation is that <math>(x_i,y_i)\neq (x_j,y_j)</math> whenever <math>i\neq j</math>. To see this, suppose that <math>i<j</math>:
: '''Case 1:''' If <math>a_i<a_j</math>, then the longest increasing subsequence ending at <math>a_i</math> can be extended by appending <math>a_j</math>, so <math>x_i<x_j</math>.
: '''Case 2:'''  If <math>a_i>a_j</math>, then the longest decreasing subsequence starting at <math>a_j</math> can be preceded by <math>a_i</math>, so <math>y_i>y_j</math>.
Now we put the <math>N</math> "pigeons" <math>a_1,a_2,\ldots,a_N</math> into the "pigeonholes" <math>\{1,2,\ldots,N\}\times\{1,2,\ldots,N\}</math>, such that <math>a_i</math> is put into hole <math>(x_i,y_i)</math>, with at most one pigeon in each hole (since different <math>a_i</math> have different pairs <math>(x_i,y_i)</math>).
 
The number of pigeons is <math>N>mn</math>, while the region <math>\{1,2,\ldots,m\}\times\{1,2,\ldots,n\}</math> contains only <math>mn</math> holes, each holding at most one pigeon. Therefore some pigeon must lie outside this region, which implies that there exists an <math>a_i</math> with either <math>x_i>m</math> or <math>y_i>n</math>. By the definition of <math>(x_i,y_i)</math>, there must be either an increasing subsequence of length <math>m+1</math> or a decreasing subsequence of length <math>n+1</math>.
}}
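The bookkeeping in Seidenberg's proof is easy to simulate. The Python sketch below (our own addition, for illustration) computes the pairs <math>(x_i,y_i)</math> for a random sequence of length <math>mn+1</math> and confirms that the pairs are distinct and that a long monotone subsequence exists.

<syntaxhighlight lang="python">
# Erdos-Szekeres via the (x_i, y_i) pairs, checked on a random sequence.
import random

m, n = 4, 5
N = m * n + 1
a = random.sample(range(1000), N)   # N > mn distinct numbers

x = [1] * N   # x[i]: length of the longest increasing subsequence ending at a[i]
y = [1] * N   # y[i]: length of the longest decreasing subsequence starting at a[i]
for i in range(N):
    for j in range(i):
        if a[j] < a[i]:
            x[i] = max(x[i], x[j] + 1)
for i in reversed(range(N)):
    for j in range(i + 1, N):
        if a[j] < a[i]:
            y[i] = max(y[i], y[j] + 1)

assert len({(x[i], y[i]) for i in range(N)}) == N    # key observation: all pairs distinct
assert max(x) >= m + 1 or max(y) >= n + 1            # the theorem's conclusion
print("longest increasing:", max(x), " longest decreasing:", max(y))
</syntaxhighlight>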
 
=== Dirichlet's approximation ===
Let <math>x</math> be an irrational number. We now want to approximate <math>x</math> by a rational number (a fraction).
 
Since every real interval <math>[a,b]</math> with <math>a<b</math> contains infinitely many rational numbers, there must exist rational numbers arbitrarily close to <math>x</math>. The trick is to let the denominator of the fraction be sufficiently large.
 
Suppose, however, that we restrict the rationals we may select to those with denominators bounded by <math>n</math>. How closely can we approximate <math>x</math> now?
 
The following important theorem is due to Dirichlet and his ''Schubfachprinzip'' ("drawer principle"). The theorem is fundamental in number theory and real analysis, but the proof is combinatorial.
 
{{Theorem|Theorem (Dirichlet 1879)|
:Let <math>x</math> be an irrational number. For any natural number <math>n</math>, there is a rational number <math>\frac{p}{q}</math> such that <math>1\le q\le n</math> and
::<math>\left|x-\frac{p}{q}\right|<\frac{1}{nq}</math>.
}}
{{Proof|
Let <math>\{x\}=x-\lfloor x\rfloor</math> denote the '''fractional part''' of the real number <math>x</math>. It is obvious that <math>\{x\}\in[0,1)</math> for any real number <math>x</math>.
 
Consider the <math>n+1</math> numbers <math>\{kx\}</math>, <math>k=1,2,\ldots,n+1</math>. These <math>n+1</math> numbers (pigeons) belong to the following <math>n</math> intervals (pigeonholes):
:<math>\left(0,\frac{1}{n}\right),\left(\frac{1}{n},\frac{2}{n}\right),\ldots,\left(\frac{n-1}{n},1\right)</math>.
Since <math>x</math> is irrational, <math>\{kx\}</math> cannot coincide with any endpoint of the above intervals.
 
By the pigeonhole principle, there exist <math>1\le a<b\le n+1</math>, such that <math>\{ax\},\{bx\}</math> are in the same interval, thus
:<math>|\{bx\}-\{ax\}|<\frac{1}{n}</math>.
Therefore,
:<math>|(b-a)x-\left(\lfloor bx\rfloor-\lfloor ax\rfloor\right)|<\frac{1}{n}</math>.
Let <math>q=b-a</math> and <math>p=\lfloor bx\rfloor-\lfloor ax\rfloor</math>. We have <math>|qx-p|<\frac{1}{n}</math> and <math>1\le q\le n</math>. Dividing both sides by <math>q</math>, the theorem is proved.
}}
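The proof is constructive enough to run. The sketch below (our own addition, not from the notes) applies the pigeonhole argument to <math>x=\sqrt{2}</math> with <math>n=100</math>: it buckets the fractional parts <math>\{kx\}</math> into the <math>n</math> intervals and outputs a fraction <math>p/q</math> with <math>|x-p/q|<\frac{1}{nq}</math>.

<syntaxhighlight lang="python">
# Dirichlet approximation by the pigeonhole construction from the proof.
import math

x = math.sqrt(2)   # an irrational number to approximate
n = 100

buckets = {}
for k in range(1, n + 2):
    frac = k * x - math.floor(k * x)     # fractional part {k x}
    idx = int(frac * n)                  # which of the n intervals it falls into
    if idx in buckets:                   # two multiples share an interval: pigeonhole
        a, b = buckets[idx], k
        q = b - a
        p = math.floor(b * x) - math.floor(a * x)
        assert 1 <= q <= n and abs(x - p / q) < 1 / (n * q)
        print(f"sqrt(2) ~ {p}/{q}, error {abs(x - p/q):.3e} < 1/(nq) = {1/(n*q):.3e}")
        break
    buckets[idx] = k
</syntaxhighlight>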
 
=== Pigeonhole vs. resolution proofs ===
 
== Averaging Principle ==
 
== References ==
:('''Disclaimer:''' The following copyrighted materials are meant for educational uses only.)
 
* Aigner and Ziegler. ''Proofs from THE BOOK, 4th Edition.'' Springer-Verlag. [[media:PFTB_chap25.pdf| Chapter 25]] and [[media:PFTB_chap30.pdf| Chapter 30]].
* Alon and Spencer. ''The Probabilistic Method, 3rd Edition.'' Wiley, 2008. [[media:TPM_Chap1.pdf|Chapter 1]], [[media:TPM_Chap2.pdf|Chapter 2]], and [[media:TPM_Chap3.pdf|Chapter 3]].

== Graham's number ==

'''Graham's number''' is a very, very big [[natural number]] that was defined by a man named Ronald Graham. Graham was solving a problem in an area of mathematics called [[Ramsey theory]]. He proved that the answer to his problem was smaller than Graham's number.

Graham's number is one of the biggest numbers ever used in a [[mathematical proof]]. Even if every digit in Graham's number were written in the tiniest writing possible, it would still be too big to fit in the [[observable universe]].

=== Context ===

Ramsey theory is an area of mathematics that asks questions like the following:

{{quote|<p>Suppose we draw some number of points, and connect every pair of points by a line. Some lines are blue and some are red. Can we always find 3 points for which the 3 lines connecting them are all the same color?</p>}}

It turns out that for this simple problem, the answer is "yes" when we have 6 or more points, no matter how the lines are colored. But when we have 5 points or fewer, we can color the lines so that the answer is "no".

Graham's number comes from a variation on this question.

{{quote|<p>Once again, say we have some points, but now they are the corners of an ''n''-dimensional [[hypercube]]. They are still all connected by blue and red lines. For any 4 points, there are 6 lines connecting them. Can we find 4 points that all lie on one [[Plane (mathematics)|plane]], and the 6 lines connecting them are all the same color?</p>}}

By asking that the 4 points lie on a plane, we have made the problem much harder. We would like to know: for what values of n is the answer "no" (for some way of coloring the lines), and for what values of n is it "yes" (for all ways of coloring the lines)? But this problem has not been completely solved yet.

In 1971, Ronald Graham and B. L. Rothschild found a partial answer to this problem. They showed that for n=6, the answer is "no". But when n is very large, as large as Graham's number or larger, the answer is "yes".

One of the reasons this partial answer is important is that it means that the answer is eventually "yes" for at least some large n. Before 1971, we didn't know even that much.

=== Definition ===

Graham's number is not only too big to write down all of its digits, it is too big even to write in [[scientific notation]]. In order to be able to write it down, we have to use [[Knuth's up-arrow notation]].

We will write down a [[sequence]] of numbers that we will call '''g1''', '''g2''', '''g3''', and so on. Each one will be used in an equation to find the next. '''g64''' is Graham's number.

First, here are some examples of up-arrows (a short computational sketch follows the list):

* <math>3\uparrow3</math> is 3×3×3, which equals 27. An arrow between two numbers just means the first number multiplied by itself the second number of times.
* You can think of <math>3 \uparrow \uparrow 3</math> as <math>3 \uparrow (3 \uparrow 3)</math>, because two arrows between numbers A and B just mean A written down B times with a single arrow between each pair. Because we know what single arrows are, <math>3\uparrow(3\uparrow3)</math> is 3 multiplied by itself <math>3\uparrow3</math> times, and we know <math>3\uparrow3</math> is twenty-seven. So <math>3\uparrow\uparrow3</math> is 3×3×3×⋯×3×3 with 27 threes in total. That equals 7625597484987.
* <math>3 \uparrow \uparrow \uparrow 3</math> is <math>3 \uparrow \uparrow (3 \uparrow \uparrow 3)</math>, and we know <math>3\uparrow\uparrow3</math> is 7625597484987. So <math>3\uparrow\uparrow(3\uparrow\uparrow3)</math> is <math>3\uparrow \uparrow 7625597484987</math>. That can also be written as <math>3\uparrow(3\uparrow(3\uparrow(\cdots(3\uparrow 3)\cdots)))</math> with a total of 7625597484987 threes. This number is so huge that its digits, even written very small, could fill up the observable universe and beyond.
** Although this number may already be beyond comprehension, it is barely the start of this giant number.
* The next step like this is <math>3 \uparrow \uparrow \uparrow \uparrow 3</math>, or <math>3 \uparrow \uparrow \uparrow (3 \uparrow \uparrow \uparrow 3)</math>. This is the number we will call '''g1'''.
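Here is a tiny recursive sketch of up-arrow notation (our own illustration; it is not part of the article, and only the very smallest values can actually be evaluated).

<syntaxhighlight lang="python">
def up(a, n, b):
    """a ↑^n b with n arrows (Knuth's up-arrow notation)."""
    if n == 1:      # one arrow is ordinary exponentiation
        return a ** b
    if b == 0:      # standard base case of the hyperoperation recursion
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))   # 3↑3  = 27
print(up(3, 2, 3))   # 3↑↑3 = 7625597484987
# up(3, 3, 3) = 3↑↑↑3 = 3↑↑7625597484987 is already far too large to evaluate.
</syntaxhighlight>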

After that, '''g2''' is equal to <math>3\uparrow \uparrow \uparrow \uparrow \ldots \uparrow \uparrow \uparrow \uparrow 3</math>; the number of arrows in this number is '''g1'''.

'''g3''' is equal to <math>3\uparrow \uparrow \uparrow \uparrow \uparrow \ldots \uparrow \uparrow \uparrow \uparrow \uparrow 3</math>, where the number of arrows is '''g2'''.

We keep going in this way. We stop when we define '''g64''' to be <math>3\uparrow \uparrow \uparrow \uparrow \uparrow \ldots \uparrow \uparrow \uparrow \uparrow \uparrow 3</math>, where the number of arrows is '''g63'''.

This is Graham's number.

=== Related pages ===
* [[Knuth's up-arrow notation]]

[[Category:Mathematics]]
[[Category:Hyperoperations]]
[[Category:Integers]]