Mind blown: the fast growing hierarchy for laymen — aka enormous numbers

WARNING — I’m not a mathematician so I probably made a bunch of errors. Please leave corrections in the comments.

The Universe

A few years ago I stumbled upon an article by famed computer scientist and quantum physics expert Scott Aaronson titled “Who can name the bigger number?” The article inspired a contest between two philosophers at MIT to see who could name the biggest number. I previously wrote about this contest here.

Scott’s article spawned my interest in big numbers. Can you imagine something so big that it transcends our Universe? Unlike physics, which is based on our physical world, big numbers are purely abstract. We can’t ever write them down because there isn’t enough matter in the Universe to store each digit. Some folks don’t even believe that they exist. Still, big numbers are important, and mathematicians use them in proofs and for ranking the computational strength of various mathematical systems.

Acquiring a deep understanding of this area of mathematics requires expertise in number theory and set theory, which I admittedly don't have. Still, I'm interested in these subjects and I've devoted time to this article in the hope of spurring your interest too.


If you were to participate in Scott’s contest to name the biggest number, what would you write? Most people would probably write googol. The search engine was famously named after this number. It is a 1 followed by 100 zeroes. When written out it looks like this:

A googol is a 1 followed by 100 zeroes

That would be a pretty good answer for most people. Against a mathematician, though, you would get trounced.

In the googol example, it was hard to write that number because it has so many zeros in it. Fortunately we’ve come up with various systems for writing big numbers. For example, using exponents we can write googol like this:

10^{100} = Googol

For most of this article we will use various forms of ordinal notation to write larger and larger numbers. I will do my best to explain them as I understand them. Part of the magic in understanding big numbers is following along as we ascend to bigger and bigger numbers.

For googol it is easy to see how big it is because you can see all of its digits. Writing it in exponent form makes it look even smaller. Googol is not small.

How big is googol?

Imagine a fish tank the size of the known universe

The best way to visualize the size of a googol is to pretend that we could fill the entire observable universe with grains of sand. Picture a fish tank as big as the observable universe, a sphere with a radius of roughly 46 billion light years. Now count each grain of sand in that universe-sized fish tank. Your count is still short of a googol; you would need billions upon billions of times more sand to get there. The magnitude of this number is incredible.
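As a rough sanity check, we can redo this estimate in Python. The figures below (universe radius, grain volume) are order-of-magnitude assumptions of mine, not precise values:

```python
import math

# Assumed, order-of-magnitude figures -- not precise values.
UNIVERSE_RADIUS_M = 46e9 * 9.46e15   # ~46 billion light years, in meters
SAND_GRAIN_VOLUME_M3 = 1e-9          # ~1 cubic millimeter per grain

universe_volume = (4 / 3) * math.pi * UNIVERSE_RADIUS_M ** 3
grains = universe_volume / SAND_GRAIN_VOLUME_M3

print(f"grains of sand ~ 10^{math.log10(grains):.0f}")  # on the order of 10^90
print(grains < 10**100)  # True: still short of a googol
```

Even with the universe packed solid with sand, the count lands around 10^90, which is still roughly ten billion times short of a googol.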

Googol seems like a big number but compared to the numbers that come later in this article, it is infinitely small. Given a scale from zero to infinity, googol is very close to zero compared to the numbers that we are about to talk about. Because we can’t visualize the size of these numbers we will use a scale called the Fast Growing Hierarchy (FGH) to rank them.

The FGH is based on fast growing functions, with the slowest function at the bottom and faster functions as you go up the hierarchy. Much of this article will describe these functions and the speed at which they grow. I'll show how we can write them more efficiently and do expansions for some of the more popular ones. Expansions help you get a feel for the power of these functions by inputting small numbers in order to see the massive outputs. In many cases I won't be able to do the full expansions because the numbers get too big or, worse, there are infinitely many steps to expand them.

This stuff blows my mind and I hope it will blow yours too.

The Fast Growing Hierarchy is our measuring stick

We start this crazy trip by introducing functions. Functions are easy to understand: they take input, follow some rules, and produce output. When you think of a function that grows numbers quickly you probably think of exponentiation. We use it in our daily lives and the results can be easily visualized. The Fast Growing Hierarchy is made up of a series of increasingly faster growing functions. Everything starts with successorship, moves on to addition, then multiplication, etc. Each higher bar on the ladder of the FGH represents a faster growing function.

Baby steps

Level 1: Successorship

Successorship is the most basic function. It asks, what is the next bigger item in the list? It looks like this when working with the natural numbers:

f_0(n) = n+1


Don’t underestimate the power of this function. It can be called recursively to generate more powerful functions like addition, multiplication and even exponentiation. Recursion is the key to how functions on the FGH work so effectively to produce monster numbers. Recursion works by allowing us to call a function which can then call itself. It can keep doing this until it is done.

An easy example of recursion is seen with the successor function. If you want to generate the number 5 and you start with the number 1 you would simply call the successor function — which would call itself 4 times to increment 1 to 5. Recursion is very powerful as you will quickly see. Next up is multiplication.
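To make this concrete, here is a small Python sketch (the function names are mine, purely for illustration) showing how repeated successorship builds addition, and repeated addition builds multiplication:

```python
def successor(n):
    return n + 1

def add(a, b):
    """Addition as b repeated calls to the successor function."""
    for _ in range(b):
        a = successor(a)
    return a

def multiply(a, b):
    """Multiplication as b repeated additions."""
    result = 0
    for _ in range(b):
        result = add(result, a)
    return result

print(add(1, 4))       # 5 -- successor called four times on 1
print(multiply(3, 4))  # 12
```

Each layer is nothing more than repeated calls to the layer below it, which is exactly the pattern the FGH exploits.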

Level 2: Multiplication (sort of)

f_1(n) = f_{0}^{n}(n) = (..((n+1)+1)+..+1) = n+n = 2n

Here f_1 doubles its input by making n repeated calls to the successor function, in the same way that multiplication is defined as repeated addition.

In general, as we go up the FGH you'll see that each higher rung on the ladder uses recursive function calls to the prior rung. A more general way to write this is:

f_{a+1}(n) = f_{a}^{n}(n)

If we wanted to do a simple expansion at level 2 it would look like this:

f_1(3) = f_{0}^{3}(3) = f_0(f_0(f_0(3))) = f_0(f_0(4)) = f_0(5) = 6
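The rule f_{a+1}(n) = f_{a}^{n}(n) translates almost directly into code. Here is a minimal Python sketch of the finite levels of the hierarchy (only safe to run for tiny inputs, since the outputs explode):

```python
def f(a, n):
    """Level a of the fast growing hierarchy, for finite index a."""
    if a == 0:
        return n + 1                 # f_0(n) = n + 1 (successorship)
    result = n
    for _ in range(n):               # f_a(n) = f_{a-1}^n(n)
        result = f(a - 1, result)
    return result

print(f(1, 3))   # 6     = 2 * 3
print(f(2, 3))   # 24    = 2^3 * 3
print(f(2, 10))  # 10240 = 2^10 * 10
```

Already f(3, 3) is far too large to compute this way, which is a good first taste of how steep the hierarchy is.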

This pattern will continue for a while with each step up the FGH. Next up is exponentiation:

Level 3: Exponentiation

f_2(n) = f_{1}^{n}(n) = 2(2(...2(2n))) = 2^n \cdot n > 2\uparrow n

Exponentiation is defined by using repeated multiplication. It doesn’t look powerful at first glance but if you look at a couple of examples you’ll see otherwise.

f_2(20) = 2^{20} \cdot 20 = 20,971,520

Or how about this one:

f_2(100) = 126,765,060,022,822,940,149,670,320,537,600

That number has 33 digits in it. It's already a big number and we are just getting started at level 3 in the FGH.

Writing these numbers by listing all their digits takes up a lot of space. If we continue this way we will quickly run out of space.

Knuth’s up arrow notation

Donald Knuth devised up arrow notation as a way to use fewer symbols to write larger numbers. Take a look at this simple problem using exponentiation:

3^3 = 3 × 3 × 3 = 27
There is a way we can expand on exponentiation and build towers of exponents. This is called tetration. Here is an example:

3^{3^3} = 3^{27} = 7,625,597,484,987
Notice how quickly the number grows when we add another 3 to the tower? That number is already about 1,000 times the world population. It is roughly a fifth of the estimated 37 trillion cells in a human body.

Cells in the human body

We can keep adding more numbers to the tower to get even bigger numbers. After a while this gets annoying because you end up with a ridiculously big tower. For example:

Picture a stack of 3’s 1000 high.

What if we wanted a series of power towers of 3's, and then a series of those, and so on? Going up the FGH without a good way to quickly describe the higher level operations is difficult. This is where up arrow notation comes in handy. Rather than writing all those exponents we can use arrows. Each additional up arrow is a higher level of operational power, and it produces numbers that are massively bigger than the previous level. Here is a simple progression of up arrow notation:

2↑3 = 2^3 = 8

One arrow is exponentiation

The first number is the base, then the arrow or arrows, and then the number of times you want to perform the operation. So in this case the base is 2 and you want to perform the operation 3 times with one up arrow. (One up arrow is exponentiation.)

2↑↑3 = 2^{2^2} = 16

Two arrows is tetration

Let's try three up arrows:

2↑↑↑3 = 2↑↑(2↑↑2) = 2↑↑4 = 2^{2^{2^2}} = 65,536

Three arrows is pentation

Going from two up arrows to three up arrows took the number from 16 to 65,536. This is called pentation. Let's try it again with four up arrows:

2↑↑↑↑3 = 2↑↑↑(2↑↑↑2) = 2↑↑↑4

Four arrows is hexation

This is already a monster number and we still have a double arrow operation that hasn't been evaluated yet. Another way to write this is:

2↑↑↑4 = 2↑↑(2↑↑(2↑↑2)) = 2↑↑65,536
To give you another frame of reference, let's look at where a googolplex sits in terms of up arrow notation. As you know:

googolplex = 10^{googol} = 10^{10^{100}}
Another way to think of a googolplex is 10 raised to a googol. It's a big number. With up arrow notation this is where we find a googolplex:

10↑↑3 < googolplex < 10↑↑4

A googolplex is actually much closer to the lower bound, but this is a good way to understand how big it is in terms of up arrow notation.
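Up arrow notation is easy to implement recursively. Here is a Python sketch (only usable for the very smallest inputs; anything bigger than these examples blows past all available memory):

```python
def arrow(a, n, b):
    """Knuth's up arrow: a ↑^n b, with n arrows."""
    if n == 1:
        return a ** b                # one arrow is exponentiation
    if b == 0:
        return 1                     # a ↑^n 0 = 1 by convention
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(2, 1, 3))  # 2↑3   = 8
print(arrow(2, 2, 3))  # 2↑↑3  = 16
print(arrow(2, 3, 3))  # 2↑↑↑3 = 65536
```

Each extra arrow peels off into a cascade of calls one level down, which is why the growth is so violent.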

Another example showing the power of up arrow notation:

3↑↑↑3 = 3↑↑(3↑↑3) = 3↑↑7,625,597,484,987

We still haven't expanded this number all the way

Wow, we've got a big number here. Look at it this way: it's a power tower of 3's that is 7,625,597,484,987 levels high. That is a tower of 3's 7.6 trillion 3's tall.

This number is already so big that we have no ability to understand it. Nothing in the world is this numerous. Even if we were to shuffle every particle in the known universe we couldn’t approach this number.

Eventually we will get to Graham's number, which is covered later in this article. Graham's number is built on levels of arrows that start at:

G1 = 3↑↑↑↑3

Hexation is powerful

and it grows wildly big. You’ll see more of this beauty later in the article.


The first function in the FGH is successorship, the second is multiplication, and the third is exponentiation. Let's take a look at the fourth in the series. This is called tetration.

Level 4: Tetration

Think of tetration as iterated exponentiation. Just as exponentiation bundles a series of multiplication problems, tetration bundles a series of exponentiation problems. We will start using up arrow notation to discuss the strength of each function as we go higher on the FGH.

f_3(n) = f_{2}^{n}(n) > 2↑↑n

Tetration. It is recursively calling the exponentiation function.

Let me impress upon you the strength of this function by showing you an example:

f_3(3) = f_2(f_2(f_2(3))) = f_2(f_2(24)) = f_2(402,653,184) = 2^{402,653,184} \cdot 402,653,184

How is that for big?

With n=3 we get a massive number. You don’t even want to know what n=4 looks like. The next level is called pentation.

Level 5: Pentation

Pentation is iterated tetration. It's a series of tetration problems bundled together. Think of it as a series of power towers. It is a little more powerful than triple up arrows, which as we saw earlier grow insanely fast.

f_4(n) = f_{3}^{n}(n) > 2↑↑↑n
Next up, Hexation. This one is more important because at this level we can build Graham’s number which as I mentioned earlier is a famous big number. It is also a monster of a number.

Level 6: Hexation

f_5(n) = f_{4}^{n}(n) > 2↑↑↑↑n

Hexation is iterated pentation. So just like the previous examples, we are taking a string of pentation operations and bundling them together.

We can keep going by increasing the level by one, but it will take a long time to get where we want to go. Instead we can go ahead and generalize the formula for the power of each rung on the FGH:

f_{a+1}(n) = f_{a}^{n}(n) > 2↑^{a}n
As you saw in my explanation of up arrow notation, each additional arrow grows the result by a very large quantity. As we go up higher in the FGH we see larger and larger numbers. We will need an increasingly larger number of up arrows to represent them.

For example:

f_{15}(n) > 2↑↑↑↑↑↑↑↑↑↑↑↑↑↑n

The FGH at 15 produces a wickedly big number. Just look at all of those up arrows! Still, this is a relatively small number compared to where we are going.


We can see that recursion is key to generating these big numbers, and that as we go up the FGH the recursion becomes more powerful. Is there anything else we can do to make these functions grow faster? Yes: diagonalization. This concept is a bit abstract, but it goes like this.

If we were to calculate the values of the functions it would look like this (starting at n=1):

f_0: 2, 3, 4, 5, 6, …
f_1: 2, 4, 6, 8, 10, …
f_2: 2, 8, 24, 64, 160, …
f_3: 2, 2048, …

Table showing diagonalization

The table would obviously go to infinity in each direction, but this gives you an idea of what the functions would look like. What if we were to take the diagonal of this grid? We would end up with a sequence that looks like this:

f_0(0), f_1(1), f_2(2), f_3(3), … (in other words, f_n(n))
This is called diagonalization. Let's try an example to show you how powerful this is combined with recursion. At first glance this isn't going to look too powerful:

f_4(4)

At n=4 the diagonal function f_n(n) isn't too impressive. However, when you increase to n=1000 you'll see a totally different story:

f_{1000}(1000)

It smashes the original. Diagonalization is a very powerful way to generate very fast growing functions.

Let's get back to the FGH and see if we can generate even bigger numbers with faster growing functions. Up till now we've used positive integers as the index for each level of the FGH. We can summarize this by creating a fundamental sequence that looks like this:

f_0, f_1, f_2, f_3, f_4, …

FGH as a set of increasing functions

This sequence would go on to infinity. This set is called a fundamental sequence for the natural numbers. How do we go bigger than the natural numbers on the FGH? Is that even possible? You bet. It does however require some explanation of a special kind of infinity called omega: ω.


The set of all natural numbers

When studying big numbers you'll eventually come across ω. It is commonly written as:

ω = {0, 1, 2, 3, …}

Omega — the ordinal representing the set of natural numbers

ω is a very interesting concept because it is the first infinite ordinal, the one that comes after all of the natural numbers. More specifically, it is the ordinal that is identified with the set of natural numbers. ω is a special kind of ordinal called a transfinite ordinal.

There is no way to add natural numbers, multiply them, or apply any other primitive operation to them to reach ω. It cannot be reached from below. It is the smallest ordinal greater than every natural number.

Another important property of ω is that it is an ordinal. This means that the numbers contained within the set represented by ω are ordered. Their ordering matters because it lets us add 1 to ω to reach something larger than ω.

What happens when we add 1 to ω? We get this:

ω + 1 = {0, 1, 2, 3, …, ω}

This means that we take ω and append one new element after all of the natural numbers. ω+1 is a bigger ordinal than ω on its own because it represents a longer well-ordering than ω.

For the purposes of showing the growth of the fast growing hierarchy, the key fact about addition with ω is that it is not commutative: 1 + ω = ω, but ω + 1 > ω.
This is important because we can use the properties of ω to define even faster growing functions.

Omega: ω

ω is important in the definition of the FGH because it allows us to define significantly faster growing functions. Remember, we can take an arbitrarily large finite index m, such as:

f_{1,000,000}(n)

We can go higher and higher with the indexed positive integers. What if we want to go higher than that? Remember, the natural numbers are infinite. This is where ω comes in handy. We can write ω in the subscript:

f_ω(n)

This is completely valid and an important point on the fast growing hierarchy. In terms of speed of growth, using ω in the subscript creates a function that grows just like the diagonal function we created earlier:

f_ω(n) = f_n(n)
This is where the diagonalization function we created earlier comes into play. There is a famous function called the Ackermann function which grows at this rate.
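The two-argument Ackermann function is short enough to write out in full, and even tiny inputs show the blow-up. (It is only practical to evaluate for very small arguments.)

```python
import sys
sys.setrecursionlimit(100_000)

def ackermann(m, n):
    """The classic two-argument Ackermann-Peter function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(1, 3))  # 5
print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
# ackermann(4, 2) already has 19,729 digits; don't try it this way.
```

Each increment of the first argument behaves like one more level of recursion, which is why its growth matches climbing the finite rungs of the FGH and diagonalizing.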

So how would you expand this function? The same way we did earlier, by taking the nth item in the fundamental sequence:

f_ω(3) = f_3(3)
Here is another example:

f_ω(4) = f_4(4)
Can we go bigger than ω? Sure, we can add 1 to it:

f_{ω+1}(n) = f_{ω}^{n}(n)
If you recall, the fundamental sequence for ω looked like this:

{0, 1, 2, 3, 4, …}
What would the fundamental sequence look like at ω+1? It is easy, just add one to each item in the set:

{1, 2, 3, 4, …, ω}

FGH at omega + 1

Notice that the set starts at 1 rather than zero and it includes ω at the end? It isn't obvious at first, but this set grows significantly faster than just ω. For example:

f_{ω+1}(3) = f_ω(f_ω(f_ω(3))) = f_ω(f_ω(f_3(3)))

You can already see how ω+1 is going to generate significantly bigger numbers than what we got with just ω. We haven't even started to evaluate the outer two ω's and we already have this monster of a number with n=3.

On the FGH ω+1 grows very quickly. In fact it grows fast enough to create Graham’s number which I’ve mentioned several times before but have yet to describe. This is a good place to take a detour.

Graham’s Number

Ron Graham

Graham's number is one of the most famous big numbers in mathematics. For a while it was the largest number ever used in a proof. That isn't the case anymore, but it is still a legend. The number is an upper bound to a problem in Ramsey theory. I won't detail the problem, but you can read more about it on the Wikipedia page.

We can define Graham’s number using some of the functions that we’ve already defined. Understanding this number will help you get a feel for how fast these functions grow. Graham’s number is a great marker in that sense. It is also a whopper of a number in size.

Using up arrow notation we are going to build Graham's number. First we start with this:

G1 = 3↑↑↑↑3
We already looked at G1 previously when we talked about Knuth's up arrow notation. G1 is already a giant number. With Graham's number this is just the first step. For the second step, take the result of G1 and use it as the number of up arrows in G2:

G2 = 3 ↑↑…↑↑ 3, with G1 up arrows
We continue to do that all the way up to G64. The whole progression looks like this:

G1 = 3↑↑↑↑3
G2 = 3 ↑…↑ 3, with G1 arrows
G3 = 3 ↑…↑ 3, with G2 arrows
…
G64 = 3 ↑…↑ 3, with G63 arrows
That gives you Graham's number. Another way to write it is:

G = g_{64}, where g_1 = 3↑↑↑↑3 and g_n = 3 ↑^{g_{n-1}} 3
If we were going to place Graham's number on the fast growing hierarchy, we would say it is around:

f_{ω+1}(64)
Graham’s number is really big. If you tried to store all of its digits in your brain your head would turn into a black hole before you got all of the digits memorized. Said another way — a black hole the size of your head stores less information than there are digits in Graham’s number.
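We can never compute Graham's number, but we can compute its rightmost digits, because the trailing digits of a power tower of 3's stabilize as the tower grows, and G behaves like an unimaginably tall such tower. A Python sketch using Euler's theorem (this shortcut is valid here because 10^k and every modulus in its totient chain are coprime to 3):

```python
def phi(m):
    """Euler's totient, by trial division (fine for small m)."""
    result, p, n = 1, 2, m
    while p * p <= n:
        if n % p == 0:
            result *= p - 1
            n //= p
            while n % p == 0:
                result *= p
                n //= p
        p += 1
    if n > 1:
        result *= n - 1
    return result

def tower_mod(height, m):
    """(3↑↑height) mod m, assuming gcd(3, m) = 1 down the totient chain."""
    if m == 1:
        return 0
    if height == 1:
        return 3 % m
    # Reduce the exponent modulo phi(m), then do fast modular exponentiation.
    return pow(3, tower_mod(height - 1, phi(m)), m)

# The last k digits stabilize once the tower is a few levels tall,
# and Graham's number shares them.
print(tower_mod(10, 1000))  # 387 -- Graham's number ends in ...387
```

A black hole the size of your head can't hold all of G's digits, but ten lines of Python can tell you the last three of them.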

There are a lot of great videos on YouTube about Graham's number. I particularly like the Numberphile video:

This video got me jazzed about big numbers and started my trip down the deep dark rabbit hole. Let's go back to the fast growing hierarchy.


Going big with omega

ω+1 in the FGH resulted in a very fast growing function. We can keep adding to ω to create faster growing functions:

f_{ω+2}(n) = f_{ω+1}^{n}(n)
Let me show an example so you can see how much faster ω+2 grows compared to ω+1:

f_{ω+2}(3) = f_{ω+1}(f_{ω+1}(f_{ω+1}(3)))
We know from the previous example that n=3 on ω+1 results in a massive number. Now insert that massive number into this function to get:

f_{ω+2}(3) = f_{ω+1}(f_{ω+1}(that massive number))
So you can see that things are getting a little ridiculous as we expand this number. What you are seeing is the power of recursion and diagonalization. ω+2 is a much faster growing function than ω+1. It is so much bigger that we could say:

f_{ω+2}(n) ≫ f_{ω+1}(n)
If you take a look at the function, the real power that generates big numbers is in the index of the function, not the value of n. The example above emphasizes that each increasing index is a significant leap in power and produces much larger numbers.

We can continue on from here. For example:

f_{ω+3}(n), f_{ω+4}(n), f_{ω+5}(n), …
Notice that we can keep going all the way to ω+ω? This creates another fundamental sequence. The sequence looks like this:

ω + ω = {ω+1, ω+2, ω+3, ω+4, …}
Now we can diagonalize over it to get a very powerful function. Let's try an example:

f_{ω+ω}(3) = f_{ω+3}(3)
Just like the previous examples, n determines which index in the fundamental sequence we use to evaluate the function. There are other ways to write ω+ω:

ω + ω = ω·2
We can add more ω's to get higher sequences in the FGH. For example, these are all valid:

ω·3, ω·4, ω·5, … all the way up to ω·ω = ω²
The fundamental sequence for ω² is:

ω² = {ω, ω·2, ω·3, ω·4, …}
An example of expanding out some of ω² is a bit more complicated:

f_{ω²}(3) = f_{ω·3}(3) = f_{ω·2+3}(3) = f_{ω·2+2}(f_{ω·2+2}(f_{ω·2+2}(3)))
We can keep going higher with:

ω³, ω⁴, ω⁵, … all the way up to ω^ω
ω^ω also has a fundamental sequence, which looks like this:

ω^ω = {ω, ω², ω³, ω⁴, …}
Let's try an example:

f_{ω^ω}(3) = f_{ω³}(3)
The expansion is a bit complicated, but it gives you some insight into how big this number will become. If we were to do one more expansion it would look like this:

f_{ω³}(3) = f_{ω²·3}(3)
If you remember how big ω+1 was, you can see that ω^ω is a monster. We can keep going bigger by adding more ω's to the power tower of ω's. In fact we can go all the way up to an infinite tower of ω's:

ω^{ω^{ω^{ω^{⋰}}}}
Young Adults

We are now at a very special point on the FGH. We've reached another landmark transfinite ordinal. It is called epsilon naught: ε0.

As we recall, the limit of the infinite set of natural numbers was ω. Now take the limit of the tower of ω's we just built:

ε0 = the limit of ω, ω^ω, ω^{ω^ω}, ω^{ω^{ω^ω}}, …

So ε0 is the limit of the tower of ω's. ε0 is also important because it is the first fixed point that satisfies the equation:

ω^{ε0} = ε0
This is a little abstract, but what it means is that if you add another ω to the infinite tower of ω's, you get the same answer: the tower is already infinite, so one more layer doesn't change it.

Also Epsilon Naught

Epsilon naught ε0 is larger than any ordinal that can be named from ω using addition, multiplication, and exponentiation. Think of it as an infinite power tower of ω's. It is also the proof-theoretic ordinal of Peano arithmetic: PA cannot prove that functions at this level of the hierarchy always terminate.

Expansion works the same way that it works with ω. We index into the fundamental sequence of ε0 and then expand. Here is a simple example:

f_{ε0}(3) = f_{ω^{ω^ω}}(3)
Another expansion, with n=6, would look like this:

f_{ε0}(6) = f_{ω^{ω^{ω^{ω^{ω^ω}}}}}(6)
How do we get a faster function than ε0? If we can't add more ω's to the stack it seems like we are stuck, right? Wrong. We can increment ε0 just like we did with ω:

f_{ε0+1}(n) = f_{ε0}^{n}(n)

Adding a 1 to the index drastically increases the size of this function

Let's try an example with ε0+1:

f_{ε0+1}(3) = f_{ε0}(f_{ε0}(f_{ε0}(3)))
Let's go bigger. Here we raise ω to the power ε0+1:

ω^{ε0+1}
This example hints at another fundamental sequence. This one looks like this:

ε0+1, ω^{ε0+1}, ω^{ω^{ε0+1}}, ω^{ω^{ω^{ε0+1}}}, …
You can think of the limit of this sequence as an infinite tower of ω's with an ε0+1 at the top:

ω^{ω^{⋰^{ω^{ε0+1}}}}
Are we stuck again? If we add more infinities or more ε0's we will get the same thing. So how do we go bigger? How about ε1? Here is the fundamental sequence for it:

ε1 = {ε0+1, ω^{ε0+1}, ω^{ω^{ε0+1}}, …}
How high up the FGH are we at this point? High. We are generating colossal numbers with this function. Let's try expanding an example with ε1:

f_{ε1}(3) = f_{ω^{ω^{ε0+1}}}(3)
We already know how to expand this number further based on the previous examples, and as you can see ε1 builds a much bigger number than anything we saw with ε0. As we've seen, each step up the FGH builds significantly bigger numbers.

We can continue to go bigger by using the next fundamental sequence:

ε2 = {ε1+1, ω^{ε1+1}, ω^{ω^{ε1+1}}, …}
We can keep going like this. Each next index is a ridiculously fast growing function that produces numbers that boggle the mind. We can even plug in an ω:

ε_ω = {ε_0, ε_1, ε_2, ε_3, …}
Remember that ω allowed us to get larger indexes after we used up all of the positive integers. With ε the same strategy holds true. So doing an example with ω in the index would produce:

f_{ε_ω}(3) = f_{ε_3}(3)
We can even have nestings of epsilons in the index. It is complicated to explain, but remember that the index is where the growth power of these functions lies, so nesting epsilons produces a much more powerful function. For example:

ε_{ε_0}
This is perfectly valid. This also leads us to our next fundamental sequence, which is also an important fixed point in the FGH:

𝛇0 = {ε_0, ε_{ε_0}, ε_{ε_{ε_0}}, …}
𝛇0 creates wicked big numbers. Similar to ε0, 𝛇0 satisfies an equation:

ε_{𝛇0} = 𝛇0
What this means is that adding another level of epsilon doesn't change the value of 𝛇0. It's like the infinite stack of ω's: adding another ω doesn't change ε0. Similarly, 𝛇0 is an infinite nesting of ε's, so adding another one won't change it.

Let's try an example:

f_{𝛇0}(3) = f_{ε_{ε_{ε_0}}}(3)
As we expand 𝛇0 you can see some very hefty functions forming. Imagine expanding this all the way down to actual digits!

To go beyond 𝛇0, we do the exact same thing we did with ε: we add 1. So we've got:

f_{𝛇0+1}(n) = f_{𝛇0}^{n}(n)
And an example which should look familiar:

f_{𝛇0+1}(3) = f_{𝛇0}(f_{𝛇0}(f_{𝛇0}(3)))
We know how massively big 𝛇0 is with n=3, so you can imagine how big this number is when the input is the output of the previous 𝛇0 function.

Let's keep going. We can easily construct 𝛇1 from the next fundamental sequence:

𝛇1 = {𝛇0+1, ε_{𝛇0+1}, ε_{ε_{𝛇0+1}}, …}
Let's try a quick example:

f_{𝛇1}(3) = f_{ε_{ε_{𝛇0+1}}}(3)

Breaking down zeta 1 will take a long time

We can keep going higher with the zetas just like we did with the epsilons. Eventually we get to the fundamental sequence:

η0 = {𝛇_0, 𝛇_{𝛇_0}, 𝛇_{𝛇_{𝛇_0}}, …}
This is called eta naught: η0. We construct it similarly to how we constructed 𝛇. The size of the numbers generated by these functions is mind-boggling. They are HUGE.

To recap, we've been generating these large numbers by using functions whose indexes are represented by Greek letters:

ω → ε → 𝛇 → η → …

Symbols represent increasingly larger infinite sets

For each rung up the FGH we've used a new Greek letter to represent an increasingly more powerful function. At some point we are going to run out of Greek letters. Does this spell the limit of the FGH? No, it does not.

The answer to this problem is the Veblen function, which more efficiently uses ordinals themselves to index the functions, so we can create an infinite number of them. As a result we can climb higher on the ladder of fast growing functions and generate bigger numbers. The Veblen function is a little more complicated than what I've described so far, so we will need another detour.


Veblen Functions

Oswald Veblen

Created by Oswald Veblen in 1908, the Veblen function is a method of mapping ordinals onto ordinals. The Veblen hierarchy is a list of functions which enumerate fixed points. Similar to how we reached a fixed point when we went from ω to ε, the Veblen hierarchy allows us to enumerate those fixed points using the Greek letter 𝜙 and an index:

𝜙_0(a) = ω^a
𝜙_1(a) = ε_a
𝜙_2(a) = 𝛇_a
𝜙_3(a) = η_a

The Veblen Hierarchy

And just like that we iterated through four super fast growing functions. The Veblen function is a monster. Another way to think of each level is as an infinite nesting of the fixed points of the level below:

𝜙_{k+1}(0) = 𝜙_k(𝜙_k(𝜙_k(…)))

Infinite nestings

For example, let's say we wanted to write ε1 using the Veblen function:

ε_1 = 𝜙_1(1)
Let's break it down a little so we can see what is going on:

f_{𝜙_a(b)}(n)

The subscript on 𝜙 tells us what level of the hierarchy to use. The next input tells us which fixed point of that level to use. The last input, n, tells us which index on the fundamental sequence to use. If we were to break it out in an example:

f_{𝜙_1(3)}(3) = f_{ε_3}(3)
As the Veblen function expands you can see it start to get very big. This expansion will go on for a very, very long time. With inputs of 1 and 3 the 𝜙 function is a monster. The expansion is complicated because it builds on a lot of different types of math (rules of exponents, for example), but once you get the hang of it you can do it.

Let's do one more example of the Veblen function:

f_{𝜙_2(1)}(3) = f_{𝛇_1}(3)
I’m going to stop there because this expansion is getting a little out of hand. As you can see Veblen recursive functions grow like crazy. This would go on and on and on.

We can use omega ω in the Veblen function too. For example:

𝜙_ω(0)
I'm not going to expand it out any further, but it generates a very, very large number. You can also use epsilon, zeta, etc. in the index of the Veblen function. You can nest them too, just like we did before. We can even nest the Veblen function inside itself:

𝜙_{𝜙_1(0)}(0)
What happens if we nest the Veblen function an infinite number of times?

𝜙_{𝜙_{𝜙_{⋰}(0)}(0)}(0)
There is a special ordinal that is the smallest ordinal larger than anything you could create from finite nestings of the Veblen function. It is called gamma naught: Γ0. (It's also called the Feferman-Schütte ordinal, named after the folks who characterized it.)

Seriously Big Numbers for serious adults

Now we are getting into the really good stuff. Gamma naught Γ0 is the limit for the largest possible ordinal you can get using recursion and diagonalization. Said another way, if you use all of the math that we’ve mentioned up to this point, you wouldn’t be able to generate a faster growing function than Γ0.

It is leagues beyond anything else we've talked about so far. There can't be a function that reaches this high, right? Yes, there can. Let me introduce you to Kruskal's tree theorem.

The TREE function

The TREE function is a famous fast growing function whose strength measures off the charts. It grows wickedly fast — faster than anything you can possibly imagine. For example, these are the first three values of TREE(n):

TREE(1) = 1
TREE(2) = 3
TREE(3) = ungodly big number

TREE(3) has become famous because with such a small input you are able to generate a number that is wickedly big. TREE(n) goes past Γ0 in strength!

The way this function works is very complicated. I've posted questions to several places trying to get a layman's explanation for how this function works, and I'm not sure I understand it well enough to explain it myself, so I'll use Deedlit's explanation:

First, I’m going to show you a function on ordinals that outputs fast growing functions. Second, I’m going to relate this to the smaller tree(n) function. Finally, I’m going to show you the TREE(n) function which grows even faster than tree(n).

With the FGH we've seen how a function on ordinals can output fast-growing functions. For every fixed point we find an ordinal that has no predecessor: omega, epsilon, zeta, etc. Each of those fixed points relates to a fundamental sequence; that is, an increasing sequence of ordinals whose limit is the fixed point ordinal.

If a is a limit ordinal (a fixed point like ω and ε0) and n ranges over the natural numbers, we can write the general form of this sequence like this:

a[n], a[n][n+1], a[n][n+1][n+2], …
One interesting thing about this sequence is that since a[n] < a, it is a strictly decreasing sequence of ordinals. One of the defining properties of ordinals is that there is no infinite decreasing sequence of ordinals; so for a > 0, there exists a minimal natural number m such that:

a[n][n+1][n+2]…[m] = 0
Let's define a new function that takes an ordinal and a natural number as input and returns m.

With some quick examples we can see how this works:

As you can see, this construction leads to the same large numbers as the FGH. In general, starting the countdown at the ordinal ω^a gives a function that grows at roughly the rate of f_a.
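Here is a toy Python version of this ordinal countdown for ordinals below ω^ω. The representation is my own (a list of coefficients, where index i holds the coefficient of ω^i); it repeatedly applies the fundamental-sequence step with an increasing n and returns the final index m, mirroring the function Deedlit describes:

```python
def step(coeffs, n):
    """One fundamental-sequence step a -> a[n], for a = sum of c_i * ω^i."""
    c = list(coeffs)
    i = next(k for k, v in enumerate(c) if v > 0)  # smallest nonzero term
    c[i] -= 1
    if i > 0:
        c[i - 1] += n          # one copy of ω^i becomes ω^(i-1) * n
    return c

def countdown(coeffs, n):
    """Apply a[n], a[n][n+1], ... until reaching 0; return the final index m."""
    c = list(coeffs)
    while any(c):
        c = step(c, n)
        n += 1
    return n - 1

print(countdown([3], 1))     # the ordinal 3, starting at n=1 -> 3
print(countdown([0, 1], 3))  # the ordinal ω, starting at n=3 -> 6
```

Even though every single step strictly decreases the ordinal, starting from something like ω^3 the countdown already takes an astronomical number of steps, which is exactly the point.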

Okay, now that we’ve got this, let’s talk about trees. Trees look like this:

This is an example of a tree.

Now lets define the small tree(n) function:

Define tree(n) to be the longest sequence of unlabelled, unordered trees T1, T2, … such that for all i, Ti has at most n+i vertices, and for all i, j with i<j, Ti is not homeomorphically embeddable into Tj.

What is important to understand about trees is that under homeomorphic embedding they are "well-quasi-ordered," and that means they can't have an infinite descending sequence. This is a similar property to the ordinals that we discussed earlier. Trees are different, though, because there are pairs of trees which can't be compared. For example:

These two trees can’t be compared because they have a different number of children on the 2nd level. Neither tree embeds into the other. With well ordering we can say that if Ti < Tj, then Ti is not embeddable into Tj.

With this information in hand we can now construct sequences of trees that obey the required conditions: take Ti+1 to be the largest tree less than Ti with no more than the largest allowable number of nodes.

So what kind of numbers can we get? The situation is very similar to the decreasing ordinals we described above.

For example, let tree T consist of a single path of m vertices; this corresponds to the finite ordinal m. Let n be the maximum allowable number of vertices in the next tree. Then the next tree, which we will denote T[n], will be a path of m-1 vertices, which corresponds to the ordinal m-1, which is exactly m[n]. Similarly, the next tree, T[n][n+1], will correspond to the ordinal m[n][n+1], and so on.

What this means is that the sequences of trees decrease at a similar rate to the decreasing sequences of ordinals. In layman's terms, if we start with a tree corresponding to ω^a we get a function comparable to f_a in the fast growing hierarchy. The natural question to ask then is: what is the ordinal corresponding to this well-ordering of trees?

In order to answer this question we are going to need to dip back into the Veblen function to learn about the extended Veblen Function.

Extended Veblen Function

The extended Veblen function is very similar to the Veblen function but it is extended to go beyond Γ0.

Gamma naught satisfies this equation:

𝜙_{Γ0}(0) = Γ0

Γ0 — Gamma Naught

We can extend the Veblen function by adding another argument to it. This is also equal to Γ0:

𝜙(1, 0, 0) = Γ0

Γ0 in extended Veblen notation

This notation allows us to keep going past Γ0. So, for example, gamma one would correspond to the fundamental sequence of:

Γ1 = {Γ0+1, 𝜙_{Γ0+1}(0), 𝜙_{𝜙_{Γ0+1}(0)}(0), …}
No matter how big the ordinal gets, we know how to handle the fixed points by writing out the fundamental sequence and then indexing the correct term in the sequence.

In Veblen notation we can write Γ1 by simply writing:

We keep going higher and higher until we reach:

The next ordinal would be:

Notice that we’ve incremented the 2nd input from 0 to 1. This is how we would handle going past ω. We could keep going:

Eventually we reach a point where the middle input runs out of power. When that happens we would simply increment the first input:

Beyond this, things get really sketchy. We can keep going and keep adding new arguments to the Veblen function:

Now what would happen if we had a Veblen Function with an infinite number of arguments? This would correspond to the fundamental sequence of something like:
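Loosely speaking, this limit is the supremum of everything the finite-argument Veblen functions can reach:

```latex
\sup\{\ \varphi(1, 0),\ \ \varphi(1, 0, 0),\ \ \varphi(1, 0, 0, 0),\ \ \dots\ \}
```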

The Small Veblen Ordinal

This is called the Small Veblen Ordinal. It is an important point on the FGH, and one that lies far beyond everything we’ve discussed so far.


So now that we’ve defined extended Veblen functions, we can gauge the magnitude of this ordering of trees. Well ordering is based on the principle that every non-empty set of positive integers contains a least element; in other words, the set of positive integers is well-ordered. The limit of these well orderings, and of our small tree(n) function, is the Small Veblen Ordinal.

A tree whose root has two childless children corresponds to ω. A tree whose root has three childless children corresponds to Γ0. A tree whose root has four childless children corresponds to φ(1,0,0,0). This keeps going up to the Small Veblen Ordinal!

Now we can define TREE(n):

TREE(n) is the length of the longest sequence of trees T1,T2, … labeled from {1,2,3,..,n} such that, for all i, Ti has at most i vertices, and for all i, j such that i<j, there is no label-preserving homeomorphic embedding from Ti into Tj.

TREE(1) is 1: the first tree must be a single vertex labeled 1, and every subsequent tree would contain it.

TREE(2) is 3.
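We can check that a valid length-3 sequence for TREE(2) exists with a short Python sketch. The `embeds` function below is a standard recursive test for label-preserving homeomorphic embedding of rooted unordered trees (a tree here is a tuple `(label, children)`), and the sequence is the usual witness for TREE(2) = 3:

```python
from itertools import permutations

# A tree is (label, [child trees]).

def embeds(t, s):
    """Does t homeomorphically embed into s, preserving labels?"""
    lt, ct = t
    ls, cs = s
    # t may embed entirely inside one of s's child subtrees...
    if any(embeds(t, c) for c in cs):
        return True
    # ...or the roots match and t's children map injectively
    # into distinct child subtrees of s.
    if lt == ls and len(ct) <= len(cs):
        for target in permutations(cs, len(ct)):
            if all(embeds(a, b) for a, b in zip(ct, target)):
                return True
    return False

t1 = (1, [])            # one vertex, labeled 1
t2 = (2, [(2, [])])     # two vertices, both labeled 2
t3 = (2, [])            # one vertex, labeled 2
seq = [t1, t2, t3]

# Check the TREE condition: no earlier tree embeds into a later one.
ok = all(not embeds(seq[i], seq[j])
         for i in range(3) for j in range(i + 1, 3))
print(ok)  # True
```

Any candidate fourth tree over labels {1, 2} must either contain a vertex labeled 1 (so t1 embeds into it) or consist entirely of label-2 vertices (so t3 embeds into it), which is why the sequence stops at 3.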

TREE(3) is a totally different story. It is so unfathomably large that humanity has not yet found, and likely never will find, a way to define scales large enough to comprehend it. It is orders of magnitude beyond our reach.

What does TREE(3) look like?

TREE(3) sequence

TREE(3) starts like this and keeps going for a VERY long time. We know that the smaller tree(n) function is bounded by the Small Veblen Ordinal. We also know that TREE(n) grows much faster than the tree(n) function because labelled trees lead to more possibilities than unlabelled trees. How much faster does it grow?

TREE(3) is wicked big.

If you look closely we are running the smaller tree(n) function on a VERY large input and it is still smaller than TREE(3).

Mathematician Harvey Friedman demonstrated that the shortest proof in Peano arithmetic that TREE(3) exists requires far more symbols than there are atoms in the Universe.

This is where TREE(n) sits on the FGH:

There is another function, the subcubic graph function SCG(n), which puts TREE(n) to shame. I don’t fully understand it, so I didn’t include it in the article. Still, we can go higher.

Ordinal Collapsing Functions

To go past the extended Veblen functions we need to learn a new technique. This new ordinal notation was devised by David Madore and uses a variant of Buchholz’s function. It is very abstract, so hang in there. First we have to define a set S.

Ω is a new type of ordinal that is VERY LARGE, bigger than anything we could possibly need (it is usually taken to be ω1, the first uncountable ordinal). Now we can define a new set C0 that is made up of everything you can build with addition, multiplication, exponentiation, and so on:

What is the first ordinal that cannot be reached in this set? It is the limit of an infinite tower of ω’s, also called ε0. This is how we would define it using Ψ:

Now we are going to create a new set C1 which contains all of the items in C0, plus all of the items we can make from C0 with addition, exponentiation, and so on:

Now we can define Ψ(1):

And we can keep going like this up to 𝛇0. But how do you get past 𝛇0 with the Ψ function? No matter what you plug in to Ψ, it can’t go past 𝛇0.

This is where the Ω from our original set S comes in handy.

Now that we’ve used Ω to define 𝛇0, we can define C(𝛇0 + 1):

And now we’ve got:

So we’ve managed to get past the fixed-point trap using Ω. We can keep doing this to continue to climb the ladder.
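The general pattern, in rough form (following Madore’s definition), is a recursion: each stage closes off under the arithmetic operations plus the values of Ψ computed so far, and Ψ picks out the first ordinal still missing:

```latex
\begin{aligned}
C_{\alpha} &= \text{closure of } \{0,\ 1,\ \omega,\ \Omega\} \text{ under } +,\ \times,\ \text{exponentiation, and } \Psi \text{ applied to ordinals} < \alpha \\
\Psi(\alpha) &= \text{the least ordinal not in } C_{\alpha}
\end{aligned}
```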

This will keep going for a very long time until we eventually get to the next fixed point trap:

So how do we get past it? Ω to the rescue!

In general we can say:

We can keep going until the next time we get stuck which is going to be:

But we can use Ω to get out of it:

Hopefully you are seeing a pattern forming here. We can stick Ω in to get out of it. Generally we can say that:

We can continually add in our Ω’s to get larger and larger numbers. For example:

We can even compare these numbers with the Veblen functions:

And here is the small Veblen ordinal:

Small Veblen Ordinal written using collapsing functions

Can we go bigger? Yes we can. How about the Large Veblen Ordinal, which is the supremum of the entire extended Veblen hierarchy:

The Large Veblen Ordinal

And just like that we’ve written an ordinal that goes beyond anything we’ve seen so far. Let’s relate this number to the FGH and deconstruct it a little. First we need to build the fundamental sequence:

Now we can try an example:

If we deconstruct it a little more:

And it keeps going and going and going. Is the large Veblen ordinal the maximum ordinal? Can we go any higher? Absolutely. What if we added another Ω to the top? We would get this:

Boom! We just went beyond the entire Veblen hierarchy. This is so far past anything we’ve worked with that we don’t really know what this number looks like; we can’t really compare it to anything. We can, however, keep going. We can stick in an infinite tower of Ω’s, which is where things stop.
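That infinite tower of Ω’s is the ordinal ε(Ω+1), the first ε-number past Ω, so the stopping point can be written:

```latex
\Psi(\varepsilon_{\Omega+1}) = \Psi\big(\sup\{\ \Omega,\ \ \Omega^{\Omega},\ \ \Omega^{\Omega^{\Omega}},\ \ \dots\ \}\big)
```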

Bachmann Howard Ordinal

This is called the Bachmann Howard Ordinal and it is the upper limit of the Ψ function. It is the proof theoretic ordinal of several mathematical theories, including Kripke-Platek set theory (with the axiom of infinity) and the system CZF of constructive set theory. This ordinal is sick big.

Can we go bigger?

We need to create a new set S1 that contains a new element:

Now we can start to define Ψ1:

Note: this is not the Bachmann Howard Ordinal

If we wanted to write the Bachmann Howard Ordinal in terms of this new Ψ1 we would write:

The Bachmann Howard Ordinal

If we wanted to go bigger than the Bachmann Howard Ordinal we simply have to do this:

Boom! We’ve described an ordinal that is bigger than the Bachmann Howard Ordinal. We can keep going bigger using the same techniques that we used before. Eventually we get to another limit with a fundamental sequence looking like this:

This is called the Church-Kleene Ordinal. It is sometimes written like this:

Church-Kleene Ordinal

This is the smallest ordinal that cannot be created through recursive functions. Up to this point, every function we built used recursion; the Church-Kleene Ordinal is so big that it cannot be reached that way. Another way to say this is that there is no computable function that can reach this ordinal. Strangely, even though this ordinal is non-recursive, it is still countable. The distance between it and the Large Veblen Ordinal is insurmountably vast.

This ordinal also corresponds to the growth rate of the famous busy beaver function. (The description of this function comes later.)

It is also the omega one of chess. Suppose we had a countably infinite chess board with an infinite number of pieces. What is the supremum of the game values of all positions from which the White player can force a win? This is called omega one of chess, and the ordinal is usually written like this:

Omega 1 of Chess

It is believed that the omega 1 of chess is at most the Church-Kleene ordinal. (This hasn’t been proven.)

Back to the Church-Kleene ordinal. If we wanted to use this ordinal in the FGH it might look like this:

Church-Kleene Ordinal in the FGH

Can we go higher than the Church-Kleene ordinal? Yes and no. We can define larger ordinals but we have no hope of describing them with any precision. Still, that won’t stop us, let’s keep going.

We can build a hierarchy of non-recursive ordinals. For example:

Or we can do this:

Which is the growth rate of the function Rayo(n), created by MIT philosopher Agustín Rayo to win the “name the biggest number” contest; the function is named after him. We can even go all the way to an infinite nesting of ω’s:

The fastest growing function in Googology.

This is the first Church-Kleene fixed point. It can also be written like this:

If we were to write it as a function on the FGH it might look like this:

Church-Kleene fixed point on the FGH

As far as I know, this function creates a number that exceeds anything anyone has defined. It grows significantly faster than Rayo(n).

Going further up the ladder gets a little bit nuts. There are ordinals for infinite time Turing machines, which get pretty insane.

Infinite Time Turing machines

Alan Turing invented the concept of the Turing machine in 1936. A Turing machine is an abstract machine which manipulates symbols on a strip of tape according to a table of rules. This creates a mathematical model of computation.

An infinite time Turing machine is one that extends the operation of ordinary Turing machines into transfinite ordinal time. Previously, Turing machines had a start state and a halt state. With infinite time Turing machines there is an added limit state to handle what happens at a limit ordinal. With this in mind we can define two new ordinals that go beyond the Church-Kleene ordinal:

  1. Lambda — the supremum of the writable ordinals
  2. Zeta — the supremum of the eventually writable ordinals

(Supremum = the least upper bound.)

These two ordinals are admissible (like the Church-Kleene ordinal) but they go beyond it in size. If you want to take a look at a paper on this topic you can see one here. I don’t know how to use these ordinals on the FGH so I only mention them.

The largest possible named number

In my opening paragraph I mentioned an article by the famous computer scientist Scott Aaronson about naming the biggest number. In it he proposed the Busy Beaver function as an example of a function that creates large numbers. More recently he posted on Stack Exchange that he could name a bigger number, one that could potentially win the contest.

First he describes the busy beaver function:

Given a Turing machine M, let S(M) be the number of steps made by M on an initially blank tape, or 0 if M runs forever. Then recall that BB(n), or the nth Busy Beaver number, is defined as the maximum of S(M) over all n-state Turing machines M. BB(n) is easily seen to grow faster than any computable function.
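Here is a tiny Python sketch of S(M) for a concrete machine: a step-counting simulator plus the known 2-state, 2-symbol busy beaver champion. The `limit` cutoff is my own addition, since the true S(M) cannot be computed in general (that would solve the halting problem):

```python
def run_steps(rules, limit=10_000):
    """Count the steps a Turing machine takes on an initially blank tape.

    rules maps (state, symbol) -> (write, move, next_state); 'H' is the
    halt state.  Returns 0 if no halt within `limit` steps, standing in
    for "runs forever".
    """
    tape, pos, state, n = {}, 0, 'A', 0
    while state != 'H' and n < limit:
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        n += 1
    return n if state == 'H' else 0

# The 2-state busy beaver champion: it halts after exactly 6 steps,
# so BB(2) = 6.
champion = {('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
            ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H')}
print(run_steps(champion))  # 6
```

BB(n) outpaces every computable function because any machine computing it could be used to decide halting.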

He admits that the BB(n) function is puny compared to the other functions on the FGH. So he proposes a new function:

So let’s define BB1(n) to be the analogue of BB(n), for Turing machines equipped with an oracle that computes BB0(n):=BB(n). Likewise, for all positive integers k, let BBk be the Busy Beaver function for Turing machines that are equipped with an oracle for BBk−1. It’s not hard to see that BBk grows faster than any function computable with a BBk−1 oracle.

Then he goes further by defining BB(n) over omega:

Then he generalizes the formula over all the computable ordinals:

Next, let α(n) be the largest computable ordinal that can be defined (in the sense above) by a Turing machine with at most n states.

This leads to the following:

According to Scott:

I have a strong intuition that Turing machines are a “maximally expressive” notational system, at least for those numbers that meet my criterion of being “ultimately defined in terms of finite processes”

That should place Scott’s modified Busy Beaver function at the top of the FGH for October 2016. For now I believe that this is a good place to stop my exploration of the FGH.

Our trip up the FGH went from a very basic successor function all the way up to these very fast growing functions that generate godlike numbers. Along the way we used recursion and diagonalization to generate faster growing functions.

Everything we’ve reviewed in the FGH is still significantly smaller than the cardinal ℵ1, the cardinality of the set of all countable ordinals. The corresponding ordinal, ω1, is the first uncountable ordinal. To quote Donald Trump, it is “YUGE.”

I’m hoping that I didn’t get too much wrong with this article. I don’t have a background in math so I’m working from papers and articles I’ve read. I admittedly don’t know a lot about these subjects but I’m interested in learning. Writing this article was a great learning experience for me and I welcome any and all feedback.

Lastly, I couldn’t write this article if it wasn’t for the help of others. The following articles and videos were valuable in creating this article. You are encouraged to take a look at some of these if you want to dive in deeper:

  1. David Metzler’s YouTube series on Ridiculously Huge Numbers
  2. Adam Goucher’s articles on Fast Growing Functions
  3. Scott Aaronson’s article on Who can name the bigger number?
  4. Follow up on Scott Aaronson’s article on Stack Exchange
  5. Sbiis Saibian’s website on large numbers
  6. Giroux Studio’s series on extremely large numbers
  7. The Googology Wiki on the Fast Growing Hierarchy
  8. This question on Stack Exchange answered by Deedlit
  9. Cookie Fonster’s large number page
  10. Cantor’s Attic
  11. XKCD forums on who can name a bigger number

By joshkerr

Josh is an 8x startup founder and angel investor.
