What is the Largest Number Representing Something Empirical?


"Infinity" (2002, Acrylic), by Geoffrey Chandler

What is the largest number representing something empirical? I am not asking about immense hypothetical numbers that do not actually represent some *thing*, such as a googol (10^100), a googolplex (10^googol), Graham’s number, and Moser’s number.

If you guessed it’s the total number of atoms in the currently observable universe (roughly 10^80), good try; sorry, that’s not it.

As it turns out, the largest number that actually represents something empirical relates to our brains. As John D. Barrow explains in The Constants of Nature (2002), pp. 116–118:

“…astronomy is not the place to look. The big numbers of astronomy are additive. They arise because we are counting stars, planets, atoms and photons in a huge volume. If you want really huge numbers you need to find a place where the possibilities multiply rather than add. For this you need complexity. And for complexity you need biology.

In the seventeenth century the English physicist Robert Hooke [1635-1703] made a calculation ‘of the number of separate ideas the mind is capable of entertaining’ (the estimate was reported in Albrecht von Haller’s Elementa Physiologiae, vol. 5, London, 1786, p. 547). The answer he got was 3,155,760,000. Large as this number might appear to be (you would not live long enough to count up to it!) it would now be seen as a staggering underestimate. Our brains contain about 10 billion neurons, each of which sends out feelers, or axons, to link it to about one thousand others. These connections play some role in creating our thoughts and memories. How this is done is still one of nature’s closely guarded secrets.

Mike Holderness suggested (in Holderness, M., “Think of a Number,” New Scientist, 16 June 2001, p. 45) that one way of estimating the number of possible thoughts that a brain could conceive is to count all those connections. The brain can do many things at once so we could view it as some number, say a thousand, of little groups of neurons. If each neuron makes a thousand different links to the ten million others in the same neuron group then the number of different ways in which it could make connections within the group is 10^7 x 10^7 x 10^7 x … one thousand times. This gives 10^7000 possible patterns of connections. But this is just the number for one neuron. The total number for the 10^7 neurons in a group is 10^7000 multiplied by itself 10^7 times. This is 10^70,000,000,000.

If the 1000 or so groups of neurons can operate independently of each other then each of them contributes 10^70,000,000,000 possible wirings, increasing the total to the Holderness number, 10^70,000,000,000,000. This is the modern estimate of the number of different electrical patterns that the brain could hold. In some sense it is the number of different possible thoughts or ideas that a human brain could ever have.”
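The exponent bookkeeping in the passage above can be checked with a few lines of arithmetic. A minimal sketch, using the estimates quoted from Barrow and Holderness (1,000 groups, 10^7 neurons per group, 1,000 links per neuron); only the exponents are computed, since the final numbers are far too large to represent directly:

```python
# Exponent arithmetic behind the Holderness number. The inputs are
# the estimates quoted from Barrow/Holderness, not measured values.

links_per_neuron = 1_000       # connections each neuron makes
neurons_per_group = 10**7      # neurons in one group
groups = 1_000                 # roughly independent neuron groups

# One neuron choosing among 10^7 targets, 1000 times: (10^7)^1000 = 10^7000
per_neuron_exp = 7 * links_per_neuron               # 7000

# All 10^7 neurons in a group: (10^7000)^(10^7) = 10^(7 x 10^10)
per_group_exp = per_neuron_exp * neurons_per_group  # 70,000,000,000

# All 1000 groups operating independently: 10^(7 x 10^13)
total_exp = per_group_exp * groups                  # 70,000,000,000,000

print(f"per neuron:  10^{per_neuron_exp}")
print(f"per group:   10^{per_group_exp:,}")
print(f"Holderness:  10^{total_exp:,}")
```

Working in exponents is the only practical option here: even 10^7000 is a 7,001-digit integer, and 10^70,000,000,000,000 could not be written out with all the matter in the observable universe.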

Fine and good you say. So why can’t I remember where I put my keys?
If we can’t blame the equipment, perhaps it comes down to operator error…

For more on large numbers, see Wolfram MathWorld’s article “Large Number” and Scott Aaronson’s essay “Who Can Name the Bigger Number?”

The video below is a computer representation of the outer (pial) surface of a mouse’s cortex through all six layers and subcortical white matter to the adjoining striatum:

Addendum: “The Human Brain Has More Switches Than All the Computers on Earth” by Elizabeth Armstrong Moore

“The human brain is truly awesome. A typical, healthy one houses some 200 billion nerve cells, which are connected to one another via hundreds of trillions of synapses. Each synapse functions like a microprocessor, and tens of thousands of them can connect a single neuron to other nerve cells. In the cerebral cortex alone, there are roughly 125 trillion synapses, which is about how many stars fill 1,500 Milky Way galaxies.

These synapses are, of course, so tiny (less than a thousandth of a millimeter in diameter) that humans haven’t been able to see with great clarity what exactly they do and how, beyond knowing that their numbers vary over time. That is until now.

Researchers at the Stanford University School of Medicine have spent the past few years engineering a new imaging model, which they call array tomography, in conjunction with novel computational software, to stitch together image slices into a three-dimensional image that can be rotated, penetrated and navigated. Their work appears in the journal Neuron this week.

To test their model, the team took tissue samples from a mouse whose brain had been bioengineered to make larger neurons in the cerebral cortex express a fluorescent protein (found in jellyfish), making them glow yellow-green. Because of this glow, the researchers were able to see synapses against the background of neurons.

They found that the brain’s complexity is beyond anything they’d imagined, almost to the point of being beyond belief, says Stephen Smith, a professor of molecular and cellular physiology and senior author of the paper describing the study:

One synapse, by itself, is more like a microprocessor–with both memory-storage and information-processing elements–than a mere on/off switch. In fact, one synapse may contain on the order of 1,000 molecular-scale switches. A single human brain has more switches than all the computers and routers and Internet connections on Earth.

Smith adds that this gives us a glimpse into brain tissue at a level of detail never before attained: “The entire anatomical context of the synapses is preserved. You know right where each one is, and what kind it is.”

While the study was set up to demonstrate array tomography’s potential in neuroscience (which is starting to resemble astronomy), the team was surprised to find that a class of synapses that have been considered identical to one another actually contain certain distinctions. They hope to use their imaging model to learn more about those distinctions, identifying which are gained or lost during learning, after experiences such as trauma, or in neurodegenerative disorders like Alzheimer’s.

In the meantime, Smith and Micheva are starting a company that is gathering funding for future work, and Stanford’s Office of Technology Licensing has obtained a U.S. patent on array tomography and filed for a second.”
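The figures in the quoted article are enough for a back-of-envelope check of Smith’s switch count and the galaxy comparison. A minimal sketch; the inputs (125 trillion cortical synapses, ~1,000 molecular switches per synapse, 1,500 Milky Ways) are the article’s estimates, and the derived totals are simple arithmetic, not measurements:

```python
# Back-of-envelope check of the figures quoted above. All inputs
# are estimates from the quoted article, used here for illustration.

synapses_cortex = 125 * 10**12   # ~125 trillion synapses in the cortex
switches_per_synapse = 1_000     # ~1000 molecular-scale switches each

# Total molecular switches implied by the article's estimates
total_switches = synapses_cortex * switches_per_synapse
print(f"molecular switches in cortex: {total_switches:.2e}")

# The galaxy comparison: 125 trillion synapses versus the stars in
# 1,500 Milky Ways implies this many stars per galaxy:
stars_per_galaxy = synapses_cortex / 1_500
print(f"implied stars per Milky Way: {stars_per_galaxy:.1e}")
```

The implied figure of roughly 8 x 10^10 stars per galaxy sits comfortably within the usual estimates of 10^11 or so stars in the Milky Way, so the article’s comparison is at least internally consistent.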

