The Tyranny of Entropy: A Computational Universe
Consider this: your task is to generate the absolute maximum of entropy in an infinite void.
You’re a computer.
You have the power to compute and the capacity to store memory.
You are expected to strike a balance between how much memory and how much processing power you need to achieve this mission: creating an infinite entropic space, a void to fill with all forms of being and not-being, with all forms of physical noise.
Which power would you have to lean on more to achieve this?
You’re expected to handle an enormous amount of information. In the vastness of an infinite void, you might assume you will need a great deal of memory to store it all.
You’re also expected to generate new noise, sometimes unique, sometimes not. So you do need some processing power to produce the new.
Would it be a 50–50 split of investment between memory and processing power?
Would it be 80–20?
I think our newest innovations in blockchain and our growing understanding of a deterministic universe might hold the answer.
Let’s say we start from a quark and work our way up…
You have all the information necessary to define this quark. Let’s call this amount of information the Planck Capacity.
You absolutely need this information.
But this static information cannot take us where we want to go: a universe on a vast scale of entropic information.
This information is stale and frozen; it cannot reproduce unless we create a family of such definitions that come together to form a very early, primitive alphabet for the universe.
Alphabets, after all, are optimised ways to encode information through very small variations on a precursor symbol.
Let’s call this the Planck Alphabet: Planck1, Planck2, Planck3.
It is like A, B, C for the soup of our universe. A would not be of much use without the rest of the alphabet. A is meaningful through its sound and through how it acts when combined with other letters.
So is a quark: combine one down quark with two up quarks and you get a particle (a proton) that behaves in a certain way. Despite the minimal difference between them, the existence of up and down quarks creates a whole set of possibilities from their duality.
Let’s get back to the original problem at hand:
How do we allocate the right amount of our energy to memory (the power to retain) and to processing (the power to generate)?
It turns out you can minimise the need for memory in a deterministic space by using a vast amount of processing power.
By describing the initial seed, you can define the whole tree. It takes time, but with an infinitely distributed, powerful processing system, you may be able to get away with it.
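To make the trade-off concrete, here is a minimal sketch in Python (the names SEED and cell_state are illustrative assumptions, not from the text): instead of storing the state of every point in a vast space, we keep a single seed and recompute any point’s state on demand, spending computation to avoid spending memory.

```python
import hashlib

# The only thing we ever keep in memory: a single seed.
SEED = b"planck-0"

def cell_state(x: int, y: int, z: int) -> int:
    """Deterministically derive an 8-bit 'state' for the cell at (x, y, z).

    Nothing is cached; every lookup is pure computation from the seed,
    so memory use stays constant no matter how large the space grows.
    """
    digest = hashlib.sha256(SEED + f"{x},{y},{z}".encode()).digest()
    return digest[0]  # first byte of the hash, used as the cell's state

# The same coordinates always yield the same state: the whole "tree"
# is implicit in the seed.
assert cell_state(1, 2, 3) == cell_state(1, 2, 3)
print(cell_state(0, 0, 0), cell_state(10**12, 0, -7))
```

In this sketch the seed plays the role of the initial description, and the deterministic hash plays the role of the rules that unfold it.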
Locality is one of the key principles of physics in our universe. Every piece of the universe can compute its own existence in accordance with its immediate surroundings, with no dependence on distant parts of the universe.
The whole universe can be reduced to computational modules, from quarks to quasars.
You might be shocked to hear this, but it is not a far-fetched idea. Many schools of science subscribe to this very thinking. We have complex emergent systems defined by only small, finite sets of rules. We have man-made cellular automata that generate beautifully orchestrated systems. When humans are tasked with building complex systems, they often reach for the atomic principle. Stephen Wolfram devoted an entire book, A New Kind of Science, to how complex behaviour arises from simple computational rules.
The atomic principle is the best way to maintain a coherent, deterministic space that has the attribute of locality.
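As a toy illustration of that principle, here is a short sketch of a one-dimensional cellular automaton in the spirit of Wolfram’s elementary rules; Rule 30 is chosen purely as a well-known example, and the code itself is my assumption, not anything from the text. Each cell updates using only itself and its two immediate neighbours, yet the global pattern quickly becomes complex.

```python
RULE = 30  # Wolfram's Rule 30: its eight low bits form the local update table

def step(cells: list[int]) -> list[int]:
    """Advance one generation; each cell looks only at left, self, right."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right  # neighbourhood code, 0..7
        nxt.append((RULE >> index) & 1)
    return nxt

# Start with a single "on" cell and watch locality produce global structure.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```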
Now, with this awareness in mind, we can return to the question of investment in memory versus processing.
I think we can now outline the minimum requirement for memory.
It has to carry enough information to differentiate itself from its neighbour.
It has to carry enough information to affiliate itself with its neighbour.
It has to carry enough information to mutate itself in accordance with its surroundings: just enough to generate the necessary entropy, but not so much as to cause absolute instability in the fabric of space.
To recap: it has to ID itself, ID its neighbours, and stay coherent (for programmers: something like a linked list).
This list of absolute necessities for the basic components might differ and evolve, but in essence it is a small, static set of information.
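For the programmers, here is a minimal sketch of such a basic component, assuming invented field names and an arbitrary placeholder mutation rule: each node carries just enough to identify itself, reference its neighbours, and compute its next state from its immediate surroundings alone.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: int                                            # ID itself
    state: int = 0                                          # its own local state
    neighbours: list["Node"] = field(default_factory=list)  # ID its neighbours

    def next_state(self) -> int:
        """Compute the next state from immediate neighbours only.

        The rule is a placeholder: a small, bounded change that keeps
        the node coherent with its surroundings without blowing up.
        """
        if not self.neighbours:
            return self.state
        return (self.state + sum(n.state for n in self.neighbours)) % 256

# Link three nodes into a tiny chain (the "linked list" of the recap).
a, b, c = Node(0, state=1), Node(1, state=2), Node(2, state=3)
a.neighbours, b.neighbours, c.neighbours = [b], [a, c], [b]

# Each node computes its own future locally, then all update together.
updates = [n.next_state() for n in (a, b, c)]
for n, s in zip((a, b, c), updates):
    n.state = s
print([n.state for n in (a, b, c)])  # [3, 6, 5]
```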
Let’s wrap this up, as it is getting heavy.
Why are we considering the universe as a system of optimisation?
Well, because evolution tends to favour the most optimised approach to building complex systems in a bottom-up fashion.
This thought exercise helps us identify the most likely pathway by which we (a complex universe) may have emerged.
Based on this exercise, it seems our universe may be more computational (processing) than physical (memory).
This could explain some of the most peculiar experiments we observe, such as the quantum eraser experiment. A universe made of computational elements rather than memory elements would naturally prioritise coherence across computation over coherence across time. It makes sense, then, that the quantum eraser experiment confuses us: the universe appears to act differently when viewed from a time-bound perspective (ours).
The universe might just be an ever-running decentralized cluster of nodes computing “what to be” in accordance with their neighbours at the smallest scale.
It produces the complexities we observe only at higher scales, through emergence.
And its only purpose is to generate more chaos: the never-ending entropy needed to simulate every possible combination of existence and the void.