The Laws of Nature
When you first encounter physics at school, you are introduced to the common idea that, at least in principle, every phenomenon could be explained if we knew exactly what the “Laws of Nature” are. This idea is pretty simple, but not free from conceptual problems.
First of all, you are assuming that there is a finite set of mathematical relationships that for some reason has been selected among all the possible ones. For instance, Albert Einstein believed that in the end one and only one simple and beautiful Master Equation is meaningful. There is no other way: with this equation you can describe every possible process in the only possible Universe. However, string theory suggests that there could be a “landscape” of possible sets of Laws, each set resulting in a particular Universe. Is there any selection principle out there that excludes all but one? Or do we live in just one of the possible Universes? This question suggests that there could be other Universes out there (perhaps stemming one from another), each one being the implementation of a different set of Laws. All together they form the so-called Multiverse.
Seth Lloyd pointed out that these Laws, at least as we know them at present, are based on calculus. Calculus operates on real numbers, and real numbers have infinite precision. There are good reasons to doubt that infinite precision could be attained in our Universe even in principle, and the key point is: information.
You can associate information with every particle. It is estimated that there are some 10^90 particles in the Universe, thus the maximum information contained in the known reality is about 10^120 ≈ 2^400 bits. Indeed a very large number, but not infinite. Even Richard Feynman was puzzled by this: how come Nature calculates Feynman diagrams with infinite precision if there is an infinite number of diagrams for every interaction that occurs?
Digital Reality
One possibility is that there are no Laws of Nature at all: they arose dynamically when the Universe came into being. You can explore something similar with your computer (or even a sheet of paper!) using cellular automata.
First, you need to define an N-dimensional grid (one or two dimensions are a good choice to start with). Every cell can be in one of two possible states: on or off, 1 or 0 (this represents 1 bit of information!). Then, you define a set of rules. At each iteration the state of every cell is updated according to these rules, as a function of the states of the surrounding cells at the previous iteration. In 1D each cell has 2 neighbors, in 2D it has 8 neighbors, in 3D it has 26 neighbors (in general, the relationship N(D) = 3*N(D-1) + 2 = 3^D − 1 holds).
The simplest grid is made of just 1 spatial dimension plus the time dimension. Even in this case you can reach an incredible level of complexity, as in the case of “Rule 110” in the Wolfram classification. Since there are 2×2×2 = 8 possible binary states for the three cells neighboring a given cell, there are a total of 2^8 = 256 elementary cellular automata, each of which can be indexed with an 8-bit binary number (in our case 110_10 is written in binary as 01101110_2). Rule 110 is of particular interest since it has been shown to be computationally universal, i.e. it can simulate any Turing machine, with only a polynomial slowdown. Moreover, Rule 110 shows “class-4” behaviour, producing patterns that are neither purely repetitive nor completely random.
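To make this concrete, here is a minimal Python sketch of an elementary cellular automaton (the row is made periodic for simplicity; the names and layout are just illustrative). The binary digits of the rule number directly encode the update table:

```python
def step(cells, rule=110):
    """One update of an elementary CA on a periodic row of 0/1 cells."""
    n = len(cells)
    # The new state of cell i is bit (left*4 + centre*2 + right) of the rule number.
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Evolve a single live cell and print the first few generations.
row = [0] * 31
row[15] = 1
for _ in range(12):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Any other rule number plugs straight into the same function, so you can scan all 256 elementary automata with a one-line loop.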
If you take a closer look, the patterns look like particles in a bubble chamber, travelling through a foamy medium and occasionally interacting with each other, sometimes producing new patterns that look like new kinds of particles. By the way, since the rules are forward-only, no travel back in time is allowed and the grandfather paradox is easily solved.
With two dimensions, entirely new patterns appear. Probably the most celebrated one is the “Game of Life” by John Conway, in which a large set of “objects” shows intrinsically complex behaviours such as rotations, emissions and pulsations. And the rules are simply:
- Any live cell with fewer than two live neighbours dies, as if caused by under-population.
- Any live cell with two or three live neighbours lives on to the next generation.
- Any live cell with more than three live neighbours dies, as if by overcrowding.
- Any empty cell with exactly three live neighbours becomes a live cell, as if by reproduction.
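These four rules translate almost line by line into code. A minimal Python sketch, representing the board as a set of live-cell coordinates (a common encoding, though not the only one):

```python
from collections import Counter

def life_step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    # Count how many live neighbours each cell of the board has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Survival with 2 or 3 neighbours, birth with exactly 3; everything else dies.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Note how all four rules collapse into a single condition: a cell is alive in the next generation if it has exactly 3 live neighbours, or if it has 2 and was already alive.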
Could it be that the Universe is an enormous cellular automaton running on a massive “supercomputer”? It has been suggested that each cell would be as large as a Planck length squared, which is the smallest meaningful length in our Universe: lp = 1.616252(81) × 10^{−35} m, and that each iteration happens every Planck time tp = 5.39124 × 10^{−44} s (think of it as a quantum of time). The number of cells in the Universe would be astonishingly large, and so would be the computational power of the hypothetical supercomputer that updates the status of all these cells at every tick. It is worth remembering here that quantum gravity effects are expected to appear for interactions happening at a scale of the order of the Planck length.
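A back-of-envelope estimate gives a feeling for the numbers involved. Assuming cubic cells of Planck-length side filling the observable Universe, and taking its radius to be roughly 4.4 × 10^26 m (an approximate figure, used here only for the order of magnitude):

```python
# Rough estimate of the size of a hypothetical Planck-scale automaton.
# Assumptions: 3D cubic cells of side lp; observable-Universe radius ~4.4e26 m.
l_p = 1.616252e-35   # Planck length in metres
t_p = 5.39124e-44    # Planck time in seconds
R = 4.4e26           # radius of the observable Universe (approximate)

cells = (R / l_p) ** 3          # on the order of 10^184 cells
updates_per_second = 1 / t_p    # on the order of 10^43 iterations per second
print(f"{cells:.1e} cells, {updates_per_second:.1e} updates/s")
```

Whatever the exact inputs, the conclusion is the same: the bookkeeping required dwarfs anything we normally call a computer.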
One interesting point is that we can interpret every pattern of cells as a kind of particle. In this paradigm, interactions are forced to be local, since they are defined by the nature of the Rules: the status of a cell depends only on the status of the surrounding cells. This has an interesting corollary: there must exist a maximum velocity, that of a cell with status 1 which at the next generation makes a step forward (or diagonally). Perhaps a pattern with this property could represent a physical particle travelling at the maximum speed, i.e. the speed of light. In fact, this velocity does not depend on the velocity or the direction of the pattern, but is an intrinsic property of the Game, as specified by the Rules. It has been suggested that physical space could somehow arise from the computational one. The existence of a maximum velocity for the propagation of information is the second postulate of Special Relativity, the other one being the principle of relativity itself (the laws of physics must be identical for all observers – uh uh, these Laws again).
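This “light cone” is easy to check numerically: for any elementary rule that maps the all-dead neighbourhood to dead, a lone live cell can influence at most one extra cell per side per generation. A small self-contained sketch:

```python
def step(cells, rule):
    """One update of an elementary CA on a periodic row of 0/1 cells."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Every even rule number maps the all-dead neighbourhood (000) to dead, so
# information from a single live cell spreads at most one cell per step.
for rule in range(0, 256, 2):
    row = [0] * 41
    row[20] = 1
    for t in range(1, 11):
        row = step(row, rule)
        assert all(abs(i - 20) <= t for i, c in enumerate(row) if c)
```

The assertion never fires: after t generations, no live cell lies farther than t cells from the seed, which is exactly the cellular-automaton analogue of a maximum signal speed.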
However, the picture gets quite complicated when we want to take quantum mechanics into account (no surprise).
For Bohm (John) Bell tolls?
The birth of this topic is the publication of the paper “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?” by Einstein, Podolsky and Rosen (1935). It concerns a characteristic quantum phenomenon called entanglement. Two particles are said to be entangled when the properties of one of them are completely correlated with the properties of the other. For example, two electrons can be described by a wave function f = (|up>|down> + |down>|up>)/sqrt(2). If we now measure the spin of electron 1 (we have a 50% chance of getting up, 50% down), the wavefunction collapses to either |up>|down> or |down>|up>. What Einstein pointed out is that by measuring the spin of particle 1 we immediately obtain the information about the spin of particle 2, wherever it is! It has been proved experimentally (Gisin in 1997) that this holds true even when the spins of the two particles are measured while they are space-like separated (i.e. by a distance that can’t be covered by a signal travelling at the speed of light). Einstein dubbed such a signal a spooky action at a distance. He provocatively asked: how does particle 2 know the spin of particle 1 even if they are 1,000 light-years away? (My second question would be: is the knowledge inside the particle or inside our mind?)
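The perfect anti-correlation itself (though of course not the non-local mechanism behind it) is trivial to mimic. A toy simulation of repeated measurements of the state above along the same axis:

```python
import random

def measure_pair():
    """Toy collapse of (|up>|down> + |down>|up>)/sqrt(2) in one fixed basis."""
    # Electron 1 comes out up or down with 50% probability each...
    spin1 = random.choice(["up", "down"])
    # ...and electron 2 is then always found opposite, wherever it is.
    spin2 = "down" if spin1 == "up" else "up"
    return spin1, spin2

results = [measure_pair() for _ in range(10_000)]
assert all(s1 != s2 for s1, s2 in results)        # always anti-correlated
print(sum(s1 == "up" for s1, _ in results) / len(results))  # close to 0.5
```

Each individual outcome looks perfectly random, yet the pair is always anti-correlated; the puzzle, as the next sections show, is whether a pre-assigned “hidden” answer inside each particle could explain all the statistics.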
In the ’50s, David Bohm worked on a deeper explanation of quantum mechanics. In this picture, “hidden variables” guide the two entangled particles so that the wave function does not collapse at all: their properties are present in advance, not drawn by chance at the time of measurement. Reality would not be probabilistic but deterministic. However, we have no access to these variables, which are thus “hidden” from us. The net result would be that QM is incomplete and probabilistic, but reality itself has no such weird behaviour. So the question is: can Bohmian mechanics reproduce exactly every prediction of quantum mechanics?
In 1964 John Bell proposed a way to test the existence of these hidden variables, encoded in his famous theorem. It is based on an inequality (Bell’s inequality), which states that:
- The number of objects which have parameter A but not parameter B plus the number of objects which have parameter B but not parameter C is greater than or equal to the number of objects which have parameter A but not parameter C.
Bell’s inequality is satisfied by any local hidden-variable theory. This means that the inequality holds if:
- Logic is valid (at least, classical logic)
- Electrons have a definite value of the spin independently of our knowledge (hidden variables exist, the truth is out there)
- No information travels faster than light (principle of locality)
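Under these assumptions the inequality can be verified by brute force: each object carries definite values of A, B and C, the inequality holds separately for each of the 2^3 = 8 possible assignments, and therefore for any population built out of them:

```python
from itertools import product

# For every deterministic assignment of the three properties,
# [A and not B] + [B and not C] >= [A and not C],
# hence the same holds for any statistical mixture of such objects.
for a, b, c in product([False, True], repeat=3):
    lhs = int(a and not b) + int(b and not c)
    rhs = int(a and not c)
    assert lhs >= rhs
```

The key step is the one hidden in the comment: since counting over a population is just summing these 0/1 indicators, a per-object inequality implies the population-level one.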
Does quantum mechanics fulfill this inequality as well? No! So the two theories are intrinsically different: Bell’s theorem states that no local hidden-variable theory can reproduce all the results of quantum mechanics. Now, that’s why we run into trouble: a cellular automaton looks like a hidden-variable theory, so even in principle it could not reproduce quantum mechanics. If hidden variables exist, they must involve non-local interactions in order to reproduce the quantum predictions – but at this point the very core of the Universe-as-a-cellular-automaton idea drops (at least, in my opinion). Unfortunately for our beloved cellular automata, quantum entanglement does exist, and they can’t reproduce it.
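The violation can be seen numerically by plugging in the standard quantum prediction for two spins in the singlet state (the textbook example, which differs from the state written above only by a relative sign): the probability of finding both particles “up” along axes separated by an angle θ is (1/2) sin²(θ/2). Taking A, B and C to be spin-up along 0°, 45° and 90° is a common textbook choice of angles, used here purely as an illustration:

```python
import math

def p_both_up(theta_deg):
    """Quantum probability that both spins of a singlet pair come out 'up'
    along axes separated by theta_deg degrees: (1/2) * sin^2(theta/2)."""
    return 0.5 * math.sin(math.radians(theta_deg) / 2) ** 2

# For the singlet, particle 2 'up' along b means particle 1 would have been
# 'down' along b, so P(1 up along a, 2 up along b) counts "A but not B".
lhs = p_both_up(45) + p_both_up(45)  # (A, not B) + (B, not C): 0-45 and 45-90
rhs = p_both_up(90)                  # (A, not C): 0-90
print(f"lhs = {lhs:.4f}, rhs = {rhs:.4f}")  # lhs ~ 0.146 < rhs = 0.250
```

The left-hand side comes out smaller than the right-hand side, the opposite of what any assignment of pre-existing values allows: that is Bell’s inequality violated.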
Or do they?
Quantum Cellular Automata: Work in Progress
Cellular automata look suspiciously like a hidden-variable theory, but it seems that there is a way for them to escape Bell’s theorem, as shown by the Nobel laureate Gerard ‘t Hooft (they say you could win a lot of money if you manage to pronounce his name correctly!). The keyword here is complexity: cellular automata are interesting mainly because they can generate complex behaviour from very simple rules. Their large-scale behaviour could be so complicated that one needs a stochastic treatment to figure out what’s going on – ‘t Hooft’s idea is to use quantum mechanical techniques to address their statistics. In his own words:
Thus, we found sufficient motivation to proceed along the path chosen here: take some classical cellular automaton, use quantum operators to describe its time evolution, write the hamiltonian as an integral of an Hamilton density and treat the resulting theory as a full-fledged quantum field theory. The quantum states that serve as its basis will have to be interpreted entirely in the spirit of the Copenhagen doctrine. As such, there is no reason to expect these states to obey Bell’s inequalities.

So far so good. Unfortunately, we are still far from having a set of rules that gives rise to the observed Universe, or even a part of it. I believe that the path pointed out by ‘t Hooft is a serious one. Even if the low-level rules (the ontological level) do not show the same symmetries as the large-scale Universe (e.g. translational and rotational invariance), the phenomenological level does, by virtue of the possibility of building a Hilbert space. Space symmetries, energy and entropy are emergent properties of the cellular-automaton Universe. By the way, entropy has a dark relationship with black holes: is there a deeper explanation of why the entropy of a black hole is proportional to its area? Are cells “eaten” and information lost?
Who breathes fire into the equations?
We are not finished yet. As Stephen Hawking asked:
What is it that breathes fire into the equations and makes a Universe for them to describe?

There are equations or rules on one side, and there is “reality” on the other.
What’s the link between them? The very idea that the Laws of Nature and Nature itself lie on two separate planes of being could prove to be wrong at some point. Even the Rules could emerge from a deeper level still. We might even ask: who decided the Rules?
A very intriguing solution has been given by John Archibald Wheeler:
It is not unreasonable to imagine that information sits at the core of physics, just as it sits at the core of a computer. It from bit. Otherwise put, every ‘it’—every particle, every field of force, even the space-time continuum itself—derives its function, its meaning, its very existence entirely—even if in some contexts indirectly—from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. ‘It from bit’ symbolizes the idea that every item of the physical world has at bottom—a very deep bottom, in most instances—an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes–no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe.
With this view (which is not free from criticism) we can kill two birds with one stone: there is no Universe if no one is looking at it, and the Rules/Laws look the way we see them because they are the ones that make possible the creation of sentient beings who in turn observe the Universe. Mind blowing, isn’t it?
Conclusions
The road to reality is winding. I really appreciate Stephen Wolfram’s idea of complexity emerging from very simple rules, such as Rule 110. Cellular automata can certainly do that, even in very simple topologies, but the question is: can they reproduce our reality, our complexity? From an experimental perspective, I would turn the question into a more practical one: will someone be able to make testable predictions out of all this sooner or later?