Building Something out of Nothing

A good way to start to understand what cellular automata are, and why they are so unusual, is to have a play with them. In this article we are going to do exactly that. Here are two deceptively simple examples.

The Game of Life:

Elementary cellular automata:

Think of them a little like a game. They consist of a grid of cells, each of which can be either on or off, black or white. Accompanying this grid is a set of very simple rules, which describe how each cell should evolve in time based on the states of its surrounding neighbours. For example, a rule might be: “if all the cells around you are white, then next turn you should be white too”, or “if fewer than 3 of your neighbours are white, then next turn you should turn white”. The whole grid steps forward in time, and the rules determine how the board changes at each step.

So far, so unremarkable. The fascinating thing about cellular automata, or CAs for short, is that from this simplicity a staggering range of complexity emerges. Despite only following local rules, and without reference to any grand blueprint or design, these systems create their own complexity.

Take the Game of Life, for example. Originally devised purely for fun by the mathematician John Conway in the 1970s, its rules, as described on the website, are designed to simulate simple population growth, perhaps that of a bacterial culture. Despite having only 4 rules, the game plays host to a wide variety of behaviour: try the ‘glider gun’, one of the website presets. (Conway initially postulated that no initial configuration of cells could grow indefinitely, and the discovery of this pattern lost him a $50 bet.)
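Conway’s four rules (a live cell with two or three live neighbours survives, a dead cell with exactly three live neighbours comes alive, and every other cell dies or stays dead) can be sketched in a few lines of Python. This is a minimal illustration rather than the website’s implementation; wrapping the board around at the edges is a simplifying assumption of this sketch:

```python
import numpy as np

def life_step(grid):
    """Advance a Game of Life board by one generation.

    grid: 2D array of 0s and 1s. The board wraps around at the edges.
    """
    # Count live neighbours by summing the eight shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbours;
    # a dead cell comes alive with exactly 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A 'glider', the pattern behind the glider gun, placed on a small board:
board = np.zeros((6, 6), dtype=int)
board[1, 2] = board[2, 3] = board[3, 1] = board[3, 2] = board[3, 3] = 1
board = life_step(board)  # one generation later
```

Running `life_step` repeatedly on the glider shows it drifting diagonally across the board, one cell every four generations: complex-looking motion from rules that never mention movement at all.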

We tend to think of sophisticated objects as having even more sophisticated causes. The idea of simple systems, following purely local rules, generating their own complexity from the ground up is a powerful one, with implications for how we view complexity in nature. It has attracted the interest of many scientists, among them Richard Dawkins. In his latest book, ‘The Greatest Show on Earth’, he describes a model designed by scientists led by George Oster at Berkeley, which simulates the growth of the blastula, a hollow ball of cells which occurs very early in the development of an embryo.

The team simplified the behaviour of a cell in the blastula down to essentially a spring, which would contract very sharply in response to being stretched beyond a certain length. Using computer simulation, the team then strung together a group of these cells into a ball, and poked it. Amazingly, despite neglecting almost all of the details of the blastula cells, the simulated ball did exactly what a real blastula does: it folded in on itself, a process called invagination. It even went so far as to pinch off a section of itself, a section which in real embryos goes on to form the spinal cord.

The fact that this behaviour was observed despite the inadequacies of the model is fascinating, and hints at universal characteristics that many systems may share, regardless of their details. Indeed, models like this one may equally be applied to the flocking of birds, or the growth of a snowflake in the atmosphere, with uncanny accuracy. Intuitively, we would look for an explanation of this complexity in some sort of detailed mechanism, specific to the system we were studying. But maybe our emphasis is all wrong, and complexity is much simpler than we think!

The second website shows the evolution of a specific type of automaton, called an ‘elementary cellular automaton’. These are one-dimensional rows of cells, where the state of each cell in the next iteration (shown one row below the current one in the applet linked) is determined by a triplet of cells: the cell itself and its two neighbours. There are 2^3 = 8 different states such a triplet can be in. One ‘rule’ is a list of what each of these 8 configurations causes the cell in the row below to become. Since each of the 8 triplets could send the cell below to either ‘on’ or ‘off’, overall there are 2^8 = 256 different rules, although some are equivalent to others by symmetry.
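The counting above can be made concrete in a short Python sketch. It builds the lookup table for a given rule number, using Wolfram’s convention that the binary digits of the number give the outputs for the neighbourhoods 111 down to 000; wrapping the row around at its ends is a simplifying assumption of this sketch, not part of the definition:

```python
def eca_rule(number):
    """Build the lookup table for one of the 256 elementary CA rules.

    The binary digits of `number` give the output for each of the
    eight possible neighbourhoods, per Wolfram's numbering convention.
    """
    bits = [(number >> i) & 1 for i in range(8)]
    return {
        (left, centre, right): bits[(left << 2) | (centre << 1) | right]
        for left in (0, 1) for centre in (0, 1) for right in (0, 1)
    }

def eca_step(row, table):
    """Apply the rule to every cell of a row (the ends wrap around)."""
    n = len(row)
    return [table[(row[i - 1], row[i], row[(i + 1) % n])] for i in range(n)]

# Rule 30, started from a single 'on' cell, as in the applet:
rule30 = eca_rule(30)
row = [0] * 15
row[7] = 1
for _ in range(3):
    row = eca_step(row, rule30)
```

Printing each successive `row` reproduces the triangular, seemingly random pattern the applet shows for rule 30; swapping in `eca_rule(110)` gives the interacting substructures discussed below.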

Not all rules display complexity; some fall into repetition quickly. However, try rules 30 and 110. Rule 30 displays chaotic, non-repetitive, seemingly random behaviour even from simple inputs. But rule 110 is the most interesting of all. It displays behaviour which is not quite random, but not predictable either, with substructures that appear and disappear again, and which can interact to produce further structures.

These cellular automata, and in particular rule 110, have been studied most famously by Stephen Wolfram in his 2002 book, A New Kind of Science. It turns out that rule 110 can act as the most general form of computer, making it the computational equivalent of many systems we might initially think were far more sophisticated, like the computer you are reading this on. This discovery, by one of Wolfram’s assistants, Matthew Cook, strengthens an assertion made by Wolfram in his book: that there is a universal maximum level of computational power, and that many systems found in the natural world attain it. To Wolfram, we are computationally equivalent to rule 110.

An interesting feature of these rules that lends credence to this idea is ‘computational irreducibility’. This is the idea that you cannot tell what rule 110 is going to do without simply running it. There is no shortcut to predicting what it will do; we cannot outsmart it. This makes prediction, in the sense of traditional physics, impossible. If nature is full of these processes, which we cannot outsmart, it makes sense to think of ourselves as part of them. This idea has even been proposed as an explanation of free will!

The bold claims do not stop there. It has been conjectured by Gerard ’t Hooft, Stephen Wolfram and others that the entire universe may be a computer, possibly a cellular automaton, comprised of discrete chunks of space and time. As Richard Feynman put it:

“It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? … So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the checker board with all its apparent complexities. But this speculation is of the same nature as those other people make — ‘I like it,’ ‘I don’t like it’ — and it is not good to be prejudiced about these things.”

About Jack Binysh

Jack Binysh '13 is reading for a degree in Physics at Lincoln College.