Computing on the Brain

Modelling neurons for artificial intelligence

Art by Elizaveta Gelfreykh and Samuel Pilgrim.

Scientific knowledge of neurophysiology in the 1940s covered only a small fraction of what we know today. However, enough was known for a group of scientists led by Warren McCulloch and Walter Pitts in Chicago to develop a mathematical model of neural networks in the brain. This in turn led to the development of Artificial Neural Networks, an exciting field of computing that is gaining momentum.

At the time of McCulloch and Pitts, it was known that the human brain is made up of discrete cells called neurons. Neurons have a branch-like structure that conducts electrical pulses, allowing transmission of signals to hundreds, or even thousands, of neighbouring neurons via junctions called synapses. It was assumed, albeit falsely, that neurons behaved in an ‘all-or-nothing’ fashion: either they were on, and firing electrical pulses, or they weren’t. McCulloch and Pitts attempted to use this knowledge to describe the brain mathematically. According to the McCulloch-Pitts model of the brain, neurons act as logic gates; that is, they receive binary inputs (e.g., on/off or 0/1) from a number of pre-synaptic neurons, and perform logical operations producing a binary output. In McCulloch and Pitts’ view, neural networks in the brain work in much the same way as elementary electrical circuits.
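A McCulloch-Pitts neuron can be sketched as a simple threshold unit: it sums its binary inputs and fires (outputs 1) only if the sum reaches a threshold. With a suitable threshold a single unit reproduces an elementary logic gate. The sketch below is illustrative; the function names are not from the original model.

```python
def mp_neuron(inputs, threshold):
    """McCulloch-Pitts unit: fire (1) if enough inputs are on, else stay off (0)."""
    return 1 if sum(inputs) >= threshold else 0

# With the threshold equal to the number of inputs, the unit acts as an AND gate;
# with a threshold of 1, it acts as an OR gate.
def and_gate(a, b):
    return mp_neuron([a, b], threshold=2)

def or_gate(a, b):
    return mp_neuron([a, b], threshold=1)
```

In this picture a neural network really is an electrical circuit: wiring such units together composes logic gates in exactly the way McCulloch and Pitts envisaged.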

Nowadays the McCulloch-Pitts model of the brain is dead and buried, and their goal of understanding the brain in terms of mathematical structures and computational algorithms remains elusive, owing both to limited computing power and to limited knowledge of neurophysiology. However, their work inspired the field of artificial neural networks, an important area of computing that is flourishing as practical applications emerge in science and industry. Today we rely on, and trust, computer algorithms such as artificial neural networks to carry out a wide range of tasks for us.

The simplest example of an artificial neural network is one that classifies an input into one of two categories. The input to this ‘classifier’ is a number. If the number is above a certain threshold, the input is classified as being in class A; if it is below the threshold, the input is in class B. Such an artificial neural network could be used to distinguish between the hand-written characters ‘a’ and ‘b’. In this case a good choice of input would be the ratio between the height and the width of the hand-written character, since a ‘b’ is typically taller and narrower than an ‘a’. If the threshold were chosen appropriately, the artificial neural network would reliably distinguish between an ‘a’ and a ‘b’. Crucially, the threshold is not fixed once and for all but can be tuned, a feature known as adaptivity. For example, the hand-written character classifier can be adapted to different people’s hand-writing by varying the threshold parameter.
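The classifier described above amounts to a single comparison. A minimal sketch in Python, where the threshold value of 2.0 is an illustrative assumption rather than a figure from the article:

```python
def classify(height, width, threshold=2.0):
    """Classify a hand-written character by its height-to-width ratio.

    Tall, narrow shapes (ratio above the threshold) are labelled 'b';
    rounder shapes are labelled 'a'. The default threshold is illustrative.
    """
    ratio = height / width
    return "b" if ratio > threshold else "a"
```

For example, a character 30 units tall and 10 wide (ratio 3) would be labelled ‘b’, while one 10 units tall and 10 wide (ratio 1) would be labelled ‘a’.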

The adaptivity of artificial neural networks makes them well suited to problems where the precise relationship between the input and the output is not known in advance, and to problems where the characteristics of the input change over time. A good example is speech recognition, where the level and nature of background noise vary over time.
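Adaptivity of this kind can be sketched as choosing the threshold from labelled examples rather than fixing it in advance. Here the ‘training’ is deliberately simple (placing the threshold midway between the average ratios of each class, with made-up measurements), not a method described in the article:

```python
def fit_threshold(a_ratios, b_ratios):
    """Place the threshold midway between the mean ratios of the two classes."""
    mean_a = sum(a_ratios) / len(a_ratios)
    mean_b = sum(b_ratios) / len(b_ratios)
    return (mean_a + mean_b) / 2

# Height-to-width ratios measured from one writer's 'a's and 'b's (invented data).
threshold = fit_threshold([1.0, 1.2, 1.1], [2.8, 3.0, 3.2])
```

Re-running the fit on a new writer’s samples adapts the classifier to their hand-writing, which is the sense of adaptivity described above.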

Unfortunately, artificial neural networks do not always give the right answer. However, they can be made more sophisticated, and are commonly used by engineers and statisticians for a wide range of practical applications including speech recognition, credit card fraud detection and missile guidance.

The faculties of artificial neural networks remain limited in comparison to humans, but can we ever expect artificial neural networks to perform as well as the remarkable brains that inspired their creation?

One of the main obstacles lies in the development of computing power. Current computing power limits the size of artificial neural networks to thousands of neurons; by comparison, the human brain contains roughly 10¹⁰ neurons. At the moment, humans have much greater computing power than artificial neural networks, but who knows when the tables will turn?


About Philip Maybank

Philip Maybank is studying for a DPhil in Systems Biology at Linacre College.