3. Minds and Machines
The question naturally arises: can a Turing machine, given enough time, compute the functions of a human brain?
It’s a question Turing himself asked, and not an easy one to answer. If a machine performed the same functions as a mind, would it be conscious, as we are? Turing devised his famous Turing Test as a test of machine intelligence: could a computer pass as human, via a text-based chat? Some interpret passing the Turing Test as an indicator of consciousness. Others, following John Searle’s Chinese Room argument, liken it to an English speaker in a room, shuffling Chinese symbols around according to rules - does the man really understand Chinese?
Let’s set these rather deep philosophical problems aside for now, and ask an even more basic, pragmatic question: is a Turing machine even capable of ‘simulating’ a human mind at all, in the way it can simulate other Turing machines?
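To make ‘simulating’ concrete, here is a minimal sketch of a Turing machine interpreter in Python. The rule-table format and the binary-increment machine are hypothetical examples of my own, not anything from Turing’s papers; the point is simply that the interpreter `run` is itself an ordinary program - one machine executing another machine’s rule-table.

```python
# A minimal Turing machine interpreter. Assumption: my own rule-table
# format of (state, symbol) -> (new state, symbol to write, move L/R).
def run(rules, tape, state="start", head=0, blank="_", halt="halt", max_steps=10_000):
    tape = dict(enumerate(tape))                 # sparse tape built from a string
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, blank)           # read the cell under the head
        state, write, move = rules[(state, symbol)]
        tape[head] = write                       # write, then move left or right
        head += 1 if move == "R" else -1
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# A hypothetical example machine: increment a binary number on the tape.
# Scan right to the end of the number, then carry leftwards.
increment = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt",  "1", "R"),
    ("carry", "_"): ("halt",  "1", "R"),
}

print(run(increment, "1011"))   # 11 in binary -> "1100", i.e. 12
```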
Some scientists, Roger Penrose among them, claim that even this more modest goal is impossible. The usual explanation offered is that there is a gap in our knowledge of nature, and Penrose pinpoints this gap in quantum physics: on his account, our brains exploit some unknown quantum phenomenon to perform computations that give rise to consciousness.
The debate is a long way from being resolved. No modern computer is capable of passing the Turing test consistently. They can certainly keep up a simple routine for a while, dishing out canned responses according to a rule-table much like those of Turing’s machines. But what if you asked one an abstract, hypothetical question?
"What would happen if you pushed a car up a hill with a piece of string?"
A four-year-old child would be able to tell you such a thing is silly. You can’t push a car up a hill with string, because you can’t push with string at all. But a computer would struggle with questions of this nature. Algorithms, as we know them, struggle to model abstract properties of the world, and in particular to generalise them.
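To see why, here is a toy sketch of the ‘canned response’ approach, in the spirit of ELIZA-style keyword matching. The keywords and replies are made-up illustrations, not any real system’s behaviour; the point is that the hypothetical question falls straight through to the fallback, because no rule encodes what string can or cannot do.

```python
# A toy rule-table chatbot: canned replies keyed on keywords.
# (All rules and replies here are hypothetical illustrations.)
RULES = {
    "hello":   "Hello! How can I help you today?",
    "weather": "I'm afraid I don't have a weather report.",
    "name":    "My name isn't important.",
}

def reply(message):
    """Return the first canned response whose keyword appears in the message."""
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return "Interesting. Tell me more."   # fallback when no rule matches

print(reply("Hello there"))
# -> Hello! How can I help you today?
print(reply("What would happen if you pushed a car up a hill with a piece of string?"))
# -> Interesting. Tell me more.
```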
This applies even to the neural-network-inspired machine learning algorithms dominating the press with stories about how they will take your jobs. For all their power, even when run on the fastest supercomputers, they are ultimately limited to spitting out clever versions of canned responses drawn from their datasets. Ask them something hypothetical - ask them, in other words, to think - and none can generalise as humans do.
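A caricature of that limit, using my own made-up data: a ‘learned’ responder that answers with the reply attached to the most similar question in its dataset. Word overlap stands in here for whatever similarity measure a real system learns; nothing in the data encodes that string cannot push, so superficial similarity wins.

```python
# A caricature of answering by proximity to past data.
# (The dataset and the similarity measure are hypothetical illustrations.)
DATASET = [
    ("can you pull a car up a hill with a rope", "Yes - a rope can pull a car uphill."),
    ("can you lift a box with a crane",          "Yes - a crane can lift a box."),
]

def words(text):
    return set(text.lower().split())

def respond(question):
    """Pick the reply whose stored question shares the most words with ours."""
    overlap = lambda item: len(words(question) & words(item[0]))
    return max(DATASET, key=overlap)[1]

print(respond("Can you push a car up a hill with a piece of string?"))
# -> "Yes - a rope can pull a car uphill."
#    Superficially close, physically wrong: pushing with string never
#    appears in the data, so it can never be ruled out.
```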
The issue perhaps lies in our understanding of just what minds are. Minds aren’t well represented by any algorithm or computational model we possess. You don’t need to be a computer scientist to see that minds aren’t as rigid as our algorithms. Writers such as Douglas Hofstadter refer to them as ‘fluid’. I think the word and the concept behind it are useful ones - ‘fluidity’ potentially admits a mathematical formalism, to which we shall return.
If minds are too complex a construction for our understanding, let’s turn our attention to something more basic still.
Let’s look to the constituent parts of our minds. What about cells? What questions can we answer about those?