George Deane | Ethical Technology
Posted: Aug 5, 2013
The computational theory of mind is a view often tacitly held by some of the world’s most preeminent thinkers, especially in neuroscience and artificial intelligence. Much of the hope that technology will one day allow for mind uploading and conscious artificial intelligence rests on the unfounded assumption that computationalism is true: that if a system behaves as if it is conscious, that is good enough reason to attribute consciousness to it.
But being a marvelous fake isn’t the same as the real thing, and just because the lights are on doesn’t mean anyone’s home. Despite being a useful working paradigm in cognitive science and computational neuroscience, computationalism falls flat as a theory of consciousness.
There is insufficient reason to think that consciousness is, by its nature, an emergent and substrate-neutral phenomenon of computation. John Searle delivered a powerful blow to computationalism by debunking the notion that syntax is sufficient for semantics with the Chinese Room Argument, but he later went further, arguing that the theory is not only false but incoherent.
To understand Searle’s point it is first worth distinguishing features of the world that are independent of any observer, i.e. intrinsic and not open to interpretation, from those which are relative to an observer, or extrinsic. Typically the natural sciences are concerned with the former. It is worth noting that observer-relative features of the world can also be objective; many objects such as wallets, books and clothing are defined not in terms of physics but in terms of function. Armed with this distinction we can ask: is computation observer relative or observer independent?
Computational interpretations can be assigned to processes with certain structural features, but computation cannot be discovered in nature, because a computation does not name a physical process; computations are more accurately thought of as structural descriptions. To see this, imagine finding a chair-shaped object on a distant planet never inhabited by life. A chair is a chair only in terms of its function, even though that function is implemented in virtue of physical features; the assignment “chair” is observer relative.
To state, therefore, that the brain is intrinsically a computer is not only fallacious but incoherent. Computation can be assigned to natural processes, be it the brain, the weather, or any other physical system, but it does not denote any intrinsic property, as these systems are fundamentally physical. Searle highlights this fact with a comparison: “How does the visual system compute shape from shading; how does it compute object distance from size of retinal image?” A parallel question would be, “How do nails compute the distance they are to travel in the board from the impact of the hammer and the density of the wood?” And the answer is the same in both cases: if we are talking about how the system works intrinsically, neither nails nor visual systems compute anything at all.
For a more intuitive illustration of this point, consider an uncomfortable entailment of the computationalist theory of mind: multiple realizability. In theoretical accounts of computation, the substrate of implementation can be anything so long as it is stable enough and rich enough to carry out the state transitions that complete the computation. It would follow, then, that my consciousness (and yours) could be instantiated by 100 billion glaciers (one for every neuron), provided they were arranged in such a way as to mirror the causal computational structure of the brain.
The parallel processing of connectionist models isn’t exempt either: a planet of 101 billion dalmatians genetically engineered to attentively observe their fellow dalmatians, stick out their tongues (signifying neural excitation) and then bark (signifying the neuron firing) could mirror the analogue computation going on in the brain, and this system would, on the computationalist account, be sufficient for consciousness. Any system that could carry out the state transitions to complete the computations would suffice, although intuitively it seems unlikely that these systems would be conscious.
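The substrate-neutrality claim at work here can be made concrete with a small sketch (my illustration, not the author’s): the same pattern of state transitions, here a two-neuron threshold circuit, is “implemented” once over plain integers and once over arbitrary tokens standing in for any physical substrate, tongue-out dalmatians included. On the computationalist view, only the shared transition structure matters.

```python
# Illustrative sketch of substrate neutrality (hypothetical example):
# the same abstract transition rule realized by two different "substrates".

def step(weights, threshold, inputs):
    """Abstract transition rule: a unit fires iff its weighted input
    meets the threshold. Nothing here mentions any physical medium."""
    return sum(w * x for w, x in zip(weights, inputs)) >= threshold

def run_circuit(encode, decode, raw_inputs):
    """Run an AND-then-NOT circuit over any encoding of the states 0/1."""
    bits = [decode(s) for s in raw_inputs]
    and_out = step([1, 1], 2, bits)           # unit A: logical AND
    not_out = step([-1], 0, [int(and_out)])   # unit B: logical NOT
    return encode(int(not_out))

# Substrate 1: plain integers.
ints = run_circuit(lambda b: b, lambda s: s, [1, 1])

# Substrate 2: tokens naming arbitrary physical states (dalmatian tongues).
enc = {0: "tongue-in", 1: "tongue-out"}
dec = {"tongue-in": 0, "tongue-out": 1}
dogs = run_circuit(lambda b: enc[b], lambda s: dec[s],
                   ["tongue-out", "tongue-out"])

print(ints, dogs)  # identical computation, wildly different substrates
```

The point of the sketch is exactly the one the article presses: since the computational description is satisfied equally well by both realizations, computation alone cannot be what distinguishes a conscious system from an unconscious one.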
Searle puts this point best of all: “For any program there is some sufficiently complex object such that there is some description of the object under which it is implementing the program. Thus for example the wall behind my back is right now implementing the Wordstar program, because there is some pattern of molecule movements which is isomorphic with the formal structure of Wordstar. But if the wall is implementing Wordstar then if it is a big enough wall it is implementing any program, including any program implemented in the brain.”
Unless we accept that a large enough wall would be instantiating every possible state of consciousness simultaneously, it appears there is more to consciousness than computation. If computation can be attributed to anything, then it is trivial to call the brain a computer. Searle proposes we look beyond computation to what the brain actually is: a physical system. This is a damaging blow to accounts of mind uploading and Strong AI that ignore the substrate of implementation. Those hoping to achieve digital immortality through whole brain emulation might not want to be too hasty to dispose of their biological bodies after all. Sharper criteria for consciousness need to be defined.
Searle, J. R. (1990). Is the brain a digital computer? Proceedings and Addresses of the American Philosophical Association, 64(3), 21–37.
George Deane is currently studying for an MSc in Cognitive and Decision Sciences at University College London. His undergraduate studies were in Philosophy. He is especially interested in neuroethics and the implications of technologies for cognitive enhancement.