Should we worry about the ethical implications of AI?
The philosopher Etienne Gilson once said that philosophy has a habit of burying its undertakers. These days the pallbearers tend to be those irritating scientists who make a point of saying they have no time for philosophy, but who unknowingly practise it anyway. You cannot do science without making metaphysical assumptions, and you can never draw a scientific conclusion in total confidence that the conclusion has no ethical consequences. To deny this, when you think about it, is itself a metaphysical claim. The pallbearers should lighten up a bit.
So here’s my question: given where we are in the science of artificial intelligence, should we worry about the ethical implications of machine consciousness? I’m going to suggest that we have to. Not because machine consciousness is possible, but because by convincing ourselves that it is possible we come close to thinking of human persons as being mere machines.
First things first: is machine consciousness in fact possible? The proponents of what is sometimes called strong AI argue that in principle it is. For these theorists the soul is no more than the mind, which in turn is simply a type of software run on the hardware of the human brain. Ultimately, on this view, your interior mental landscape is no more than an algorithm, and if fully conscious machines are a long way off, this is merely because the algorithm is a particularly complex one. Science, eventually, will overtake this complexity. Job done. Time to start worrying now about our proper attitudes to those future replicants.
But this “optimism” about the possibility of conscious machines is only as well grounded as the assumptions on which it rests, the most important of which is a position in the philosophy of mind known as functionalism. Functionalists argue that the content of any mental state (such as the state of being in pain, or of believing that Brexit will ever happen) is determined by a matrix of causation connecting it to sensory inputs, behavioural outputs and other mental states. Since such a determination is neutral with respect to the biological material of the thing having the pain (or the belief), it follows that there is no conceptual problem in claiming that machines have as much chance of a mental life as you or I. Or if not as much chance, then at least some chance.
Functionalism has its difficulties. Some mental states have what philosophers call a phenomenal content, a “what it feels like” bit. There is something “it is like” to have a toothache which seems to evade any attempt to characterise it as a purely functional mediating mechanism between something happening to me, and me reacting to that something. That something is that it hurts. And, just as important, that if I am the one having the toothache, it hurts me. There is an apparently irreducible subjectivity about our mental lives, an “I” at the centre of them, that from the perspective of functionalism is elusive. Functionalism might be a reasonable account of the human object, but it furnishes an incomplete account of the human person.
A distinction presents itself, therefore, between intelligence (which we can think of in functional terms, as a general ability to solve problems according to rules) and consciousness (a far richer thing). And it may be that the distinction suggests a difference not in complexity but in kind. In which case the successful development of ever more impressive technological distractions might not say anything at all about the possibility of machine consciousness.
And this is where the danger arises. To think of the mind as a sort of computer program is to be in the grip of a pretty powerful metaphor. Such things are hard to let go. Thus functionalists (and other philosophers sympathetic to purely naturalistic accounts of the human person) have found themselves simply dispensing with the recalcitrant aspects of consciousness, those parts of the mind which seem less amenable to the functionalist description. The phenomenological magic show – the world of sensation, moral shame, love, regret, aesthetic wonder – is compressed into an algorithmically shaped hole, and not all of it will fit in. The planetarium of the human mind is replaced with a decades-old telescope purchased at the local car boot sale.
In short, in order to make plausible the claim that machines can be conscious we are increasingly encouraged to think of persons as machines. Thus we have idiocies such as transhumanism – the view that since we are no more than programs, bodily death can be circumvented by simply uploading ourselves onto different hard drives.
Strong AI theorists, wittingly or not, are aggressing against the religious idea in general and Christian theism in particular. It is not for creatures to usurp the role of a Creator. And to come to think of ourselves as mere machines is to overlook the significance of the Incarnation. The church father Athanasius said that God became man in order that we might share in his divinity, and not merely as a means of emphasising our status as just one more part of nature. And God saved us, on this view, because only God – through the Resurrection – can save us. It is not “transhumanism” that defeats death, but the fact that, as St Paul said, God took on the nature of a servant and humbled himself in the face of it.
Just as a minor deviation at the genetic level can have catastrophic consequences at the level of the species, so a metaphysical error regarding the nature of human consciousness has led us, basically, into a species of idolatry.
This, in my view, is the real moral issue at the heart of the “AI revolution”. Not that we will have to reassess our evaluation of the moral status of machines, but that we are led into a morally and religiously catastrophic conception of the nature of created persons.