The simple and straightforward answer is yes. We are in fact biological machines that have phenomenal subjective experiences, and these experiences lead us to the concept of consciousness. The real and interesting question, however, is: could a non-biological entity ever be conscious? Or, phrased differently, could we ever create an artificial intelligence that was conscious in the same way as us, in that it, for example, experienced the redness of red, the sweetness of an apple, and the strong emotions that we all feel?
Many simply answer no, perhaps due to a religious belief in a soul, or just because of a strong hunch that has always been with them. However, it is not until one has spent time properly contemplating and studying the issue that one realises it is not so straightforward. As someone in the latter category, I hope my insights can shed some light on it.
The fact is we will never really know for sure whether an AI machine we create is conscious like us, just as we can never know for sure that anyone else is conscious, because subjective experience is personal and private to the individual. We cannot directly experience other minds. The best we can hope to achieve is to work out the necessary and sufficient conditions for conscious experience. If an AI machine can meet these conditions, then perhaps we will be forced, however reluctantly, to infer that it is in fact conscious.
Some strong AI theorists argue that programmed artificial intelligence, given the right design, enough complexity and the correct simulated virtual environment, would have conscious experience. I am of the opinion that real-world embodiment is one of the crucial necessary conditions of consciousness, i.e. for an AI to be conscious it would need to be situated in a robot that took in its own sense data and interacted with its environment. However, perhaps I am wrong.
Take dreaming, for example; lucid dreaming in particular. I would consider the lucid dreams I have experienced to be conscious experiences. In some lucid dreams the vividness of experience has been no different to my real-world experiences, including the awareness, or meta-awareness, of my self. Yet I don't consider embodiment, at least at the time of dreaming, to be a necessary condition for that experience; it is purely the brain powering it. (When I say embodiment I mean a body to physically interact with the world and apparatus to take in sense data.) Perhaps, then, embodiment is not a necessary condition.
Perhaps, however, I have overlooked something here. One might reasonably argue that those dream experiences were only made possible by my real-world embodied experiences. The interactions in those dreams, the experience of qualia (if one can excuse the use of the term), is only possible because my real-world experiences imprinted the relevant data on my brain. Looked at from this perspective, the embodiment issue can perhaps be resolved: embodiment remains a necessary condition for conscious experience. (Although perhaps philosophical issues from "The Matrix" could turn the tide back the other way.)
This does not rule out the possibility of artificial consciousness. Much research has been, and is being, done in the field of situated AI and robotics. Rodney Brooks and his team at MIT have made much progress in designing and building realistic robots in real-world contexts. Marvin Minsky even goes so far as to say that human-like robots such as COG and Kismet could in a sense be regarded as conscious. I personally think he is overlooking philosophical issues in making claims like that. I'm pretty sure those robots don't have conscious experience, although they are very impressive in their design (look them up on Wikipedia). We should also be wary that simulating the complex behaviour of a conscious entity is not, on its own, a sufficient condition for inferring that it is conscious (although many might disagree). We need to look at how information is being processed at a lower level before making such inferences.
We are in fact doing rather well at working out how the brain processes information and then recreating mathematical and computer simulations of those dynamics. Connectionist artificial neural networks, for example, attempt to simulate the organisation and firing of neurons in our brain. The idea is that a large number of simple units (like neural cells) receive input and pass signals to units at higher layers via weighted connections (like neural impulses along dendrites and axons), with the network's output emerging from the combined weighted activity. It is also interesting to note that since connectionist models' early development, second-generation models have been proposed with a greater emphasis on temporal structure, context and dynamics (after all, our cognition is not static), and third-generation models with an even greater emphasis on such properties, plus more neurobiologically realistic features such as complex connectivity and deliberate noise, as Clark illustrates ("Mindware: An Introduction to the Philosophy of Cognitive Science", Oxford University Press, 2001, pp. 68-73).
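To make that concrete, here is a minimal sketch of such a network in Python, assuming numpy is available; the layer sizes, weights and input are illustrative only and don't correspond to any particular published model.

```python
# A minimal sketch of a connectionist network: simple units passing
# signals through weighted connections, layer by layer.
import numpy as np

def sigmoid(x):
    # Squashing function: each unit's activation, loosely analogous
    # to a neuron's firing rate.
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Three layers of simple units: 4 inputs -> 5 hidden -> 2 outputs.
W1 = rng.normal(size=(5, 4))  # connection weights, input -> hidden
W2 = rng.normal(size=(2, 5))  # connection weights, hidden -> output

def forward(x):
    # Signals flow through the weighted connections, like impulses
    # along dendrites and axons, to produce the network's output.
    hidden = sigmoid(W1 @ x)
    return sigmoid(W2 @ hidden)

print(forward(np.array([0.2, 0.9, 0.1, 0.5])))
```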
However, even so, such models are rather simplistic compared to real brain neural networks. They ignore the computation that occurs within cell bodies, such as the effects of neurotransmitter concentrations. Mathematical models have been created that capture these finer details, and I'm pretty sure it won't be too long before researchers integrate various AI techniques to create much more realistic simulations of human brain neural network mechanics. If we look at the exponential growth of hardware's information-processing capacity and of software complexity, we can see that this isn't just a far-fetched dream. However, I still think there is an issue being overlooked, one that I only came across very recently while reading a response to Ray Kurzweil's "The Singularity Is Near" (2005).
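To give a flavour of what "finer details" means here, below is a minimal sketch of a leaky integrate-and-fire neuron, a standard textbook model in which the membrane potential itself is simulated rather than abstracted into a unit activation. It is not the specific research I am alluding to, and all parameter values are illustrative.

```python
# A leaky integrate-and-fire neuron: the membrane potential leaks toward
# rest, integrates injected current, and fires a spike at threshold.
dt = 0.1          # timestep (ms)
tau = 10.0        # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset = -70.0   # reset potential after a spike (mV)

v = v_rest
spikes = []
for step in range(1000):
    current = 20.0 if 200 <= step < 800 else 0.0  # injected input (mV-equivalent)
    # Leak toward rest while integrating the input current.
    v += dt / tau * (v_rest - v + current)
    if v >= v_thresh:
        spikes.append(step * dt)  # record spike time (ms)
        v = v_reset

print(f"{len(spikes)} spikes, first at {spikes[0]:.1f} ms")
```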
With a computer, the hardware and software are intrinsically separate. A computer is what we call a "virtual machine": the hardware can be used to realise any number of different pieces of software, e.g. a Java program or a word processor. Cognitive scientists have a tendency to draw the analogy that hardware is to brain as software is to mind. I take the latter to mean that software is the equivalent of what we call the conscious mind. Thus, given the correct design and complexity (and arguably embodiment), a computer's software would be its consciousness.
The thing is, the hardware and software of our conscious experience are not separate; they are intrinsically linked. Our brains are not virtual machines. We can't run a Java program on them or open a Word document. When we consciously experience something, it is a direct result of the pattern of activity in and between our neurons. I'm not going to go into the intricacies of the hard problem of consciousness because it would drag this blog on far too long (for a more detailed account see David Chalmers (2002), "Consciousness and Its Place in Nature").
However, I have come to the conclusion over my studies that the physical and the mental are one. There are not two separate realms of physical and mental. Many physicalists deny the mental altogether; however, unless they are philosophical zombies with no phenomenal experience, I don't really think they approach the issue from the right angle. I think much of the problem comes from our misuse of language. In some respects it is best to explain things via the physical and in others via the mental. You can explain consciousness in terms of physical brain interactions or in terms of what it appears you are experiencing (e.g. the redness of red); whichever you prefer, as long as you realise that the two are the same thing.
If what I am saying is true, then AI software running on a computer would not have conscious experience like ours (although it might have some very abstract form of consciousness), because the hardware that implemented it would not be intrinsically tied to it. The same software could be run on any number of virtual machines and give the same pattern of behaviour. Since the brain and mind are one, we have our exact personal conscious experience as a direct result of the physical patterns of activity in our brain, and of those exact patterns only. Thus a machine could only be conscious like us if its hardware and software became one. Many cognitive scientists say hardware doesn't matter; I'm saying it does!
Now, I don't rule out the possibility that AI software and hardware could be intrinsically linked in this way; but it would be very difficult to design such dynamic hardware. Hardware that actually physically changed as it received new input, just as our brain's neural networks do, would be incredibly difficult to create. Only time and a great deal of serious research will shed any real light on the issue.
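To illustrate the kind of dynamics I mean, here is a minimal sketch of Hebbian-style plasticity in Python, where the connection weights themselves change with every new input. Note that this is still only software; on my view a conscious machine would need this sort of adaptation realised physically in the hardware itself (something neuromorphic hardware research is beginning to explore). The learning rule, rate and data below are illustrative only.

```python
# A minimal sketch of Hebbian plasticity: "units that fire together
# wire together". The weights re-shape themselves as each new input
# arrives - here only in software, not in physical hardware.
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(scale=0.1, size=(3, 3))  # connections between 3 units
learning_rate = 0.01

for _ in range(100):
    x = rng.random(3)         # incoming sense data (illustrative)
    y = np.tanh(weights @ x)  # unit activations
    # Hebbian update: strengthen connections between co-active units.
    weights += learning_rate * np.outer(y, x)

print(weights)
```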
So what is my answer to whether a machine (namely a non-biological AI) could be conscious? Well, yes: firstly, if it was embodied in a robot that gave it a real-world context; but more importantly, if its software and hardware became one, so that the machine's conscious experience was a direct result of its software and hardware developing together.