Wednesday, 3 October 2007

What philosophical insights can we draw from split brain syndrome and blindsight?

Split brain syndrome and blindsight are two extremely bizarre conditions, both of which strongly suggest a close relation between phenomenal conscious experience and the wiring of the physical brain.

Split brain syndrome occurs when the connection between the left and right hemispheres (the corpus callosum) is surgically severed to treat epilepsy, stopping the flow of information between the two hemispheres. This leads to the strange phenomena seen in the video link below: http://www.youtube.com/watch?v=ZMLzP1VCANo&eurl=

It’s hard for any of us without the syndrome to imagine what it would be like to have that experience. As Joe explains, it doesn’t feel any different from before; the brain just adapts. It clearly shows that processes need not be conscious to work.

It’s tempting to conclude from the doctor’s words at the end that there is a central processor that produces consciousness. However, because the syndrome provides strong evidence for the modularity of the human brain (and, for that matter, probably of other animal brains with similar anatomy), I would argue such a strong conclusion is misled. There are hundreds of neural networks in the human brain, each serving separate but interconnected functions. When a split occurs between two related networks, the brain is unable to pass information between them, and this is reflected in the distorted and separated phenomenal conscious experiences seen in split brain syndrome. The final stage would not be possible without at least most of the earlier processes, and it functions in the same way as any neural network: parallel distributed processing (PDP).

The brain does not have a soul, unified self or central processor behind it. It functions using distributed and interconnected network processing, and the strange phenomenal experiences that follow lesions between such networks are clear evidence of this. It does indicate, however, that consciousness can emerge only once a certain level of processing has occurred. This leads us nicely to blindsight.

Blindsight occurs when damage to higher processing areas of the visual cortex leads to a lack of visual phenomenal conscious experience in part or all of the visual field (depending on which side of the visual cortex is damaged). The interesting part is that in experiments, researchers have shown that patients are able to point at a dot on a screen with something like 99.9% accuracy (clearly showing that this wasn’t luck) without any visual conscious phenomenal experience whatsoever.

That’s pretty incredible. The experiment is clear evidence not only for the modularity of various brain functions, but also for the claim that phenomenal experience only emerges at a higher level of brain processing. Lower order visual functions (like locating a point) are possible through processing in lower order areas, without the phenomenal experience that emerges from higher order processing. We can only draw firm conclusions from blindsight about the emergence of visual consciousness, since it only demonstrates that aspect of consciousness. However, it would be strange and very counter-intuitive to reason that the same doesn’t apply to at least the other four senses, if not to higher order cognition as well.

It seems to me to strongly indicate that only animals with the correct brain modules, or at least the necessary complexity of neural network processing, would have any visual consciousness. Thus it is possible that many insects don’t have visual consciousness and that all their functions are facilitated by lower order unconscious processing. I’m not an expert in neuroscience or the anatomy of animal brains, so I’m not sure how high up the animal chain we can go here. All insects? All reptiles? All mammals but us? Perhaps any neuroscientists reading this can shed some light.

This condition also has implications for AI. We might well argue, if we believe artificial consciousness is possible at all, that only AI programmed with the correct modules and complexity would have any consciousness, visual or otherwise! This may seem obvious to many, but blindsight experiments (and split brain ones too, for that matter) give clear empirical evidence for it.

I’m not going to go any further here as I don’t have sophisticated knowledge of the neuroscience and biological anatomy of the brain, but hopefully the philosophical insights have been interesting and thought-provoking.

Monday, 17 September 2007

Split brain video

Just found a brilliant video of a split brain experiment in action:
http://www.youtube.com/watch?v=ZMLzP1VCANo&eurl=

I may well have to write a blog on the philosophical implications of this soon. I had totally forgotten about this bizarre and interesting phenomenon.

Till then, here’s an introduction.

Tuesday, 4 September 2007

Where do the boundaries of the mind end?

In the previous blog I discussed the idea that dualism is flawed and that some form of monism provides a much more satisfactory explanation of the ontology of the physical and the mental. According to this monist view, there is only one reality. Physicalists like to call it the physical and deny the mental altogether. I tend to avoid this and stick to the idea that there really is only one reality; it’s just that when talking about it, or indeed practising Cognitive Science, there are multiple levels of analysis that need to be accounted for in order to provide a satisfactory account: physical, phenomenal, functional, etc.

Now the immediate answer most give as to where the boundaries of the mind end is that they lie within the skin. Since Turing’s work on computation and the invention of the digital computer, there has been an analogy in Cognitive Science along the lines of: mindware is to brain as software is to hardware. In other words, the workings of the human mind are like the software in a computer, the brain being the biological computer and its program being the mind. This is of course not to be taken too literally, in a dualist sense. We are talking of studying mindware scientifically, as a process that occurs in the brain, not as some extra mental stuff. That aside, from this we should try to make computational models that correlate with the physical interactions in the brain. If we are successful, we will have explained mindware.

There have been numerous attempts over the years to theorize and build computational models of the brain/mind, frequently with help from AI simulations. Good old fashioned AI (GOFAI) followed Newell and Simon’s Physical Symbol System theory. The idea is a physical device that contains a set of symbols with combinable properties and a set of operations used to manipulate those symbols according to rules. The attractions of this were clear, as folk psychological terms such as beliefs could be directly implemented, actually becoming symbols in the computational models. Thus thought could be naturalized. However, the biological implementation of such models was never clear and many researchers rejected the approach as a realistic model of human cognition.
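
To make the idea concrete, here is a minimal sketch of a physical symbol system in this spirit. This is not Newell and Simon’s actual architecture; the facts and the single production rule are invented purely for illustration of how beliefs and desires can become literal symbols manipulated by rules.

```python
# A minimal physical-symbol-system sketch: symbols are plain tuples in a
# working memory, and a production rule derives new symbols from old ones.

# Working memory: a set of symbol structures.
facts = {("believes", "jim", "shop-has-juice"),
         ("desires", "jim", "quench-thirst")}

def apply_rules(facts):
    """If an agent believes the shop has juice and desires to quench
    thirst, derive the action symbol ("goes-to", agent, "shop")."""
    believers = {a for (p, a, o) in facts
                 if p == "believes" and o == "shop-has-juice"}
    desirers = {a for (p, a, o) in facts
                if p == "desires" and o == "quench-thirst"}
    derived = {("goes-to", agent, "shop") for agent in believers & desirers}
    return facts | derived

print(apply_rules(facts))
# The point: folk psychological terms become literal symbols operated on by rules.
```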

In the 80s a new wave of computational theorists known as connectionists emerged. Their models were analogous to real neural networks in the human brain (although far simpler). Artificial neural networks consist of simple nodes (like neural cells) linked in parallel, interacting with higher layers via simple connection weights (like axons and dendrites). Units receive input and pass their weighted activations up to higher layers (like electrochemical impulses). Today’s models have incorporated realistic temporal and biological features such as salient time delays and deliberate noise. There’s a lot to say about connectionism, and it definitely seems to provide a far more accurate model of computation in the brain, but I’ll leave it at that for the sake of this blog; a minimal sketch of the idea follows below. No doubt with the huge growth in computers’ information processing (IP) capacity we will see even more realistic models in the future, alongside neuroscientific research.
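
Here is the promised minimal sketch of the connectionist idea: units pass weighted activation through a hidden layer to an output. The layer sizes, weights and input are arbitrary illustrative numbers, not a model of any real network.

```python
import math
import random

def sigmoid(x):
    # A smooth "firing" function squashing the weighted sum into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # Each row of weights connects all inputs to one unit in the next layer.
    return [sigmoid(sum(w * i for w, i in zip(row, inputs))) for row in weights]

random.seed(0)
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]  # 3 inputs -> 4 hidden units
w_out = [[random.uniform(-1, 1) for _ in range(4)]]                       # 4 hidden -> 1 output unit

x = [0.2, 0.9, 0.1]                       # an input pattern (e.g. some sensory signal)
print(layer(layer(x, w_hidden), w_out))   # activation flows up through the layers
```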

However, it seems that if we take a monist approach, and perhaps even if we don’t, the processes under study should leave the skin. The models mentioned above all missed a key factor: embodiment. Human brains are situated in a body that, with the help of a nervous system and sensory apparatus, takes in real world input and causally interacts with the external physical world. More recently, then, researchers have put the focus on the interactions of brain, body and environment. The idea is that intelligent systems often don’t have to go through the rigorous higher cognition of assessing the problem, weighing up the alternatives, working out the values, doing the calculations and then getting the answer in order to achieve a goal. Most of the time such processes become automated in the dynamic reactions between body (from here on this includes the brain and nervous system) and environment. Could it be that much of our incredibly complex behaviour is facilitated by lots and lots of very simple interactions between and within the body and the environment? Well, dynamical systems theorists argue so, and that’s where much current research is taking place, with help from robotics and artificial life projects.

If we look at motor skills such as walking or throwing a ball, it’s fairly clear how this is the case, and motor skill theory is no new thing. Take catching a ball: the ball comes towards the person, the visual system tracks it and sends a signal to a low level processing area of the visual cortex, which unconsciously processes the information and relays signals to the arm and hand muscles, which coordinate perfectly, and the ball is caught. We all take such a process for granted, but really it’s quite incredible how a collection of such simple processes can produce the result of a ball at 20 mph being caught out of the air (a toy illustration of the principle follows below). We can at least, however, understand how this all works in principle.
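
As a toy illustration of the point (all numbers invented): a “hand” can track a moving ball using nothing more than a simple proportional correction applied over and over, with no explicit deliberation or prediction anywhere in the loop.

```python
# Many simple interactions doing the work of "higher cognition":
# the hand repeatedly closes a fraction of the gap to the ball.

ball_pos, ball_vel = 0.0, 0.5   # ball moving at constant speed
hand_pos = 5.0                  # hand starts well away from the ball
gain = 0.4                      # how strongly the hand corrects its error

for step in range(30):
    ball_pos += ball_vel
    hand_pos += gain * (ball_pos - hand_pos)   # simple corrective reflex

# The hand ends up tracking the ball with only a small constant lag.
print(round(ball_pos - hand_pos, 3))
```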

However, what are the implications of such an extended theory of mind for higher cognition?

One of the most interesting, I think, is that when studying cognition we shouldn’t just look at brain computation; we should also take into account cognitive technology. Today we are immersed in a world of computers, mobile phones, etc. Through these cognitive tools we are able to solve tasks with incredibly greater efficiency than we could without them. Think of the internet. Right now, if you wanted to find out how to get somewhere, you could open up Google and type in your destination, or look it up on trainlink, or whatever. You would have the answer in a matter of seconds. 15 years ago you would have had to look up some numbers in the phone book, dial them on a landline (which may or may not have been near you) and speak to the operator, and that might have covered only one part of your journey. 150 years ago you might have had to go down the road and find someone who knew how to get there, then find someone else who could transport you there. 1000 years ago… well, you get the picture.

The idea is that as technology evolves, so do our cognition and our ability to process information efficiently. It probably won’t be long until we all have decent internet on our phones, so when you’re in the middle of town and want to know something you can just get your mobile out, open Google, search, and there you go. The interesting question follows: does the internet (to which you now always have near immediate access) count as part of your mind, or at least your knowledge base? Most of course would answer no, just as they wouldn’t count a bit of paper with a number on it as part of your mind.

However, yet further into the future, there’s a good chance that people will be able to get chips in their brains that wire them up to the internet so they have immediate access at any time. It sounds like the stuff of science fiction, but I don’t think it can be ruled out. It’s certainly possible in principle, and as technology evolves so will society’s values, so it may well gain acceptance. Even if it never happens, the thought experiment is still there. Does the knowledge on the internet count as yours; does your mind extend into it? It’s hard to argue no, unless you say the mind is only linked to the biological brain or you believe in a soul. Such a situation would mean you had the same instant access to information on the internet as you do to information in your biological brain. Such an invention, to adapt a Jungian term, might well lead to a “collective unconscious”, or better phrased, a “conscious collective”. Of course it may turn out that this never happens, or that it’s impossible, but it does raise a very interesting point about the boundaries of the mind.

Saturday, 25 August 2007

The problems of dualism

Dualism works on the principle that there are two distinct forms of reality: the mental and the physical. The best known form of it is interactionism. Descartes in the Meditations argued that mind and body (or what we can interpret as mental and physical) had to be distinct realms of reality due to their difference in properties. Descartes held that the physical was spatial, temporal, extended and divisible, and the mental was not; thus they must be distinct. Descartes argued that the pineal gland was where the mental and physical came together. Of course nobody thinks that anymore, but there are similar ideas about where it all comes together in the brain. Interactionists argue that mind and body interact with each other through some form of causal mechanism. What this mechanism is remains very unclear, and as I will argue later, no such explanation will ever fully answer the question. I was contemplating the issue of the mental and physical having separate properties, and realised that this is only the case if one postulates a mental substance or soul, or in more recent times a homunculus: a centre of mind where ‘it all comes together’, so to speak.

I argue that if we quite reasonably take the mental to mean conscious phenomenal experience (and thereby move away from the notion of the homunculus), this is simply not the case. Our experiences are represented in space and time, and are thus extended. Similarly, we frequently divide our experience into parts; for example, we distinguish between different objects. Although ‘the stream of consciousness’ can come as one grand unified experience, it rarely does. Kant in the Critique of Pure Reason held that space and time were the necessary a priori conditions of any possible experience. I have to say I side with him strongly, and Descartes’s dualist argument fails as anything more than a superficial bit of pseudo-logic.

Another form of dualism is epiphenomenalism. This version is certainly very counter-intuitive, and it doesn’t satisfy me, although it does satisfy some. It basically states that the mental realm is a non-functional by-product of physical interactions. To me this seems to imply that conscious phenomenal experience, like feeling pain, ‘just happens’. It doesn’t help the conscious entity survive; it isn’t a selection pressure. The account seems very misleading, I think, and certainly not satisfying. While it can’t be ruled out, surely we should look for a more complete account with fewer holes.

Psychophysical parallelism is a final version, which states that the mental and the physical are two distinct causally closed systems, i.e. they are both ontologically distinct but there is no interaction between the two whatsoever. This is even more counter-intuitive and I’m not going to dignify it with much of a response. While again we can’t rule it out, it’s truly unsatisfying.

The main problem with dualism is that it always leaves what philosophers call an epistemic gap unexplained. How is the jump made from the physical world to the mental (and vice versa, in the case of interactionism)? Supporters come along and say things like “Well, it happens at the synaptic bridges,” or “Well, quantum mechanics is responsible.” The fact is, any justification that makes reference to a physical system or theory is missing the point. There will always be a metaphysical gap that is unexplainable. It’s not a question of whether future research in the physical sciences may find the answer: by definition the gap is non-physical, or at the very least its destination is. Of course, once again, this lack of explanation doesn’t rule out dualism in itself, but we want to find as complete an account as possible!

So what are the alternatives? The obvious one is physicalism: everything is physical; the mental just doesn’t exist. All of it, the redness of red, the bitterness of a lime, the pain of a cut: it’s all just particles interacting with each other. In a sense I support such an account, but I think the word physicalism is misleading, and it definitely isn’t going to be accepted as a theory by the majority of this world. People’s experiences are too unique and rich for them to accept such a cold truth.

Another position is what Chalmers calls type-F monism, of which there are various interpretations (one of them is my view, described below). An interesting yet puzzling interpretation is that there are proto-phenomenal properties underlying physical reality.

My position is what might be coined non-reductive monism. There is only one substance, whatever that might be. At times it is best to refer to it as physical, and at others as mental, or at least to refer to conscious phenomenal experience. It is appropriate at times to use folk mental talk such as beliefs and desires, and also qualitative descriptive words such as ‘red’ when we refer to experience. At other times we may need to work at a lower level of analysis, the physical level, describing the physical interactions of neurons, etc. We are trapped by language in the mind-body debate. We have to realise that, ontologically speaking, the mental and the physical are the same thing. However, in science we have to work on multiple levels of analysis. No level is more right than another, simply more appropriate given the current task. Taking a strong physicalist line seems (at least to me) to overlook that. I would even say that physicalists turn themselves into philosophical zombies by my definition of the mental (phenomenal consciousness). Many argue otherwise, but once again I think it’s a problem of how we define things in the debate, i.e. a problem of language. In one sense my view is ontologically monist: there is only one substance. In another sense it is a form of property dualism: one underlying substance with various characterizable properties; mental (phenomenal experience), functional, computational, physical.

Another important emphasis I want to finally add is that when studying consciousness, or to borrow Andy Clark’s term, ‘mindware’, we don’t just want to be looking at the causal interactions in the brain. Humans are embodied in an environment, and thus the study should leave the boundaries of the skin and take into account the interactions between body and environment to build the most complete picture. So when people say “What? But my experience is something different to just neurons firing”, our response can be something like “Of course; neuron firing is only part of the picture…” People have a tendency to focus on the brain in their physical accounts, which misses a large part of the picture. I will come back to this idea of extended mind/cognition in a later blog. Until then, I hope to have convinced the reader that there is a better alternative to straightforward dualism.

Tuesday, 21 August 2007

How do we know other minds?

When we look at or converse with other humans, we immediately assume that they are conscious like us; that they see red as red, feel emotion like us and can understand their own thoughts. We assume others have minds like ours, yet we have no direct access to their minds. How this is the case, and how we access other minds, has been debated among philosophers for many years. Husserl claimed that we note similar behaviour and infer other minds, whilst Sartre claimed we have more direct access to them. In more recent times, Cognitive Scientists and Neuroscientists have entered the debate, from which two well-known theories have emerged: "Theory" theory and "Simulation" theory.

"Theory" theory makes use of theory of mind (TOM), and fits in quite well with Husserl’s account. We observe others behaviour and then postulate or infer causal mental states that must be behind such actions. We naturally think in very mentalistic ways because of our immersion in language. We take a folk psychological understanding in our everyday intuitions of others behaviour. Thus we say things like; “Jim went to the shop because he believed they had orange juice for sale and he desired to quench his thirst.” Fodor claims that we can't imagine humans, as such socio-linguistic beings, not thinking in mentalistic terms and using a strong TOM.

Supporters of the theory claim that there is a special purpose mechanism in the brain that facilitates the process. People with autism are said to have a minimal TOM due to a deficit in this brain mechanism. On the other hand, people with Williams Syndrome, who are very social (good TOM) but lack good general intelligence, are said to have a well developed mechanism. However, the details of such a mechanism are unclear, and the theory has received much criticism for being overly mentalistic in its assumptions.

The other theory, “Simulation” theory, claims we have a more direct cognitive access to other minds, which is less inferential and more instinctive. It claims that when we see the actions of others, we use our brains to simulate their minds. We postulate differences between ourselves and others and incorporate such factors into our simulations. The theory is not that dissimilar to TOM, but it implies a greater dissociation from the personal perspective and a greater emphasis on the perspective of the other, and is thus thought to account better for empathy. However, both theories make large reference to mentalistic abstracta.

I would argue that it depends on the individual and situation in a real world context as to which method is used. Once again I am placing great emphasis on taking embodiment into account.

Certain people in certain situations might think about others’ actions by actually postulating the beliefs behind them ("Theory" theory). We can certainly imagine someone consciously thinking: “Jim is going to the shop because he believes he can get a drink there and he desires to quench his thirst.” My intuition, however, is that most people in most situations don’t go through such rigorous inferences (I know I certainly don’t, most of the time).

Other people in other situations might try to simulate the mind behind the actions of, say, a good friend. For example, upon seeing a friend crying, one might put oneself in the friend’s shoes, so to speak, and think about what things might make that friend cry. Thus, by realising a more direct access, one could understand related emotions and empathize to a greater extent. Once again, though, my intuition is that a lot of the time we don’t go as far as this. If we did it all the time, we would be using up an awful lot of brain power that could otherwise be saved. We have to take into account that such mechanisms came to be via natural selection, and thus brain power usage is a factor.

More recently, neuroscientific work on the mechanics of mirror neurons has led to a more detailed view of the issue. Thompson and Gallagher (1999) argue that upon perception of others’ actions, mirror neurons (found in the ventral premotor cortex of monkeys) are activated, which gives a direct access to other minds. This evolved mechanism means that usually no inference is necessary, saving precious brain power. It is as simple as a reflex: action observed, simulation triggered. When we see another in pain, we don’t go through long and unnecessary inferences; we feel empathy towards them due to the triggering of mirror neurons. Think of when you see someone really hurt themselves and it makes you squirm; it almost feels like you feel their pain. The explanation for this is mirror neuron theory.

The leading neuroscientist V. S. Ramachandran argues that this mechanism would originally have evolved for social dynamics and later enabled self-reflection on one’s own intentions, leading to a theory of mind. It is certainly true that social species require such a mechanism, and we can’t imagine them having a strong TOM before language evolved. Primate faces can convey a great deal of information in their expressions. Thus, by looking at others’ facial expressions, individuals can gain access to their emotions using the mirror neuron mechanism. This sharing of information increases social dynamics and the chances of survival for the group. It also decreases the IP load on inferential modules, which can then be freed up for other functions. Language is the vehicle of mentalistic cognition, so before its evolution other mechanisms would have been crucial for survival and social dynamics.

Supporters of the theory admit that at times we do use more complex inference mechanisms when we want to work out more precise details of other minds; thus at times we do use something like “Theory” theory or “Simulation” theory. But through embodied practice during development, most of our knowing of other minds is automated into the mirror neuron system. The theory also offers an empirical explanation for the brain area deficits associated with autism (although this hypothesis is still under study). Thus I conclude that mirror neuron theory really seems to be the key to our understanding of the issue.

Monday, 20 August 2007

Our Lives, Controlled From Some Guy’s Couch

I just read an article related to the idea that our reality/mind could be nothing more than a computer simulation. It’s worth reading even just for some of the ridiculous comments. It also raises some interesting philosophical points, but Dr. Bostrom actually thinks we can put a probability on the issue: his estimate “is that there’s a 20 percent chance we’re living in a computer simulation.” It even has a link to survival strategies for this "computer simulation", which I found absolutely hilarious.

http://www.nytimes.com/2007/08/14/science/14tier.html?_r=2&oref=slogin&ref=science&pagewanted=all&oref=slogin

I haven't written a blog for about a week, but I intend to write one on knowing other minds, and another on where the boundaries of the mind end, soon. Until then, I hope this provides some amusement.

Tuesday, 14 August 2007

What is the Self?

“The self” is one of the most challenging and interesting issues in philosophy of mind and Cognitive Science. We all have a phenomenal conscious feeling that our experiences are unified into one identity, a self; that our experiences belong to someone. However, what this self actually is, or whether it exists at all, is open to debate.

Such an issue has been debated for hundreds of years, long before modern science. Descartes (Meditations on First Philosophy, 1641) claimed that the self was the most clearly and distinctly known thing, as in his famous statement “Cogito ergo sum”. Hume, on the other hand, claimed the self was nothing more than a bundle of linked perceptions (A Treatise of Human Nature, 1739).

More recently, the philosopher Daniel Dennett has followed Hume’s intuitions by claiming that the self is a theorist’s fiction. He claims it is a mistake to look for selves in the brain or mind. He draws an analogy with a storytelling machine, “Gilbert”, that tells the story of a robot exploring a room. The narrative of the machine and the movement of the robot are very coherent, but there is no self. By analogy, we are centres of narrative gravity. We are all confabulators of our own stories, independent of any realist truth. The self has no privileged place; it is an abstraction that allows us to understand and explain our actions.

What I want to focus on in this blog is Thomas Metzinger’s account of the self, which he addresses in his book “Being No-one”. He claims that there is not enough conceptual clarity about the self, and hopes to build a multilevel, constrained notion of it called “the self-model theory of subjectivity”. Such a theory eventually leads to the conclusion that there is no such thing as a self.

He distinguishes between minimal and maximal notions of self. The minimal notion defines the boundaries of a system and its environment together with the dynamic generation of coherent actions (three constraints). The maximal notion defines a more extended self with an autobiographical memory (a further seven constraints). These constraints form a very long account, so I will focus on the three constraints for the minimal self: globality, presentationality and transparency.

  1. Globality – The idea here is that the contents of consciousness are whatever is currently being used by the system for functional and behavioural processes. This he draws from Global Workspace Theory (Baars 1988). One obvious problem is that empirical studies show that unconscious states also function in behaviour, and thus the contents of consciousness are not the only things available for use. Nonetheless Metzinger maintains this constraint.
  2. Presentationality – Any system with a minimal notion of self will experience a constant flow of consciousness. It will have a feeling of being in the now, and the outside world will appear unified. This seems very reasonable, but it can be pointed out that this constraint can only be tested on systems that can report their experiences, and thus other animals are ruled out.
  3. Transparency – The contents of our phenomenal consciousness are represented by a vehicle in our brain to which we have no direct access. We can’t see how things are being processed in our brains; such representational causes are left in the dark, as it were. This gives any such system a form of naïve realism, in which the system assumes its phenomenal perceptions and thoughts are the real self because they are the only things accessible.

Following this, Metzinger goes on to claim that any system generating a notion of self generates a phenomenal self-model (PSM). Due to the transparency of our phenomenal consciousness, we experience consciousness as a global whole, which leads us to the belief that the self is whatever we subjectively experience from moment to moment. Metzinger’s point is that there is no-one, in the strong sense, having the experience. There is no grand unified mental substance of the kind Descartes proposed. The self is a simulated illusion: a virtual reality of an ongoing dynamic representational process.

Metzinger isn’t claiming that the self completely doesn’t exist or that we don’t experience a notion of the self, because clearly most of us do. What he is claiming is that no such “things” as selves exist, in the way that “things” such as molecules and atoms exist as realist entities. We just have access to a transparent phenomenal self-model. Thus, contrary to the way we experience the world, there is no unified Cartesian substance (or soul) having these experiences.

Now this is a very hard concept for most to grasp but there are several real world examples that help clarify this abstract idea. These examples should help the reader see where Metzinger is coming from.

In terms of perceived body image, there are clear cases where a person can experience a distorted or completely absent body image. Astronauts in weightless conditions, for example, frequently report experiencing a complete lack of body image. Similarly, dissociative anaesthetics such as ketamine can produce the experience of body image absence. The phenomenal experience of body image is caused by representations in the neural networks of the somatosensory cortex. If inputs into these networks are missing (as in the astronaut case), or connections within them are interfered with (as in the ketamine case), distorted body image follows. A further good example is that of phantom limbs: patients with amputated limbs often report fully experiencing the feeling of a present limb where one does not exist. Again, the case shows how the experience of the self can be an illusion.

When we look at the autobiographical notion of the self, the notion that our thoughts are linked and our memories unified, other cases show illusions as well. Patients with mania and schizophrenia show how this form of the self can be highly distorted. Mania induces experiences in which the self goes beyond the limits of one’s body and mind; patients report experiencing the world as an extension of their selves. People with schizophrenia or dissociative identity disorder report experiencing multiple selves. Clearly in these cases the boundaries of the autobiographical self are not clear cut.

One really interesting criticism of Metzinger’s account comes from Marcello Ghin, who argues that there are real patterns in the world that can be described as selves, and thus Metzinger’s idea of abandoning the notion of the self is mistaken. While Metzinger claims that the subjective experience of the self emerges if a system operates under a transparent phenomenal self-model, Ghin adds the constraint that such a system must be self-sustaining. He argues that the phenomenal experience of self must have had a self-sustaining function to have been selected in evolution. Thus phenomenal self-modelling must help us regulate homeodynamics: avoiding danger, seeking food, or gaining hierarchical status in a group. The causal representational patterns of the PSM are therefore real entities useful for scientific predictions, and we should not abandon the notion of the self as a scientific concept.

As someone who believes that everything in reality can be understood as complex informational patterns, I see a lot of sense in where Ghin is coming from. Perhaps if we could find the neural correlates that cause the phenomenal experience of the self, we could conclude that we had found “the self”.

However, the issue of embodiment raises problems here. Since every individual is embodied in a different context, and is therefore formed from entirely different informational content, the patterns that form each person’s unique self will be entirely different. Thus one could argue that any kind of universal theory would be impossible, because the causal patterns of the self are unique to the individual. Nonetheless, Ghin’s point does provide reason to believe in a realist notion of the self; just perhaps one that will never be measurable and predictable universally.

The issue of the self is a huge topic, and a whole volume of books could be written about it, so I’ll stop here. I may well pick up some of these issues in a later blog. Until then, I hope some of the points here can get the reader thinking on the issue.

Wednesday, 8 August 2007

Metzinger on the self: Being No-one

This is an excellent lecture by Thomas Metzinger on the self: being no-one. I studied him last year, so his ideas have been quite influential on my own. When I get time I will write a blog on the self. Until then, I would recommend this: http://video.google.co.uk/videoplay?docid=-3658963188758918426

Tuesday, 7 August 2007

Could a machine in principle be more conscious than us?

This is a very hypothetical idea that relies, firstly, on the assumption that machines could be conscious like us (which I discuss in the blogs below), and secondly, on machines of such complexity, with the correct causal properties, eventually being actualized (which we will only know in the future). When I say machines I am referring to complex embodied AI.

Consciousness comes in degrees of intensity, which I would argue reflect how aware a system is of itself and its surroundings; this awareness comes as a result of efficient information processing between the system and its environment, and within the system itself. In short: greater awareness = higher consciousness, which is reflected in the intensity and vividness of one’s phenomenal conscious experience.

e.g. 1:

Normal dream: minimal awareness (and no meta-self-awareness) of self and surroundings = not very conscious state

Lucid dream: much higher awareness of self and surroundings = more conscious state

e.g. 2:

Just woken up: minimal awareness = not very conscious state

Woken up and alert: greater awareness of self and surroundings = more conscious state

e.g. 3:

Further heightened awareness of self and surroundings, perhaps due to meditation or psychedelic drugs = even higher state of consciousness

Machines could in principle far surpass the limits of biological IP and thereby achieve far greater states of awareness. If they do have phenomenal consciousness like us, then, their phenomenal experiences might be millions of times more vivid and intense, and we might have to concede that they have far surpassed the levels of consciousness a human can achieve.

Of course there are different forms of consciousness: for example, visual consciousness, auditory consciousness, emotional consciousness, self-consciousness. Whether a machine could surpass all of these forms would, one could argue, depend on its ability to process information efficiently in the relevant domain. Again, everything I have said relies on artificial consciousness being possible; but I think it makes an interesting point.

Could a machine ever be conscious? REVISITED

If you have not read the previous blog then I suggest you do, so you understand the context of this afterthought. In the previous blog I concluded that in order for an artificial machine to be conscious, its physical hardware has to develop along with its software, just as the physical development of and changes in our neural networks directly (intrinsically) cause our phenomenal consciousness (whatever that might be) to change as well. I argued that if a machine did not do this then it would not have conscious experience like ours, although it might have some very abstract form of consciousness.

However, I then thought of the case of cochlear implants, which allow the user to have normally functioning hearing, hearing being one such part of conscious experience. Now I don’t really know how the software or hardware of these implants works, but I assume they are fairly stable and static in their design, and not dynamic like biological neural networks. Therefore perhaps my unified software-hardware development theory needs reconsidering. If any reader knows about this, I’d be interested to hear from them.

Reconsidering once again, however, it may well be that at such an early stage of processing the unity of software and hardware development is unnecessary, because the biological cochlea doesn’t change much once it has developed, i.e. it has quite a static function.

There are other cases of combining artificial and biological parts in the brain, such as rats that can be remote-controlled via electrodes implanted among their neurons. Clearly biological and non-biological mechanisms can be combined to facilitate significant and interesting functions. However, at least at the level of rats, we don’t know if they have phenomenal consciousness like ours, due to their lack of reportability. If we were to start replacing many of a person’s neural networks with computer chips simulating the same function through a different physical medium, they might start to lose their usual phenomenal consciousness, or even develop a different one. (A good outline of this thought experiment can be found here: http://www.consciousentities.com/stories.htm#chip-head) One can only speculate on such an issue, but perhaps future research will shed light.

What do readers think?

Sunday, 5 August 2007

Could a machine ever be conscious?

The simple and straightforward answer is yes. We are in fact biological machines that have phenomenal subjective experiences, and these experiences lead us to the concept of consciousness. However, the real and interesting question here is: could a non-biological entity ever be conscious? Or, phrased differently, could we ever create an artificial intelligence machine that was conscious in the same way as us, in that it experienced, for example, the redness of red, the sweetness of an apple, and the strong emotions that we all feel?

Many simply answer no, perhaps due to a religious belief in a soul, or just because of a strong hunch that has always been with them. However, it is not until one has spent time properly contemplating and studying the issue deeply that one realises it is not so straightforward. As someone in that latter category, I hope my insights can shed some light on the issue.

The fact is, we will never really know for sure whether an AI machine we create is conscious like us, just as none of us can ever know for sure that anyone else is conscious, because subjective experience is personal and private to the individual. We cannot directly experience other minds. The best we can hope to achieve is to infer the necessary and sufficient conditions for conscious experience. If an AI machine can meet these conditions, then perhaps we will be forced, however reluctantly, to infer that it is in fact conscious.

Some strong AI theorists argue that programmed artificial intelligence, given the right design, enough complexity and the correct simulated virtual environment, would have conscious experience. I am of the opinion that real world embodiment is one of the crucial necessary conditions of consciousness, i.e. for AI to be conscious it would need to be situated in a robot that took in its own sense data and interacted with its environment. However, perhaps I am wrong.

Take dreaming, for example; lucid dreaming in particular. I would consider the lucid dreams I have experienced to be conscious experiences. In some lucid dreams the vividness of experience has been no different from my real world experiences, including the awareness, or meta-awareness, of myself. I also don’t consider embodiment, at least at the time of dreaming, to be a necessary condition for that experience; it’s purely the brain powering it (when I say embodiment I mean a body to physically interact with the world and apparatus to take in sense data). Perhaps, then, embodiment is not a necessary condition.

Perhaps, however, I have overlooked something here. One might reasonably argue that those dream experiences were only made possible by my real world embodied experiences. The interactions in those dreams, the experience of qualia (if one can excuse the use of the term), is only possible because my real world experiences imprinted the relevant data into my brain. If one looks at it from this perspective, then perhaps the embodiment issue can be resolved: embodiment remains a necessary condition for conscious experience. (Although perhaps philosophical issues from “The Matrix” could turn the tide back the other way.)

This does not rule out the possibility of artificial consciousness. Much research has gone on, and is going on, in the field of situated AI and robotics. Rodney Brooks and his team at MIT are making much progress on designing and creating realistic robots in real world contexts. Marvin Minsky even goes as far as to say that human-like robots such as COG and Kismet could in a sense be regarded as conscious. I personally think he is overlooking philosophical issues in making claims like that: I’m pretty sure those robots don’t have conscious experience, although they are very impressive in their design (look them up on Wikipedia). Also, I think we need to be wary that simulating the complex behaviour of a conscious entity is not alone a sufficient condition for inferring that it is conscious (although many might disagree). We need to be looking at how information is being processed at a lower level before making such inferences.

We are in fact doing rather well at working out how the brain processes information and then recreating mathematical and computer simulations of such dynamics. Connectionist artificial neural networks, for example, attempt to simulate the organization and firing of neurons in our brain. The idea is a large multitude of simple units (like neural cells), receiving an input and then interacting with units at higher levels via simple connection signals, or weights (like neural impulses along dendrites and axons), to produce an output from the resultant weights. It is also interesting to note that since connectionist models’ early development, second generation models have been proposed with a greater emphasis on temporal structure, context and dynamics (after all, our cognition is not static); and third generation models with even greater emphasis on such properties, plus more neurobiologically realistic features such as complex connectivity and deliberate noise, as Clark illustrates (“Mindware: An Introduction to the Philosophy of Cognitive Science” (Oxford University Press, 2001), pp. 68-73). A toy sketch of those temporal, noisy dynamics follows below.
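
Here is the promised sketch of the kind of temporal, noisy behaviour those later models emphasise: a single “leaky” unit whose state decays over time and is perturbed by deliberate noise. The parameters are invented for illustration; real models are vastly richer.

```python
import random

random.seed(1)
activation = 0.0
leak = 0.1        # how fast activation decays per time step
noise_sd = 0.05   # deliberate noise, as in neurobiologically inspired models

inputs = [1.0] * 20 + [0.0] * 20   # stimulus present, then removed

for x in inputs:
    # Activation rises while the input is on, then decays gradually after it
    # stops: the unit's state depends on its history, not just its current input.
    activation += -leak * activation + 0.2 * x + random.gauss(0, noise_sd)

print(round(activation, 3))
```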

Even so, such models are rather simplistic compared to real brain neural networks. They ignore the computation that occurs within cell bodies, such as neurotransmitter concentrations. Mathematical models have been created that display these finer details, and I’m pretty sure it won’t be too long until researchers integrate various AI techniques to create much more realistic simulations of human brain neural network mechanics. If we look at the exponential growth of hardware IP capacity and software complexity, we can see that this isn’t just a far-fetched dream. However, I do still think there is an issue being overlooked, one that I only came across very recently while reading a response to Ray Kurzweil’s “The Singularity is Near” (2005).

With a computer, the hardware and software are intrinsically separate. A computer is what we call a “virtual machine”: the hardware can be used to realise any number of different pieces of software, e.g. Java, Word. Cognitive Scientists have a tendency to make the analogy: hardware is to brain as software is to mind. I take the latter to mean that software is the equivalent of what we call the conscious mind. Thus, given the correct design and complexity (and arguably embodiment), a computer’s software would be its consciousness.

The thing is, the hardware and software of our conscious experience are not separate; they are intrinsically linked. Our brains are not virtual machines. We can’t run a Java script on them, or a Word document. When we consciously experience something, it is a direct result of the pattern of activity in and between our neurons. I’m not going to go into the intricacies of the hard problem of consciousness because it would drag this blog on far too long (for a more detailed account see David Chalmers (2002): “Consciousness and its place in nature”).

However, I have come to the conclusion over my studies that the physical and mental are one. There are not two separate realms of physical and mental. Many physicalists deny the mental altogether; however, unless they are philosophical zombies with no phenomenal experience, I don’t really think they approach the issue from the right angle. I think much of the problem comes from our misuse of language. I argue that in some respects it is best to explain things via the physical and in others via the mental. You can explain consciousness in terms of physical brain interactions or in terms of what it appears you are experiencing (e.g. the redness of red); whichever you prefer, as long as you realise that the two are the same thing.

If what I am saying is true, then AI software running on a computer would not have conscious experience like ours (although it might have some very abstract form of consciousness), because the hardware that implemented it would not be intrinsically tied to it. The same software could be run on any number of virtual machines and give the same pattern of behaviour. Since the brain and mind are one, we have our exact personal conscious experience as a direct result of the physical patterns of activity in our brain, and of those exact physical patterns only. Thus a machine could only be conscious like us if hardware and software became one. Many Cognitive Scientists say hardware doesn’t matter; I’m saying it does!

Now I don’t rule out the possibility that AI software and hardware could be intrinsically linked in this way, but it would be very difficult to design such dynamic hardware. Hardware that actually physically changed as it received new input, just as our brains’ neural networks do, would be incredibly difficult to create. Only time, and a serious amount of research into the issue, will shed any real light.

So what is my answer to whether a machine (namely a non-biological AI) could be conscious? Well, yes: if, firstly, it was embodied in a robot that gave it a real world context; but more importantly, if its software and hardware became one, so that the machine’s conscious experience was a direct result of its software and hardware developing together.

Wednesday, 1 August 2007

Is human intelligence inherited through genetics or gained through the environment?

Due to my Cognitive Science background, I have a tendency to regard intelligence from an information processing perspective. Of course intelligence is a very general term, but it makes sense to have some sort of criterion before one begins to dispute related issues. Thus Cognitive Scientists judge a system’s intelligence through its ability to efficiently process information in response to a given task or problem.

The issue here is whether intelligence is inherited genetically or gained from the environment through experience. I will focus on human intelligence, although the question applies equally to any animal with a brain. Furthermore, I will ignore genetically inherited conditions that affect brain performance (e.g. Down syndrome) and concentrate on normally functioning brains. In short, I will show that both genetics and experience have a role to play in our intelligence.

The human brain consists of something like 100 billion interconnected neurons that receive information from the environment through our sense organs, process it, and then send signals to our motor neurons so we can respond with intelligent and appropriate behavior. These neurons are organized into neural networks, and their patterns of electrochemical activation are causally responsible for cognition, consciousness and the intelligent behavior that every one of us displays. On some counts there are around 100 functionally distinct networks in the human brain, each performing a particular function (although these networks are by no means independent).

Our inherited genetics in part encode the size and physical structure of our neural networks during development before and after birth. Other factors, such as the provision of correct nutrients during gestation and the correct diet during brain development after birth, also affect the structure; but a large part can be put down to genetics. Bigger neural networks increase information processing capacity by increasing processing power and memory (which is stored largely in the pattern and strength of connections between neurons). Hence people who inherit big brains tend to be more intelligent. However, before a human experiences the world and provides real world informational input to their neural networks, much of the organizational structure of those networks is chaotic. Hence, until babies' brains (and bodies) develop, they are unable to understand very much or perform anything intelligent.

As an infant's neural networks are fed input through the sense organs, the networks become more organized and more efficient at processing information. Repeated exposure to certain experiences allows certain skills to be developed, from motor skills such as walking, to linguistic skills, to logical skills, all of which contribute to intelligent behavior. This is due to the developing neurological organization of the networks, which allows them to process and store information more efficiently.

As D. O. Hebb put it (The Organization of Behavior: A Neuropsychological Theory (New York: Wiley, 1949), p. 62):
"When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."

Hence, by being exposed to certain tasks and problems in our experience, we become better at processing the related information, and can be regarded as more intelligent in that domain. For example, expert chess players reach such an intelligent level of expertise in large part because they have played a lot of chess, and are thus able to recognize patterns from previous experience when they play a game and apply the correct moves.

Artificial neural network simulations have been programmed to demonstrate this ability of neural networks to increase IP efficiency following repeated exposure to input. For example, Sejnowski and Rosenberg's NETtalk (1987), which contained a huge set of parallel artificial neural networks, learnt to read and correctly articulate English text phonetically through trained examples. Over the course of network training, the phonetic output progressed from random noises, through babbling like a baby, to a fairly high competence in word articulation, at about the level of a four year old. Of course, every artificial neural network simulation programmed so far is nowhere near the complexity of the human brain's neural networks, but they do provide evidence that the theory works (a toy version of the training idea is sketched below). With the exponential growth of software complexity and computer IP power, we are bound to see more realistic simulations in the future.
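
Here is the promised toy version of the training idea. This is not NETtalk itself, which used backpropagation on a far larger network; it is a single logistic unit trained with the simpler delta rule on an invented two-input task, whose outputs drift from arbitrary initial noise towards the targets with repeated exposure.

```python
import math
import random

random.seed(0)
w = [random.uniform(-0.5, 0.5) for _ in range(3)]   # two input weights + a bias
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # a toy OR-like task
eta = 0.5                                            # learning rate

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + w[2]             # weighted sum plus bias
    return 1.0 / (1.0 + math.exp(-s))

for epoch in range(2000):
    for x, target in data:
        err = target - predict(x)
        w[0] += eta * err * x[0]   # delta rule: adjust each weight in
        w[1] += eta * err * x[1]   # proportion to its input and the error
        w[2] += eta * err

print([round(predict(x), 2) for x, _ in data])  # outputs now close to 0, 1, 1, 1
```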

In conclusion, then, evidence from Cognitive Science, Artificial Intelligence and Developmental Neuroscience provides a strong indication that our intelligence is in part inherited through our genetics, which encode a certain structure and size of our neural networks. But it isn't until we experience the world, and the networks are given informational input, that we really develop any true intelligence. Hence a person disposed to the correct upbringing, education and general environment will be far more likely to develop intelligence than someone less so disposed. Clearly, then, both genetics and environment are important to human intelligence, but environmental upbringing is the key.

Understanding the difference between intelligence and wisdom, and its implications for AI.

There are many definitions of intelligence, and usage largely depends on the context in which one wishes to apply it. As a student of Cognitive Science, I would argue that information processing efficiency and computational capacity form a criterion that can be universally applied to any intelligent system. In this I include animal, human and artificial intelligence (AI). The American futurist Ray Kurzweil even goes as far as to say that matter down to the atomic level may be regarded as intelligent in the future, once we have the technology to harness its computational and IP capabilities through mechanisms such as quantum computing.

There has been a debate for many years in the Cognitive Science community as to whether an AI program in an un-embodied context can be regarded as intelligent. Many strong AI theorists argue that computation alone is sufficient for intelligence. More recently, however, Cognitive Scientists have placed emphasis on a system's embodiment, arguing that we cannot regard an IP system as intelligent if it has no real world context. I am of the opinion that embodiment is a key feature of true intelligence; thus AI can only be regarded as intelligent if it is embodied in some sort of robot. Relatively speaking we have not made much progress, but progress is accelerating exponentially. Rodney Brooks and his team at MIT are making good progress in designing robots that could situate AI in a real world context. In terms of IP capacity and realism of robotics, we are around the level of real insects today.

Many might believe that today's supercomputers must have IP capacities at least matching humans' due to their speed, but this is not true. Bear in mind the human brain has on the order of 100 billion neurons (processing units), each with over 1,000 interconnections: that's a lot of IP power! Common estimates (such as Kurzweil's) put the human brain at around 10 to the power of 16 calculations per second, while even the most powerful supercomputers of today, such as Blue Gene/P, are around 10 to the power of 15. Of course, in terms of raw number crunching supercomputers outperform us by far, but they are nothing like as good as humans at IP in real world contexts such as pattern recognition. Bear in mind as well that human brains work through parallel processing that has to deal with millions of functions at once. Only time will tell whether computers develop to our level, but it's more than likely at current rates of growth. (For further discussion see Ray Kurzweil (2005): "The Singularity is Near".)
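
For what it's worth, the usual back-of-envelope estimate goes something like the sketch below. All three figures are rough order-of-magnitude assumptions rather than measurements.

```python
# Kurzweil-style back-of-envelope estimate of brain "calculations per second":
# neurons x connections per neuron x updates per connection per second.

neurons = 1e11            # ~100 billion neurons (assumed)
connections = 1e3         # ~1,000 connections per neuron (assumed)
updates_per_sec = 1e2     # ~100 updates per connection per second (assumed)

print(f"{neurons * connections * updates_per_sec:.0e} calculations/sec")  # ~1e16
```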

Wisdom, on the other hand, is something that at least currently can only be applied to humans. I'm sure many readers will maintain that computer controlled robots will never be considered wise. However, I am not going to rule out the possibility.

Again, there are countless definitions of wisdom. A good definition, I believe, is the ability to utilize our knowledge and act upon it in the most efficient way. For us humans this usually comes with experience: we learn from our mistakes and gain wisdom with age as we build up highly structured informational patterns in our brains. With wisdom we are also able to complete intelligent tasks without all the necessary information being currently available. Largely, I believe, this is because we are such brilliant pattern recognizers.

Ultimately there is no objective definition of intelligence or wisdom. However, there are clear differences between the two. There is strong reason to believe that once AI becomes embodied and considerably better at pattern recognition, we will be able to regard its behavior not only as intelligent, but as wise as well.