Wednesday, 3 October 2007

What philosophical insights can we draw from split brain syndrome and blindsight?

Split brain syndrome and blindsight are two extremely bizarre conditions, both of which strongly suggest a close relation between phenomenal conscious experience and the wiring of the physical brain.

Split brain syndrome occurs when the connection between the left and right hemispheres (the corpus callosum) is severed, usually surgically as a treatment for severe epilepsy, so that information can no longer pass between the two hemispheres. This leads to the strange phenomena seen in the video linked below: http://www.youtube.com/watch?v=ZMLzP1VCANo&eurl=

It’s hard for any of us without the syndrome to imagine what it would be like to have that experience. As Joe explains, it doesn’t feel any different from before; the brain simply adapts to it. It clearly shows that processes need not be conscious in order to work.

It’s tempting to conclude from the doctor’s words at the end that there is a central processor which produces consciousness. However, because the syndrome provides strong evidence for the modularity of the human brain (and, for that matter, probably of other animal brains with similar anatomy), I would argue that such a strong conclusion is misguided. There are hundreds of neural networks in the human brain, each serving separate but interconnected functions. When a split occurs between two related networks, the brain is unable to pass information between them, and this is reflected in the distorted and separated phenomenal conscious experiences seen in split brain syndrome. The final stage would not be possible without at least most of the earlier processes, and it functions in the same way as any neural network: by parallel distributed processing (PDP).
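To make the modularity point concrete, here is a deliberately crude toy sketch in Python (the module names and the "sharing" step are my own invented simplifications, not any real neuroscience model) of how severing the link between two otherwise intact modules produces dissociated outputs of the split-brain kind: the verbal hemisphere can only report what it received itself, while the other hemisphere still processes its own input perfectly well.

# Toy illustration only: two "hemisphere" modules, each receiving half of the
# visual field. Verbal report is (crudely) assumed to come from the left
# hemisphere, as in the classic split-brain demonstrations.
class Hemisphere:
    def __init__(self, name):
        self.name = name
        self.seen = None

    def perceive(self, stimulus):
        self.seen = stimulus  # local processing still works normally

def show_stimuli(left_field, right_field, callosum_intact):
    left_hemi, right_hemi = Hemisphere("left"), Hemisphere("right")
    # Each visual field projects to the opposite hemisphere.
    left_hemi.perceive(right_field)
    right_hemi.perceive(left_field)

    if callosum_intact:
        reportable = {left_hemi.seen, right_hemi.seen}  # information is shared
    else:
        reportable = {left_hemi.seen}  # verbal hemisphere knows only its own input

    print("Verbal report:", reportable)
    print("Left hand (right hemisphere) can still reach for:", right_hemi.seen)

show_stimuli("KEY", "RING", callosum_intact=True)   # reports both words
show_stimuli("KEY", "RING", callosum_intact=False)  # reports only "RING", yet reaches for the key

Nothing in the code is doing real cognitive work; the point is simply that each module keeps functioning while the integration between them is lost.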

The brain does not have a soul, a unified self or a central processor behind it. It functions using distributed and interconnected network processing, and the strange phenomenal experiences related to lesions between such networks are clear evidence of this. However, it does indicate that consciousness can emerge only once a certain level of processing has occurred. This leads us nicely to “blindsight”.

Blindsight occurs when damage to higher processing areas of the visual cortex leads to a lack of visual phenomenal conscious experience in part or all of the visual field (the left or right side, depending on which side of the visual cortex is damaged). The interesting part is that, in experiments, researchers have shown that patients are able to point at a dot on a screen with accuracy far above chance (clearly showing that this wasn’t luck) without any visual conscious phenomenal experience whatsoever.

That’s pretty incredible. The experiment is clear evidence not only for the modularity of various brain functions, but also for the claim that phenomenal experience only emerges at a higher level of brain processing. Lower-order visual functions (like locating a point) are possible through processing in lower-order areas, without the phenomenal experience that emerges from higher-order processing. We can only draw firm conclusions from blindsight about the emergence of visual consciousness, since it only demonstrates that aspect of consciousness. However, it would be strange and very counter-intuitive to reason that the same wouldn’t apply to at least the other four senses, if not to higher-order cognition as well.
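As a rough illustration of that "two routes" idea, here is a minimal Python sketch, assuming a purely hypothetical split between a low-level localisation route (which supports pointing) and a higher-level awareness route (which supports verbal report); it is not an anatomical model of the visual system.

import random

# Toy illustration only: knocking out the hypothetical "awareness" route
# removes the reportable experience but leaves pointing accuracy untouched,
# loosely mirroring the pattern seen in blindsight experiments.
def locate_dot(true_position, awareness_intact):
    # Low-level localisation route: still finds the dot (with a little noise).
    pointing_estimate = true_position + random.gauss(0, 0.5)

    # Higher-level awareness route: only this yields a reportable experience.
    report = f"I see a dot near {true_position}" if awareness_intact else "I see nothing"
    return report, pointing_estimate

for intact in (True, False):
    report, pointed = locate_dot(true_position=10.0, awareness_intact=intact)
    print(f"awareness_intact={intact}: report='{report}', pointing estimate={pointed:.1f}")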

It seems to me to strongly indicate that only animals with the correct brain modules, or at least the necessary complexity of neural network processing, would have any visual consciousness. Thus it is possible that many insects don’t have visual consciousness and that all their functions are facilitated by lower-order unconscious processing. I’m not an expert in neuroscience or the anatomy of animal brains, so I’m not sure how far up the animal chain we can go here. All insects? All reptiles? All mammals but us? Perhaps if any neuroscientists reading this have the expertise, they can shed some light.

This condition also has implications for AI. We might well argue, if we believe artificial consciousness is possible at all, that only AI built with the correct modules and complexity would have any consciousness, visual or otherwise! This may seem obvious to many, but blindsight experiments (and split brain ones too, for that matter) give clear empirical evidence for it.

I’m not going to go any further here, as I don’t have sophisticated knowledge of the neuroscience and biological anatomy of the brain, but hopefully the philosophical insights have been interesting and thought-provoking.

7 comments:

Anonymous said...

I recommend taking a look at "Brain Bisection and the Unity of Consciousness" by Thomas Nagel.

Essentially, he argues for a redefinition of the word "mind" as we currently use it, based on conclusions drawn from split-brain research.

Anonymous said...

I feel consciousness is about assembling scattered neurons and recruiting them when the need arises - there may not be any epicenter as such. It's more like a broom that comes together when needed, to sweep the brain through its tasks.

Blindsight research and hemispherical neglect have been pried open philosophically quite a lot - notably by Alva Noë of UC Berkeley in his Action in Perception.

Eric said...

Very cool post.

I wouldn't mind (no pun intended) reading a post by you on the possibilities of AI in the next 50 years...

If you get a chance check out our new community at: OPEN SOURCE INTEGRAL

Cheers

M~

cogscifreak said...

Hi Jack -- I'm an undergrad student in cognitive science in my second year, and before I add my comments, I wanted to let you know that I find your blog very informative.

I wanted to post some additional thoughts on modularity. When we look at some of the studies on animal conditioning, one of the things we observe is that apparently similar behaviors can arise from different underlying processes, e.g., goal-directed and habitual instrumental behaviors. Similarly, memory is also observed as being of many different types and categories. Another example is cue combination in Pavlovian conditioning, which can be additive, or can generalize across different configurations. My question here is, why should such fractionation arise at all? Why should this modularity and dissociation be employed by the brain? Isn't it inefficient to use duplicate strategies like this to exhibit the same behaviors?

I was thinking that maybe Marr's levels have something to do with this. Perhaps Marr's levels can shed some light on why the same computational goal can be attained via different algorithms and neural mechanisms, and thus the brain inevitably fractionates and this gives rise to modularity and ultimately, to dissociations. Any thoughts?

Jack J said...
This comment has been removed by the author.
Jack J said...

Well, think about it: if the human brain has lots of different modules, it can perform lots of different tasks at the same time. In normally functioning brains the modules are interconnected neurally (the physical level), so the global workspace is coherent (the functional level), and thus we normally have holistic, coherent experiences (the phenomenal level). Dissociations occur when such connections are damaged, leading to strange phenomenal experiences such as split brain syndrome.

If the brain were just one big information-processing unit (instead of lots of interconnected modules), it would be incredibly hard for us to achieve the range of functions and phenomenal experiences that we all have. That's why we are so different from many computers, which can do huge sums very quickly because they put all their computational resources into one task, whereas our brains split tasks amongst many specialised modules. That gives us a great deal of flexibility as a species, which is probably why this architecture evolved: flexible species are able to adapt to more environments, survive, and pass on their genes.
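(A crude way to see that point in code, with invented task names and nothing resembling real neural processing: specialised modules can work on their tasks side by side, while a single general-purpose unit has to queue everything through one bottleneck.)

from concurrent.futures import ThreadPoolExecutor
import time

# Toy illustration only: each "module" just sleeps to stand in for some
# specialised processing of its own task.
def module(task_name):
    time.sleep(0.1)
    return f"{task_name} handled by its own module"

tasks = ["vision", "hearing", "motor control", "language"]

start = time.time()
with ThreadPoolExecutor() as pool:           # modular: tasks overlap in time
    parallel_results = list(pool.map(module, tasks))
print(parallel_results, f"(parallel: {time.time() - start:.2f}s)")

start = time.time()
serial_results = [module(t) for t in tasks]  # monolithic: one task at a time
print(serial_results, f"(serial: {time.time() - start:.2f}s)")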

Hopefully that answers your question, although I may have misunderstood you; if so, post again.

cogscifreak said...

Thank you, those are some good points.

Personally I have become a little wary of the computational metaphor because I feel that a lot of cognitive scientists are confusing metaphor for reality, which leads to a lot of misunderstandings. The brain is not like a simple machine, but is a very intricate system.

Sorry I might be rambling a little here, but I think what I'm focusing on right now is why the brain employs lots of apparently duplicative and fractious techniques (which is related to modularity but is not exactly the same thing). Isn't there a single best way to solve these problems (e.g. the cue combination in animals example)? And moreover, can we expect this fractionation to arise at all three of Marr's levels (computational, algorithmic and implementational)? It does appear to be a conundrum: how do different underlying processes give rise to similar behaviors, whereas the same behavior might, at different points in time, be supported by very different underlying processes?

I think a lot of this might be related to how we draw the line between the "cognitive/psychological functions" and the underlying "neurophysiological mechanisms/processes". To what extent are these identical, and to what extent can we separate them?

I definitely like your use of the terms phenomenal, functional and physical here. These relate nicely to Marr's levels.

Thanks for your help, and I hope I'm not taking up too much of your time!