Friday, March 30, 2012

A thought experiment on identity

Many years ago I watched a lecture, and the subsequent Q&A, that the Dalai Lama gave to a large crowd of fellow monks and the general public. One questioner asked what is in many ways the obvious question: what is the “it” that reincarnates?

The Dalai Lama was quick to answer, “I don’t know”, and he then moved on.

That particular event has stayed in my memory all these years for two reasons. First, that is the question I too would ask anyone who professed a belief in, or taught the idea of, reincarnation. Second, because he wasn’t afraid of giving an answer that, while extraordinarily rare in religious or “spiritual” circles, would in science be a straightforward and common response: “I don’t know.” I liked that; it was honest.

Another thing the Dalai Lama has said repeatedly is that should science be shown to contradict Buddhist doctrine on matters related to empirical questions, one must always go with science. That too is rare within religious circles, and something we should admire.

Let me be clear that I am not a Buddhist, and I am certainly not someone who subscribes to the idea of reincarnation or to other Tibetan Buddhist doctrines. My mentioning the Dalai Lama is not an endorsement of his teachings; he has also made many baseless and superstitious claims. However, the Dalai Lama’s answer to the above question frames, in many ways, the larger issue that sits at the very heart of the modern study of consciousness.

I’ve written elsewhere, and at length, about what can be the vacuous nature of the word “spiritual”.

I won’t repeat those arguments here. Setting that aside, the central questions of the self (what the self is, what its nature might be, what the implications of all that could be) underpin all conversations related to values, ethics, morality, justice, pride, shame, desire, fear, purpose, meaning, and even death. And this is true whether we acknowledge it or not. As the caterpillar in Alice in Wonderland famously asked, “Who are you?”

My reason for revisiting this issue is, oddly enough, technology. Over the last year or so I’ve been reading Ray Kurzweil’s material (The Singularity Is Near), as well as that of his critics. Whether you find his ideas plausible or not, it is hard not to find the conversation itself fascinating. It really is a case of truth being far stranger than fiction. Kurzweil’s main points revolve around what he calls the law of accelerating returns, and he backs his claims with mountains of data. The idea is simple enough: most people, myself included until recently, think of technology on a linear time scale. We assume that a certain amount of time must elapse before the next level of technology arises, and that span of time is almost always measured against our current state of technological development.

We really don’t have to look much further than science fiction to realize the truth of this statement. When I was growing up, Captain Kirk had a wireless communicator that he used to converse with his starship. Mind you, he was a starship captain, so of course he had access to such ‘advanced’ technology. Take out your cell phone and look at your iPhone: it has far more capability than was even imagined back then, and even more importantly, nearly everyone in the first world owns one. Not only did we vastly underestimate the state of technological development that would exist in the years the Enterprise was supposed to be in operation, we also vastly underestimated the ubiquity of technology itself. It becomes cheaper, and spreads faster, than anyone ever envisioned. As much as I love the original Star Trek, the truth is that even those often fantastic writers had trouble imagining the future we now live in, a mere fifty years later.

Kurzweil’s idea is that this lack of foresight is due to linear thinking. Technology doesn’t increase in that fashion; instead it grows exponentially. We always use the current level of technology to create the next, and this process builds on itself to the point where the total capability of technology begins to double rapidly. How rapidly? According to Kurzweil, technology will reach a point he calls the ‘singularity’ in the year 2045, a mere thirty-three years away. By that time technological advancement will have produced some form of AI (artificial intelligence), and furthermore, that AI will be capable of creating the next generation of itself. Given the law of accelerating returns, this would very quickly lead to a level of intelligence and knowledge well beyond human comprehension.
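To see just how badly linear intuition fails, it helps to run the numbers. The sketch below uses my own illustrative figures (a fixed yearly increment versus a doubling every two years), not Kurzweil’s actual data, but it captures the shape of his argument:

```python
# Illustrative comparison of linear vs. exponential growth in "capability".
# The starting value, increment, and doubling period are arbitrary choices
# made for this sketch, not figures taken from Kurzweil.

def linear_forecast(start, step, years):
    """Capability grows by a fixed increment each year."""
    return start + step * years

def exponential_forecast(start, doubling_years, years):
    """Capability doubles every `doubling_years` years."""
    return start * 2 ** (years / doubling_years)

for years in (10, 20, 30):
    lin = linear_forecast(1.0, 1.0, years)
    exp = exponential_forecast(1.0, 2.0, years)
    print(f"{years:2d} years: linear ~{lin:.0f}x, exponential ~{exp:.0f}x")
```

With these assumptions, after thirty years the linear forecast predicts about a 31-fold improvement while the doubling curve predicts more than a 32,000-fold one, which is the gap between our intuitions and what Kurzweil claims the data actually show.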

I won’t bore you with the details of Kurzweil’s evidence, but suffice it to say that very smart people all over the planet take him seriously because he grounds his claims in data. This isn’t science fiction; this is a prediction about where science and technology will go based on the evidence of the past.

If we take an “eyes wide open” look at the state of technology as it exists today, we will see that we already have drones infused with rat brains, giving them a sense of self-preservation. We have the field of epigenetics, which has transformed how we look at the evolutionary process. We have the emerging science of nanotechnology, whose implications for all aspects of human life are truly mind-blowing. And at the Karolinska Institute in Stockholm we have the first case of synthetic biology being successfully transplanted into a patient with terminal cancer: a synthetic esophagus sprayed with the patient’s stem cells. This isn’t science fiction; this is right now.

As interesting as that is, I want to focus on Kurzweil’s specific claims about consciousness. To do this one must first accept the premise that AI in some form or another is possible. And since we already use various forms of weak AI daily, this much is undeniable. A much larger leap, to put it mildly, is to assume that some form of AI will become sentient, or even self-aware. I think this too is not only conceivable but most likely inevitable. If you have trouble accepting the possibility, ask yourself the following question. Suppose scientists are one day able to create an artificial synapse, you suffer some form of head trauma, and these scientists replace a small area of damaged synapses with synthetic ones (much as Karolinska did with the synthetic esophagus). Would that still be you? Assuming you answer yes, which I think most people will, take out a larger chunk. Perhaps a quarter of your brain suffered damage and required replacing; is that still you? Again, the presumption in this thought experiment is that these synthetic synapses work in a manner identical to the biological ones. If you still say yes, and I personally don’t find the idea hard to fathom, then assume for the sake of argument that eventually your entire brain is replaced with artificial synapses. Still you? If you can go that far, then you can see the possibility of sentient AI.

There is no reason to assume that AI would have to mimic a human brain; it could operate on a totally different schematic. But accepting that AI is possible requires nothing more than the realization that our own everyday consciousness is, at its core, a mechanical process occurring within the brain, which is exactly what all the current evidence suggests. Given that, the only barrier to AI would be the technological know-how, and that, according to Kurzweil, is something the singularity would achieve handily.

Kurzweil proposes that sometime in the future human beings will be able to download their consciousness into something akin to a hard drive, and when I write “consciousness” here I mean the entire sum of all your past memories, thoughts, ideas, and personality. At the same time, we will be able to easily produce genetic replicas of ourselves, or of our body parts, for use in case of accident or death. Some have even suggested we clone humans without brains and then occupy those bodies.

You may find this idea so far out there that it seems silly. But for the purposes of our thought experiment, suspend your skepticism and imagine for a moment that you live in such a reality. Now consider the following scenario. You are about to take a trip overseas. As a regular safety measure you download your consciousness, which includes all things ‘you’ up to that point, and you head out of town. Your wife then hears that the aircraft you were flying on has crashed; there are no survivors. Distraught, she uploads your last consciousness backup into a genetic replica of you, your exact DNA match, created on a three-dimensional biological printer.

Up to this point it is easy to see that for your wife, and for everyone else who knew you prior to your last download, you are, for all intents and purposes, you. She would know no difference.

Now let’s assume, for the sake of our thought experiment, that you somehow managed to survive the crash. You arrive home ready to surprise your wife with the great news, and what do you find? You find her in the arms of another ‘you’. Aside from all the ethical dilemmas this kind of scenario creates, such as which one of you gets to be intimate with your wife, it would seem at first glance to point to something obvious: the you that your wife created was not in fact ‘you’, the ‘you’ that downloaded yourself onto the hard drive, but instead a copy. It may be an exact replica, and she and everyone else may never know the difference, but to you, it isn’t ‘you’.

In that sense what Kurzweil is offering wouldn’t in fact be a form of life extension, but merely the ability to copy oneself. And that is exactly the argument the author Michael Shermer makes against Kurzweil’s immortality ideas.

When I first heard that thought experiment I found myself agreeing completely with Shermer: that wouldn’t be ‘me’, it would be a copy. But my views have since evolved. Let me offer one more short, simple thought experiment that ends with a yes-or-no question.

Imagine that we live in the same reality described above; we have the technology to do all that is being suggested. You create an ‘exact’ replica of your body (that word is important for the purposes of this thought experiment). You also download the entirety of your consciousness (again: all memories, thoughts, ideas, and past experiences) onto your hard drive. You set the computer to start the transfer three seconds after your death, and you then kill yourself. Three seconds later your consciousness is uploaded into the replica body. Here is the yes-or-no question: is that you?

Some of my philosopher friends may at this point be wringing their hands at the undefined use of the word ‘you’. Absent a solid definition, they will say, no real conversation can go forward. But that delineation is in and of itself hard to grasp concretely when we are talking about the conscious, subjective experience of being. Within the confines of this thought experiment, let us say that by ‘you’ we mean some continuity of subjective consciousness. And in that sense of the word, is there anything found within the brain or body, absent the patterns of nature and nurture, absent the impact of both genes and environment, that would stop and not re-start within the new ‘you’?

I’d be willing to bet that most people will say no, that is not me. Because although it contains all that I was, and is my exact replica, it isn’t the “I” that killed itself just seconds before. In other words, some-thing would have died. There would be no continuity of subjective consciousness from one vessel to another.

The more I think about this, the more I suspect that conclusion is wrong. If you answered no, that is not you, then what is it that ‘died’ when the first ‘you’ killed yourself? Your body, according to our experiment, is identical; your mind, absent the few seconds between download and upload, is identical and contains all past memories, emotions, and experiences. So what is it, then, that would be absent? What is it that dies?

This brings me full circle to the start of the essay. This is the same question, in a different form, that the questioner asked the Dalai Lama: if something reincarnates, what is it?

Assuming I am right that most readers side with the idea that the second you isn’t actually ‘you’, one is forced to posit either something supernatural, such as a “soul”, or something as yet unknown, something essential to consciousness that is not in fact related to the biology of the body. Absent those two things, neither of which there is any evidence for, what is the ‘it’ that could possibly cease to continue?

*(As a side note, the original Shakyamuni Buddha had his own answer to a very similar question roughly 2,500 years ago. He was asked: if consciousness is insubstantial and impermanent, and there is no underlying soul or self, by what mechanism do the impressions of virtuous or non-virtuous actions in one lifetime bear fruit in another? He answered that just as one can light one candle with the flame from another, that does not mean the two flames are identical.)

I believe that we as human beings are natural dualists. Most of us seem to assume, due to the very nature of subjective consciousness itself, and perhaps due to language, that there is some ghost in the machine. There must be a ‘you’ inside that head somewhere that is the observer of the observed. After all, how can you see the thoughts you imagine within your brain absent some-thing that sees them? However, what modern neuroscience tells us about consciousness seems pretty clear, at least so far: there is no homunculus within the brain pulling levers or observing from inside. In that sense ‘mind’ is simply a word we use to describe what the brain does, much as ‘roll’ is to ‘tire’.

If Kurzweil is right, and if my take on the thought experiment above is right, then the Dalai Lama would have an answer to give the questioner at his talk. Assuming he holds true to his ethos that in matters of fact science must always trump tradition, when someone asks “what is it that reincarnates?”, he could offer a one-word reply: “nothing”.

There is, after all, no self.