In fact, as Safire himself acknowledges, the term predates his usage. Until recently, most people saw no need for any such field; but so rapid have been the recent advances in the sciences of the mind, and so pressing have the ethical issues surrounding them become, that we cannot any longer dispense with the term or the field it names.
Neuroethics has two main branches: the ethics of neuroscience and the neuroscience of ethics (Roskies). The ethics of neuroscience refers to the branch of neuroethics that seeks to develop an ethical framework for regulating the conduct of neuroscientific enquiry and the application of neuroscientific knowledge to human beings; the neuroscience of ethics refers to the impact of neuroscientific knowledge upon our understanding of ethics itself.
In this book I shall have little to say about this set of questions, at least directly (though much of what I shall say about other issues has implications for the conduct of neuroscience). Instead, I shall focus on questions to do with the application of our growing knowledge about the mind and the brain to people. Neuroscience and allied fields give us an apparently unprecedented, and rapidly growing, power to intervene in the brains of subjects — to alter personality traits, to enhance cognitive capacities, to reinforce or to weaken memories, perhaps, one day, to insert beliefs.
Are these applications of neuroscience ethical? Under what conditions?
Do they threaten important elements of human agency, of our self-understanding? Chapters 2 through 5 will focus on these and closely related questions. The neuroscience of ethics embraces our growing knowledge about the neural bases of moral agency. Neuroscience seems to promise to illuminate, and perhaps to threaten, central elements of this agency: our freedom of the will, our ability to know our own minds, perhaps the very substance of morality itself. Its findings provide us with an opportunity to reassess what it means to be a responsible human being, apparently making free choices from among alternatives.
It casts light on our ability to control our desires and our actions, and upon how and why we lose control. It offers advertisers and governments possible ways to channel our behavior; it may also offer us ways to fight back against these forces. If the neuroscience of ethics produces significant results, that is, if it alters our understanding of moral agency, then neuroethics is importantly different from other branches of applied ethics. Unlike, say, bioethics or business ethics, neuroethics reacts back upon itself.
The neuroscience of ethics will help us to forge the very tools we shall need to make progress on the ethics of neuroscience. Neuroethics is therefore not just one more branch of applied ethics. It occupies a pivotal position, casting light upon human agency, freedom and choice, and upon rationality. It will help us to reflect on what we are, and offer us guidance as we attempt to shape a future in which we can flourish. We might not have needed the term until recently; today the issues it embraces are rightly seen as central to our political, moral and social aspirations.
The kinds of cases that fall within its purview include some of the most controversial and strange ethical issues confronting us today. In this section, I shall briefly review two such cases.

Body integrity identity disorder

Body integrity identity disorder (BIID) is a controversial new psychiatric diagnosis, the principal symptom of which is a persisting desire to have some part of the body — usually a limb — removed (First). A few sufferers have been able to convince surgeons to accede to their requests (Scott). However, following press coverage of the operations and a public outcry, no reputable surgeon offers the operation today.
In the absence of access to such surgery, sufferers quite often go to extreme lengths to have their desire satisfied. For instance, they deliberately injure the affected limb, using dry ice, tourniquets or even chainsaws. Their aim is to remove the limb, or to damage it so badly that surgeons have no choice but to remove it (Elliott). A variety of explanations of the desire for amputation of a limb have been offered by psychiatrists and psychologists.
It has been suggested that the desire is the product of a paraphilia — a psychosexual disorder. On this interpretation, the desire is explained by the sexual excitement that sufferers supposedly feel at the prospect of becoming an amputee (Money et al.). Another possibility is that the desire is the product of body dysmorphic disorder (Phillips), a disorder in which sufferers irrationally perceive a part of their body as ugly or diseased. The limited evidence available today, however, suggests that the desire has a quite different aetiology.
The desire seems instead to reflect a mismatch between the sufferer's objective body and their experienced body image. On this interpretation, BIID is analogous to what is now known as gender identity disorder, the disorder in which sufferers feel as though they have been born into a body of the wrong gender. BIID is a neuroethical issue because it raises ethical questions, and because answering those questions requires us to engage with the sciences of the mind. The major ethical issue raised by BIID focuses on the question of the permissibility of amputation as a means of treating the disorder. Now, while this question cannot be answered by the sciences of the mind alone, we cannot hope to assess it adequately unless we understand the disorder, and understanding it properly requires us to engage with the relevant sciences.
Neuroscience, psychiatry and psychology all have their part to play in helping us to assess the ethical question. It might be, for instance, that BIID can be illuminated by neuroscientific work on phantom limbs. The experience of a phantom limb appears to be a near mirror image of BIID; whereas in the latter, subjects experience a desire for removal of a limb that is functioning normally, the experience of a phantom limb is the experience of the continued presence of a limb that has been amputated or, occasionally, that is congenitally absent.
The experience of the phantom limb suggests that the experience of our bodies is mediated by a neural representation of a body schema, a schema that is modifiable by experience, but which resists modification (Ramachandran and Hirstein). Phantom limbs are sometimes experienced as the site of excruciating pain; unfortunately, this pain is often resistant to all treatments.
If BIID is explained by a similar mismatch between an unconscious body schema and the objective body, then there is every chance that it too will prove very resistant to treatment.
On the other hand, if BIID has an origin that is very dissimilar to the origin of the phantom limb phenomenon, treatments less radical than surgery might be preferable. Surgery is a drastic course of action: it is irreversible, and it leaves the patient disabled. If BIID can be effectively treated by psychological means — psychotherapy, medication or a combination of the two — then surgery is impermissible.
If BIID arises from a mismatch between cortical representations of the body and the objective body, then — at least given the present state of neuroscientific knowledge — there is little hope that psychological treatments will be successful. But if BIID has its origin in something we can address psychologically — a fall in certain types of neurotransmitters, in anxiety or in depression, for instance — then we can hope to treat it with means much less dramatic than surgery.
BIID is therefore at once a question for the sciences of the mind and for ethics; it is a neuroethical question.

Automatism

Sometimes agents perform a complex series of actions in a state closely resembling unconsciousness. They sleepwalk, for instance: arising from sleep without, apparently, fully awaking, they may dress and leave the house. Or they may enter a closely analogous state, not by first falling asleep, but by way of an epileptic fit, a blow on the head, or (very rarely) psychosis.
Usually, the kinds of actions that agents perform in this state are routine or stereotyped. Someone who enters the state of automatism while playing the piano may continue playing if they know the piece well; similarly, someone who enters into it while driving home may continue following the familiar route, safely pulling into their own drive and then simply sitting in the car until they come to themselves (Searle). Occasionally, however, an agent will engage in morally significant actions while in this state.
Consider the case of Ken Parks (Broughton et al.), who drove the twenty-three kilometres to the home of his parents-in-law, where he stabbed them both. He then drove to the police station, where he told police that he thought he had killed someone. Only then, apparently, did he notice that his hands had been badly injured. Parks was taken to hospital, where the severed tendons in both his arms were repaired.
He was charged with the murder of his mother-in-law, and the attempted murder of his father-in-law. Parks did not deny the offences, but claimed that he had been sleepwalking at the time, and that therefore he was not responsible for them. Was he right? Answering that question requires both sophisticated philosophical analysis and neuroscientific expertise. When is an agent responsible for an action, and what does it mean to act voluntarily? We might hope to answer both questions by highlighting the role of conscious intentions in action; that is, we might say that agents are responsible for their actions only when, prior to acting, they form a conscious intention of acting.
However, this response seems very implausible, once we realize how rarely we form such conscious intentions. Many of our actions, including some of our praise- and blameworthy actions, are performed too quickly for us to deliberate beforehand: a child runs in front of our car and we slam on the brakes; someone insults us and we take a swing at them; we see the flames and run into the burning building, heedless of our safety. One hypothesis, then, is that Parks's crime was an ordinary action of this unreflective kind. Against this hypothesis we have the evidence that Parks was a gentle man, who had always got on well with his parents-in-law.
The fact that the crime was out of character and apparently motiveless counts against the hypothesis that it should be considered an ordinary action. If we are to understand when and why normal agents are responsible for their actions, we need to engage with the relevant sciences of the mind. These sciences supply us with essential data for consideration: data about the range of normal cases, and about various pathologies of agency. Investigating the mind of the acting subject teaches us important lessons. We learn, first, that our conscious access to our reasons for action can be patchy and unreliable (Wegner): ordinary subjects sometimes fail to recognize their own reasons for action, or even that they are acting.
We learn how little conscious control we have over many, probably the majority, of our actions (Bargh and Chartrand). But we also learn how these actions can nevertheless be intelligent and rational responses to our environment, responses that reflect our values (Dijksterhuis et al.). The mere lack of conscious deliberation, we learn, cannot differentiate responsible actions from non-responsible ones, because it does not mark the division between the voluntary and the non-voluntary.
On the other hand, the sciences of the mind also provide us with good evidence that some kinds of automatic actions fail to reflect our values. Some brain-damaged subjects can no longer inhibit their automatic responses to stimuli. They compulsively engage in utilization behavior, in which they respond automatically to objects in the environment around them (Lhermitte et al.). Under some conditions, entirely normal subjects find themselves prey to stereotyped responses that fail to reflect their consciously endorsed values.
Fervent feminists may find themselves behaving in ways that apparently reflect a higher valuation of men than of women, for instance (Dasgupta). Whether, and when, such automatic behavior is attributable to the agent is a difficult question; outlining the precise circumstances under which this is the case is a problem for neuroethics: for philosophical reflection informed by the sciences of the mind. Parks was eventually acquitted by the Supreme Court of Canada. I shall not attempt, here, to assess whether the court was right in its finding (we shall return to related questions in Chapter 7).
My purpose, in outlining his case, and the case of the sufferer from BIID, is instead to give the reader some sense of how fascinating, and how strange, the neuroethical landscape is, and how significant its findings can be. Doing neuroethics seriously is difficult: it requires a serious engagement with the sciences of the mind and with several branches of philosophy (philosophy of mind, applied ethics, moral psychology and meta-ethics).
But the rewards for the hard work are considerable. We can only understand ourselves, the endlessly fascinating, endlessly strange, world of the human being, by understanding the ways in which our minds function and how they become dysfunctional. To begin our exploration of these ethical questions, it is important to have some basic grasp of what the mind is and how it is realized by the brain.
If we are to evaluate interventions into the mind, if we are to understand how our brains make us the kinds of creatures we are, with our values and our goals, then we need to understand what exactly we are talking about when we talk about the mind and the brain. Fortunately, for our purposes, we do not need a very detailed understanding of the way in which the brain works. We shall not be exploring the world of neurons, with their dendrites and axons, nor the neuroanatomy of the brain, with its division into hemispheres and cortices, except in passing, as and when it becomes relevant.
All of this is fascinating, and much of it is of philosophical, and sometimes even ethical, relevance. But it is more important, for our purposes, to get a grip on how minds are constituted at a much higher level of abstraction, in order to shake ourselves free of an ancient and persistent view of the mind, the view with which almost all of us begin when we think about the mind, and from which few of us ever manage entirely to free ourselves: dualism. Shaking ourselves free of the grip of dualism will allow us to begin to frame a more realistic image of the mind and the brain; moreover, this more realistic image, of the mind as composed of mechanisms, will itself prove to be important when it comes time to turn to more narrowly neuroethical questions.
Dualism — or, more precisely, substance dualism, in order to distinguish it from the more respectable property dualism — is the view that there are two kinds of basic and mutually irreducible substances in the universe. This is a very ancient view, one that is perhaps innate in the human mind (Bloom). It is the view presupposed, or at least suggested, by all, or very nearly all, religious traditions; it was also the dominant view in philosophical thought for many centuries, at least as far back as the ancient Greeks. Its most influential modern formulation is due to Descartes. According to Descartes, roughly, there are two fundamental kinds of substance: matter, out of which the entire physical world (including animals) is built, and mind.
Human beings are composed of an amalgam of these two substances: mind, or soul, and matter. It is fashionable, especially among cognitive scientists, to mock dualists, and to regard the view as motivated by nothing more than superstition. Certainly, the view has its religious attractions: if the soul is immaterial, then there is no reason to believe that it is damaged by the death and decay of the body; the soul is free, after death, to rejoin God and the heavenly hosts (themselves composed of nothing but soul-stuff).
But dualism also had a more philosophical motivation. We can understand, to some extent at least, how mere matter could be cleverly arranged to create complex and apparently intelligent behavior in animals. Descartes himself used the analogy of clockwork mechanisms, which are capable of all sorts of useful and complex activities, but are built out of entirely mindless matter. Today, we are accustomed to getting responses orders of magnitude more complex from our machines, using electronics rather than clockwork.
But even so, it remains difficult to see how mere matter could really think: be rational and intelligent, and not merely flexibly responsive. Equally, it remains difficult to see how matter could be conscious. How could a machine, no matter how complex or cleverly designed, be capable of experiencing the subtle taste of wine, the scent of roses or of garbage; how could there be something that it is like to be a creature built entirely out of matter?
Dualism, with its postulation of a substance that is categorically different from mere matter, seems to hold out the hope of an answer. Descartes thought that matter could never be conscious or rational, and it is easy to sympathize with him. Indeed, it is easy to agree with him even today: some philosophers embrace property dualism because, though they accept that matter could be intelligent, they argue that it could never be conscious. Matter is unconscious and irrational — or, better, arational — and there is no way to make it conscious or rational simply by arranging it in increasingly complex ways (or so it seems).
It is therefore very tempting to think that since we are manifestly rational and conscious, we cannot be built out of matter alone. The part of us that thinks and experiences, Descartes thought, must be built from a different substance. Animals and plants, like rocks and water, are built entirely out of matter, but we humans each have a thinking part as well. It follows from this view that animals are incapable not only of thought, but also of experience; notoriously, this doctrine was sometimes invoked to justify vivisection of animals.
If they cannot feel, then their cries of pain must be merely mechanical responses to damage, rather than expressions of genuine suffering (Singer). But the centuries since Descartes have witnessed a series of scientific advances that have made dualism increasingly incredible. First, the idea that there is a categorical distinction to be made between human beings and other animals no longer seems very plausible in light of the overwhelming evidence that we have all evolved from a common ancestor. Human beings have not always been around on planet Earth — indeed, we are a relatively young species — and both the fossil evidence and the morphological evidence indicate that we evolved from earlier primates.
Our ancestors got along without souls or immaterial minds, so if we are composed, partially, of any such stuff, it must have been added to our lineage at some relatively recent point in time. But when? The evolutionary record is a story of continuous change; there are no obvious discontinuities in it which might be correlated with ensoulment. Our immediate ancestors and cousins, the other members of the primate family, are in fascinating — and, for some, disturbing — ways very close to us in behavior, and capable of feats of cognition of great sophistication.
Gorillas and chimpanzees have been taught sign language, with, in some cases, quite spectacular success (Savage-Rumbaugh et al.). Moreover, there is very strong evidence that other animals are conscious; chimpanzees, at least, also seem to be self-conscious (DeGrazia; for a dissenting view see Carruthers). Surely it would be implausible to argue that the moment of ensoulment, the sudden and inexplicable acquisition by an organism of the immaterial mind-stuff that enables it to think and to feel, occurred prior to the evolution of humanity — that the first ensouled creatures were our primate ancestors, or perhaps even earlier ancestors?
If souls are necessary for intelligent behavior — for tool use, for communication, for complex social systems, or even for morality (or perhaps, better, proto-morality) — then souls have been around much longer than we have: all these behaviors are exhibited by a variety of animals much less sophisticated than we are. It appears that mere matter, arranged ingeniously, had better be capable of allowing for all the kinds of behavior and experiences that mind-stuff was originally postulated to explain. Evolutionary biology and ethology have therefore delivered a powerful blow to the dualist view. The sciences of the mind have delivered another, or rather a series of others.
The cognitive sciences — the umbrella term for the disciplines devoted to the study of mental phenomena — have begun to answer the Cartesian challenge in the most direct and decisive way possible: by laying bare the mechanisms and pathways from sensory input to rational response and conscious awareness. We do not have space to review more than a tiny fraction of their results here. But it is worth pausing over a little of the evidence against dualism these sciences have accumulated.
Some of this evidence comes from the ways in which the mind can malfunction. When one part of the brain is damaged, due to trauma, tumor or stroke, the person or animal whose brain it is can often get along quite well (natural selection usually builds quite a high degree of redundancy into complex systems, since organisms are constantly exposed to damage from one source or another).
But they may exhibit strange, even bizarre, behavioral oddities, which give us an insight into what function the damaged portion of the brain served, and what function the preserved parts perform. From this kind of data, we can deduce the functional neuroanatomy of the brain, gradually mapping the distribution of functions across the lobes. This data also constitutes powerful evidence against dualism. It seems to show that mind, the thinking substance, is actually dependent upon matter, in a manner that is hard to understand on the supposition that it is composed of an ontologically distinct substance.
Why should mind be altered and its performance degraded by changes in matter, if it is a different kind of thing? Recall the attractions of the mind-stuff theory for Cartesians. First, the theory was supposed to explain how the essence of the self, the mind or soul, could survive the destruction of the body. Second, it was supposed to explain how rationality and consciousness were possible, given that, supposedly, no arrangement of mere matter could ever realize these features.
The evidence from brain damage suggests that soul-stuff does not in fact have these alleged advantages, if indeed it exists: it is itself too closely tied to the material to possess them. Unexpectedly — for the dualist — mind degrades when matter is damaged; the greater the damage, the greater the degradation. Given that cognition degrades when, and to the extent that, matter is damaged, it seems likely that any mind that could survive the wholesale decay of matter that occurs after death would be, at best, sadly truncated, incapable of genuine thought or memory, and entirely incapable of preserving the identity of the unique individual whose mind it is.
Moreover, the fact that rationality degrades and consciousness fades or disappears when the underlying neural structures are damaged suggests that, contra the dualist, it is these neural structures that support and help to realize thought and consciousness, not immaterial mind — else the coincidental degradation of mind looks miraculous. Perhaps these points will seem more convincing if we have some actual cases of brain lesions and corresponding mind malfunction before us.
Consider some of the agnosias: disorders of recognition. There are many different kinds of agnosias, giving rise to difficulty in recognizing different types of object. Sometimes the deficit is very specific, involving, for instance, an inability to identify animals, or varieties of fruit. One relatively common form is prosopagnosia, the inability to recognize faces, including the faces of people close to the sufferer.
The response that best fits with our common-sense, dualist, view of the mind preserves dualism by relegating some apparently mental functions to a physical medium that can degrade. For instance, we might propose that sufferers have lost access to the store of information that represents the people or objects they fail to recognize. Perhaps the brain contains something like the hard drive of a computer, on which memories and facts are stored, and perhaps the storage is divided up so that memories of different kinds of things are each stored separately.
But in prosopagnosia, the store is corrupted, or access to it is disturbed. If something like this were right, then we might be able to preserve the view that mind is a spiritual substance, with the kinds of properties that such an indivisible substance is supposed to possess (such as an inability to fragment). We rescue mind by delegating some of its functions to non-mind: memories are stored in a physical medium, which can fragment, but mind soars above matter.
Unfortunately, it is clear that the hypothesis just sketched is false. The agnosias are far stranger than that. Sufferers have not simply lost the ability to recognize objects or people; they have lost a sense-specific ability: the ability to recognize a certain class of objects visually, or tactilely, or aurally, and so on. The prosopagnosic who fails to recognize his wife when he looks at her knows immediately who she is when she speaks. Indeed, he may be able to describe what he sees as well as you or I.
Consider the visual agnosic described by Sacks who, presented with a rose, could describe it minutely yet could identify it only haltingly, as perhaps "an inflorescence or flower". Only when he was invited to smell it did he, suddenly, come to life and name it. His visual system functions perfectly well, allowing him to perceive and describe in detail the object with which he is presented. But he is forced to try to infer, haltingly, what the object is — even though he knows full well what a rose is and what it looks like. It appears that something very strange has happened to his mind. Its fabric has unravelled, in some way and at one corner, in a manner that no spiritual substance could conceivably do. It is difficult to see how to reconcile what he experiences with our common-sense idea of what the mind is like.
Perhaps a way might be found to accommodate such a case within the dualist picture. One more example: another agnosia. In mirror agnosia, patients suffering from neglect mistake the reflections of objects for the objects themselves, even though they know, in some sense, that they are looking at a mirror image; the mistake occurs only when the mirror is positioned in certain ways. First, a brief introduction to neglect, which is itself a neurological disorder of great interest.
Someone suffering from neglect is profoundly indifferent to a portion of their visual field, even though their visual system is undamaged. Usually, it is the left side of the field that is affected: a neglect sufferer might put makeup on, or shave, only the right side of their face; when asked to draw a clock, they typically draw a complete circle, but then stuff all the numbers from one to twelve into the right-hand half. In the mirror agnosia experiments, patients were shown, in a mirror, an object held behind them. When the reflected object appeared in their intact field, all four patients tested correctly reached behind them to grab the object, just as you and I would. But when it appeared in their neglected field, rather than reach behind them for the object, they reached toward the mirror.
When asked where the object was, they replied that it was in, or behind, the mirror. They knew what a mirror was, and what it does, but when the object reflected was in their neglected field, this knowledge guided neither their verbal responses nor their actions. A similar confusion concerning mirrors occurs in the delusion known as mirror misidentification.
In this delusion, patients mistake their own reflection for another person: not the reflection of another person, but the very person. Presented with a mirror, the sufferer says that the person they see is a stranger; perhaps someone who has been following them about. But once again their knowledge concerning mirrors seems intact. Consider an exchange between an experimenter and one sufferer from mirror misidentification, F. The experimenter positions herself next to F., points to her own reflection, and asks who that person is.
F. correctly identifies the experimenter's reflection; yet, asked who the person standing next to the experimenter in the mirror is, F. insists that it is the stranger. All I want to do, right now, is to draw your attention to how strange malfunctions of the mind can be — far stranger than we might have predicted from our armchairs — and also to how merely physical dysfunction can disrupt the mind. The mind may not be a thing; it may not be best understood as a physical object that can be located in space.
But it is entirely dependent, not just for its existence, but also for the details of its functioning, on mere things: neurons and the connections between them. Perhaps it is possible to reconcile these facts with the view that the mind is a spiritual substance, but it would seem an act of great desperation even to try. Now I want to explore such cases of mental malfunction a little further, in order to accomplish several things.
First, and most simply, I want to demonstrate how strange and apparently paradoxical the mind can be, both when it breaks down and when it is functioning normally. This kind of exploration is fascinating in its own right, and raises a host of puzzles, some of which we shall explore further in this book.
I also have a more directly philosophical purpose, however. I want to show to what extent, contra what the dualist would have us expect, unconscious processes guide intelligent behavior: to a very large extent, we owe our abilities and our achievements to subpersonal mechanisms. Showing the ways in which the mind is built, as it were, out of machines will lay the ground for the development of a rival view of the mind, which I will urge we adopt. This rival view will guide us in our exploration of the neuroethical questions we shall confront in later chapters.
Typically, neuropsychologists infer function by seeking evidence of a double dissociation between abilities and neural structures; that is, they seek evidence that damage to one part of the brain produces a characteristic dysfunction, and that damage to another produces a complementary problem.
Consider prosopagnosia once more. There is evidence that prosopagnosia is the inverse of another disorder, Capgras delusion. Prosopagnosics, recall, cannot identify faces, even very familiar faces; when their spouse or children are shown to them, they do not recognize them, unless and until they hear them talk. Capgras sufferers have no such problems; they immediately see that the face before them looks familiar, and they can see whose face it resembles.
But, though they see that the face looks exactly like a familiar face, they deny that it is the person they know. Instead, they identify the person as an impostor. What is going on, in Capgras delusion? An important clue is provided by studies of the autonomic system response of sufferers. The autonomic system is the set of control mechanisms which maintain homeostasis in the body, regulating blood pressure, heart rate, digestion and so on.
We can get a read-out of the responses of the system by measuring heart rate, or, more commonly, skin conductance: the ability of the skin to conduct electricity.
Skin conductance rises when we sweat (since sweat conducts electricity well); by attaching very low voltage electrodes to the skin, we can measure the skin conductance response (SCR), also known as the galvanic skin response. Normal subjects exhibit a surge in SCR in response to a range of stimuli: in response, for instance, to loud noises and other startling phenomena, but also to familiar faces. When you see the face of a friend or lover, your SCR surges, reflecting the emotional significance of that face for you.
Capgras sufferers have normal autonomic systems: they experience a surge in SCR in response to loud noises, for instance. But their autonomic system does not differentiate between familiar and unfamiliar faces (Ellis et al.). Prosopagnosics exhibit the opposite profile: though they do not explicitly recognize familiar faces, they do have normal autonomic responses to them. We are now in a position to make a stab at identifying the roles that the autonomic system and the face recognition system play in normal recognition of familiar faces, and at explaining how Capgras and prosopagnosia come about.
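The logic of the double dissociation can be summarized in a small sketch. The condition names are from the text; the boolean profiles are a deliberate simplification of the clinical picture, not real data:

```python
# Illustrative sketch of the double dissociation between explicit face
# recognition and autonomic (SCR) familiarity response. The boolean
# profiles simplify the clinical picture described in the text.
profiles = {
    "normal":        {"explicit_recognition": True,  "autonomic_response": True},
    "capgras":       {"explicit_recognition": True,  "autonomic_response": False},
    "prosopagnosia": {"explicit_recognition": False, "autonomic_response": True},
}

def is_double_dissociation(p1, p2):
    """Two conditions doubly dissociate when each spares exactly
    the capacity that the other impairs."""
    return (p1["explicit_recognition"] != p2["explicit_recognition"]
            and p1["autonomic_response"] != p2["autonomic_response"])

print(is_double_dissociation(profiles["capgras"], profiles["prosopagnosia"]))  # True
```

The complementary profiles are what license the inference to two separable systems: neither deficit alone would show that explicit recognition and autonomic familiarity are distinct mechanisms.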
One, currently influential, hypothesis is this: because Capgras sufferers recognize the faces they are presented with, but fail to experience normal feelings of familiarity, they think that there is something odd about the face. They therefore infer that it is not mom, but a replica. Capgras therefore arises when the autonomic system fails to play its normal role in response to outputs from the facial recognition system. Prosopagnosia, on the other hand, is a dysfunction of a separate facial recognition system; prosopagnosics have normal autonomic responses, but abnormal explicit recognition (Ellis and Young). On this account, normal facial recognition is a product of two elements, one of which is normally below the threshold of conscious awareness.
Capgras sufferers are not aware of the lack of a feeling of familiarity; at most, they are consciously aware that something is odd about their experience. The inference from this oddity to its explanation — that the person is an impostor — is very probably not drawn explicitly, but is instead the product of mechanisms that work below the level of conscious experience.
Cognitive scientists commonly call these mechanisms subpersonal, to emphasize that they are partial constituents, normally unconscious and automatic, of persons. Prosopagnosics usually cannot use their autonomic response to familiar faces to categorize them, since they — like all of us — have great difficulty in becoming aware of these responses. The distinction between the personal level and the subpersonal level is very important here. If we are to understand ourselves, and how our brains and minds make us who, and what, we are, we need to understand the very large extent to which information processing takes place automatically, below the level of conscious awareness.
This is exactly what one would predict, on the basis of our evolutionary past. Evolution tends to preserve adaptations unless two conditions are met: keeping them becomes costly, and the costs of discarding them and redesigning are low. These conditions are very rarely met, for the simple reason that it would take too many steps to move from an organism that is relatively well-adapted to an environment, to another which is as well or better adapted, but which is quite different from the first.
Since evolution proceeds in tiny steps, it cannot jump these distances; large-scale changes must occur via a series of very small alterations, each of which is itself adaptive. Evolution therefore tends to preserve basic design features, and tinker with add-ons (thus, for instance, human beings share a basic body plan with all multicellular animals). Now, we know that most organisms in the history of life on this planet, indeed, most organisms alive today, got along fine without consciousness.
They needed only a set of responses to stimuli that attracted and repelled them according to their adaptive significance. Unsurprisingly, we have inherited from our primitive ancestors a very large body of subpersonal mechanisms which can get along fine without our conscious interference. Another double dissociation illustrates the extent to which our behavior can be guided and driven by subpersonal mechanisms. Vision in primates (including humans) is subserved by two distinct systems: a dorsal system which is concerned with the guidance of action, and a ventral system which is devoted to an internal representation of the world (Milner and Goodale). These systems are functionally and anatomically distinct; probably the movement-guidance system is the more primitive, with the ventral system being a much later add-on, since guidance of action is something that all organisms capable of locomotion require, whereas the ability to form complex representations of the environment is only useful to creatures with fairly sophisticated cognitive abilities.
Populations of neurons in the ventral stream are devoted to the task of object discrimination, with subsets dedicated to particular classes of objects. Studies of the abilities of primates with lesioned brains — experimental monkeys, whose lesions were deliberately produced, and human beings who have suffered brain injury — have shown the extent to which these systems can dissociate. Monkeys who have lost the ability to discriminate visual patterns nevertheless retain the ability to catch gnats or track and catch an erratically moving peanut (Milner and Goodale). Human beings exhibit the same kinds of dissociations: there are patients who are unable to grasp objects successfully but are nevertheless able to give accurate descriptions of them; conversely, there are patients who are unable to identify even simple geometric shapes but who are able to reach for and grasp them efficiently.
Such patients are able to guide their movements using visual information of which they are entirely unconscious (Goodale and Milner). We all have dorsal systems which compute shape, size and trajectory for us, and which send the appropriate signals to our limbs. Sometimes we make the appropriate movements without even thinking about it; for instance, when we catch a ball unexpectedly thrown at us. Sometimes we might remain unaware that we have moved at all (for instance, when we brush away a fly while thinking about something else). Action guidance without consciousness is a normal feature of life.
We can easily demonstrate unconscious action-guidance in normal subjects, using the right kind of experimental apparatus. Consider the Titchener illusion, produced by surrounding identically sized circles with others of different sizes. A circle surrounded by larger circles appears smaller than a circle surrounded by smaller circles. Aglioti and colleagues wondered whether the illusion fooled both dorsal and ventral visual systems.
To test this, they replaced the circles with physical objects; by surrounding identical plastic discs with other discs of different sizes, they were able to replicate the illusion: the identical discs appeared to be of different sizes to normal subjects. But when the subjects reached out to grasp the discs, their fingers formed exactly the same size aperture for each. The ventral system is taken in by the illusion, but the dorsal system is not fooled (Aglioti et al.). Milner and Goodale suggest that the ventral system is taken in by visual illusions because its judgments are guided by stored knowledge about the world: knowledge about the effects of distance on perceived size, of the constancy of space, and so on.
Lacking access to such information, the dorsal system is not taken in (Milner and Goodale). If the grasping behavior of normal subjects in the laboratory is subserved by the dorsal system, which acts below the level of conscious awareness, then normal grasping behavior outside the laboratory must similarly be driven by the same unconscious processes. The dorsal system does not know that it is in the lab, after all, or that the ventral system is being taken in by an illusion.
It just does its job, as it is designed to. Similarly for many other aspects of normal movement: calculating trajectory and distance, assessing the amount of force we need to apply to an object to move it, the movements required to balance a ball on the palm of a hand; all of this is calculated unconsciously. The unconscious does not consist, or at least it does not only consist, in the seething mass of repressed and primitive drives postulated by Freud; it is also the innumerable mechanisms, each devoted to a small number of tasks, which work together to produce the great mass of our everyday behavior.
What proportion of our actions are produced by such mechanisms, with no direct input or guidance from consciousness? Certainly the majority, probably the overwhelming majority, of our actions are produced by automatic systems, which we normally do not consciously control and which we cannot interrupt (Bargh and Chartrand). This should not be taken as a reason to disparage or devalue our consciously controlled and initiated actions.
We routinely take consciousness to be the most significant element of the self, and it is indeed the feature of ourselves that is in many respects the most marvellous. The capacity for conscious experience is certainly the element that makes our lives worth living; indeed, makes our lives properly human.
Consciousness is, however, a limited resource: it is available only for the control of a relatively small number of especially complex and demanding actions, and for the solution of difficult, and above all novel, problems. The great mass of our routine actions and mental processes, including most sophisticated behaviors once we have become skilful at their performance, are executed efficiently by unconscious mechanisms.
We have seen that the identification of the mind with an immaterial substance is entirely implausible, in light of our ever-increasing knowledge of how the mind functions and how it malfunctions. However, many people will find the argument up to this point somewhat mystifying. Why devote so much energy to refuting a thesis that no one, or at least no one with even a modicum of intellectual sophistication, any longer holds?
It is true that people prepared to defend substance dualism are thin on the ground these days. Nevertheless, I suggest, the thesis continues to exert a significant influence despite this fact, both on the kinds of conceptions of selves that guide everyday thought, and in some of the seductive notions that even cognitive scientists find themselves employing.
The everyday conception of the self that identifies it with consciousness is, I suspect, a distant descendant of the Cartesian view. On this everyday conception, I am the set of thoughts that cross my mind. This conception of the self might offer some comfort, in the face of all the evidence about the ways in which minds can break down, and unconsciously processed information guides behavior. Our conscious thoughts are produced, at least in very important part, by unconscious mechanisms, which send to consciousness only that subset of information which needs further processing by resource-intensive and slow, but somehow clever, consciousness.
Many of our actions, too, including some of our most important, are products of unconscious mechanisms. Think, finally, of the magic of ordinary speech: we speak, and we make sense, but we learn precisely what we are going to say only when we say it, as E. M. Forster observed. Our cleverest arguments and wittiest remarks are not first vetted by consciousness; they come to consciousness at precisely the same time they are heard by others.
Sometimes we wonder whether a joke or a pun was intentional or inadvertent. Clearly, there are cases which fit both descriptions: when someone makes a remark that is interpreted by others as especially witty, but he is himself bewildered by their response, we are probably dealing with inadvertent humor, while the person who stores up a witty riposte for the right occasion is engaging in intentional action.
Often, though, there may be no fact of the matter whether the pun I make and notice as I make it counts as intentional or inadvertent. Identifying the self with consciousness therefore seems to be hopeless; it would shrink the self down to a practically extensionless, and probably helpless, point. Few sophisticated thinkers would be tempted by this mistake. But an analogous mistake tempts even very clear thinkers, a last legacy of the Cartesian picture. This mistake is the postulation of a control centre, a CPU in the brain, where everything comes together and where the orders are issued.
One reason for thinking that this is a mistake is that the idea of a control centre in the brain seems to run into what philosophers of mind call the homunculus fallacy: the fallacy of explaining the capacities of the mind by postulating a little person (a homunculus) inside the head. The classic example of the homunculus fallacy involves vision.
How do we come to have visual experience; how, that is, are the incoming wavelengths of light translated into the rich visual world we enjoy? Well, perhaps it works something like a camera obscura: the lenses of the eyes project an image onto the retina inside the head, and there, seated comfortably and perhaps eating popcorn, is a homunculus who views the image. The reason that the homunculus fallacy is a fallacy is that it fails to explain anything.
We wanted to know how visual experience is possible, but we answered the question by postulating a little person who looks at the image in the head, using a visual system that is presumably much like ours. Postulating the homunculus merely delays answering the question; it does not answer it at all. The moral of the homunculus fallacy is this: we explain the capacities of our mind only by postulating mechanisms that have powers that are simpler and dumber than the powers they are invoked to explain. We cannot explain intelligence by postulating intelligent mechanisms, because then we will need to explain their intelligence; similarly, we cannot explain consciousness by postulating conscious mechanisms.
It is not obvious, to me at any rate, that postulating a controller must commit the homunculus fallacy. However, recognition of the fallacy takes away much of the incentive for postulating a control centre. We do not succeed in explaining how we become capable of rational and flexible behavior by postulating a rational and flexible CPU, since we are still required to explain how the CPU came to have these qualities. Sooner or later we have to explain how we come to have our most prized qualities by reference to simpler and much less impressive mechanisms; once we recognize that this is so, the temptation to think there is a controller at all is much smaller.
We rightly want our actions and thoughts to be controlled by an agent, by ourselves, and we want ourselves to have the qualities we prize. And it is indeed the entire agent that is the controller of controlled processes. In principle, there could be a CPU inside the head (though, as we have seen, this CPU would have to be a much less impressive mechanism than is generally hoped). Central controllers constitute bottlenecks in decision-making machines; all the relevant information must get to the controller and be processed by it.
CPUs are serial processors; they deal with one task at a time. Because computers are very fast serial processors, they easily outrun human brains at serial tasks: they can perform long series of mathematical calculations that would take human beings hours in mere seconds. But long series of mathematical calculations do not represent the kinds of tasks that human brains evolved to perform. Brains are much better than any computer at solving the incredibly complex information-processing problems that confront the organism as it navigates its way around its environment — catching a ball, keeping track of social networks, reading subtle clues in a glance or making a tool.
Rather than confronting problems in a serial fashion, brains are massively parallel processors: they process many pieces of information simultaneously. When the brain is confronted with a processing task — that is, all the time, even when the organism is asleep — that task is performed by many, many different brain circuits and mechanisms, working simultaneously.
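The contrast between serial and parallel organization can be sketched in software, though the analogy is loose: threads in a scheduler are nothing like neural circuits, and the task names below are purely illustrative. The point is only structural: the same subtasks can be queued one after another or all dispatched at once.

```python
# Loose software analogy for parallel distributed processing: several
# independent subtasks of one activity are dispatched simultaneously
# rather than queued one after another. Task names are illustrative.
from concurrent.futures import ThreadPoolExecutor

def subtask(name):
    # Stand-in for a specialized circuit doing its own piece of the work.
    return f"{name}: done"

subtasks = ["track object", "balance body", "position hand", "time the grasp"]

# Serial organization: one task at a time, like a classic CPU.
serial_results = [subtask(t) for t in subtasks]

# Parallel organization: all tasks in flight at once; map() preserves order.
with ThreadPoolExecutor() as pool:
    parallel_results = list(pool.map(subtask, subtasks))

print(serial_results == parallel_results)  # True
```

The two organizations deliver the same answers; what differs is that the parallel version has no single point through which every task must pass, which is the feature the text attributes to the brain.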
Moreover, these circuits might be widely distributed across the brain; hence, the kind of thinking the brain performs is often described as a kind of parallel distributed processing. Catching a ball, for instance, requires that a number of different problems be solved simultaneously. We must track the ball and estimate where it can be intercepted; we must get our bodies near that point, which involves coordinating leg muscles and distributing our weight to maintain balance. Then we must move our hands to exactly the right point in space to snatch the ball out of the air.
Naturally, some of us are better at this kind of task than are others. But with a little practice most fit human beings can learn to perform this task pretty well. It all happens too fast for serial processors to handle the thousands of calculations necessary. Yet humans — and other animals — do it with ease. To appreciate how difficult, in computational terms, catching a ball is, we must recognize that it is not three tasks in parallel — tracking the ball, moving the body, moving the hands — but at least dozens.
Each one of these tasks can itself be subdivided into many other, parallel, processes. There are, for instance, separate systems for motion detection in the visual system, for calculating trajectories and for guiding our actions.
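To give a feel for just one of these subproblems, here is a drastically simplified, drag-free sketch of where and when a thrown ball comes down. The launch speed and angle are assumed values for illustration; the brain, of course, solves nothing like closed-form equations, and the point is only how much computation proceeds without conscious involvement.

```python
# Drag-free projectile sketch of one subproblem of catching: where and
# when the ball comes down. Speed and angle are illustrative assumptions.
import math

g = 9.81                   # gravitational acceleration, m/s^2
speed = 15.0               # launch speed, m/s (assumed)
angle = math.radians(40)   # launch angle (assumed)

# Time aloft: T = 2 * v * sin(theta) / g
flight_time = 2 * speed * math.sin(angle) / g

# Horizontal range: R = v * cos(theta) * T
landing_distance = speed * math.cos(angle) * flight_time

print(f"intercept point: {landing_distance:.1f} m away, in {flight_time:.2f} s")
```

Even this toy version omits spin, air resistance, the moving observer, and the continuous visual re-estimation of the trajectory, each of which adds further parallel computations.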
All of this computational work is carried out subpersonally. We become aware only of the results of the calculations, if we become aware of anything at all. Where do all these distributed processes come together? There is no place in the brain, no CPU equivalent, which takes all the various sources of information into account and makes a decision. Or, to put the same point differently, the only place where all the distributed processes come together consists in the entire agent herself. The agent just is this set of processes and mechanisms, not something over and above them.
Human beings, like all complex organisms, are communities of mechanisms. The unity of the agent is an achievement, a temporary and unstable coalition forged out of persisting diversity. Under the right conditions, the diversity of mechanisms can be revealed: we can provoke conflict between parts of the agent, revealing the extent to which agents are a patchwork of systems and processes, sometimes with inconsistent interests.
Indeed, this kind of conflict is also a striking feature of everyday life. Consider weakness of the will, the everyday phenomenon in which we find ourselves doing things of which we do not rationally approve. As Plato noticed long ago, these kinds of incidents seem to indicate the presence within the single agent of different centres of volition and desire: parts of the self with their own preferences, each of which battles for control of the agent so as to satisfy its desires.
This is not to say that our everyday view of ourselves as unified agents, as a single person with a character and with relatively consistent goals, is false. Rather, the unified agent is an achievement: we unify ourselves as we mature. If we do not manage to impose a relatively high degree of unity on ourselves, we shall always be at odds with ourselves, and our ability to pursue any goal which requires planning will be severely curtailed. Unification is a necessary condition of planning, for without a relatively high degree of unity we shall always be undermining our own goals.
Lack of unity is observed in young children, and undermines their ability to achieve the goals they themselves regard as desirable. Longitudinal studies show that children who do not acquire the skills to delay gratification generally do worse on a range of indicators throughout their lives, but delaying gratification requires the imposition of unity. In order to become rational agents, capable of long-term planning and carrying out our plans, we need to turn diversity into unity.
We achieve this not by eliminating diversity, but by forging coalitions between the disparate elements of ourselves. These coalitions remain forever vulnerable to disruption, short and long term. One way to understand drug addiction, for instance, is as a result of a disruption of the imposed unity of the agent. The drug-addicted agent might genuinely desire to give up his drug, but because he cannot extend his will across time and across all the relevant subagents which constitute him, he is subject to regular preference reversals.
When he consumes his drug, he does so because he temporarily prefers to consume; when he sincerely asserts that he wishes to be free of his drug, his assertion reflects his genuine, but equally temporary, preference.