Sunday, June 29, 2008

PRELIMINARY EXAMINATION FOR SOPHIA MEMBERS

Make a Comparative Analysis of Max More's BEYOND THE MACHINE: Technology and Post-Human Freedom and David Hulme's Material Facts from a Non-Materialist Perspective.

*Presentation Format:

I. A Summary of Each of the Two Selections
II. A Comparative Analysis of the Two Selections
III. A Personal Evaluation of the Selections In Terms of Philosophical Tenability/Reasonability

*Required Number of Pages: At least three (3) pages; Times New Roman, font size 12.

*Submission Date: On or before 16 July 2008.

Friday, June 27, 2008

HOW TO PROTECT HUMAN DIGNITY FROM SCIENCE


Daniel C. Dennett
Many people fear that science and technology are encroaching on domains of life in a way that undermines human dignity, and they see this as a threat that needs to be resisted vigorously. They are right. There is a real crisis, and it needs our attention now, before irreparable damage is done to the fragile environment of mutually shared beliefs and attitudes on which a precious conception of human dignity does indeed depend for its existence. I will try to show both that the problem is real and that the most widely favored responses to the problem are deeply misguided and bound to fail. There is a solution that has a good chance of success, however, and it employs principles that we already understand and accept in less momentous roles. The solution is natural, reasonable, and robust instead of fragile, and it does not require us to try to put the genie of science back in the bottle-a good thing, since that is almost certainly impossible. Science and technology can flourish open-endedly while abiding by restrictive principles that are powerful enough to reassure the anxious and mild enough to secure the unqualified endorsement of all but the most reckless investigators. We can have dignity and science too, but only if we face the conflict with open minds and a sense of common cause.

The Problem

Human life, tradition says, is infinitely valuable, and even sacred: not to be tampered with, not to be subjected to "unnatural" procedures, and of course not to be terminated deliberately, except (perhaps) in special cases such as capital punishment or in the waging of a just war: "Thou shalt not kill." Human life, science says, is a complex phenomenon admitting of countless degrees and variations, not markedly different from animal life or plant life or bacterial life in most regards, and amenable to countless varieties of extensions, redirections, divisions, and terminations. The questions of when (human) life begins and ends, and of which possible variants "count" as (sacred) human lives in the first place are, according to science, more like the question of the area of a mountain than of its altitude above sea level: it all depends on what can only be conventional definitions of the boundary conditions. Science promises-or threatens-to replace the traditional absolutes about the conditions of human life with a host of relativistic complications and the denial of any sharp boundaries on which to hang tradition.

Plato spoke of seeking the universals that "carve Nature at its joints,"i and science has given us wonderful taxonomies that do just that. It has identified electrons and protons (which have the mass of 1,836 electrons and a positive charge), distinguished the chemical elements from each other, and articulated and largely confirmed a Tree of Life that shows why "creature with a backbone" carves Nature better than "creature with wings." But the crisp, logical boundaries that science gives us don't include any joints where tradition demands them. In particular, there is no moment of ensoulment to be discovered in the breathtakingly complicated processes that ensue after sperm meets egg and they begin producing an embryo (or maybe twins or triplets-when do they get their individual souls?), and there is no moment at which the soul leaves the body and human life ends. Moreover, the more we understand, scientifically, about these complexities, the more practical it becomes, technologically, to exploit them in entirely novel ways for which tradition is utterly unprepared: in vitro fertilization and cloning, organ harvest and transplant, and, at the end of life, the artificial prolongation of life-of one sort or another-after most if not all the sacred aspects of life have ceased. When we start treating living bodies as motherboards on which to assemble cyborgs, or as spare parts collections to be sold to the highest bidder, where will it all end? It is not as if we could halt the slide by just prohibiting (some of) the technology. Technology may provide the faits accomplis that demonstrate beyond all controversy that the science is on the right track, but long before the technology is available, science provides the huge changes in conceptualization, the new vistas on possibility, that will flavor our imaginations henceforth whether or not the possibilities become practical. 
We are entering a new conceptual world, thanks to science, and it does not harmonize comfortably with our traditional conceptions of our lives and what they mean.ii

In particular, those who fear this swiftly growing scientific vista think that it will destroy something precious and irreplaceable in our traditional scheme, subverting the last presumptions of human specialness which ground-they believe-our world of morality. Oddly enough, not much attention has been paid to the question of exactly how the rise of the scientific vista would subvert these cherished principles-in this regard, it is a close kin to the widespread belief that homosexual marriage would somehow subvert traditional "family values"-but in fact there is a good explanation for this gap in the analysis. The psychologist Philip Tetlock identifies values as sacred when they are so important to those who hold them that the very act of considering them is offensive.1 The comedian Jack Benny was famously stingy-or so he presented himself on radio and television-and one of his best bits was the skit in which a mugger puts a gun in his back and barks "Your money or your life!" Benny just stands there silently. "Your money or your life!" repeats the mugger, with mounting impatience. "I'm thinking, I'm thinking," Benny replies. This is funny because most of us think that nobody should even think about such a trade-off. Nobody should have to think about such a trade-off. It should be unthinkable, a "no-brainer." Life is sacred, and no amount of money would be a fair exchange for a life, and if you don't already know that, what's wrong with you? "To transgress this boundary, to attach a monetary value to one's friendships, children, or loyalty to one's country, is to disqualify oneself from the accompanying social roles."2 That is what makes life a sacred value.

Tetlock and his colleagues have conducted ingenious (and sometimes troubling) experiments in which subjects are obliged to consider "taboo trade-offs," such as whether or not to purchase live human body parts for some worthy end, or whether or not to pay somebody to have a baby that you then raise, or pay somebody to perform your military service. As their model predicts, many subjects exhibit a strong "mere contemplation effect": they feel guilty and sometimes get angry about being lured into even thinking about such dire choices, even when they make all the right choices. When given the opportunity by the experimenters to engage in "moral cleansing" (by volunteering for some relevant community service, for instance) subjects who have had to think about taboo trade-offs are significantly more likely than control subjects to volunteer-for real-for such good deeds. (Control subjects had been asked to think about purely non-sacred trade-offs, such as whether to hire a house-cleaner or buy food instead of something else.)3

So it is not surprising that relatively little attention has been paid to charting the paths by which science and technology might subvert the value of life. If you feel the force of the admonition, "Don't even think about it!", you will shun the topic by distracting your own attention from it, if at all possible. I know from experience that some readers of this essay will already be feeling some discomfort and even guilt for allowing themselves to broach these topics at all, so strong is the taboo against thinking the unthinkable, but I urge them to bear with me, since the policy that I will propose may have more going for it than their own.
The fact that the threat has not been well articulated does not mean it is not real and important. Let me try to make it plain by drawing some parallels. Like climate change, the threat is environmental and global (which means you can't just move to a different place where the environment hasn't yet been damaged), and time is running out. While global warming threatens to affect many aspects of the physical environment-the atmosphere, the flora and fauna, the ice caps and ocean levels-and hence alter our geography in catastrophic ways from which recovery may be difficult or impossible, the threat to human dignity affects many aspects of what we may call the belief environment, the manifold of ambient attitudes, presumptions, common expectations-the things that are "taken for granted" by just about everybody, and that just about everybody expects just about everybody to take for granted.
The belief environment plays just as potent a role in human welfare as the physical environment, and in some regards it is both more important and more fragile. Much of this has been well-known for centuries, particularly to economists, who have long appreciated the way a currency can become worthless almost overnight, for example, and the way public trust in financial institutions needs to be preserved as a condition for economic activity in general. Today we confront the appalling societal black holes known as failed states, where the breakdown of law and order makes the restoration of decent life all but impossible. (If you have to pay off the warlords and bribe the judges and tolerate the drug traffic just to keep enough power and water and sanitation going to make life bearable, let alone permit agriculture and commerce to thrive, your chances of long-term success are minimal.) What matters in these terrible conditions is what people in general assume, whether they are right or wrong. It might in fact be safe for them to venture out and go shopping, or to invest in a clothing factory, or plant their crops, but if they don't, in general, believe that, they cannot resume anything like normal life and rekindle a working society. This creates a belief environment in which there is a powerful incentive for the most virtuous and civic-minded to lie, vigorously, just to preserve what remains of the belief environment. Faced with a deteriorating situation, admitting the truth may only accelerate the decline, while a little creative myth-making might-might-save the day. Not a happy situation.

And this is what people fear might happen if we pursue our current scientific and technological exploration of the boundaries of human life: we will soon find ourselves in a deteriorating situation where people-rightly or wrongly-start jumping to conclusions about the non-sanctity of life, the commodification of all aspects of life, and it will be too late to salvage the prevailing attitudes that protect us all from something rather like a failed state, a society in which the sheer security needed for normal interpersonal relations has dissolved, making trust, and respect, and even love, all but impossible. Faced with that dire prospect, it becomes tempting indeed to think of promulgating a holy lie, a myth that might carry us along for long enough to shore up our flagging confidence until we can restore "law and order."
That is where the doctrine of the soul comes in. People have immortal souls, according to tradition, and that is what makes them so special. Let me put the problem unequivocally: the traditional concept of the soul as an immaterial thinking thing, Descartes's res cogitans, the internal locus in each human body of all suffering, and meaning, and decisions, both moral and immoral, has been utterly discredited. Science has banished the soul as firmly as it has banished mermaids, unicorns, and perpetual motion machines. There are no such things. There is no more scientific justification for believing in an immaterial immortal soul than there is for believing that each of your kidneys has a tap-dancing poltergeist living in it. The latter idea is clearly preposterous. Why are we so reluctant to dismiss the former idea? It is obvious that there must be some non-scientific motivation for believing in it. It is seen as being needed to play a crucial role in preserving our self-image, our dignity. If we don't have souls, we are just animals! (And how could you love, or respect, or grant responsibility to something that was just an animal?)

Doesn't the very meaning of our lives depend on the reality of our immaterial souls? No. We don't need to be made of two fundamentally different kinds of substance, matter and mind-stuff, to have morally meaningful lives. On the face of it, the idea that all our striving and loving, our yearning and regretting, our hopes and fears, depend on some secret ingredient, some science-proof nugget of specialness that defies the laws of nature, is an almost childish ploy: "Let's gather up all the wonderfulness of human life and sweep it into the special hidey-hole where science can never get at it!" Although this fortress mentality has a certain medieval charm, looked at in the cold light of day, this idea is transparently desperate, implausible, and risky: putting all your eggs in one basket, and a remarkably vulnerable basket at that. It is vulnerable because it must declare science to be unable to shed any light on the various aspects of human consciousness and human morality at a time when exciting progress is being made on these very issues. One of Aristotle's few major mistakes was declaring "the heavens" to be made of a different kind of stuff, entirely unlike the matter here on Earth-a tactical error whose brittleness became obvious once Galileo and company began their still-expanding campaign to understand the physics of the cosmos. Clinging similarly to an immaterial concept of a soul at a time when every day brings more understanding of how the material basis of the mind has evolved (and goes on evolving within each brain) is a likely path to obsolescence and extinction.

The alternative is to look to the life sciences for an understanding of what does in fact make us different from other animals, in morally relevant ways. We are the only species with language, and art, and music, and religion, and humor, and the ability to imagine the time before our birth and after our death, and the ability to plan projects that take centuries to unfold, and the ability to create, defend, revise, and live by codes of conduct, and-sad to say-to wage war on a global scale. The ability of our brains to help us see into the future, thanks to the culture we impart to our young, so far surpasses that of any other species that it gives us the powers that in turn give us the responsibilities of moral agents. Noblesse oblige. We are the only species that can know enough about the world to be reasonably held responsible for protecting its precious treasures. And who on earth could hold us responsible? Only ourselves. Some other species-the dolphins and the other great apes-exhibit fascinating signs of protomorality, a capacity to cooperate and to care about others, but we persons are the only animals that can conceive of the project of leading a good life. This is not a mysterious talent; it can be explained.iii

Here I will not attempt to survey the many threads of that still unfolding explanation, but rather to construct and defend a perspective and a set of policies that could protect what needs to be protected as we scramble, with many false steps, towards an appreciation of the foundations of human dignity. Scientists make their mistakes in public, but mostly only other scientists notice them. This topic has such momentous consequences, however, that we can anticipate that public attention-and reaction-will be intense, and could engender runaway misconstruals that could do serious harm to the delicate belief environment in which we (almost) all would like to live.

I have mentioned the analogy with the ominous slide into a failed state; here is a less dire example of the importance of the belief environment, and the way small changes in society can engender unwanted changes in it. In many parts of rural America people feel comfortable leaving their cars and homes unlocked, day and night, but any country mouse who tries to live this way in the big city soon learns how foolish that amiably trusting policy is. City life is not intolerable, but it is certainly different. Wouldn't it be fine if we could somehow re-engineer the belief environment of cities so that people seldom felt the need to lock up! An all but impossible dream. At the same time, rural America is far from utopia and is sliding toward urbanity. The felicitous folkways of the countryside can absorb a modest amount of theft and trespass without collapse, but it wouldn't take much to extinguish them forever. Those of us who get to live in this blissfully secure world cherish it, for good reason, and would hate to abandon it, but we also must recognize that any day could be the last day of unlocked doors in our neighborhood, and once the change happened, it would be very hard to change back. That too is like global climate change; these changes are apt to be irreversible. And unlike global climate change, drawing attention to the prospect may actually hasten it, by kindling and spreading what Douglas Hofstadter once called "reverberant doubt."4 The day that our local newspaper begins running a series about what percentage of local people lock their doors under what circumstances is the day that door-locking is apt to become the norm. So those who are in favor of diverting attention from too exhaustive an examination of these delicate topics might have the right idea. This is the chief reason, I think, for the taboo against thinking about sacred values: it can sometimes jeopardize their protected status. 
But in this case, I think it is already too late to follow the tip-toe approach. There is already a tidal wave of interest in the ways in which the life sciences are illuminating the nature of "the soul," so we had better shift from distraction to concentration and see what we can make of the belief environment for human dignity and its vulnerabilities.

The Solution

How are we to protect the ideal of human dignity from the various incursions of science and technology? The first step in the solution is to notice that the grounds for our practices regarding this are not going to be local features of particular human lives, but rather more distributed in space and time. There is already a clear precedent in our attitude toward human corpses. Even people who believe in immortal immaterial souls don't believe that human "remains" harbor a soul. They think that the soul has departed, and what is left behind is just a body, just unfeeling matter. A corpse can't feel pain, can't suffer, can't be aware of any indignities-and yet still we feel a powerful obligation to handle a corpse with respect, and even with ceremony, and even when nobody else is watching. Why? Because we appreciate, whether acutely or dimly, that how we handle this corpse now has repercussions for how other people, still alive, will be able to imagine their own demise and its aftermath. Our capacity to imagine the future is both the source of our moral power and a condition of our vulnerability. We cannot help but see all the events in our lives against the backdrop of what Hofstadter calls the implicosphere of readily imaginable alternatives-and the great amplifier of human suffering (and human joy) is our irresistible tendency to anticipate, with dread or delight, what is in store for us.5

We live not just in the moment, but in the past and the future as well. Consider the well-known advice given to golfers: keep your head down through the whole swing. "Wait a minute," comes the objection: "that's got to be voodoo superstition! Once the ball leaves the club head, the position of my head couldn't possibly affect the trajectory of the ball. This has to be scientifically unsound advice!" Not at all. Since we plan and execute all our actions in an anticipatory belief environment, and have only limited and indirect control over our time-pressured skeletal actions, it can well be the case that the only way to get the part of the golf swing that does affect the trajectory of the ball to have the desirable properties is to concentrate on making the later part of it, which indeed could not affect the trajectory, take on a certain shape. Far from being superstitious, the advice can be seen to follow quite logically from facts we can discover from a careful analysis of the way our nervous systems guide our muscles.

Our respect for corpses provides us with a clear case of a wise practice that does not at all depend on finding, locally, a special (even supernatural) ingredient that justifies or demands this treatment. There are other examples that have the same feature. Nobody has to endorse magical thinking about the gold in Fort Knox to recognize the effect of its (believed-in) presence there on the stability of currencies. Symbols play an important role in helping to maintain social equilibria, and we tamper with them at our peril. If we began to adopt the "efficient" policy of disposing of human corpses by putting them in large biodegradable plastic bags to be taken to the landfill along with the rest of the "garbage," this would flavor our imaginations in ways that would be hard to ignore, and hard to tolerate. No doubt we could get used to it, the same way city folk get used to locking their doors, but we have good reasons for avoiding that path. (Medical schools have learned to be diligent in their maintenance of respect and decorum in the handling of bodies in their teaching and research, for while those who decide to donate their bodies to medicine presumably have come to terms with the imagined prospect of students dissecting and discussing their innards, they have limits on what they find tolerable.)

The same policy and rationale apply to end-of-life decisions. We handle a corpse with decorum even though we know it cannot suffer, so we can appreciate the wisdom of extending the same practice to cases where we don't know. For instance, a person in a persistent vegetative state might be suffering, or might not, but in either case, we have plenty of grounds for adopting a policy that creates a comforting buffer zone that errs on the side of concern. And, once again, the long-range effect on community beliefs is just as important as, or even more important than, any locally measurable symptoms of suffering. (In a similar spirit, it is important that wolves and grizzly bears still survive in the wilder regions of our world even if we almost never see them. Just knowing that they are there is a source of wonder and delight and makes the world a better place. Given our invincible curiosity and penchant for skepticism, we have to keep checking up on their continued existence, of course, and could not countenance an official myth of their continued presence if they had in fact gone extinct. This too has its implications for our topic.)

What happens when we apply the same principle to the other boundary of human life, its inception? The scientific fact is that there is no good candidate, and there will almost certainly never be a good candidate, for a moment of ensoulment, when a mere bundle of living human tissue becomes a person with all the rights and privileges pertaining thereunto. This should not be seen as a sign of the weakness of scientific insight, but rather as a familiar implication of what science has already discovered. One of the fascinating facts about living things is the way they thrive on gradualism. Consider speciation: there are uncounted millions of different species, and each of them had its inception "at some point" in the nearly four-billion-year history of life on this planet, but there is literally no telling exactly when any species came into existence, because what counts as speciation is something that only gradually and cumulatively emerges over very many generations. Speciation can emerge only in the aftermath. Consider dogs, the millions of members of hundreds of varieties of Canis familiaris that populate the world today. As different as these varieties are-think of St. Bernards and Pekinese-they all count as a single species, cross-fertile (with a little mechanical help from their human caretakers) and all readily identifiable as belonging to the same species, descended from wolves, by their highly similar DNA. Might one or more of these varieties or subspecies become a species of its own some day? Absolutely. In fact, every puppy born is a potential founder of a new species, but nothing about that puppy on the day of its birth (or for that matter on any day of its life) could be singled out as the special feature that marked it as the Adam or Eve of a new species.
If it dies without issue, it definitely won't found a new species, but as long as it has offspring that have offspring, it might turn out, in the fullness of time, to be a good candidate for the first member of a new species.

Or consider our own species, Homo sapiens. Might it divide in two some day? Yes it might, and in fact, it might, in a certain sense, already have happened. Consider two human groups alive today that probably haven't had any common ancestors in the last thirty thousand years: the Inuit of Cornwallis Island in the Arctic, and the Andaman Islanders living in remarkable isolation in the Indian Ocean. Suppose some global plague sweeps the planet sometime in the next hundred years (far from an impossibility, sad to say), leaving behind only these two small populations. Suppose that over the next five hundred or a thousand years, say, they flourish and come to reinhabit the parts of the world vacated by us-and discover that they are not cross-fertile with the other group! Two species, remarkably similar in appearance, physiology, and ancestry, but nevertheless as reproductively isolated as lions are from tigers. When, then, did the speciation occur? Before the dawn of agriculture about ten thousand years ago, or after the birth of the Internet? There would be no principled way of saying. We can presume that today, the Inuit and the Andaman Islanders are cross-fertile, but who knows? The difference between "in principle" reproductive isolation (because of the accumulation of genetic and behavioral differences that make offspring "impossible") and de facto reproductive isolation, which has already been the case for many thousands of years, is not itself a principled distinction.

A less striking instance of the same phenomenon of gradualism is coming of age, in the sense of being mature enough and well enough informed to be suitable for marriage, or-to take a particularly clear case-to drive a car. It will come as no surprise, I take it, that there is no special moment of driver-edment, when a teenager crisply crosses the boundary between being too immature to have the right to apply for a driver's license and being adult enough to be allowed the freedom of the highway behind the wheel. Some youngsters are manifestly mature enough at fourteen to be reasonable candidates for a driver's license, and others are still so heedless and impulsive at eighteen that one trembles at the prospect of letting them on the road. We have settled (in most jurisdictions) on the policy that age sixteen is a suitable threshold, and what this means is that we simply refuse to consider special pleading on behalf of unusually mature younger people, and also refrain from imposing extra hurdles on those sixteen-year-olds who manage to pass their driving test fair and square in spite of our misgivings about the safety of letting them on the road. In short, we settle on a conventional threshold which we know does not mark any special internal milestone (brain myelination, IQ, factual knowledge, onset of puberty) but strikes us as a good-enough compromise between freedom and public safety. And once we settle on it, we stop treating the location of the threshold as a suitable subject for debate. There are many important controversies to consider and explore, and this isn't one of them. Not as a general rule. Surprising new discoveries may in principle trigger a reconsideration at any time, but we foster a sort of inertia that puts boundary disputes out of bounds for the time being.

Why isn't there constant pressure from fifteen-year-olds to lower the legal driving age? It is not just that they tend not to be a particularly well-organized or articulate constituency. Even they can recognize that soon enough they will be sixteen, and there are better ways to spend their energy than trying to adjust a policy that is, all things considered, quite reasonable. Moreover, there are useful features of the social dynamics that make it systematically difficult for them to mount a campaign for changing the age. We adults have created a tacit scaffolding of presumption, holding teenagers responsible before many of them have actually achieved the requisite competence, thereby encouraging them to try to grow into the status we purport to grant them and discouraging any behavior-any action that could be interpreted as throwing a tantrum, for instance-that would undercut their claim to maturity. They are caught in a bind: the more vehemently they protest, the more they cast doubt on the wisdom of their cause. In the vast array of projects that confront them, this is not an appealing choice.
The minimum driving age is not quite a sacred value, then, but it shares with sacred values the interesting feature of being considered best left unexamined, by common consensus among a sizable portion of the community. And there is a readily accessible reason for this inertia. We human beings lead lives that cast long beams of anticipation into the foggy future, and we appreciate-implicitly or explicitly-almost any fixed points that can reduce our uncertainty. Sometimes this is so obvious as to be trivial. Why save money for your children's education if money may not be worth anything in the future? How could you justify going to all the trouble of building a house if you couldn't count on the presumption that you will be able to occupy it without challenge? Law and order are preconditions for the sorts of ambitious life-planning we want to engage in. But we want more than just a strong state apparatus that can be counted on not to be vacillating in its legislation, or whimsical in enforcement. We, as a society, do need to draw some lines-"bright" lines in legalistic jargon-and stick with them. That means not just promulgating them and voting on them, but putting an unequal burden on any second-guessing, so that people can organize their life projects with the reasonable expectation that these are fixed points that aren't going to shift constantly under the pressure of one faction or another. We want there to be an ambient attitude of mutual recognition of the stability of the moral-not legal-presumptions that can be taken for granted, something approximating a meta-consensus among those who achieve the initial consensus about the threshold: let's leave well enough alone now that we've fixed it. In a world where every candidate for a bright line of morality is constantly under siege from partisans who would like to change it, one's confidence is shaken that one's everyday conduct is going to be above reproach. 
Consider that nowadays, in many parts of the world, women simply cannot wear fur coats in public with the attitudes their mothers could adopt. Today, wearing a fur coat is making a political statement, and one cannot escape that by simply disavowing the intent. Driving a gas-guzzling SUV carries a similar burden. People may resent the activities of the partisans who have achieved these shifts in opinion even though they may share many of their attitudes about animal rights or energy policy; they have made investments-in all innocence, let us suppose-that now are being disvalued. Had they been able to anticipate this shift in public opinion, they could have spent their money better.
These observations are not contentious, I think. How, though, can we apply this familiar understanding to the vexing issues surrounding the inception-and manipulation and termination-of human life, and the special status it is supposed to enjoy? By recognizing, first, that we are going to have to walk away from the traditional means of securing these boundaries, which are not going to keep on working. They are just too brittle for the 21st century.

We know too much. Unlike traditional sacred values that depend on widespread acceptance of myths (which, even if true, are manifestly unjustifiable-that's why we call them myths rather than common knowledge), we need to foster values that can withstand scrutiny about their own creation. That is to say, we have to become self-conscious about our reliance on such policies, without in the process destroying our faith in them.

Belief in Belief

We need to appreciate the importance in general of the phenomenon of belief in belief.6 Consider a few cases that are potent today. Because many of us believe in democracy and recognize that the security of democracy in the future depends critically on maintaining the belief in democracy, we are eager to quote (and quote and quote) Winston Churchill's famous line: "Democracy is the worst form of government except for all the others that have been tried." As stewards of democracy, we are often conflicted, eager to point to flaws that ought to be repaired, while just as eager to reassure people that the flaws are not that bad, that democracy can police itself, so their faith in it is not misplaced.

The same point can be made about science. Since the belief in the integrity of scientific procedures is almost as important as the actual integrity, there is always a tension between a whistle-blower and the authorities, even when they know that they have mistakenly conferred scientific respectability on a fraudulently obtained result. Should they quietly reject the offending work and discreetly dismiss the perpetrator, or make a big stink?iv
And certainly some of the intense public fascination with celebrity trials is to be explained by the fact that belief in the rule of law is considered to be a vital ingredient in our society, so if famous people are seen to be above the law, this jeopardizes the general trust in the rule of law. Hence we are not just interested in the trial, but in the public reactions to the trial, and the reactions to those reactions, creating a spiraling inflation of media coverage. We who live in democracies have become somewhat obsessed with gauging public opinion on all manner of topics, and for good reason: in a democracy it really matters what the people believe. If the public cannot be mobilized into extended periods of outrage by reports of corruption, or of the torturing of prisoners by our agents, for instance, our democratic checks and balances are in jeopardy. In his hopeful book, Development as Freedom and elsewhere,7 the Nobel laureate economist Amartya Sen makes the important point that you don't have to win an election to achieve your political aims. Even in shaky democracies, what the leaders believe about the beliefs that prevail in their countries influences what they take their realistic options to be, so belief-maintenance is an important political goal in its own right.
Even more important than political beliefs, in the eyes of many, are what we might call metaphysical beliefs. Nihilism - the belief in nothing - has been seen by many to be a deeply dangerous virus, for obvious reasons. When Friedrich Nietzsche hit upon his idea of the Eternal Recurrence - he thought he had proved that we relive our lives infinitely many times - his first inclination (according to some stories) was that he should kill himself without revealing the proof, in order to spare others from this life-destroying belief.8
Belief in the belief that something matters is understandably strong and widespread. Belief in free will is another vigorously protected vision, for the same reasons, and those whose investigations seem to others to jeopardize it are sometimes deliberately misrepresented in order to discredit what is seen as a dangerous trend.9 The physicist Paul Davies has recently defended the view that belief in free will is so important that it may be "a fiction worth maintaining."10 It is interesting that he doesn't seem to think that his own discovery of the awful truth (what he takes to be the awful truth) incapacitates him morally, but that others, more fragile than he, will need to be protected from it.

This illustrates the ever-present risk of paternalism when belief in belief encounters a threat: we must keep these facts from "the children," who cannot be expected to deal with them safely. And so people often become systematically disingenuous when defending a value. Being the unwitting or uncaring bearer of good news or bad news is one thing; being the self-appointed champion of an idea is something quite different. Once people start committing themselves (in public, or just in their "hearts") to particular ideas, a strange dynamic process is brought into being, in which the original commitment gets buried in pearly layers of defensive reaction and meta-reaction. "Personal rules are a recursive mechanism; they continually take their own pulse, and if they feel it falter, that very fact will cause further faltering," the psychiatrist George Ainslie observes in his remarkable book, Breakdown of Will.11 He describes the dynamic of these processes in terms of competing strategic commitments that can contest for control in an organization-or an individual. Once you start living by a set of explicit rules, the stakes are raised: when you lapse, what should you do? Punish yourself? Forgive yourself? Pretend you didn't notice?

After a lapse, the long-range interest is in the awkward position of a country that has threatened to go to war in a particular circumstance that has then occurred. The country wants to avoid war without destroying the credibility of its threat and may therefore look for ways to be seen as not having detected the circumstance. Your long-range interest will suffer if you catch yourself ignoring a lapse, but perhaps not if you can arrange to ignore it without catching yourself. This arrangement, too, must go undetected, which means that a successful process of ignoring must be among the many mental expedients that arise by trial and error-the ones you keep simply because they make you feel better without your realizing why.12
This idea that there are myths we live by, myths that must not be disturbed at any cost, is always in conflict with our ideal of truth-seeking and truth-telling, sometimes with lamentable results. For example, racism is at long last widely recognized as a great social evil, so many reflective people have come to endorse the second-order belief that belief in the equality of all people regardless of their race is to be vigorously fostered. How vigorously? Here people of good will differ sharply. Some believe that belief in racial differences is so pernicious that even when it is true it is to be squelched. This has led to some truly unfortunate excesses. For instance, there are clear clinical data about how people of different ethnicity are differently susceptible to disease, or respond differently to various drugs, but such data are considered off-limits by some researchers, and by some funders of research. This has the perverse effect that strongly indicated avenues of research are deliberately avoided, much to the detriment of the health of the ethnic groups involved.v

Ainslie uncovers strategic belief-maintenance in a wide variety of cherished human practices:
Activities that are spoiled by counting them, or counting on them, have to be undertaken through indirection if they are to stay valuable. For instance, romance undertaken for sex or even "to be loved" is thought of as crass, as are some of the most lucrative professions if undertaken for money, or performance art if done for effect. Too great an awareness of the motivational contingencies for sex, affection, money, or applause spoils the effort, and not only because it undeceives the other people involved. Beliefs about the intrinsic worth of these activities are valued beyond whatever accuracy these beliefs might have, because they promote the needed indirection.13

So what sort of equilibrium can we reach? If we want to maintain the momentousness of all decisions about life and death, and take the steps that elevate the decision beyond the practicalities of the moment, we need to secure the appreciation of this very fact and enliven the imaginations of people so that they can recognize, and avoid wherever possible, and condemn, activities that would tend to erode the public trust in the presuppositions about what is-and should be-unthinkable. A striking instance of failure to appreciate this is the proposal by President Bush to reconsider and unilaterally refine the Geneva Convention's deliberately vague characterization of torture as "outrages on personal dignity." By declaring that the United States is eager to be a pioneer in the adjustment of what has heretofore been mutually agreed to be unthinkable, this policy is deeply subversive of international trust, and of national integrity. We as a nation can no longer be plausibly viewed as above thinking of arguable exceptions to the sacred value of not torturing people, and this diminishes us in ways that will be difficult if not impossible to repair.

What forces can we hope to direct in our desire to preserve respect for human dignity? Laws prohibit; traditions encourage and discourage, and in the long run, laws are powerless to hold the line unless they are supported by a tradition, by the mutual recognition of most of the people that they preserve conditions that deserve preservation. Global opinion, as we have just seen, cannot be counted on to discourage all acts of degradation of the belief environment, but it can be enhanced by more local traditions. Doctors, for instance, have their proprietary code of ethics, and most of them rightly covet the continuing respect of their colleagues, a motivation intensified by the system of legal liability and by the insurance that has become a prerequisite for practice. Then there are strict liability laws, which target particularly sensitive occupations such as pharmacist and doctor, preemptively removing the excuse of ignorance and thereby putting all who occupy these positions on notice that they will be held accountable whether or not they have what otherwise would be a reasonable claim of innocent ignorance. So forewarned, they adjust their standards and projects accordingly, erring on the side of extreme caution and keeping a healthy distance between themselves and legal consequences. Anyone who attempts to erect such a network of flexible and mutually supporting discouragements of further tampering with traditional ideas about human dignity will fail unless they attend to the carrot as well as the stick. How can we kindle and preserve a sincere allegiance to the ideals of human dignity? The same way we foster the love of a democratic and free society: by ensuring that the lives one can live in such a regime are so manifestly better than the available alternatives.

And what of those who are frankly impatient with tradition, and even with the values that tradition endorses? We must recognize that there is a vocal minority of people who profess unworried acceptance of an entirely practical and matter-of-fact approach to life, who scoff at romantic concerns with Frankensteinian visions. Given the presence and articulateness of these proponents, we do well to have a home base that can withstand scrutiny and that is prepared to defend, in terms other than nostalgia, the particular values that we are trying to protect. That is the germ of truth in multiculturalism. We need to articulate these values in open forum. When we attempt this, we need to resist the strong temptation to resort to the old myths, since they are increasingly incredible, and will only foster incredulity and cynicism in those we need to persuade. Tantrums in support of traditional myths will backfire, in other words. Our only chance of preserving a respectable remnant of the tradition is to ensure that the values we defend deserve the respect of all.vi
_______________________
FOOTNOTES
i. Phaedrus 265d-266a.
ii. The philosopher Wilfrid Sellars, in his essay "Philosophy and the Scientific Image of Man" (in Science, Perception, and Reality [London: Routledge and Kegan Paul, 1963]), distinguished between the manifest image of everyday life, with its tables and chairs, trees and rainbows, people and dreams, and the scientific image of atoms and particles and waves of electromagnetic radiation, and noted that the task of putting these two images into registration is far from straightforward. The dimension of meaning, which resides solely-it seems-in the manifest image, is resistant both to reduction (the way chemistry, supposedly, reduces to physics) and to any less demanding sort of unification or coordination with the scientific image. The tension we are exploring here is a particularly vivid and troubling case of the tension between these two images.
iii. My 2003 book, Freedom Evolves, is devoted to an explanation of how our capacity for moral agency evolved and continues to evolve. It begins with a quotation from a 1997 interview with Giulio Giorello: "Sì, abbiamo un'anima. Ma è fatta di tanti piccoli robot." ("Yes, we have a soul, but it's made of lots of tiny robots!") These "robots" are the mindless swarms of neurons and other cells that cooperate to produce a thinking thing-just not an immaterial thinking thing, as Descartes imagined and tradition has tended to suppose.
iv. As Richard Lewontin recently observed, "To survive, science must expose dishonesty, but every such public exposure produces cynicism about the purity and disinterestedness of the institution and provides fuel for ideological anti-rationalism. The revelation that the paradoxical Piltdown Man fossil skull was, in fact, a hoax was a great relief to perplexed paleontologists but a cause for great exultation in Texas tabernacles." See his "Dishonesty in Science," New York Review of Books, November 18, 2004, pp. 38-40.
v. There are significant differences in breast cancer, hypertension and diabetes, alcohol tolerance, and many other well-studied conditions. See Christopher Li, et al., "Differences in Breast Cancer Stage, Treatment, and Survival by Race and Ethnicity," Archives of Internal Medicine 163 (2003): 49-56; for an overview, see Health Sciences Policy Board (HSP) 2003, Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care.
vi. Thanks to Gary Wolf, Tori McGeer and Philip Pettit for asking questions that crystallized my thinking on these topics.
_______________________
ENDNOTES
1. See Philip Tetlock, "Coping with Trade-offs: Psychological Constraints and Political Implications," in Political Reasoning and Choice, ed. Arthur Lupia, Matthew D. McCubbins and Samuel L. Popkin (Berkeley, California: University of California Press, 1999); "Thinking the unthinkable: sacred values and taboo cognitions," Trends in Cognitive Science 7 (2003): 320-324; and Philip Tetlock, A. Peter McGraw, and Orie V. Kristel, "Proscribed Forms of Social Cognition: Taboo Trade-Offs, Forbidden Base Rates, and Heretical Counterfactuals," in Relational Models Theory: A Contemporary Overview, ed. Nick Haslam (Mahwah, New Jersey: Erlbaum, 2004), pp. 247-262, the latter also available online as Philip E. Tetlock, Orie V. Kristel, S. Beth Elson, Melanie C. Green, and Jennifer Lerner, "The Psychology of the Unthinkable: Taboo Trade-Offs, Forbidden Base Rates, and Heretical Counterfactuals," at http://faculty.haas.berkeley.edu/tetlock/docs/thepsy~1.doc.
2. Tetlock, et al., op. cit., p. 6 of online version.
3. Material in the previous two paragraphs is drawn from my Breaking the Spell: Religion as a Natural Phenomenon (New York: Viking Penguin, 2006), pp. 22-23.
4. Douglas Hofstadter, "Dilemmas for Superrational Thinkers, Leading up to a Luring Lottery," Scientific American, June 1983, reprinted with a discussion of reverberant doubt in Metamagical Themas (New York: Basic Books, 1985), pp. 752-755.
5. Douglas Hofstadter, "Metafont, Metamathematics and Metaphysics," Visible Language, August 1982, reprinted with comments in Hofstadter, Metamagical Themas, pp. 290, 595.
6. What follows is drawn, with revisions, from my Breaking the Spell, chapter 8.
7. Amartya Sen, Development as Freedom (New York: Knopf, 1999); see also his "Democracy and Its Global Roots," New Republic, October 6, 2003, pp. 28-35.
8. For a discussion of Nietzsche and his philosophical response to Darwin's theory of evolution by natural selection, see my Darwin's Dangerous Idea: Evolution and the Meanings of Life (New York: Simon & Schuster, 1995).
9. Daniel C. Dennett, Freedom Evolves (New York: Viking Penguin, 2003).
10. Paul Davies, "Undermining Free Will," Foreign Policy, September/October 2004.
11. George Ainslie, Breakdown of Will (Cambridge: Cambridge University Press, 2001), p. 88.
12. Ibid., p. 150.
13. George Ainslie, précis of Breakdown of Will, in Behavioral and Brain Sciences 28 (2005): 635-650, p. 649.

Thursday, June 26, 2008

THINKING ABOUT THINKING


Neurophilosopher Paul Churchland speculates on when computers will have human intelligence - and when the average human's causal reasoning will rise above that of a chimp.


Max More, Ph.D.


As we break into a new millennium, we are still burdened with medieval beliefs about mind and consciousness. Paul Churchland, one of the foremost scientific philosophers of mind, leads the way toward a new neuroscientific understanding of our inner life. Professor of philosophy and a member of the cognitive science faculty at the University of California, San Diego, Churchland lays out a rational vision of mind in his latest book, The Engine of Reason, the Seat of the Soul. He has long championed developing neural networks to achieve artificial intelligence and stresses the importance of combining research from computer science, neurobiology, cognitive science, and philosophy of mind to better understand consciousness. No armchair Platonist, Churchland represents the best of today's scientifically savvy philosophers.


Wired: How close are we to machines with human-level intelligence?


Churchland: For isolated capacities, we already have neural networks that exceed humans in certain of their abilities. To create a big neural network that knits together all of the things we can do, especially things like writing symphonies or holding wide-ranging conversations, I don't think we'll see that within 50 years. I'm not even sure it'll be something we aim at. I think we'll be aiming for more specific kinds of cognitive abilities. After all, it's very easy and cheap to make a brain with the capacity of a human. It's called sexual reproduction.


Where are the obstacles?


The biggest problem will be building neural nets and getting them the training data. The speed of a human brain is about 100 meters per second - tops. The speed of transmission in a copper wire is a million times faster. If you've got a machine that can think a million times faster, then it can learn a million times faster - if you can feed it the information fast enough.


Maybe with high-speed, fast-forward videotapes you could train these things in artificial worlds in six minutes. Once you've got one trained neural net, you can make copies with no difficulty at all. You would be creating identical personalities with identical skills.

But putting together a system of 100 billion neurons that's wired up roughly the way we are would be very hard to do, and I'm not sure there's much payoff. The payoffs will be better spent elsewhere, in specialized nets to do things like fly aircraft for us, or monitor meat-packing plants, or diagnose hospital patients.


Will we ever implant computers - synthetic neurons - in our brains, to take over damaged areas or to augment thinking capabilities?


Certainly! Of course. Absolutely. And the sooner the better, given the cruelty of many deficits. I don't think there's any difference between putting in an artificial cognitive prosthetic and giving someone a stainless-steel hip or a prosthetic hand. It's a functional device that steps in and takes over where nature was cruel enough to leave you off.


It's possible, of course, to imagine someone having new knees put in. Would we put in superknees, so that this person can win the 100-yard dash? Well, I suppose we could. Will we put in superbrain implants that let you think superintelligently? We might do it for specific purposes. Certainly we don't hesitate to make someone walk better than they ever have before because they had a genetic defect from birth. If we can make someone's brain function better than it has before, I don't view that with horror - I view that with enthusiasm.

When will it happen?


It's hard to say. The limiting factor isn't going to be developing the implants themselves. It's going to be getting the existing brain to reach out and grow onto the prosthetic. After all, the brain doesn't come with a bunch of neat little plugs like you have at the back of your computer. Brain hardware is profoundly proprietary. Making it compatible with hardware from a different company is going to be extremely difficult.


Which will tell us more about how the brain functions: computer science or neurobiology?


It's a false dichotomy. Empirical neuroscience and computational modeling are equally important. They stand to one another as theoretical physics stands to experimental physics. We'll learn most from a healthy, ongoing interaction between the two. That's what I think is so exciting about the 1980s and 1990s. We've finally got some computational models that are suggesting experimental questions. We go to empirical neuroscience and we get answers that send us scurrying back to the models to modify them. So you go back and forth, and you ratchet yourself up the ladder of understanding far more efficiently than is possible if you're just doing experimental groping or just doing free-wheeling theory.


In your books, you give several powerful arguments that the mind is not independent of the brain. Why do most people still believe that the mind survives after death?


People don't learn enough science. They are methodologically impaired: their causal reasoning is about the same as that of a chimpanzee or a fox. But that can be repaired.


What human science has managed to achieve over a period of 2,000 years is a system of checks and balances whereby these conceptual impulses that we have naturally are subjected to a unique systematic scrutiny. They are forced to go through a filter that knocks most of them out. That filter can come to be the possession of any individual who learns the scientific history of the human race, or at least enough of it.


I'd like us to better understand learning in neurophysiological and neuropharmacological terms. Even if we have the knowledge to change a particular child's learning capacity only 2 or 3 percent, that's like interest on an investment - it will compound as the years go by.

________________________________________________________



Copyright © 1994-2003 Wired Digital, Inc. All rights reserved.

BEYOND THE MACHINE: Technology and Posthuman Freedom

Max More, Ph.D.
Paper in proceedings of Ars Electronica 1997. (FleshFactor: informationmaschinemensch), Ars Electronica Center, Springer, Wien, New York, 1997.
"Living and unliving things are exchanging properties…" --Philip K. Dick, A Scanner Darkly.
According to the introductory statement for this conference discussion, man and machine are diametric opposites. I will seek to contribute to the discussion by constructively disagreeing with this statement. I will contend, first, that although humans are not machines, they are composed of mechanical parts. Second, by appreciating this we can see how machines and technology can enable us to become "more human than human", i.e., less mechanical than we remain today.

If it were true that humans and machines were diametric opposites, then it would have to be true that humans are not in the least machinelike and that machines cannot have humanlike properties. Yet biochemistry shows us that we are composed of billions of machines. Each of our organs and tissues is a machine with a particular function. Each organ is made up of cells, which themselves are made up of smaller, simpler biochemical machines. We call these "ribosomes", "mitochondria", "RNA" and the like. Even the seat of our consciousness and personality, the brain itself, is made up of many billions of machines—neurons, synapses, hormonal systems, neurotransmitters. Ultimately body and brain are composed of the simplest mechanical parts: subatomic particles. Ultimately we are all quarks in motion.

The alternative view—that humans are the very opposite of machines—can only be true if we accept vitalism. Vitalism holds that life results not from biochemical reactions but from a vital force unique to living things. Whereas modern science sees life as resulting from the complex interactions of mechanistic parts forming an organic whole, vitalism sees life as suffused with a substance not found in non-living nature.

To say that humans are composed of machines is not to say that we are merely machines. Humans are dignified machines. We are (so far) the most extropic, most complex product of billions of years of evolution. Not all machines are created equal. Living organisms display properties not shared by simpler machines. These emergent properties (homeostasis, reproduction, learning, intelligence) result not from the addition of a mysterious vital force but from the complexity of functional interrelationships. If we define "machine" and "mechanical" to imply rigid, unvarying, stupid, inflexible function, then humans are not machines, despite being entirely composed of machines. When enough machines work together in complex ways, new properties emerge, properties we refer to with terms like "organic", "living", "feeling", and "thinking".

The idea that humans and machines are opposites also fails to recognize that machines continue to evolve more organic, living qualities. Already we are developing robots that display some qualities of animals; we have artificial life software that mutates, reproduces, and evolves, as do computer viruses and worms; we have computers that learn using fuzzy logic, genetic algorithms, and other computational techniques. Whether a creature or an organ is made of carbon-based organic material, or of silicon or other inorganic materials does not matter. What is important is the complexity of the result: is the structure able to learn, to self-modify, to respond dynamically to changing input?

We can say that humans are especially subtle, complex, and dignified machines. Or we can say that humans are not machines though composed of them. The facts matter more than the words we use, though words bring connotations that have power over attitudes. The crucial point is that humans and machines are not opposites. As machines continue their rapid evolution and as we increasingly tinker with our bodies and brains to repair and improve them, this fact will become ever more obvious. This realization will open the way to improving ourselves by upgrading the machine components of humans.

A human brain reasons, creates, feels, plans, calculates, appreciates. These properties of living, conscious beings result from the immensely complicated connections among our 100 billion neurons. Any individual neuron displays no consciousness, no reasoning, no creativity. Even more clearly, the molecular and atomic parts making up the neurons do not display these properties. The neuron is a biochemical machine. We should therefore be able to replace or supplement biological neurons with synthetic neurons while retaining the same functions. We should be able to repair damaged neural tissue with implants. We should be able to add memory, processing power, and new abilities by supplementing natural neurons with synthetic neurons. In principle, we could replace all our neurons until we had an entirely synthetic or prosthetic brain. If the new neurons worked similarly to the old, and were connected up the same, we would never notice a difference. (Except that we might be able to process information faster and would not slow down with age.)

Since misunderstanding is easy, I want to stress here that I claim only that humans are composed of mechanical parts, not that we ARE machines. In some sense we can reasonably say that humans are machines, since we are entirely composed of mechanical parts and we have no sound reason to believe in non-material parts of us. But if we were to describe ourselves as machines, we would be giving wide latitude to the meaning of the term. "Machine" usually connotes something rigid, unvarying, planned, and programmable. Since we think of ourselves as free, responsible, moral, rational beings, we may reasonably restrict the term "machine" and refuse to apply it to ourselves. This is the option I favor. However, we must then accept that our computers and robots and electronic ecosystems can also cease to be machines in this sense. Whether something is a machine depends on the complexity and subtlety of its function, not on what substance it is made of. Simple biological entities such as enzymes and viruses certainly count as machines, while an advanced artificial intelligence would not.

Although I see no decisive metaphysical objection to describing humans as machines, the connotations of the word convince me that we would be wise not to apply the term to ourselves (nor to our mind children—the artificial persons we will eventually create). Since connotations of terms affect our attitudes, we should avoid labeling persons with terms that may encourage us to regard them as tools, as objects, or purely as means to our ends. Machines are usually understood as arrangements of parts that perform useful work, transforming mechanical energy into more useful forms. Machines come in various forms, from simple levers and screws to engines (machines that transform heat and other forms of energy into mechanical energy) and computers (machines that process information). Obviously a vast gulf exists between a crude lever and a supercomputer running millions of lines of code. If both can be called machines, we might stretch the term to include humans. But because "machine" implies a tool to be used for external purposes, I prefer to refrain from attaching this term to persons.

To further locate my position among the alternatives, I suggest we can distinguish at least four views on the relation of humans and machines:

View 1: Humans are machines. This appears to be the position of Daniel Dennett, according to the June 1997 FleshFactor interview, and, earlier, that of Lucretius and La Mettrie. According to this view, not only do humans contain machines, they are machines.

View 2: Humans have a dual nature: a mechanical physical body and a spiritual body or soul that is entirely non-mechanical. I take this view to be scientifically and philosophically indefensible, although it remains extremely popular.

View 3: Humans are mysteriously non-mechanical. They (or at least their brains) have essential properties that are not mechanical at any level of understanding and that cannot be recreated in any devices we might construct (i.e., artificial intelligences). These "New Mysterians" (or modern-day vitalists), as they have been called, include Roger Penrose and John Searle.

View 4: Humans are composed of mechanistic parts but the arrangement of these parts produces emergent, non-mechanical properties. The non-mechanistic properties would not exist but for the mechanistic parts comprising them, but we cannot fully understand the emergent properties solely by examining the mechanistic level. This is my view.

The humans-as-machines metaphor, though superficially scientific (in stressing the absence of supernatural elements), strikes me as outdated. In economics, a human science, recognition of the inappropriateness of machine language has been spreading. For decades economists talked of the "engine" of the economy, of "priming the pump", of "fine-tuning", and so on. The Austrian school of economists first challenged this approach by emphasizing the market as a discovery procedure (especially in the work of Friedrich Hayek). More recently a bionomics model has taken hold in which the economy is understood as an ecosystem, one best carefully nurtured and fertilized rather than centrally controlled like a machine.

While William Paley, in his Design Argument for the existence of a god, portrayed the world as a gigantic mechanism designed for a purpose, evolutionary theory has revealed a world ordered by distributed processes over millions of years. Though each organism in the world can be broken down into mechanistic components (bones, ligaments, cells, organelles), the principles embodied in the ecosystem as a whole, like those embodied in the economy, have little in common with the working principles of paradigmatic machines.

The statement "humans are machines" cannot decisively be declared true or false. We can draw no sharp line between machines and complex systems that are not machines, just as we cannot draw a sharp line between life and non-life or between night and day. I stand on the side of those who prefer to say humans are not machines because I see us moving ever further from rigidity, inflexibility, and mindlessness. If the term "machine" ever loses these connotations, I will then see no reason to object to describing humans as elegant organic machines.

I started by claiming that technology can allow us to become "more human than human". Now I can clarify that claim, showing how the understanding that we are made of mechanical parts is a cause for optimism and humanism (or transhumanism) and the fostering of freedom, not for fear and nihilism or a policy of social control. Although humans have evolved more complex brains than any other animal, we have still not fully escaped our biological-machine heritage. Humans are too easily manipulated. We have little control over our emotions, our moods, our personality. We respond to external influences and to internal chemical, hormonal, and neural events, often without much consciousness or choice. While more self-determining and self-aware than other creatures, humans still show clear signs of being mechanical and other-determined. The whole appeal of seeing that we are a complex, functionally interrelated collection of mechanical parts is that it opens up an appealing prospect: that technology will allow us to modify our nature, to alter ourselves, to augment and shape ourselves according to our values.

Advanced technologies such as genetic engineering, smart drugs, prostheses, and soon brain implants (neuroprostheses) represent the next step in the long march of evolution. Evolutionary processes have brought order out of chaos, extropy out of entropy. Extropy is the extent of a system’s intelligence, information, order, vitality, diversity, and capacity for improvement. Extropy has (so far) reached its peak on this planet in human beings. The original physical processes that led to stellar and planetary formation gave way to biological evolution. Biological evolution has yielded its primary place to memetic and technological evolution. As the extropic processes of evolution have proceeded, the complexity of nervous systems has grown. The purely chemical responses of single-celled creatures led to tropistic behavior. Tropism became supplemented in animals by instinctual behavior stimulated by integrated perception and recognition. With the advent of our species new possibilities for flexible behaviors arose thanks to our capacity for conceptual thought, for rationality, creativity, self-restraint, and self-transformation.

Properly used, technology will not mechanize us but will expand our freedom as we move from human to posthuman, continuing the extropic evolutionary process. The scientifically untenable ideas of dualism and vitalism have led us to the false belief that freedom is all or nothing. In Descartes’ version of dualism, all animals are merely machines, unable to make choices. Only humans, imbued with a spiritual substance or soul, have freedom and responsibility. Apart from pushing animals outside the realm of moral standing, this view was doubly unfortunate. Those who believe in a soul will be unable to see how alterations to the physical constitution of persons could increase freedom. On their view, our uniquely human freedom and rationality reside outside the material realm. And if we lose our belief in a supernatural realm, the dualist legacy may lead us to abandon any conception of genuine choice, freedom, and responsibility. (Philosophers refer to the view that physical causation and freedom cannot coexist as incompatibilism.)

Similarly, vitalists (whether the nineteenth-century variety or today’s New Mysterians), by locating human freedom in a mysterious vital force, will not understand how alterations to our physical structure could increase our freedom. If our freedom depends on this vital force, we can only lose it by technologizing ourselves, such as by implanting synthetic neurons or using prosthetics.

We may do best not to claim that humans ARE machines. Yet understanding and accepting that we are composed of an arrangement of mechanistic parts provides a key to our further demechanization. Being aware of our origin in mindless nature, we can see that we have not completed our evolutionary journey from unconsciousness and rigidity to maximum freedom and self-definition. With this awareness and by applying our burgeoning scientific knowledge and technological prowess, we can hasten our development. We can bring about the triumph of consciousness over mindlessness.

Anyone who declares that humans are today totally free beings should consider why the compulsive eater does not stop, why the addicted smoker does not quit, why the depressive does not snap out of it, why the procrastinator does not change his behavior, and why all of us find it so hard to rewrite our behavioral programs. While we all may have more range of choice than we are usually aware of, still we cannot choose to be who we wish to be. Our emotions resist us. Anger, hostility, envy, lust, unhappiness, anxiety, fear, excitation, lethargy, all dominate us to varying degrees. Cognitive techniques give us a measure of influence, yet cannot easily shift ingrained habits of thought or powerful moods. Our childhood experiences and our genes largely shape our personality. Our hormones and the structure of our brains set limits to our choices of how to feel, how to behave, how to think, and who to be.

Consider a specific example of how an advanced neurotechnology could allow us more choice over our emotions. Our brains evolved in such a way that emotional centers like the amygdala strongly influence the cortex. Because of the plentiful pathways running from the amygdala to the cortex, our feelings dictate much of our attention and shape our thoughts, whether we like it or not. We have few connections going the other way, from cortex to amygdala, which makes it hard to shut off emotions once they are activated. If we could add new pathways from emotional to cognitive centers (accelerating an evolutionary process that has already given us more such connections than other mammals have), we could acquire deep awareness of our emotions and the ability to modify them. We would then free ourselves from unchosen emotional domination, achieving a better integration of reason and passion.

Rewiring the brain is not in our immediate future (though it will come much sooner than most scientists expect). Other technologies are already starting to expand the range of our choice over the self we want. The first to benefit from these technologies are the emotionally and cognitively impaired. Drug therapies, beginning with crude anti-depressants and anti-anxiety drugs and now more refined, targeted drugs that selectively affect neurotransmitter sites, have allowed millions to have more say in how they feel and act. Nootropics or "smart drugs", though still in their early stages, can improve cognition in the elderly and others with cognitive deficits, and sometimes even in young, healthy persons. Gene therapy is rapidly becoming a practical tool for altering somatic and neurological functions. Retinal and cochlear implants have begun to restore perceptual abilities. Neuroprostheses, though further in the future, have enormous potential to take back control from the shaping forces of evolution and upbringing.

With all these technologies, whether chemical or genetic modification or implants, our concern should be using them to expand our range of choice. Genetic engineering and mood drugs could be used to narrow our abilities, to create happy slaves, or to pacify ourselves. These dangers are real but must not deter us from developing technological means of freeing ourselves from our still half-mechanistic nature. In developing means of modifying ourselves we should seek to give ourselves more choice over how to feel and who to be, not to use these means to push ourselves and others into specific functions or ways of being.

The increased freedom of self-creation, the augmented capacity for self-definition, will mean an expanded arena of personal responsibility. We will have less and less room to blame our troubles on our genes, our parents, our hormones, our society. Many will feel uncomfortable with this level of choice to self-define. I look forward to it as the next phase of our development away from unconscious nature towards a posthuman condition of self-creation or automorphing.
Finally, I note that the idea of man-as-machine has sometimes been used to promote social engineering: the pushing of individuals into centrally determined positions and roles. B.F. Skinner’s behavioristic views and his horrendous portrayal of what he took to be a utopia warn us to beware the machine metaphor. I have granted that we could describe ourselves as especially elegant, sophisticated machines. To the reasons I gave for refraining from this usage we can add the danger that the metaphor will give impetus to today’s social engineers. By recognizing that we have mechanistic components but that our goal is to carry ourselves further from our machine heritage, we can resist those who would make us tools to their ends. And we can affirm our own determination to treat other humans as ends in themselves. We are composed of mechanistic parts but possess emergent properties of free choice, self-ownership, and personal responsibility.

My message, then, is that we should grant the obvious truth in the assertion that humans are machines. We cannot reasonably regard humans and machines as utterly opposed. By understanding our origins and underlying nature, we can accelerate our development from rigidity, mindlessness, and external determination to flexibility, mindfulness, self-determination, and self-definition. We can increasingly choose a self, become artists of the self. The age of the automorpher is arriving.
Max More, Ph.D.
4607 Lyra Circle, Austin, TX 78744, USA.
Telephone: 512-263-2749

Rethinking Our Mental Framework


If the Human Genome Project has taught us anything, it is that we humans are, genetically, virtually identical: our DNA is more than 99.9 percent the same. Yet despite this underlying sameness, each of us is different—in appearance, in chemistry, and, most importantly, in our minds. In the same way that rows of houses built from the same basic plan can differ in details, furnishings and decorations, so human similarity must give way to individuality, to each of us having a mind of our own.

How does this occur? What is the process that makes each human a unique individual—largely the same, yet different from every other? Speculations arise from all corners, running the gamut from physics to metaphysics. Some search the commonly accepted view of our evolutionary history for an answer. Did something unusual happen to Homo sapiens between the days of hunting and gathering across the plains and the ordering of take-out meals via the office telephone?

Or does the answer lie within the equally mystifying “soul,” thought by many to be a self-existent and self-conscious essence imparted by God at man’s creation? (For a discussion of the biblical meaning of soul see “After Life” in the Spring 2003 issue of Vision.)
While the Bible doesn’t attempt to provide a scientific explanation for human individuality, it does supply a vital dimension to our understanding of the subject. The apostle Paul, writing to the first-century followers of Christ in Corinth, also used the analogy of buildings as he pointed out a key aspect of our individuality: “You are God’s building. . . . As a wise master builder I have laid the foundation. . . . But let each one take heed how he builds on it” (1 Corinthians 3:9–10, emphasis added).

Personal responsibility for building your “house” on a right foundation? The idea seems a million miles removed from the science of the mind and the brain. The two are actually much more closely related, however, than one might expect. A closer look at the science may help shed light on a key spiritual concept.

CONSCIOUSLY HUMAN

There is ample reason to believe that the physical structure of the brain is important in human individuality. Although research has not yet found any cellular, molecular or physiological difference between human and animal brain structure, neurobiologists are making significant headway on the question of consciousness and human uniqueness.

Of course, science is foremost a materialist enterprise and is therefore limited to exploring the physical, observable aspects of mental function. A spiritual dimension is not within the realm of scientific hypothesis. What neuroscience is revealing, however, is that the physical brain possesses the unique capacity to integrate information in a way that generates self-awareness and individuality. Who we are, it is telling us, is more a matter of the choices we make than of whatever instincts we possess.

Professor Joseph LeDoux of New York University’s Center for Neural Sciences outlines much of our current understanding in “Synaptic Self: How Our Brains Become Who We Are” (2002). He believes brain research is showing that the physiology of the brain itself, the synergy of synaptic connections between neurons, produces human self-awareness, the awareness of being a person.
In this LeDoux is amplifying an “astonishing hypothesis” postulated a decade ago by Nobel laureate Francis Crick and his colleague, Christof Koch; namely, that human behavior and consciousness seem to find their physiological foundation within the network of connections between brain cells. Writing in The Astonishing Hypothesis, Crick extrapolated from research concerning human visual perception to conclude that human individuality is found in the “complex, ever-changing pattern of interactions of billions of [nerve cells], connected together in ways that, in their details, are unique to each one of us.”

While acknowledging that consciousness is an unexplained property, Crick anticipated that further research would pinpoint the physiological seat of free will, possibly in neural tissue located just behind the forehead. In keeping with his materialist viewpoint—that everything stems from matter and is therefore physical—he found it astonishing that anyone would think otherwise. The idea that science would not ultimately dispel what he called “fuzzy folk notions” created by nonscientific thinking was, to him, ridiculous.
It comes as no surprise, then, that Crick would pronounce a person to be “no more than the behavior of a vast assembly of nerve cells and their associated molecules.”

WONDERS OF SELF-AWARENESS

While the anticipated tissue or “module” of free will remains undiscovered, a more accurate understanding of how information is integrated throughout the brain is emerging. Taking the familiar “I think, therefore I am” statement from René Descartes to its logical next step, LeDoux seeks to explore the complexities of “How do I know I think?” At the heart of this elusive quest is the concept of “self.” All animals have a self, but only some are self-aware, says LeDoux. “The existence of a self is a fundamental concomitant of being an animal,” he believes. “All animals, in other words, have a self, regardless of whether they have the capacity for self-awareness.”
LeDoux describes this capacity for self-awareness as the integration of what he calls the implicit self, the inner, unconscious workings of the brain; and the explicit self, our conscious knowledge of self.

Through heredity and experience, every human brain becomes uniquely “wired” (see “The Synaptic Connection”). This “wiring” allows our minds to physically function; we perceive, integrate, store and retrieve, all without realizing we are doing it. Because the actual processing of these myriad synaptic connections and the memory they store is unconscious, LeDoux calls it implicit, a hidden process. It is the “self” LeDoux believes is found in all animals.

How we consciously describe or see ourselves, on the other hand (our understanding of who we are), is our explicit self. It is our personal vision of self, created through what LeDoux labels “working memory.” It is here that sensory information is integrated and analyzed in conjunction with memory—where the implicit mind meets the world. The result is conscious awareness and the capacity to connect the present with the past, which is what defines human decision making.
The synaptic chatter that results in our conscious view of self is bewilderingly complex. Imagine the neurons of the brain as all the cell phones across the planet. Then think of each phone sending a tone to every other one at the same time and the result being not an atonal squawk but a symphony.

Research concerning the processing of sensory input in conjunction with short- and long-term memory—all linked through synaptic space—shows that all areas of the brain are engaged simultaneously. The human brain averages only about 1,400 cubic centimeters, or 1.4 liters (the equivalent of six or seven cups of coffee). Yet the amount of synaptic traffic constantly traversing that space is enormous. Like the imagined symphony, we are truly greater than the sum of our (mental) parts.

How the explicit self actually arises from the implicit remains a mystery, but LeDoux offers his best estimate: “Life requires many brain functions, functions require systems, and systems are made of synaptically connected neurons. We all have the same brain systems, and the number of neurons in each brain system is more or less the same in each of us as well. However, the particular way those neurons are connected is distinct, and that uniqueness, in short, is what makes us who we are.”

YOU ARE WHAT YOU THINK

As complex as the science is, the conclusion is rather obvious: we are the product of our thoughts. “If a thought is a pattern of neural activity in a network,” explains LeDoux, “not only can it cause another network to be active, it can also cause another network to change, to be plastic.”

This plasticity can be both a frightening and a heartening scenario. How we choose to behave and think, and what we choose to view and take into our mind, affects not only our present reality but also (implicitly) the wiring of our brain. We have the capacity to condition ourselves: our character is under our own control. “With thoughts empowered this way,” notes LeDoux, “we can begin to see how the way we think about ourselves can have powerful influences on the way we are, and who we become.” In other words, science is beginning to recognize that we are, to a greater or lesser extent, personally responsible for who we are and what we become. This is reminiscent of something Solomon said nearly 3,000 years ago: “As [a person] thinks in his heart, so is he” (Proverbs 23:7).

A CURSE OR A BLESSING?

LeDoux notes that “one’s self-image is self-perpetuating.” Some, however, find danger in this self-perpetuation: when things go wrong, people often go from bad to worse. Some feel that our individuality is what has led to the strife and conflict evident throughout human history, and they labor under a sense of hopelessness as a result. Is the uniqueness of “self” effectively a curse? Will it serve only to create barriers between people and all other life?

Such is the opinion voiced by Pulitzer Prize winner Annie Dillard. In discussing the unique features of human consciousness, LeDoux quotes from Dillard’s Pilgrim at Tinker Creek: “It is ironic that the one thing that all religions recognize as separating us from our creator—our very self-consciousness—is also the one thing that divides us from our fellow creatures.”

This is an unfortunate and mistaken conclusion, however. Even as science delves more deeply into determining what it is to be human physiologically, a greater question remains: Is there purpose in this unique human malleability that makes us so different from the animals? Although the materialist approach assures us that evolutionary processes are responsible for our mental structure, many biologists find no satisfactory Darwinian explanation for how the human mind became unique among mammals. Why did these functions evolve? This is a question that LeDoux recognizes “concerns historical facts that are not easily verified scientifically.”
The answer requires rethinking Dillard’s lament. Are the differences and unique qualities that separate humankind from the rest of creation actually a curse? Or is our conflicted existence the result of something else? The truly astonishing hypothesis is that these qualities of consciousness, self-awareness and plasticity in fact make it possible for humans to form a right relationship both with the rest of creation and with the Creator.

It is heartening to understand that the human mind has the capacity to change. We are not fated to a hard-wired future or inescapably doomed to a downhill run. We experience, learn and act. We have the capacity to evaluate the consequences of our behavior. LeDoux recognizes that our physiology does not condemn us. “[It] doesn’t mean that we’re simply victims of our brains and should just give in to our urges,” he says. “It means that downward causation [the cascade from thought to action] is sometimes hard work. Doing the right thing doesn’t always flow naturally from knowing what the right thing to do is.”

While the unconscious processes underlying change may be unknown to science (and may well occur in ways that lie beyond the ability of the sciences to dissect), the inescapable conclusion is that we are not organisms that live by instinct. We are born not knowing who we are; we learn. And in learning, we begin to make choices that will establish our character and our values. Indeed, a successful future hinges on the development of sound character. But this can be done only on an individual basis.

The God-given capacity to change our character from the inside out is not divisive. It is not a curse. It is, in fact, our Creator’s greatest gift. The Bible refers to this kind of change as repentance: recognizing where we are wrong and choosing, with God’s help, to behave differently. The apostle Paul wrote that it is the goodness of God that leads us to repentance (Romans 2:4).

God long ago gave humanity a set of laws to act as a regulatory system against which to evaluate our choices. Those laws were intended to be internalized in each human mind (Deuteronomy 6:6–8), enabling us to be individually responsible for our actions. And we each will reap the results of the choices we make. As the prophet Ezekiel wrote, “The son shall not bear the guilt of the father, nor the father bear the guilt of the son. The righteousness of the righteous shall be upon himself, and the wickedness of the wicked shall be upon himself” (Ezekiel 18:20).

Writers of the Bible were not neuroscientists; indeed, they had little if any physiological understanding at all. But they conveyed a powerful message regarding how the moral framework of the mind was to be established. When we, as individuals, begin using the standards of our Creator in measuring and aligning the foundation of our character, we will find a contentment that is otherwise elusive. Adherence to those standards will result in the building and maintaining of well-furnished and harmonious mental homes, each individual and unique, yet each compatible and at peace with all others.

DAN CLOER