Friday, December 20, 2013

Eight Years Old and Counting

by Joel Marks
Originally published in Philosophy Now magazine, issue no. 46, May/June 2004, p. 45
Any one thing can lead us to all other things. For me at one time the "one thing" was vision. As I have mentioned in a previous column (in issue no. 42) about the late perception psychologists J.J. and Eleanor Jack Gibson, the personal discovery that there are not only objects in the world that I see (myself included) but also my seeing them ultimately led to my becoming a philosopher. But before I had even been introduced to the field of philosophy, I was captivated by vision. Perhaps the first significant manifestation was my hobby of black-and-white photography in college, under the tutelage of a rooming-house mate, Pat Lau. Another housemate, Chip Porter, helped me build a darkroom in a corner of my living room. Then came my undergraduate studies with J.J. Gibson and his graduate students, and post-college I was hired to teach courses on visual psychology at an art school, where the director, Bill Collins, emulated the Bauhaus. Only after all of that did I consider studying philosophy. In the interim, vision had become a passion.
  I fancied myself a phenomenologist in that I cultivated visual experience per se. Perhaps a better label would be: visual naturalist -- a collector and cataloguer of specimens in the visual world, which I would record in a diary of observations.  And while I adored the work of the Belgian physicist, M. Minnaert, who did the same with optical phenomena, such as rainbows and halos, my quarry tended to be phenomena that could not be explained by optics alone (if at all!). A typical entry:
  “Wearing a brown shirt ... but I noticed the very thin rim of it rising above my beige sweater looked PURPLE. I checked the light fixture on the ceiling: it seemed to be regular incandescent/exposed bulbs. So then I performed one of those amazing, delicious life experiments (like being in a dream where you know you can fly): I pulled the sweater downward ever so slowly ... and RIGHT BEFORE MY VERY EYES ... the purple turned to brown!!! I repeated, up and down, several times. Albers [Josef Albers, a modern artist who was famous for his studies of color, including contrast phenomena of this kind]. The watched pot [i.e., I had, in effect, witnessed the magic moment of boiling, or followed the rainbow to its source].”
  Optics plus physiology, you say? Perhaps. But my interest lay not so much in explanation as in implication (this being the nascent philosopher in me). I also simply indulged in the wonder of it, so in a way I positively did not want it to be explained! Over time the observations became more and more fantastic. There is magic in this mundane world of ours, if you take the time to look at it and reflect (a nicely ambiguous word under the circumstances). I hope to write about these experiences at length some day, but for now let me cut to the quick.
  I found that there were two poles of visual phenomena that were instructive in opposite ways. First were those which were commonplaces of veridical recognition, my favorite being the wind. As my stepson Sean had exclaimed one day when he was eight years old while looking out the window: "Look at the wind outside. Man!" It was plain to him, as it had always been to me, that the wind is visible. Yet when I entered the scientific circles of perception psychologists, I discovered that this was almost universally denied (except by Gibsonians). Why? Because the prevailing dogma was that anything which is visible must have color and shape; the wind having neither, its existence cannot be seen but must be inferred from other things seen which do have color and shape, such as bending branches and flying hats.
  Well, why let obvious facts get in the way of a good theory, eh? Ridiculous! Thus, I was developing my first skepticism of the "experts" (like a good Socratic) ... and of scientific psychologists in particular (as the psychologist Carol Gilligan was doing from her exposure to their equally ludicrous male biases). But what came as an even more startling revelation was finding that laypersons had also adopted the scientific viewpoint. There is nary an adult of my acquaintance who retains the vision of an eight-year-old ... and I'm not talking about physiology! This is a case of the emperor's new clothes: "You cannot see the wind, my child. Grow up!" But it is adults who deny their own senses. Just as it is adults who tell children fairy tales and expect them to believe them, even when they become adults themselves, as I continually discover to my amazement. (I spend much of my life being amazed, as you can see -- sometimes pleasantly, as by visual phenomena; other times unpleasantly, as by human stupidities.)
  But I spoke of two poles: another kind of visual experience presents us with clearly illusory phenomena, such as the bent stick in water (that isn't really bent). These too are commonplaces, but, despite sometimes giving delight (as when you shake a lead pencil in just the right way to make it appear rubbery), their philosophical "lesson" is usually completely overlooked. My favorite of this type is a wire cube that I keep in my office (I am looking at it right now), which is an absolute chunk of the Twilight Zone -- a true crack in the cosmic egg, to use Joseph Chilton Pearce's evocative phrase -- the looking glass I can walk through any time I please (and not just when I happen to dream of doing so). What this cube does, you see, is rotate ... except, it's not really rotating. Instead, the turning of my head as I gaze at it in a certain way (namely, by Gestalt-shifting it like a Necker cube) is translated into the cube (analogous to the way the earth's rotation is translated to the starry sky)*.
  Please, do visit me some time and I shall show you, because it is boggling. But what does it all mean? What fascinates me is that this rotating cube -- which is not "there" -- is sitting on a cabinet, which decidedly is "there" ... or is it? Doesn't this phenomenon prove that all we ever see is a kind of waking dream? I know that my "reasoning" herein has been quite "loose" ... that the very way I have phrased my account begs all the questions ... and that I have contradicted my own more "mature" musings about materialism in this very column (see issue no. 44). But ... I don't want to stop being an eight-year-old!

NOTE
* Thus, I experienced a "reverse Buckminster." Buckminster Fuller used to claim that he had come to experience the apparent motion of the heavenly bodies across the sky as the actual motion of the rotating Earth. What I experienced was the actual rotation of my head as the apparent motion of the cube.

Monday, May 13, 2013

Belief

by Joel Marks
Published in Philosophy Now, Issue No. 70, November/December 2008, p. 39.

"JOOOOOOOOELLLLLLLLLL?!" The shrill call of my name made me jerk the phone from my ear. In that instant I thought, "Mom." I was flabbergasted: My mother had been dead for five years.
As the person on the wire continued to speak I had time to question myself. Could the death of my mother, even the sense that so many years had passed by, be only something I had dreamt the night before? Was it just that I had had no occasion to doubt it since waking up this morning, until this telephone call jarred me back to my senses and the realization that my mother was still alive?
 I know there was a time in my life when I believed I could fly, for I remembered having done so. The image was of my body parallel with the ground, close to it but not touching it. My arms were crossed like a Cossack dancer's, and I was moving forward steadily, following the path of a sidewalk near my home. In my daily life I felt I had this power, although it wasn’t exactly clear to my child’s mind when I could exercise it. Then one day I realized it must have been something I had dreamt; so it was not a memory – or it was a memory, but of a recurring dream, not something that had actually happened. This was the kind of realization one experiences when, in the light of increasing knowledge, the belief in Santa Claus evaporates like dew at daybreak.
A few more words from the person on the telephone dispelled my current confusion. It was Svetlana, a new acquaintance. The way she had spoken my name had been her enthusiastic greeting, probably also prompted a bit by nervousness because of the novelty for her of speaking English without seeing the person she was speaking to. Now that I thought about it, her age was not far from my mother's when I was in college and would receive calls from her. That was how they would begin: "JOOOOOOOOELLLLLLLLLL?!"
The experience of Svetlana's call served as a reminder to me of the fragility of belief. For one second I had believed my mother was still alive. The belief was patently false, but I was taken in by it all the same. It is not an uncommon experience, is it? I’m sure you can empathize. Here is another example: How many times have I found myself gripping the chair when (for no particular reason) I have fallen into a momentary reverie of being in a plummeting airplane. I am feeling real fear. I believe I do believe at those moments that I am in an airplane. I am not asleep and dreaming; nor are my eyes even closed. It is just that there has been a shift of belief brought on by an image in my mind.
Yet, at other times, belief is recalcitrant. If somebody were to hold a gun to your head and demand that you believe in Santa Claus or he would shoot, could you do it? I doubt it. But that’s not because there is no Santa Claus; it’s because you don’t believe there is. If you were a Creationist, you would be just as much at a loss to conjure up a belief in evolution under the gun.
The startling revelation is that the entire world one inhabits is in some significant sense not the world that exists but the world one believes to exist. Everything that we know is first of all something that we believe, and in the end is that as well. In other words, what we know is, for all we know, something we only think we know. Our belief may be more or less justified, but even our deepest conviction is still a belief. And the hallmark of a belief, unlike a fact, is that it could be mistaken. That is the problem of skepticism: if beliefs are only buttressed by other beliefs, how can we know we have anything “right”? It is humbling, then, to realize that one's mind has a mind of its own.
But skepticism is, in the end, just a bugbear, for reasons that Wittgenstein explained in philosophy and sociobiologists have explained in science: We must be getting it all basically right or we couldn’t function -- we wouldn’t even be here. Indeed, for all the pleasure there is to be had from pondering the occasional lapse from perfection, such as mistaking Svetlana for Mom, the educated mind takes an even greater delight in understanding the inevitability of our exquisitely fine-tuned cognitive faculties. As others have pointed out: The question was never, “How could I have made such a mistake?” but, “How do we get it right so much of the time?” And now, amazingly, we know the answer: natural selection. What is more, the answer, now that we know it, seems totally obvious.
            Descartes’ intuitions were sound when he “forgave” the occasional illusions to which we are liable by pointing out that we also have the ability to disabuse ourselves of them (although he misattributed the source of that ability to the goodness of God rather than to the even more astonishing, because self-explanatory, mechanism of evolution). The late perception psychologist J.J. Gibson further developed this idea when he argued that illusions typically occur only under very limited or artificial circumstances, such as in the psychology laboratory, and are quickly remedied. Hence my swiftly figuring out that the person on the telephone was not my mother but Svetlana. 

Friday, May 10, 2013

Desire – Thirty Years Later

by Joel Marks
Published in Philosophy Now, Issue No. 93, November/December 2012, p. 44.

In 1982 I had my first “major” philosophical publication, a journal article entitled “A Theory of Emotion” (Philosophical Studies vol. 42, no. 2, pp. 227-42). My thesis was that the new cognitivist revolution in the study of emotion, associated at the time with the philosopher Robert C. Solomon, needed a supplement, namely, desire. (O. H. Green had reached the same conclusion independently.) Solomon, and even more explicitly, my target in the article, William Lyons, held that emotions are essentially a type of belief. This was a welcome change from the previously prevailing view of emotions as “brute feelings.” But I argued that this was not enough, for one could believe, say, that one was about to be mauled by a rabid dog, and yet not be in an emotional state unless one also possessed a desire not to be so mauled.

This insight had no doubt been prompted by my dabblings in Buddhism, for the Buddha preached that all suffering comes from desire. The Buddha’s recommendation was that we therefore cease to desire. I defended this thesis in an article on “Dispassion and the Ethical Life” in a volume I co-edited with Roger T. Ames on Emotions in Asian Thought (Albany: SUNY Press, 1995). But to deflect the obvious objection that eliminating desire would be throwing out the baby with the bath water – since what would be the point of living at all if we desired nothing? – I analyzed the Buddha’s notion of desire as emotion, and emotion in turn as involving strong desiring.

Subsequently I saw an opening to the study of motivation, for it seemed natural to extend the belief/desire analysis to what moves us to action. And it is not only emotions that do this but, more generally, what might be called attitudes. I analyzed these as belief/desire sets, but now without the “strong desire” qualifier, since one need not feel deeply about something in order for it to produce behavior (or, for that matter, to be a “mental feeling”).

But now I came up against a distinction, first brought to my attention by Wayne Davis in an essay he wrote for my edited volume on The Ways of Desire (Chicago: Precedent, 1986). For it seems that “desire” is ambiguous between two quite distinct psychic phenomena. On the one hand desire is simply synonymous with motivation, so to say that one was moved by desire is just to say that one was motivated. On the other hand desire is a specific type of mental state, on a par with belief, such that a particular belief and a particular desire could jointly constitute a motivation (or a feeling). The mental-state desire would be desire proper or genuine desire, since the other type of desire is only another name for motivation.

An example of desire (proper) is wanting to go for a walk for its own sake. An example of motivational desire is wanting to go for a walk because you believe it will help you lose weight and you want (desire) to lose weight. But here again the latter desire (to lose weight) is ambiguous, since you might simply wish to lose weight or you might be motivated to lose weight by some further belief/desire set, such as that you desire to date someone and you believe s/he will only date you if you lose weight. And so on. The thesis I defended in another essay in that same volume – “The Difference between Motivation and Desire” -- was that, even though motivation as such is not the same as genuine desire, a genuine desire is always involved in motivation, simply because the regress must stop if there is to be any action at all.

I am no longer so sure about that last thesis. Bill Lycan, on behalf of his graduate seminar a few years ago, planted a seed of doubt in my mind. But even if we could be sure that “genuine desire” is an essential component of all of our motivation, we would still want an account of what it is. More specifically, it has always been a teaser to tease apart desire from belief. The best accounts I’ve seen, quite different from each other, are by Dennis Stampe (in my desire volume) and, more recently, Timothy Schroeder (in Three Faces of Desire from Oxford).

Despite my uncertainty about what I am even talking about, however, I remain a fan of desire. In fact, my interest in it has returned with a vengeance after a long hiatus. This time I am taken with desire’s role in values. In fact, I have quite given up on objective value as anything but a figment, and see all value as subjective – specifically, as a function of our desires.

I do still find room for more than one legitimate category of value, but, instead of objective and subjective values, there are intrinsic and extrinsic (or instrumental) values. The latter pair corresponds to intrinsic and extrinsic desires. So for example, to want to go for a walk for its own sake is to value walking intrinsically, while to want to do so for one’s health is to value walking instrumentally. What I no longer accept is that in addition to these there is such a thing as objective or inherent value, such that, for example, going for a walk might be “good in itself.” In a word, I no longer recognize the reality of value that is independent of desire.

Therefore I now consider desire to be the key to ethics, and so it becomes incumbent on me to try once again to figure out what the hell desire is. For starters I think I will pick up a fading offprint of an article from 1982 entitled, “A Theory of Emotion”!


Joel Marks is Professor Emeritus of Philosophy at the University of New Haven and a Bioethics Center Scholar at Yale University. He continues the tale of desire in his trilogy: Ethics without Morals: In Defense of Amorality (Routledge, 2013), It's Just a Feeling: The Philosophy of Desirism (CreateSpace, 2013), and Bad Faith: A Philosophical Memoir (CreateSpace, 2013).

Thursday, May 09, 2013

Pons Asinorum

Copyright © 2002 by Joel Marks
Originally published in Philosophy Now magazine, no. 35, March/April 2002, page 48

Three travelers seek lodging for the night. They come upon a pension that charges 10 euros per person. It turns out that there is only one room available, but they don't mind sharing; so they pay the clerk 30 euros. When the proprietor returns, however, she decides that the guests should be given a discount for having to bunch up, so she summons the bellhop and hands him 5 euros to refund to them. Not being a completely honest fellow, the bellhop pockets two euros; this conveniently leaves one euro to be returned to each guest. Therefore each guest has now paid nine euros, for a total of 27 euros. But 27 plus the two in the bellhop's pocket = 29. What happened to the thirtieth euro?

When I first heard this puzzle, I was bedazzled. It seemed so simple; yet no matter how I turned it over in my mind, I could not come up with a solution. I even entertained the hypothesis that I must be dreaming, or under the influence of Descartes' evil daemon, "who has directed his entire effort to misleading me, [for] how do I know that I am not deceived every time I add two and three or count the sides of a square or perform an even simpler operation, if such can be imagined?" (Meditation One).

Soon, however, I came up with this surprising conclusion: There is no thirtieth euro! The travelers ended up paying 27 euros. The proprietor had 25, and the bellhop kept two. That's it. And yet ... I still could not shake from my head the notion that there was a missing euro. So it occurred to me that the puzzle could be conceived as a kind of illusion -- a calculative illusion, we might call it. An analogy can be drawn to a visual illusion, like the bent-stick-in-water, which is not really bent, but, even when one is fully knowledgeable of its straight shape, continues to appear bent at the waterline (due to the refraction of light). Just so, I now knew there was no thirtieth euro, but I couldn't dispel the mental impression that there was.

Finally I was able to dispel even the illusion. This came about precisely because of its refractoriness. I could not rid my mind of that thirtieth euro; there had to be a way to account for it. And so there is: For at the end, the proprietor has 25 euros, the bellhop two, and the guests three. Voila: 30 euros! So NOW the puzzle became: Why had there seemed to be a puzzle in the first place? Indeed, for some of my more logically adept friends and colleagues, there had been no puzzle about the 30th euro, and they were only puzzled about what was puzzling me. I can still experience a kind of Gestalt switching (as when viewing the picture of a vase and two facial profiles) between my puzzlement and my lack thereof. What makes for the difference?

The answer I have come up with is that this "puzzle" arises from a simple "mental mishearing": Where the situation at the end is that the guests have paid 27 euros, one might inattentively "hear" this as their now possessing 27 euros. Then indeed there would be a mystery (for the bellhop only possesses two, so where's the thirtieth?). But in fact at the end the guests only retain three euros of the original 30.
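To make the accounting fully explicit, here is a minimal sketch in Python (the variable names and the ledger framing are mine, offered only as an illustration of the puzzle's arithmetic):

    # Track the original 30 euros through the whole transaction.
    guests_paid = 30                                # handed to the clerk
    refund = 5                                      # sent back by the proprietor
    kept_by_bellhop = 2                             # pocketed en route
    returned_to_guests = refund - kept_by_bellhop   # 3 euros, 1 per guest

    proprietor_has = guests_paid - refund                   # 25 euros
    guests_net_payment = guests_paid - returned_to_guests   # 27 euros

    # The 27 euros the guests paid already include the bellhop's 2:
    assert guests_net_payment == proprietor_has + kept_by_bellhop   # 27 = 25 + 2

    # Adding the bellhop's 2 to the 27 double-counts them. The correct
    # sum partitions the original 30 with nothing missing:
    assert proprietor_has + kept_by_bellhop + returned_to_guests == 30

The bogus step in the riddle, in other words, is adding the bellhop's two euros to the guests' 27 when they should be subtracted from it.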

I have therefore passed through three stages: (1) puzzlement (indeed, astonishment), (2) knowledge, but with remaining unease or residual illusion, and (3) "total enlightenment" or "wisdom," with no puzzle or illusion extant (and even understanding why there had been puzzlement in the first place). The progression is instructive: From time to time life throws us for a loop, and, indeed, philosophy is in the very business of questioning fundamental assumptions. But sometimes, as with the three lodgers puzzle, we eventually discover a way to buttress our original conception of things; Wittgenstein considered philosophy itself to be one big faux-puzzle maker, which it was his calling to foil. However, the history of thought -- not to mention, the narratives of our individual lives -- is surely rife with cases of a new conception's replacing the old after some initial shock, such as the discoveries of pi, the stellar nature of the Milky Way, the absence of an ethereal medium, radioactivity, the expansion of the universe, the incompleteness of arithmetic, and so many others. So the truly philosophical task may be to discern which are the real and which the ersatz puzzles.

Which, for example, is the Anthropic Cosmological Principle? It seems that the various physical constants of our universe are exquisitely fine-tuned for the coming into being of ... us! The odds of this having come about by chance are said to be infinitesimal; ergo, we have empirical evidence of some (vast) intelligence and purposiveness (God?) pre-existing the universe. Is this a genuine problem for the secular mind?

Apparently not. Here is a homely analogy. Suppose you hit a golf ball into the air and it comes down in a dark forest. Well, no mystery there: Where it came down is where it came down. If we want to explain why it landed where it did, we would naturally look to physical laws and conditions. Now change the point of view: Pick a particular point hidden in the deep woods and challenge somebody to strike that precise location with the ball. We would expect only a Tiger Woods to attempt the feat, but even he would probably find it impossible.

Just so, the "fine-tuning" of nature that resulted in us may seem unlikely to the point of impossibility (sans an act of intentional design or creation), but the refutation of this "mystery" is that we are just "looking at things through the wrong end of the telescope": We pose the "problem" from the vantage of the end point, whereas causality works from the beginning, and then, whatever happens, happens. Thus, the "problem" needs no solution because it is not really a problem.

Yet there are others who see a deeper riddle posed by the constants of nature, and who consequently disparage the formulation above as the "Weak Anthropic Principle," or "WAP." Is there a Strong Anthropic Principle constituting a real puzzle? (Or would one just be a SAP to think so?) You will have to consider that for yourself outside the confines of this column.

Car Seats and the Absurd

Copyright © 2002 by Joel Marks
Originally published in Philosophy Now magazine, no. 38, October/November 2002, page 51

The extra minute you take to secure your child into her car seat could be just what it takes to bring your whole family into the path of a Mack truck half an hour down the road.

But that is obvious. It is the cruel, rueful, and ironic face of the contingency of existence. And of course it can work the other way around: Had you not taken the extra minute to secure your child into her car seat, you might have driven right into the path of a Mack truck. What does this tell us? Only, one might suppose, that we do not know the future. It doesn't change the fact that the only rational way to conduct one's affairs is to consider the odds: Children in automobile accidents are more likely to survive if they are strapped into a car seat. Therefore it is rational, not to mention morally obligatory, to do this for your child, even though it is within the realm of possibility that there will be a freak coincidence of circumstances, which converts your caring action into a contributing cause of the very catastrophe you were attempting to avert.

Only ... further reflection leads me to make a more bizarre inference. Put aside for the moment our epistemological situation and consider the metaphysics. Do you grant the following? Most accidents where there is a child passenger and an adult who has been responsible enough to purchase a car seat and secure the child into it, will not be due to some such aggravating factor as the driver drunkenly weaving in and out of traffic or drag racing or the like. Rather, the scenario will more likely be one of encountering some other car which has such a driver, or of the first driver's doing something foolishly spontaneous, like miscalculating when the light was going to change, OR of his being momentarily distracted, as by the family dog wagging his tail in the driver's face at a bend in the road, etc. In sum, I assume that the typical accident involving a child in a car seat occurs because the car was in the wrong place at the wrong time. Accidents are the thing of a moment, and moments are conditioned as delicately as a house of cards.

But if that is so, then do we not arrive at a rather startling conclusion, namely, that it is not the freak coincidence, but in fact the norm that accidents involving a child secured into a car seat would not have happened at all if the child had not been secured into the car seat? The logic of my argument is that everything else would have remained the same ... ceteris paribus, to use a logician's term. And I think that is a reasonable assumption in most cases. For instance, your not taking an extra minute with the car seat (because you were rushed, say) would not in any way affect whether the driver of the Mack truck takes another drink, or runs the stop light, etc. So that truck would still be at the very spot it would otherwise have been had you taken the extra minute. Except that because you didn't, there would be no accident: Your car and the Mack truck would pass through the same space but at different times.

In other words, although your alternative behavior would indeed affect the whole universe given enough time, the vast majority of the universe would remain the same in the short term. It is like the ripples in a pond after you plunk the pebble in: They will eventually reach the far shore and make the frog croak, but at first a nearby fish will not even notice anything has happened. Just so, the fate of the Mack truck and its driver, and of all who would be affected by them in turn into the indefinitely far future, would not begin to alter until later, after the moment at which the accident would have occurred. Up until then, all else with the truck and driver would be identical, so the accident won't occur provided you are careless about the car seat.

Singing the praises of car seats because your child's life has just been saved by one seems, therefore, as odd as extolling the virtues of kidnappers because your child has just been released by one. It is understandable, of course; there is a certain psycho-logic to it since your relief makes you feel grateful. But in strictly logical terms ... it ain't, is it?

Nonetheless, it is still true that it is rational (and, again, surely also ethical, even morally obligatory) to strap the child in. That is because the epistemology of the human condition leaves us with no rational option for deciding what to do other than relying on known, general probabilities. And in this case they presumably tell us that in otherwise matched populations, the one employing car seats will suffer fewer casualties. You simply cannot outwit Mother Nature on this one.

I conclude that ... life is absurd. (Although it is perhaps also absurd to employ logical argument to arrive at such a conclusion. But then ... life is absurd!) For the summation of the above is that it is rational to use a car seat for the safety of your child, even though on any actual occasion when the car seat shows its effectiveness for that purpose, it has likely also occasioned the risk to which your child has been exposed. In short, the car seat (in any given case but not in general) brings about the need for itself. It sounds like a marketer's dream ... or a metaphysical wizard's "perpetual justification engine" ... or the answer to a theologian's prayers for a Necessary Being ... but it is really a kind of joke, akin to: "Why am I hitting myself on the head with a hammer? Because it feels so good when I stop!" Also, this realization seems to have no practical import, and yet it changes everything, like a Gestalt shift (as from the contour of a vase to two facial profiles).

The Car Seat Paradox REDUX

August 5, 2018

Note: The following essay is a much expanded consideration of the above puzzle from 2002. At that time I concluded that life is absurd. I still think life is absurd, but at least I am now able to offer a detailed explanation of why it is (nevertheless) rational to use car seats.

Note: Despite the puzzle, I reiterate in the strongest terms that I believe (for the reasons given) it would be irrational not to use a car seat, and I encourage everybody to use a car seat when conveying a child.

We atheists or agnostics and even some thoughtful or compassionate believers know why it is ridiculous for the sole survivor(s) of an airplane crash to thank God: What loving, all-powerful and omniscient God would stand by, not to mention cause, a horrific event like this? Only an extreme egotism would suggest our meriting such special regard (God has a plan for me!) as to be spared the terrible fate that befell everyone else. Why not instead curse a deity who is so cruel and capricious, or at least callous?

            It may come as a surprise, however, that it is also ridiculous to thank God (literally or simply as an expression of emotion) that we strapped our child into a car seat when she has been spared injury or death in an automobile accident. (And let us suppose everyone else involved was also spared.) It certainly came as a surprise to me when I had this thought many years ago, and I have struggled to make sense of it ever since.

            Here is the basic idea. An accident while driving is typically a matter of bad timing (whatever else it may also involve, such as carelessness or bad luck). The smash-up occurred only because you entered the intersection at the exact moment a drunk coming down the cross street ran the light. The dog chose to jump into the front seat at just the moment you entered the bend. The deer leapt into the road just as you were passing by the thicket that had hidden her from view. You turned your attention from the road to your Google map just as the car next to you started drifting into your lane because the driver was texting his boss. And so on.

            Meanwhile, using a car seat takes a few moments. Here are instructions from a YouTube video:

To buckle the child you’re going to want to start with your harness straps nice and loose. Then you’re going to put the child’s arm through the hole. Make sure the shoulder strap is over their shoulder, and buckle between the legs. Do the same thing on the other side. … And now buckle the chest clip. But importantly keep the chest clip low. If you move it up to the right place right now, as you tighten your strap it’s going to get caught under the child’s throat, and that would not be comfortable. Now I’m going to take hold of the shoulder straps anywhere above the chest clip. I’m going to pinch them and pull firmly upward. See how I gathered all the slack out of the legs, out of the stomach and up to the shoulders? If I need to I can slide the chest clip down a little bit at this point. Now I’m going to take the tail at the bottom of the seat and I’m going to pull firmly. Then I’m going to check. I’m going to pull upward again on the shoulder strap, checking that no slack comes up towards the shoulders. I’m going to put a finger at the collar bone and pull it away from the child’s body. One finger should fit. But if you can do a two-finger salute like this, that is too loose. So I have a little bit left to pull out from the tail, and now when I check – again, pull upwards – there’s no slack that came up. … Next I’m going to move the chest clip up so that the top of it is at the top of the armpits. I like to call it the tickle clip to remind you to run your fingers across the top and tickle the child’s armpits.

So what first occurred to me was that an accident in which a child is saved by a car seat might very well not even have occurred if the driver had not used a car seat. Why not, then, curse God (or your partner or the manufacturers of car seats or your own conscientiousness or just your unlucky stars) for inducing you to spend so much time making sure your child was properly strapped into the car seat, since this served only to place your car in the wrong place at the wrong time?

But of course there is an obvious reply. On some other occasion you might with equal likelihood, and for the same reason of bad timing, have ended up in a different accident if you had not used a car seat. And this time – what is even worse – your child would not have been protected and so been more likely to be injured or killed.

Well, OK: This sounds like a good reason for people to use car seats. This is what makes it rational, perhaps even morally obligatory, to use them. Nevertheless, I find something peculiar about the situation. For one thing, it is not clear what bearing the rationality of using car seats has on your emotional reaction to your child being saved by your having used a car seat. While it is true (or I will assume) that a society in which people regularly use car seats has lower casualty figures for children in moving vehicles, it still seems, by my reasoning above, that on any particular occasion when a child is saved by a car seat, it might well or even usually have been better if a car seat had not been used. After all, your motivation for using a car seat is not public spirit. You are not first and foremost trying to make society safer (as your motive might be if, say, you sent your offspring off to war); you are trying to make your child safer. So how could it be rational to be happy you had used a car seat on the particular occasion when it would have been better if you hadn’t, just because (as if by a statistical hand) widespread use of car seats is beneficial to society?

I don’t buy, by the way, that you yourself would have been in a different accident with your child had you not used a car seat and hence avoided this accident. That’s just superstition. Most people do not get into an accident when driving with their children. So you would be mighty unlucky if you not only got into an accident while using a car seat but also would have gotten into one (on this or a different occasion) if you had not used a car seat on this occasion.

Now, it is old news that a rational action can lead to an undesired outcome. Rationality is what we rely on in practical affairs precisely in the absence of certainty. What is rational to do is what is the most likely to achieve our ends under the circumstances; but this implies that sometimes things won’t turn out as we want them to even when we behave rationally. It is irrational to refuse to fly just because, in an exceptional case, an airplane will crash; but if most flights crashed, it would be better not to have boarded an airplane most of the time, and hence it would not be rational to fly for routine purposes. What creates the air of paradox in the present case is that it is better not to have used a car seat in most of the cases when the car seat does exactly what we want it to. This is the rule, not the exception. How, then, could it be rational to use a car seat?

            The answer, I now think, goes like this. What we want and expect a car seat to do is protect our child in an accident. This is surely rational because car seats have (I presume) been amply demonstrated to reduce the likelihood of injuries to children in accidents. What is not rational, however, is something different, namely, to use car seats in order to prevent accidents. That is not rational because there is no reason to think that using car seats is more likely to prevent accidents than not using them. In particular, as we have seen, the timing argument works equally well either way. Ergo Q.E.D.: It is rational to use a car seat to protect your child in case of an accident, even though if an accident occurs, it might well or even usually have been better had you not used one.

Here’s another way to think about the kind of situation I am talking about. There are actually two main ways that using a car seat could help to protect your child. Only one of them is if you are in an accident. An even better way is if the mere passage of time it takes for you to strap that wiggly body into that complex array of straps causes you to miss out on being in an accident in the first place. This means that when you are expending time and effort in this way you are contributing to one of the following consequences (although you don’t know which): (1) You will narrowly miss being in an accident; (2) You will be in an accident but the car seat plays no further role (the child will be hurt or unhurt as much with as without the car seat); (3) You will be in an accident and the car seat makes it worse (I will omit the gruesome details); (4) You will be in an accident and the car seat works as advertised, and intended and hoped, to spare your child (greater) injury. All of these are highly unlikely.

Much more likely is that your using a car seat makes no difference whatever: You won’t be in an accident when driving with your child whether you use a car seat or not. So why bother using a car seat? What is more, of the four scenarios wherein your use of the car seat does make a difference, three of the four cause an accident while only one (1) prevents it, and of those three, one (3) even makes things worse in the case of the accident. So it looks like using a car seat actually makes things worse!

But that last calculation is cheating, since the likelihood of (1) equals the combined likelihood of (2)-(4): Your using a car seat is just as likely to prevent as to cause an accident. So those cancel out. Furthermore, among (2)-(4), one (3) makes things worse, one (4) makes things better, and one (2) is neutral; so these too would seem to cancel out. However, this still leaves the question: What is now left to tip the balance toward the rationality of using a car seat? The answer is clear: the greater likelihood of (4) than (3). A car seat is much more likely to help than to harm in an accident.
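A minimal sketch of that cancellation argument in Python, with invented probabilities (nothing here comes from real crash data; only the relations among the numbers matter):

    # The four ways using a car seat can make a difference, per the text.
    p_miss = 4.0e-5          # (1) the delay makes you miss an accident
    p_no_role = 2.0e-5       # (2) accident occurs; seat changes nothing
    p_seat_harms = 0.2e-5    # (3) accident occurs; seat makes it worse
    p_seat_saves = 1.8e-5    # (4) accident occurs; seat protects the child

    # Timing symmetry: the delay is as likely to cause an accident as to
    # prevent one, so (1) balances (2) + (3) + (4) combined.
    assert abs(p_miss - (p_no_role + p_seat_harms + p_seat_saves)) < 1e-12

    # What tips the balance toward the seat is only that (4) is far more
    # likely than (3): in an accident, the seat helps much more often
    # than it hurts.
    assert p_seat_saves > p_seat_harms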

We are not home free yet. The percentage of cases in which there is an accident involving a child in a car seat is still very small. Why, then, go to all the expense of purchasing a car seat and the trouble of using it? Is this just another capitalist scheme to scare us into buying something we don’t need? No. The standard analysis of risk provides the solution: We are concerned not only about the probability of an event, but also the nature and magnitude of the event. It is very unlikely that your house will burn down while you and your family are in it; but the magnitude of such a loss counsels the relatively minor expense and inconvenience of installing smoke detectors and testing them every week. Just so, the death or injury of your child in an automobile accident is very unlikely, but it would be a harm of such magnitude that you are wise to purchase and use a car seat.
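That analysis is easy to make concrete. A back-of-the-envelope comparison in Python (every figure is hypothetical, chosen only to exhibit the probability-times-magnitude structure of the reasoning):

    # Hypothetical figures; none come from the essay or real statistics.
    p_accident = 1e-4                 # chance of an accident on a given trip
    harm_without_seat = 1_000_000.0   # arbitrary units: catastrophic harm
    harm_with_seat = 100_000.0        # the seat greatly reduces that harm
    cost_of_using_seat = 1.0          # minor expense and inconvenience

    risk_without = p_accident * harm_without_seat                  # 100.0
    risk_with = p_accident * harm_with_seat + cost_of_using_seat   # 11.0

    # The event is very improbable, but its magnitude makes the small
    # cost of the seat worthwhile, as with smoke detectors.
    assert risk_with < risk_without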

So we have managed to dispel any suspicion that it is irrational to use a car seat. But this has not been my concern in the first place. No, the wrinkle that wrinkles my brow is that it is rational to use a car seat. Why does this perplex me? Because, as I keep saying, a car seat works as advertised in an accident just in case its use was (more often than not) responsible for causing the accident. And that’s not all: It remains rational (and probably also obligatory from a moral point of view) to use car seats despite its turning out to be the case that one’s happiness at having used a car seat in the case of an accident where it worked as advertised is misplaced. This to me has it all over Sisyphus in the absurdity department.

So I doubt that the logical explanation of the rationality of using car seats will penetrate deeply enough into the psyches of even most of us who understand it to change our feelings (now that we have been bitten by the bug of paradox). Speaking for myself, if I am ever in an accident where a child is saved by my having used a car seat, I am sure I will thank my lucky stars that I used it. I liken this phenomenon to visual illusions, which will often persist even after we come to understand they are illusions. For example, the parallel lines in the Müller-Lyer illusion will likely forever appear to be of unequal length, no matter how often we measure them with a ruler.

            Of course there is a legitimate source of joy after the accident we have been discussing, namely, that the child has not been hurt or hurt badly or killed. Even if you could kick yourself for having used a car seat on this occasion (which would be irrational – you might just as well rail against your partner for kissing you goodbye before you got into the car), since you did use one and ended up in an accident it is wonderful that she was not hurt. Thank God!

            But let me finally reiterate that I do not conclude that using a car seat is irrational. (If this article gains popular currency, I know that it will be misread on a thousand occasions by those who merely skim it.) Quite the contrary: I believe it would be irrational not to use one, and I encourage everybody in the strongest terms to use a car seat when conveying a child. But it is precisely this that creates the sense of puzzlement, namely, that it is rational to use a car seat despite its having the feature I have been describing. So I do not dispute that it is rational to use a car seat, but I marvel that it is.

Note to analytic philosophers: Our discipline is riddled with tantalizing thought experiments that have challenged both common sense and deeply held theories. There is the Gettier Problem, the prisoner’s dilemma, Mary the visual neuroscientist, Nozick’s experience machine, Parfit’s split brain speculations, the Chinese Room, the Nonidentity Problem, the Knobe Effect, and so on. The car seat paradox (so to speak?) has the feel of a perfect thought experiment to me. Unfortunately I have not been able to come up with any great issue it might speak to, so this could be yet another illusion generated by the rationality of using car seats. However, I am sure that the actual genesis of many of our favorite thought experiments was not from wrestling with a philosophical issue but just from having a puzzle suddenly occur to somebody. So I invite my colleagues to find some application(s) of this puzzle-in-search-of-an-issue that would ultimately earn it a place in the pantheon of Great Gedanken Experiments. Could this quirk in the rationality of using car seats be, as it were, the next precession of the orbit of Mercury that will change the universe of knowledge? My delusion of grandeur doth entertain the prospect.

            Alternatively, you are invited to argue or demonstrate that there is no quirk to begin with, for example, by coming up with a counterexample. That would be an act (other than using a car seat) that is rational despite the fact that, most of the time it achieves its purpose, it would be better if it had not been done, and yet does not strike us as odd for that fact. I have not been able to come up with one, nor (by my lights anyway) have my interlocutors; yet I certainly cannot rule out that there is a whole class of such acts. But even in that case, it may yet be possible to salvage something of value if the existence of this feature of some rational acts (viz., that, most of the time they achieve their purpose, it would be better if they had not been done) is felt to alter our conception of rationality in an interesting way, even if for no further reason than that it had never before been noticed.

Many thanks to Thomas Pölzler and Mitchell Silver for very helpful assistance in unraveling this puzzle (if not my puzzlement).

Sunday, April 21, 2013

Stop Think

by Joel Marks
Published in Philosophy Now magazine, issue no. 55, May/June 2006, page 38 

My stepson once gave me a book entitled Jewish as a Second Language (by Molly Katz). He need not have bothered because I was already fluent. Take the chapter on worrying: It explains that "Natural-born Jews leave the womb equipped with a worry reservoir that is filled early and replenished constantly. We worry about everything. ... It is our duty, our birthright, and our most profound satisfaction." I understand this implicitly. For those who are not thus genetically constituted, Katz offers the following practical advice: 

[S]imply make an enormous big deal out of some existing minor problem, such as: An ingrown toenail (it could get bad enough so you'd have to wear special shoes. But those wouldn't go with your business clothes, and you'd be fired for having a poor image. Then you'd lose your medical insurance, get blood poisoning, and die).

  I can add a suggestion: Become a philosopher. This is perfect training for worrying, except that we call it "reflecting." And, indeed, anything and everything is our oyster, er, ingrown toenail. The regular reader of this magazine, and of “Moral Moments” in particular, has perhaps already picked up on that. It's no joke, as I indicated in a previous column; when one worries as a matter of both personality and profession, it can become quite painful.

  Fortunately, there is an alternative method of philosophizing which is almost the exact opposite of worrying. It is so different, in fact, that many so-called WESTERN philosophers do not consider it to be philosophy at all. I am not one of those. For me philosophy is defined as much by its goals (understanding the nature of reality, learning how to live properly) as by its recommended methods of attaining them, so I can be catholic about the latter and consider even apparently antithetical approaches to be kosher.

  The alternative method to which I allude is variously named meditation, yoga, mysticism, or even prayer. The variety I happen to employ is MANTRA meditation. Although I first learned it from the TM organization, i.e., Maharishi Mahesh Yogi's "transcendental meditation," I have not retained any ties to that or any other organization or sect. Instead, I went on to study meditation as a component of Hinduism, Buddhism, and Taoism during my otherwise-analytical philosophical education in graduate school and beyond. Meditation has been for me, then, a kind of oriental philosophic cure for an occidental philosophic disease.

  The method is simplicity itself. You say a word (the MANTRA) -- for example, "Om" or "One" -- over and over in your mind. THAT'S IT. Well, of course that's not all there is to "it." It is infinitely subtle. But if you could really just do that, that WOULD be it. It is amazing how difficult it can be to do something so simple -- that is what you learn straight off. (Although when I say "difficult," I don't mean to imply onerous; MANTRA meditation can be surprisingly relaxing and pleasant, and is surely not boring.) You discover, for example, that your mind is full of junk -- mental chatter, mental clutter -- and it's all competing for your attention with that MANTRA. When that happens, here's the key to the whole thing: You bring your mind back to the MANTRA. But you don't yank it; you just withdraw attention from the distraction and return it to the MANTRA, "gently."

  I have done that in "formal" sessions of twenty minutes at least once a day for thirty years. What do I have to show for it? Until recently, I could not say for sure. But in the last few years, I have certainly experienced a boon: I am able to "detach" from thinking about things in the obsessive manner of my New York Jewish upbringing and Western philosophic training. By withdrawing my attention from the thoughts -- precisely analogously to the method of MANTRA meditation, or perhaps even instantiating it -- I can enervate them so that they cease to press on me.

  What takes the place of the MANTRA in this real-life application? Simply whatever there is to attend to. I take my cue here from the philosophy (or practice) of Zen, a distinctive derivation of Buddhism and Taoism. The essence of wisdom is that there is only the here and now; therefore this is what one should attend to. The present moment and place contain all that is necessary for life; be alert to them and you will know what to do and how to live. In this way the ever-thinking, ever-preoccupied mind is side-stepped, so that there ceases to be an intermediary between the self and the object perceived. It is like the difference between walking as we normally do, which is Zen, and trying to walk by thinking to oneself, “Well, first I should extend this leg, then put down this foot, etc.” This is why Zen is sometimes called the philosophy of “No Mind.” But it is also mindfulness, as when you “Mind the gap” in the London underground.

  How do I know that my temperamental achievement has resulted from meditation? And why now, after thirty years? Maybe I'm just getting older and wiser. But as I have related, this new mental ability seems to mimic the skill I rehearse in my meditative sessions. That it would take as long to "undo" a personality trait as to have acquired it should perhaps come as no surprise about human psychology. Probably, then, there has been a confluence of the two influences (practice and maturity).

  However it came about, what it boils down to is self-control. I now have the hang of holding the upper hand with my own mind. A life-transforming technique, which heretofore I could only endorse as an abstract proposition, is now something I can wield (albeit still imperfectly, to be sure). Thus, while I have been emphasizing NOT thinking about things -- an odd-seeming desire for a philosopher, who is supposed to value the "examined life" -- a personality different from mine might benefit from more thinking rather than less. For me it has been the refraining from thinking so much, or in a particular way, that is appealing, as an antidote to despair, which must be an occupational hazard of those who dwell on the human condition, including their own personal prospects for happiness. But the general point is that one ought to be able to direct one's mind to think or not to think about something, independently of one's tendencies: to become autonomous rather than automatic.



Friday, April 19, 2013

The Dancing Philosopher

by Joel Marks
Published in Philosophy Now, Issue No. 95, March/April 2013, p. 52

Every afternoon at the end of my workday I head out for a walk. The locals can set their clocks by this latter-day Immanuel Kant. Only when rain and cold and wind are absolutely wretched will this philosopher be kept from his appointed rounds. But on those occasions I substitute for my daily constitutional a session of dancing in my living room to the sounds of music on Pandora. I’ve got a station selected for songs with a fast, heavy beat.

            Thus was I engaged one day when I realized something: I was a marionette. When I’m strutting and shaking and jumping and twisting in the throes of these sounds, it is not by any act of will. “Somebody else” is pulling the strings. Whether it’s Pat Benatar singing “Heartbreaker” or  Billy Idol singing “White Wedding” or Steppenwolf playing “Magic Carpet Ride” or The Trammps playing “Disco Inferno,” my motions just happen in response. I would have to exert my will to stop them ... if I could. Similarly when I’m at a club. If the band begins to play rhythm and blues, or my stepson revs up his rock band, I simply cannot remain seated. Partner or no partner, I’m up on the dance floor; and you’d have to drag me off if the band was still playing.

So much for the idea that free will is something we feel. The only way I could accurately describe my feelings and consequent behaviors in these situations is that they are compelled by an outside force. Yet surely my dancing is an expression of me in the purest form. If this is not me acting freely, then what is? Would only my resistance count as truly free? Or my forcing myself to dance if I did not feel like it? My fellow walker (but presumably not dancer!) Kant might have thought so. He wrote, “suppose that, even though no inclination moves him any longer, he nevertheless tears himself from this deadly insensibility and performs the action without any inclination at all, but solely from duty – then for the first time his action has genuine moral worth” (from the First Section of his Grounding for the Metaphysics of Morals). Moral worth, for Kant, derives from acting freely (in accordance with the categorical imperative), but presumably my dancing would count only as acting from “inclination.”

This is not the first time I have noted my own roboticness in this column. In issue no. 77 I reported on my discovery at the kitchen sink. In that case my behavior was the result of thought processes; I was washing the breakfast dishes because I realized that they would just get in the way if I left them unwashed in the sink and furthermore become more difficult to clean as the dirt encrusted and they piled up, and I didn’t want any of that to happen. It required self-awareness and inference to figure out that what I was doing was therefore not something I had initiated de novo but rather the result of an ultimately billions-of-years-long chain of causes and effects.

In the present case, quite differently, the realization of roboticness was direct: It just felt that way. And that is because I did not have to become aware of what I was thinking in order to link my circumstances to my behavior. The “circumstances” were simply the music, which caused my dancing. Or even more graphically, the cause was a certain pattern of airwaves hitting my inner ear, and the effect was my body jerking around. The whole event was as physical as a hammer hitting a nail, or as if there really were strings attached to my body being pulled by a very strong puppeteer in the rafters. How could I miss that?

            Meanwhile it is child’s play – or more literally I should say oldster’s amusement, for experience helps – to pick out the automatic behavior of others. At my ripening age it has become downright tedious to observe the completely predictable behavior of people I know, people I read about in the news, as well as of political parties, nation-states, and other groupings of human beings. We are all marching to the beat of some drummer or other, and often the same one. This also makes us liable to manipulation by those who figure out the best beats and strike their drum accordingly. In the literal case of the dance music I like, it’s great to be manipulated in this way. But I, like all of us, have also been the victim countless times of drummers and string-pullers who used their implicit or explicit knowledge of my inner workings to gain some advantage over me. (Although they may not have understood at all what was making them do that.)      

But no matter which way the determinism reveals itself, it is a fact. And it is a fact which fascinates me. Really, what could be more amazing than realizing that one is an automaton? It has a definite science-fiction aura to it, like realizing you are a replicant in Blade Runner, or an alien pod in Invasion of the Body Snatchers. But this is reality, backed up by both science and philosophic reflection. I have long marveled at the implications. And more recently, with these mundane recognitions of my own determinism, I have taken delight in cultivating and compiling a phenomenology of determinism. What is it like to be an android? This is a question anyone can answer on one’s own: Just know thyself.

The Sleeper Wakes

by Joel Marks
Published in Philosophy Now, Issue No. 89, March/April 2012, p. 52

Now I lay me down to sleep,
I pray the Lord my soul to keep,
If I shall die before I wake,
I pray the Lord my soul to take.

Derek Parfit’s discussion of personal identity in his 1984 book Reasons and Persons is a timeless challenge to our deepest intuitions about who and what we are or even whether we are, that is, exist. Although his treatment of it was novel, the thesis is hardly new. Parfit himself realized its relation to Buddhism, drawing parallels in his last appendix; and in another famous appendix (of his Treatise) Hume dabbled with a similar notion. I have also written about the problem in this column (Issue no. 74) as well as in a science-fiction (or philosophy-fiction) story called “Teleporter on Trial” published in SciFiDimensions.

            My own intuition has been quite clear to me but also perplexing, and in both senses of the latter. Thus, suppose you enter a presumed teleporter and are beamed to Mars. In what seems to me the most likely scenario, only the information about you will be transmitted, since sending an electromagnetic signal is far more efficient and swift than transporting your entire body. So on Mars a brand new body, and in particular brain, will be shaped according to the blueprint of that information; and out of the transmission receiver will walk a person who is in every respect identical to the one who walked into the transmitter on Earth, including in his own mind. The person will believe he or she is you, no doubt about it.

            My feeling, however, is that he or she is not you at all but only an exact replica. I won’t repeat all of the arguments but only say why this is perplexing in two ways. First, it is puzzling: This is because we are left to wonder what you (or the I or self) could be, such that your existence depends on the existence of your body or brain and not on its blueprint. After all, even the existence of your body and brain is problematic; that is to say, in what sense is your body the same body over time, given that all of its component cells are replaced every several years? (It is not clear to what degree this is true of the brain, but even here it seems plausible to imagine that we could replace your entire brain, cell by cell, if the technology were available to do so, while leaving it essentially the same brain.) Second, it is worrisome: This is because the implication is that instead of your having been teleported from Earth to Mars, if we simply disposed of your remaining body on Earth we would in fact be killing you (while bringing a new person into being on Mars).

            I am writing about these things again because all of a sudden I am possessed of a new intuition. It is common to take waking up from deep sleep as the archetype of continuing to exist as oneself. Even though it can seem puzzling that one is still oneself despite an apparent hiatus in consciousness, who would seriously doubt it? Or put it this way: if one did doubt it, then one would be close to doubting the very notion of a continuing self, which is pretty much the same as doubting the existence of the self altogether. For it hardly conforms to our conception of being so-and-so that we exist only for a single day (unless one is a mayfly).

Indeed, if one doubts that one is the same person upon awakening as the person who went to bed the night before, one could begin to doubt that one is the same person now as the person who began to read this article, and so on to the duration of a mere moment. For what do you really know about your own continuity? Right now you recollect that you have continued in existence since reading “Right now you recollect ….” But would this not also be the case if there were a sequence of selves or “you”s, each of which duplicated the mental content of the one immediately preceding it?

I won’t push that particular line of argument because, Parfit-like, I am more interested in implications for what matters than about the ultimate metaphysics. So return to the sleeping/waking case: There is this gap in consciousness, in your clear sense of yourself existing through time, yet upon arising you (in a few moments if not at once) “collect yourself” back (?) into being. Is this really the same you? Up until now I have considered this not only obvious but the most important fact in the world. One very real application would be as I related in my phi-fi story about teleporters: Any time a person entered one of these contraptions, he would be about to die.

But now for the first time I question that, or anyway that it matters. Instead of entering the teleporter, just put your head down on your pillow tonight. Tomorrow morning someone will awaken on that pillow, believing he or she is you. No one else will have a clue that there might be anything different either. I now ponder and wonder and marvel: What else could matter? Whatever the metaphysics of the situation, if these empirical facts are the case, then could anyone, including the person who went to bed the night before, complain of some loss? Suddenly I am at a loss … to see what has been lost.

Joel Marks is Professor Emeritus of Philosophy at the University of New Haven and a Bioethics Center Scholar at Yale University. He would like to thank Chris Bateman of International Hobo for re-sparking his interest in Parfit, and acknowledge the aid of Thomas Metzinger’s The Ego Tunnel (Basic Books, 2009) in further cutting the cord to himself.

Thursday, August 16, 2012

“A” Is for “Assumption” or Why the World Needs Philosophy

by Joel Marks

Published in issue no. 90 of Philosophy Now, May/June 2012, pp. 52-53

Socrates famously averred that the unexamined life is not worth living. This was part of his “apology” when he was on trial for his life as he tried to explain what it means to be a philosopher. I myself have taken this to heart as a definition: Philosophy is the examination of fundamental assumptions. It occurred to me the other day that I have been putting this conception into practice with a vengeance of late – not meaning to do so as philosophical exercises, mind you, but quite spontaneously as a natural-born philosopher. So perhaps it will help my readers to understand what I have been about in these columns if I review my recent philosophical hobbyhorses in this light. As it happens, like “assumption” (and, for that matter, “apology”), all of them begin with “a”: animals (issues 62, 66, 67, 72, and 85), asteroids (issues 79 and 86), and amorality (issues 80, 81, 82, 84, and 87). Herewith the common thread of my discourses on the lot.

Animals. Human beings treat other animals abominably. (“A” is for “abominably”!) There are some exceptions, such as, in some cultures, pets; but even pets represent an offense against free-living animals in their natural habitats, who have been deliberately bred into dependency and hence dumbed-down as well. And almost all pets are denied the freedom to roam, whether by foot, feather, or fin; instead they are confined to a building or the end of a leash, or kept on display in a cage or a bowl. The condition of the vast majority of nonhuman animals, however, is without even the compensations that may attach to being a pet. Animals in the wild are trapped for their skins or hunted down for pure sport. Animals in captivity (other than pets) are turned into egg or milk machines, or fattened for direct human consumption, or consigned to laboratories for testing and vivisection. All in all, it is not good to be a nonhuman animal in a world controlled by human animals.

 However, many human beings are sensitive to one or another aspect of our “inhumanity” to other animals and therefore strive to better their lot. Thus have arisen numerous societies for the prevention of cruelty to other animals and, more generally, for the promotion of their welfare. One would think, then, that all animal advocates would be “welfarists.” But this is not the case. Why not? Because welfarism is based on an assumption which, if examined, proves untenable … or at least questionable. The assumption is that it is all right to use other animals so long as we do so with an eye to their welfare. Or to put it epigrammatically: It is OK to use animals so long as we do not abuse them.

 But this assumption may be unwarranted. The reason is that use and abuse, while indeed distinct concepts, may only differ in reality under certain conditions, and those conditions may not obtain for other animals. One argument goes like this: So long as x is at an extreme power disadvantage to y, any use of x by y will inevitably deteriorate into abuse. Well, clearly, under present circumstances all other animals are virtually powerless relative to human beings; therefore just about any use we make of them leads inexorably to their abuse. And is this not precisely the situation we observe?

 This is why there has arisen in opposition to welfarism the movement known as (“a” is for) abolitionism, which seeks to abolish all institutions of animal use. Thus, there would be no animal agriculture, no hunting (other than for real need), no animal circuses, no zoos, no pets. The breeding of domestic animals would end, and the preservation of wild habitats be maximized. Abolitionists further maintain that the emphasis on animal welfare actually serves to encourage animal use, since if people believe that the animals they use are being well taken care of, they will lose their main incentive for discontinuing that use; and hence, by the argument above, animal welfarism further entrenches animal abuse, and so is counterproductive even to welfare in the long run. Here again the evidence seems to be in plain sight: For all the growth of welfare organizations – and just about every major animal protection organization is a welfare, as opposed to an abolition, organization – the abuse of animals has only increased and shows no sign even of decelerating. For reasons such as these I have allied myself with abolitionists like Lee Hall and Gary Francione.

Asteroids. Here I have cheated a little bit because (“c” is for) comets are also a major concern. But due to their overwhelming numbers in our vicinity at present, asteroids have taken the lead in the public imagination as a threat to humanity. The more one learns about their potential to do us grave harm should we ever again collide with one of Manhattan-size or larger, the more one finds oneself tossing and turning in bed at night. These bodies number in the thousands up to the trillions, depending on size and distance considered; and the inevitability of another good-sized one striking our planet – unless we prevent it – is denied by no one. Indeed, no one denies that an object the size of the one that wiped out the dinosaurs, and that would wipe out human civilization, will one day bear down upon us. Furthermore, it is now a common occurrence to discover asteroids that are large enough to wreak havoc if they impacted us and that do in fact make a close approach to our planet, such as 2005 YU55, which came closer than the Moon last November 8 (2011), and 99942 Apophis, which will come even closer on April 13, 2029.

 Thus have arisen Spaceguard and other programs, whose mission is to detect all such hazards and devise and implement mitigating strategies. It is not easy, however, to deflect an incoming object of human-extinction size, which would be 10km or larger. Fortunately, as one hears with regularity from the scientists who inform the public on this matter, objects of that size likely to come into Earth’s immediate vicinity are exceedingly rare. In fact there is a power law of size relative to quantity, such that the larger the object, the fewer there are. Therefore, given limited resources, the present de facto policy is to focus on detecting mid-size NEOs (Near-Earth Objects) – ones that could, say, wipe out a city – and on designing and testing means of deflecting them.
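
To put that relation in rough symbols (a sketch of my own; the exponent b is merely illustrative, not a figure from the surveys): the cumulative number N of objects larger than diameter D falls off approximately as a power of D,

    N(>D) \propto D^{-b}, \quad b > 0,

so that each step up in size brings a steep drop in population, which is why the very largest impactors are also the very rarest.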

 Alas, this seemingly sensible and rational policy is based on an assumption that will not withstand critical scrutiny. The assumption is that the relatively small number of the relatively large objects makes it unlikely that we will be impacted by one any time soon. But this is fallacious. The reason is that these events occur with total randomness. Therefore an extinction-size object could appear on the horizon at any time. The statistics only tell us that this will occur sooner or later, but they do not tell us when. One takes false comfort in their relative rarity in the recent historical record.
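
A back-of-the-envelope illustration may make the point vivid (my own sketch, with p standing for an assumed constant annual chance of a large impact, not any published estimate): the probability of at least one such impact within the next N years is

    1 - (1 - p)^N,

which climbs toward certainty as N grows, even while the chance in any particular year, this one included, remains exactly p. The process is memoryless, so the rarity of impacts in the recent record buys no safety for the year ahead.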

 Indeed, this way leads to absurdity. For suppose there were insufficient reason to begin to prepare to prevent (“a” is for) Armageddon by asteroid or comet this year because of the exceedingly low statistical probability of such an occurrence. Therefore there would never be a time when there is sufficient reason to prepare for it, since the statistical probability remains constant (at least until Armageddon occurs … but possibly even then!). But Armageddon will occur unless we prevent it. Therefore it is rational to allow Armageddon to occur. But it is not rational to allow Armageddon to occur. Therefore it is false that there is insufficient reason to begin to prepare to prevent Armageddon by asteroid or comet this year just because of its exceedingly low statistical probability.

 Thus, just as animal protection based on the fallacious policy of welfarism serves to the detriment of animal protection, planetary defense based on the fallacious policy of mid-size impact mitigation serves to the detriment of planetary defense.

Amorality. It was only after I had finished writing the culminating monograph of my career as a so-called normative ethicist that I realized that both the monograph and my career had been based on an assumption that could be seriously questioned, namely, that morality exists. The case against morality is known in the literature of meta-ethics as the argument to the best explanation. Simply stated, it is the claim that all moral phenomena, including our occasional tendency to altruism and our beliefs in moral obligation, moral guilt, moral desert, and the like, can plausibly be accounted for by our evolutionary and cultural story (or stories), without need to postulate any actual moral obligation, moral guilt, moral desert, and the like. Thus, morality turns out to be like religion, or theism in particular, in that the more plausible explanation of our belief in God, etc., is that such a belief has served to help us survive rather than that there actually is a God.

 Now this may seem to lead to the conclusion that we are in the peculiar position of needing to cling to a delusion. However, some few of us (including most explicitly at present Richard Garner and myself) maintain that the time is now ripe to expose morality for what it is – an illusion – and thence to eliminate it from our lives. The argument is an empirical one: in a nutshell, that a world without the felt-absolutism and felt-certainty of moral convictions would be less violent, less hypocritical, less egotistical, less fanatical and so forth than our present, moralistic world is, and therefore we would prefer it. Garner makes the case at length in his Beyond Morality (now online in a revised version), and I in my Ethics without Morals (forthcoming from Routledge). (Note: My personal story of “counter-conversion” to amorality is told in Bad Faith: A Philosophical Memoir, which I shall perhaps one day post on the Internet.)

 And observe that this claim is analogous to the two other claims discussed above. For just as animal protection based on the fallacious policy of welfarism serves to the detriment of animal protection, and planetary defense based on the fallacious policy of mid-size impact mitigation serves to the detriment of planetary defense, so, moral abolitionists (not to be confused with animal-use abolitionists, although I happen to be both) argue, an ethics based on morality is both fallacious and self-defeating. The fallacy of morality is the assumption that the strength of our moral convictions (or “intuitions”) warrants our belief in their truth. The self-defeatingness of morality is that a moralist world is (today if not heretofore) more likely to be discordant with our considered desires than an amoralist world.

Assumptions. Thus my catalogue of dangerous assumptions that license (1) the ever-increasing exploitation and slaughter of nonhuman animals by the tens and hundreds of billions, (2) the exposure of humanity to extinction by asteroidal or cometary impact (maybe not a bad deal for some of the animals, though), and (3) the excessively judgmental and even lethal imposition of our preferences on one another. My aim has been to illustrate the utility of philosophy as the critical examiner of our most fundamental and pervasive – and hence, most likely to be mischievous -- assumptions. By a curious but inevitable logic, the foundations of our beliefs are the shakiest part of the whole edifice of our knowledge, precisely because they are the most taken for granted – positively buried in the underground of our psyche. Philosophy brings them into the light of day for inspection and possible repair or, if they prove too rotted out, condemnation of the whole structure that has rested upon them.

 I must admit, (“a” is for) alas, that my own philosophical efforts to date have little to show by way of liberating animals, saving humanity, or making society less violent and antagonistic. But perhaps I can at least be given an “A” for effort.

Monday, February 06, 2012

Intellectual Pleasures

by Joel Marks
Published in Reflections (University of New Haven), no. 5, Spring 1989, pp. 1-3.

Human beings have faculties more elevated than the animal appetites and, when once made conscious of them, do not regard anything as happiness which does not include their gratification. -- John Stuart Mill, Utilitarianism

I like to parse arguments. I love to parse arguments. Give me a passage of text which is intended to persuade, and I will apply my powers of analysis to make its premises and conclusion explicit. Even if the argument seems clear to begin with, and is beautifully articulated, I derive pleasure from putting it into this dry mold: "A therefore B" (or "B because A").

For example:

A story (perhaps apocryphal) is told that Abraham Lincoln was once trying to convince a friend that all [people] were prompted by selfishness in doing good. As the coach in which they were riding crossed over a bridge, they saw an old razor backed sow on the bank making a terrible noise because her pigs had fallen into the water and were in danger of drowning. Mr. Lincoln asked the driver to stop, lifted the pigs out of the water, and placed them on the bank. When he returned, his companion remarked, "Now Abe, where does selfishness come in on this little episode?" "Why, bless your soul, Ed, that was the very essence of selfishness. I should have had no peace of mind all day had I gone on and left that suffering old sow worrying over those pigs." (Taken from C.E. Harris, Jr., Applying Moral Theories [Belmont, CA: Wadsworth, 1986], p. 62)

The argument reduces to: "I would have been upset not to do what I did; therefore I did it for selfish reasons."

A rather arid exercise, one might suppose. For me, a sip of pleasure. What exactly is it that I enjoy in this mental activity? Well, there is analysis: getting to the heart of something. I also like to express an idea in its most exact and explicit form. Precision and absence of ambiguity are here the paramount concerns; there is a kind of beauty in this, I find. And then, as well, an enhanced understanding can result, which is intrinsically valuable and satisfying.

I can also state something that is not the explanation of my love of parsing: I do not love it because it is useful. Don't get me wrong: It is useful. It is one of the most useful things in the world! The ability to clarify an argument is an antidote to muddle-headed thinking, of which there is a great deal and which causes much woe. Take a look again at the Lincoln argument. It is so convincing in its narrative form, yet it invites critical analysis in its parsed form. As C.E. Harris points out, the conclusion does not follow from the premise. The fact that one is upset does not tell us anything about the nature of what is causing the upset; but the selfishness or unselfishness of one's motives or reasons depends completely on the nature of what is causing the upset. In the Lincoln case, the cause of the upset is the suffering of the old sow; this determines that Lincoln's motives were unselfish after all.

It is my belief that great chunks of scientific psychology and economics, which generally conceive human beings as fundamentally self-interested, rely on the sort of mistaken analysis Lincoln made.1 Nonetheless, I repeat, it is not the usefulness of analysis that explains my special fondness for it. I parse for its own sake. I would pay money to be able to parse arguments. The point I want to stress is that there is a pleasure to be had here. It is one of a set -- a vast set -- of possible intellectual and other cultural pleasures (and of the good kind) that help set human beings apart from other animals.2

So, Julie Andrews, the next time you sing, "These Are a Few of My Favorite Things," take note: Parsing may be one of them!


NOTES


1 Or is purported to have made; I rather think Lincoln was arguing tongue-in-cheek, in an effort at modesty, if this episode occurred at all.
2 Not that I have any disrespect for other animals; but, for better or worse, human fulfillment appears to lie in different directions from theirs.

The Discovery of the Opponym

by Joel Marks
Published in Reflections (University of New Haven), no. 16, Fall 1994, pp. 1-2.

As a wordsmith, I spend a lot of time trying to find that mot juste. (I hope "mot juste" is the mot juste in this case!) It is not always easy to say what you mean -- you know what I mean? The writer or speaker must not only understand the standard definitions of words, but also their special usages in various contexts -- with different audiences, on different occasions, etc. Tone of voice or surrounding sentences can also alter meaning. Ambiguity is ever-present. But of all the linguistic stumbling blocks to comprehension I know of, the most bedeviling is a type of word that has the amazing characteristic of meaning opposite things!

Now, it is certainly not unusual for a word to have multiple meanings. Indeed, this is probably the norm rather than the exception (just as the typical star shines not singly, like our solitary Sol, but as part of a binary system). And this phenomenon blends into another where the same spelling and pronunciation are used for what are considered different words -- so-called "homonyms." It is also not unusual for different words to have opposite meanings -- hence "antonyms." And when they are closely paired to form a phrase, we call the result an "oxymoron" (e.g., "cruelly kind").

But what I have in mind is a sort of one-word oxymoron, or one word that does the work of two antonyms. Alternatively, the situation could be conceived as involving word pairs, which would then be homonymous antonyms, or antonymous homonyms. Furthermore, there seems heretofore to have been no word for this sort of word. I have therefore dubbed it the "opponym."

Herewith my personal collection of opponyms, compiled over the years while I was writing about weightier matters.


A Glossary of Opponyms*


argue [transitive verb]: to give reasons for (He argued the point); to give reasons against (She declined to argue the point).

besides: except for (Besides money, we lack for nothing); in addition to (Besides our health, we're fortunate to be rich).

blunt: dull (a blunt knife); pointed (blunt remarks).

bracket: include (These figures bracket the whole range); exclude (Let's bracket that issue for now).

cleave: divide (May nothing cleave these newlyweds asunder); adhere (May they cleave unto each other).

confirm: request or receive substantiation (I wish to confirm that the hoped-for event did indeed occur); provide substantiation (ditto!).

consult: to seek advice (She went to the lawyer to consult regarding her upcoming divorce); to give advice (However, the lawyer, who specializes in taxation, was not competent to consult on this matter).

discern: "to detect with the eyes"; "to detect with senses other than vision."

discursive: "moving from topic to topic without order; proceeding coherently from topic to topic."

dust: "to make free of dust"; "to sprinkle with fine particles."

easterly (etc.): from the east; toward the east.

enjoin: command to do; prohibit from doing.

flesh: to cover with flesh; to remove the flesh from.

founder: [noun] one who provides a basis or foundation for existence; [verb] to sink below the surface and cease to exist.

franchiser: "franchisee; franchisor."

guard: to protect from harm or invasion; to prevent from escaping to freedom.

handicap: a natural disadvantage; an artificial advantage.

impression: a vivid imprint; a vague remembrance.

liege: "a vassal bound to feudal service and allegiance; a feudal superior to whom allegiance and service are due."

modify: "to make minor changes in; to make basic or fundamental changes in."

moot: debatable; no longer worth debating.

oversight: watchful care; a failure of same.

paradox: a seeming truth that is self-contradictory; a seeming contradiction that is (perhaps) true.

pride: "inordinate self-esteem"; "reasonable self-respect."

protest: "to make solemn affirmation of" (protest one's innocence); "to make a statement in objection to."

purblind: “wholly blind”; “partly blind” (i.e., not wholly blind).

qualification: something that suits a person (etc.) to a job (etc.); something that limits one's suitability.

sanction [noun]: a penalty for violating a law; official permission.

temper [noun]: "equanimity; proneness to anger." (One loses one’s temper in the sense of equanimity; one has a temper in the sense of proneness to losing it [in the first sense]!)

temper [verb]: "to soften (hardened steel) by reheating at a lower temperature; to harden (steel) by reheating and cooling in oil."

threaten: One and the same event may threaten [to bring about] war and [to eliminate] peace.

trim: remove from; add to (both with respect to trees).

* Quoted definitions are from Webster's Ninth New Collegiate Dictionary (Springfield, MA: Merriam-Webster Inc., 1985).