Friday, December 20, 2013

Eight Years Old and Counting

by Joel Marks
Originally published in Philosophy Now magazine, issue no. 46, May/June 2004, p. 45
Any one thing can lead us to all other things. For me at one time the "one thing" was vision. As I have mentioned in a previous column (in issue no. 42) about the late perception psychologists J.J. and Eleanor Jack Gibson, the personal discovery that there are not only objects in the world that I see (myself included) but also my seeing of them ultimately led to my becoming a philosopher. But before I had even been introduced to the field of philosophy, I was captivated by vision. Perhaps the first significant manifestation was my hobby of black and white photography in college, under the tutelage of a rooming-house mate, Pat Lau. Another housemate, Chip Porter, helped me build a darkroom in a corner of my living room. Then came my undergraduate studies with J.J. Gibson and his graduate students, and post-college I was hired to teach courses on visual psychology at an art school, where the director, Bill Collins, emulated the Bauhaus. Only after all of that did I consider studying philosophy. In the interim, vision had become a passion.
  I fancied myself a phenomenologist in that I cultivated visual experience per se. Perhaps a better label would be "visual naturalist" -- a collector and cataloguer of specimens in the visual world, which I would record in a diary of observations. And while I adored the work of the Belgian physicist M. Minnaert, who did the same with optical phenomena, such as rainbows and halos, my quarry tended to be phenomena that could not be explained by optics alone (if at all!). A typical entry:
  “Wearing a brown shirt ... but I noticed the very thin rim of it rising above my beige sweater looked PURPLE. I checked the light fixture on the ceiling: it seemed to be regular incandescent/exposed bulbs. So then I performed one of those amazing, delicious life experiments (like being in a dream where you know you can fly): I pulled the sweater downward ever so slowly ... and RIGHT BEFORE MY VERY EYES ... the purple turned to brown!!! I repeated, up and down, several times. Albers [Josef Albers, a modern artist who was famous for his studies of color, including contrast phenomena of this kind]. The watched pot [i.e., I had, in effect, witnessed the magic moment of boiling, or followed the rainbow to its source].”
  Optics plus physiology, you say? Perhaps. But my interest lay not so much in explanation as in implication (this being the nascent philosopher in me). I also simply indulged in the wonder of it, so in a way I positively did not want it to be explained! Over time the observations became more and more fantastic. There is magic in this mundane world of ours, if you take the time to look at it and reflect (a nicely ambiguous word under the circumstances). I hope to write about these experiences at length some day, but for now let me cut to the chase.
  I found that there were two poles of visual phenomena that were instructive in opposite ways. First were those which were commonplaces of veridical recognition, my favorite being the wind. As my stepson Sean had exclaimed one day when he was eight years old while looking out the window: "Look at the wind outside. Man!" It was plain to him, as it had always been to me, that the wind is visible. Yet when I entered the scientific circles of perception psychologists, I discovered that this was almost universally denied (except by Gibsonians). Why? Because the prevailing dogma was that anything which is visible must have color and shape; the wind having neither, its existence cannot be seen but must be inferred from other things seen which do have color and shape, such as bending branches and flying hats.
  Well, why let obvious facts get in the way of a good theory, eh? Ridiculous! Thus, I was developing my first skepticism of the "experts" (like a good Socratic) ... and of scientific psychologists in particular (as the psychologist Carol Gilligan was doing from her exposure to their equally ludicrous male biases). But what came as an even more startling revelation was finding that laypersons had also adopted the scientific viewpoint. There is nary an adult of my acquaintance who retains the vision of an eight-year-old ... and I'm not talking about physiology! This is a case of the emperor's new clothes: "You cannot see the wind, my child. Grow up!" But it is adults who deny their own senses. Just as it is adults who tell children fairy tales and expect them to believe them, even when they become adults themselves, as I continually discover to my amazement. (I spend much of my life being amazed, as you can see -- sometimes pleasantly, as by visual phenomena; other times unpleasantly, as by human stupidities.)
  But I spoke of two poles: another kind of visual experience presents us with clearly illusory phenomena, such as the bent stick in water (that isn't really bent). These too are commonplaces, but, despite sometimes giving delight (as when you shake a lead pencil in just the right way to make it appear rubbery), their philosophical "lesson" is usually completely overlooked. My favorite of this type is a wire cube that I keep in my office (I am looking at it right now), which is an absolute chunk of the Twilight Zone -- a true crack in the cosmic egg, to use Joseph Chilton Pearce's evocative phrase -- the looking glass I can walk through any time I please (and not just when I happen to dream of doing so). What this cube does, you see, is rotate ... except, it's not really rotating. Instead, the turning of my head as I gaze at it in a certain way (namely, by Gestalt-shifting it like a Necker cube) is translated into the cube (analogous to the way the earth's rotation is translated to the starry sky)*.
  Please, do visit me some time and I shall show you, because it is boggling. But what does it all mean? What fascinates me is that this rotating cube -- which is not "there" -- is sitting on a cabinet, which decidedly is "there" ... or is it? Doesn't this phenomenon prove that all we ever see is a kind of waking dream? I know that my "reasoning" herein has been quite "loose" ... that the very way I have phrased my account begs all the questions ... and that I have contradicted my own more "mature" musings about materialism in this very column (see issue no. 44). But ... I don't want to stop being an eight-year-old!

NOTE
* Thus, I experienced a "reverse Buckminster." Buckminster Fuller used to claim that he had come to experience the apparent motion of the heavenly bodies across the sky as the actual motion of the rotating Earth. What I experienced was the actual rotation of my head as the apparent motion of the cube.

Monday, May 13, 2013

Belief

by Joel Marks
Published in Philosophy Now, Issue No. 70, November/December 2008, p. 39.

"JOOOOOOOOELLLLLLLLLL?!" The shrill call of my name made me jerk the phone from my ear. In that instant I thought, "Mom." I was flabbergasted: My mother had been dead for five years.
As the person on the wire continued to speak I had time to question myself. Could the death of my mother, even the sense that so many years had passed by, be only something I had dreamt the night before? Was it just that I had had no occasion to doubt it since waking up this morning, until this telephone call jarred me back to my senses, to a world in which my mother was still alive?
 I know there was a time in my life when I believed I could fly, for I remembered having done so. The image was of my body parallel with the ground, close to it but not touching it. My arms were crossed like a Cossack dancer's, and I was moving forward steadily, following the path of a sidewalk near my home. In my daily life I felt I had this power, although it wasn’t exactly clear to my child’s mind when I could exercise it. Then one day I realized it must have been something I had dreamt; so it was not a memory – or it was a memory, but of a recurring dream, not something that had actually happened. This was the kind of realization one experiences when, in the light of increasing knowledge, the belief in Santa Claus evaporates like dew at daybreak.
A few more words from the person on the telephone dispelled my current confusion. It was Svetlana, a new acquaintance. The way she had spoken my name had been her enthusiastic greeting, probably also prompted a bit by nervousness because of the novelty for her of speaking English without seeing the person she was speaking to. Now that I thought about it, her age was not far from my mother's when I was in college and would receive calls from her. That was how they would begin: "JOOOOOOOOELLLLLLLLLL?!"
The experience of Svetlana's call served as a reminder to me of the fragility of belief. For one second I had believed my mother was still alive. The belief was patently false, but I was taken in by it all the same. It is not an uncommon experience, is it? I’m sure you can empathize. Here is another example: How many times have I found myself gripping the chair when (for no particular reason) I have fallen into a momentary reverie of being in a plummeting airplane? I feel real fear. I believe that at those moments I really do believe I am in an airplane. I am not asleep and dreaming; nor are my eyes even closed. It is just that there has been a shift of belief brought on by an image in my mind.
Yet, at other times, belief is recalcitrant. If somebody were to hold a gun to your head and demand that you believe in Santa Claus or he would shoot, could you do it? I doubt it. But that’s not because there is no Santa Claus; it’s because you don’t believe there is. If you were a Creationist, you would be just as much at a loss to conjure up a belief in evolution under the gun.
The startling revelation is that the entire world one inhabits is in some significant sense not the world that exists but the world one believes to exist. Everything that we know is first of all something that we believe, and in the end is that as well. In other words, what we know is, for all we know, something we only think we know. Our belief may be more or less justified, but even our deepest conviction is still a belief. And the hallmark of a belief, unlike a fact, is that it could be mistaken. That is the problem of skepticism: if beliefs are only buttressed by other beliefs, how can we know we have anything “right”? It is humbling, then, to realize that one's mind has a mind of its own.
But skepticism is, in the end, just a bugbear, for reasons that Wittgenstein explained in philosophy and sociobiologists have explained in science: We must be getting it all basically right or we couldn’t function -- we wouldn’t even be here. Indeed, for all the pleasure there is to be had from pondering the occasional lapse from perfection, such as mistaking Svetlana for Mom, the educated mind takes an even greater delight in understanding the inevitability of our exquisitely fine-tuned cognitive faculties. As others have pointed out: The question was never, “How could I have made such a mistake?” but, “How do we get it right so much of the time?” And now, amazingly, we know the answer: natural selection. What is more, the answer, now that we know it, seems totally obvious.
            Descartes’ intuitions were sound when he “forgave” the occasional illusions to which we are liable by pointing out that we also have the ability to disabuse ourselves of them (although he misattributed the source of that ability to the goodness of God rather than to the even more astonishing, because self-explanatory, mechanism of evolution). The late perception psychologist J.J. Gibson further developed this idea when he argued that illusions typically occur only under very limited or artificial circumstances, such as in the psychology laboratory, and are quickly remedied. Hence my swiftly figuring out that the person on the telephone was not my mother but Svetlana. 

Friday, May 10, 2013

Desire – Thirty Years Later

Published in Philosophy Now, Issue No. 93, November/December 2012, p. 44.

In 1982 I had my first “major” philosophical publication, a journal article entitled “A Theory of Emotion” (Philosophical Studies vol. 42, no. 2, pp. 227-42). My thesis was that the new cognitivist revolution in the study of emotion, associated at the time with the philosopher Robert C. Solomon, needed a supplement, namely, desire. (O. H. Green had reached the same conclusion independently.) Solomon and, even more explicitly, my target in the article, William Lyons, held that emotions are essentially a type of belief. This was a welcome change from the previously prevailing view of emotions as “brute feelings.” But I argued that this was not enough, for one could believe, say, that one was about to be mauled by a rabid dog, and yet not be in an emotional state unless one also possessed a desire not to be so mauled.

This insight had no doubt been prompted by my dabblings in Buddhism, for the Buddha preached that all suffering comes from desire. The Buddha’s recommendation was that we therefore cease to desire. I defended this thesis in an article on “Dispassion and the Ethical Life” in a volume I co-edited with Roger T. Ames on Emotions in Asian Thought (Albany: SUNY Press, 1995). But to deflect the obvious objection that eliminating desire would be throwing out the baby with the bath water – since what would be the point of living at all if we desired nothing? – I analyzed the Buddha’s notion of desire as emotion, and emotion in turn as involving strong desiring.

Subsequently I saw an opening to the study of motivation, for it seemed natural to extend the belief/desire analysis to what moves us to action. And it is not only emotions that do this but, more generally, what might be called attitudes. I analyzed these as belief/desire sets, but now without the “strong desire” qualifier, since one need not feel deeply about something in order for it to produce behavior (or, for that matter, to be a “mental feeling”).

But now I came up against a distinction, first brought to my attention by Wayne Davis in an essay he wrote for my edited volume on The Ways of Desire (Chicago: Precedent, 1986). For it seems that “desire” is ambiguous between two quite distinct psychic phenomena. On the one hand desire is simply synonymous with motivation, so to say that one was moved by desire is just to say that one was motivated. On the other hand desire is a specific type of mental state, on a par with belief, such that a particular belief and a particular desire could jointly constitute a motivation (or a feeling). The mental-state desire would be desire proper or genuine desire, since the other type of desire is only another name for motivation.

An example of desire (proper) is wanting to go for a walk for its own sake. An example of motivational desire is wanting to go for a walk because you believe it will help you lose weight and you want (desire) to lose weight. But here again the latter desire (to lose weight) is ambiguous, since you might simply wish to lose weight or you might be motivated to lose weight by some further belief/desire set, such as that you desire to date someone and you believe s/he will only date you if you lose weight. And so on. The thesis I defended in another essay in that same volume – “The Difference between Motivation and Desire” -- was that, even though motivation as such is not the same as genuine desire, a genuine desire is always involved in motivation, simply because the regress must stop if there is to be any action at all.
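For readers who like to see structure laid bare, the regress can be pictured as a recursive data type. The Python sketch below merely restates the walking example, with every name invented for the purpose; it adds nothing beyond what the paragraph above already says.

from dataclasses import dataclass
from typing import Union

@dataclass
class GenuineDesire:
    """Desire proper: wanting something for its own sake."""
    content: str

@dataclass
class Motivation:
    """A belief/desire set; the desire slot may itself hold a further
    motivation, which is exactly the regress described above."""
    belief: str
    desire: Union[GenuineDesire, "Motivation"]

# Wanting to walk in order to lose weight, in order to get a date --
# stopping, on the thesis in question, at a genuine desire:
walk = Motivation(
    belief="walking will help me lose weight",
    desire=Motivation(
        belief="this person will only date me if I lose weight",
        desire=GenuineDesire("dating this person"),
    ),
)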

I am no longer so sure about that last thesis. Bill Lycan, on behalf of his graduate seminar a few years ago, planted a seed of doubt in my mind. But even if we could be sure that “genuine desire” is an essential component of all of our motivation, we would still want an account of what it is. More specifically, it has always been a teaser to tease apart desire from belief. The best accounts I’ve seen, quite different from each other, are by Dennis Stampe (in my desire volume) and, more recently, Timothy Schroeder (in Three Faces of Desire from Oxford).

Despite my uncertainty about what I am even talking about, however, I remain a fan of desire. In fact, my interest in it has returned with a vengeance after a long hiatus. This time I am taken with desire’s role in values. In fact, I have quite given up on objective value as anything but a figment, and see all value as subjective – specifically, as a function of our desires.

I do still find room for more than one legitimate category of value, but, instead of objective and subjective values, there are intrinsic and extrinsic (or instrumental) values. The latter pair corresponds to intrinsic and extrinsic desires. So for example, to want to go for a walk for its own sake is to value walking intrinsically, while to want to do so for one’s health is to value walking instrumentally. What I no longer accept is that in addition to these there is such a thing as objective or inherent value, such that, for example, going for a walk might be “good in itself.” In a word, I no longer recognize the reality of value that is independent of desire.

Therefore I now consider desire to be the key to ethics, and so it becomes incumbent on me to try once again to figure out what the hell desire is. For starters I think I will pick up a fading offprint of an article from 1982 entitled, “A Theory of Emotion”!


Joel Marks is Professor Emeritus of Philosophy at the University of New Haven and a Bioethics Center Scholar at Yale University. He continues the tale of desire in his trilogy: Ethics without Morals: In Defense of Amorality (Routledge, 2013), It's Just a Feeling: The Philosophy of Desirism (CreateSpace, 2013), and Bad Faith: A Philosophical Memoir (CreateSpace, 2013).

Thursday, May 09, 2013

Pons Asinorum

Copyright © 2002 by Joel Marks
Originally published in Philosophy Now magazine, no. 35, March/April 2002, page 48

Three travelers seek lodging for the night. They come upon a pension that charges 10 euros per person. It turns out that there is only one room available, but they don't mind sharing; so they pay the clerk 30 euros. When the proprietor returns, however, she decides that the guests should be given a discount for having to bunch up, so she summons the bellhop and hands him 5 euros to refund to them. Not being a completely honest fellow, the bellhop pockets two euros; this conveniently leaves one euro to be returned to each guest. Therefore each guest has now paid nine euros, for a total of 27 euros. But 27 plus the two in the bellhop's pocket = 29. What happened to the thirtieth euro?

When I first heard this puzzle, I was bedazzled. It seemed so simple; yet no matter how I turned it over in my mind, I could not come up with a solution. I even entertained the hypothesis that I must be dreaming, or under the influence of Descartes' evil daemon, "who has directed his entire effort to misleading me, [for] how do I know that I am not deceived every time I add two and three or count the sides of a square or perform an even simpler operation, if such can be imagined?" (Meditation One).

Soon, however, I came up with this surprising conclusion: There is no thirtieth euro! The travelers ended up paying 27 euros. The proprietor had 25, and the bellhop kept two. That's it. And yet ... I still could not shake from my head the notion that there was a missing euro. So it occurred to me that the puzzle could be conceived as a kind of illusion -- a calculative illusion, we might call it. An analogy can be drawn to a visual illusion, like the bent-stick-in-water, which is not really bent, but, even when one is fully knowledgeable of its straight shape, continues to appear bent at the waterline (due to the refraction of light). Just so, I now knew there was no thirtieth euro, but I couldn't dispel the mental impression that there was.

Finally I was able to dispel even the illusion. This came about precisely because of its refractoriness. I could not rid my mind of that thirtieth euro; there had to be a way to account for it. And so there is: For at the end, the proprietor has 25 euros, the bellhop two, and the guests three. Voila: 30 euros! So NOW the puzzle became: Why had there seemed to be a puzzle in the first place? Indeed, for some of my more logically adept friends and colleagues, there had been no puzzle about the 30th euro, and they were only puzzled about what was puzzling me. I can still experience a kind of Gestalt switching (as when viewing the picture of a vase and two facial profiles) between my puzzlement and my lack thereof. What makes for the difference?

The answer I have come up with is that this "puzzle" arises from a simple "mental mishearing": Where the situation at the end is that the guests have paid 27 euros, one might inattentively "hear" this as their now possessing 27 euros. Then indeed there would be a mystery (for the bellhop only possesses two, so where's the thirtieth?). But in fact at the end the guests only retain three euros of the original 30.
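For the bookkeeping-minded, the whole transaction can be run as a toy ledger. The following Python sketch only restates the story's own arithmetic; the variable names are, of course, my invention.

paid_initially = 30                            # 10 euros from each of three guests
refund = 5                                     # handed by the proprietor to the bellhop
kept_by_bellhop = 2
returned_to_guests = refund - kept_by_bellhop  # 3 euros, one per guest

guests_paid_net = paid_initially - returned_to_guests   # 27 euros
proprietor_holds = paid_initially - refund              # 25 euros

# The guests' 27 already INCLUDES the bellhop's 2 (27 = 25 + 2), so adding
# his 2 to their 27 counts it twice. The consistent tally is of holdings:
assert guests_paid_net == proprietor_holds + kept_by_bellhop
assert proprietor_holds + kept_by_bellhop + returned_to_guests == 30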

I have therefore passed through three stages: (1) puzzlement (indeed, astonishment), (2) knowledge, but with remaining unease or residual illusion, and (3) "total enlightenment" or "wisdom," with no puzzle or illusion extant (and even an understanding of why there had been puzzlement in the first place). The progression is instructive: From time to time life throws us for a loop, and, indeed, philosophy is in the very business of questioning fundamental assumptions. But sometimes, as with the three lodgers puzzle, we eventually discover a way to buttress our original conception of things; indeed, Wittgenstein considered philosophy itself to be one big faux-puzzle maker, which it was his calling to foil. However, the history of thought -- not to mention the narratives of our individual lives -- is surely rife with cases of a new conception's replacing the old after some initial shock, such as the discoveries of pi, the stellar nature of the Milky Way, the absence of an ethereal medium, radioactivity, the expansion of the universe, the incompleteness of arithmetic, and so many others. So the truly philosophical task may be to discern which are the real and which the ersatz puzzles.

Which, for example, is the Anthropic Cosmological Principle? It seems that the various physical constants of our universe are exquisitely fine-tuned for the coming into being of ... us! The odds of this having come about by chance are said to be infinitesimal; ergo, we have empirical evidence of some (vast) intelligence and purposiveness (God?) pre-existing the universe. Is this a genuine problem for the secular mind?

Apparently not. Here is a homely analogy. Suppose you hit a golf ball into the air and it comes down in a dark forest. Well, no mystery there: Where it came down is where it came down. If we want to explain why it landed where it did, we would naturally look to physical laws and conditions. Now change the point of view: Pick a particular point hidden in the deep woods and challenge somebody to strike that precise location with the ball. We would expect only a Tiger Woods to attempt the feat, but even he would probably find it impossible.

Just so, the "fine-tuning" of nature that resulted in us may seem unlikely to the point of impossibility (sans an act of intentional design or creation), but the refutation of this "mystery" is that we are just "looking at things through the wrong end of the telescope": We pose the "problem" from the vantage of the end point, whereas causality works from the beginning, and then, whatever happens, happens. Thus, the "problem" needs no solution because it is not really a problem.
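The asymmetry can even be simulated. Here is a toy sketch in Python of the golf-ball point, with all the numbers invented purely for illustration:

import random

CELLS = 1_000_000                   # carve the forest floor into a million cells
target = random.randrange(CELLS)    # a spot singled out in advance

balls = [random.randrange(CELLS) for _ in range(1000)]

print(sum(1 for b in balls if b == target))   # almost certainly 0 hits on the named spot
print(len(balls))                             # yet all 1000 balls landed SOMEWHERE

# Posed from the endpoint, each landing looks astronomically improbable;
# posed from the start, some landing or other was guaranteed.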

Yet there are others who see a deeper riddle posed by the constants of nature, and who consequently disparage the formulation above as the "Weak Anthropic Principle," or "WAP." Is there a Strong Anthropic Principle constituting a real puzzle? (Or would one just be a SAP to think so?) You will have to consider that for yourself outside the confines of this column.

Car Seats and the Absurd

Copyright © 2002 by Joel Marks
Originally published in Philosophy Now magazine, no. 38, October/November 2002, page 51

The extra minute you take to secure your child into her car seat could be just what it takes to bring your whole family into the path of a Mack truck half an hour down the road.

But that is obvious. It is the cruel, rueful, and ironic face of the contingency of existence. And of course it can work the other way around: Had you not taken the extra minute to secure your child into her car seat, you might have driven right into the path of a Mack truck. What does this tell us? Only, one might suppose, that we do not know the future. It doesn't change the fact that the only rational way to conduct one's affairs is to consider the odds: Children in automobile accidents are more likely to survive if they are strapped into a car seat. Therefore it is rational, not to mention morally obligatory, to do this for your child, even though it is within the realm of possibility that there will be a freak coincidence of circumstances, which converts your caring action into a contributing cause of the very catastrophe you were attempting to avert.

Only ... further reflection leads me to make a more bizarre inference. Put aside for the moment our epistemological situation and consider the metaphysics. Do you grant the following? Most accidents where there is a child passenger and an adult who has been responsible enough to purchase a car seat and secure the child into it will not be due to some such aggravating factor as the driver drunkenly weaving in and out of traffic or drag racing or the like. Rather, the scenario will more likely be one of encountering some other car which has such a driver, or of the first driver's doing something foolishly spontaneous, like miscalculating when the light was going to change, OR of his being momentarily distracted, as by the family dog wagging his tail in the driver's face at a bend in the road, etc. In sum, I assume that the typical accident involving a child in a car seat occurs because the car was in the wrong place at the wrong time. Accidents are the thing of a moment, and moments are conditioned as delicately as a house of cards.

But if that is so, then do we not arrive at a rather startling conclusion, namely, that it is not the freak coincidence, but in fact the norm that accidents involving a child secured into a car seat would not have happened at all if the child had not been secured into the car seat? The logic of my argument is that everything else would have remained the same ... ceteris paribus, to use a logician's term. And I think that is a reasonable assumption in most cases. For instance, your not taking an extra minute with the car seat (because you were rushed, say) would not in any way affect whether the driver of the Mack truck takes another drink, or runs the stop light, etc. So that truck would still be at the very spot it would otherwise have been had you taken the extra minute. Except that because you didn't, there would be no accident: Your car and the Mack truck would pass through the same space but at different times.

In other words, although your alternative behavior would indeed affect the whole universe given enough time, the vast majority of the universe would remain the same in the short term. It is like the ripples in a pond after you plunk the pebble in: They will eventually reach the far shore and make the frog croak, but at first a nearby fish will not even notice anything has happened. Just so, the fate of the Mack truck and its driver, and of all who would be affected by them in turn into the indefinitely far future, would not begin to alter until later, after the moment at which the accident would have occurred. Up until then, all else with the truck and driver would be identical, so the accident would not occur, provided you had been careless about the car seat.
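A toy timeline may make the ceteris paribus point concrete. In this Python sketch every number is invented; it illustrates the argument, not any real model of traffic:

TRUCK_AT_BEND = 31 * 60     # the truck reaches the bend 31 minutes from now (seconds)
DRIVE_TIME = 30 * 60        # your drive to that same bend takes 30 minutes
CAR_SEAT_MINUTE = 60        # the extra minute spent securing the child

def your_arrival(delay_seconds):
    # The truck's timetable is fixed by its own causes, not by anything
    # you do in your driveway; only your own arrival time varies.
    return delay_seconds + DRIVE_TIME

print(your_arrival(0))                 # 1800: you clear the bend a minute early
print(your_arrival(CAR_SEAT_MINUTE))   # 1860: the very instant the truck arrives
assert your_arrival(CAR_SEAT_MINUTE) == TRUCK_AT_BEND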

Singing the praises of car seats because your child's life has just been saved by one seems, therefore, as odd as extolling the virtues of kidnappers because your child has just been released by one. It is understandable, of course; there is a certain psycho-logic to it since your relief makes you feel grateful. But in strictly logical terms ... it ain't, is it?

Nonetheless, it is still true that it is rational (and, again, surely also ethical, even morally obligatory) to strap the child in. That is because the epistemology of the human condition leaves us with no rational option for deciding what to do other than relying on known, general probabilities. And in this case they presumably tell us that in otherwise matched populations, the one employing car seats will suffer fewer casualties. You simply cannot outwit Mother Nature on this one.

I conclude that ... life is absurd. (Although it is perhaps also absurd to employ logical argument to arrive at such a conclusion. But then ... life is absurd!) For the summation of the above is that it is rational to use a car seat for the safety of your child, even though on any actual occasion when the car seat shows its effectiveness for that purpose, it has likely also occasioned the risk to which your child has been exposed. In short, the car seat (in any given case but not in general) brings about the need for itself. It sounds like a marketer's dream ... or a metaphysical wizard's "perpetual justification engine" ... or the answer to a theologian's prayers for a Necessary Being ... but it is really a kind of joke, akin to: "Why am I hitting myself on the head with a hammer? Because it feels so good when I stop!" Also, this realization seems to have no practical import, and yet it changes everything, like a Gestalt shift (as from the contour of a vase to two facial profiles).

Sunday, April 21, 2013

Stop Think

by Joel Marks
Published in Philosophy Now magazine, issue no. 55, May/June 2006, page 38 

My stepson once gave me a book entitled Jewish as a Second Language (by Molly Katz). He need not have bothered because I was already fluent. Take the chapter on worrying: It explains that "Natural-born Jews leave the womb equipped with a worry reservoir that is filled early and replenished constantly. We worry about everything. ... It is our duty, our birthright, and our most profound satisfaction." I understand this implicitly. For those who are not thus genetically constituted, Katz offers the following practical advice: 

[S]imply make an enormous big deal out of some existing minor problem, such as: An ingrown toenail (it could get bad enough so you'd have to wear special shoes. But those wouldn't go with your business clothes, and you'd be fired for having a poor image. Then you'd lose your medical insurance, get blood poisoning, and die).

  I can add a suggestion: Become a philosopher. This is perfect training for worrying, except that we call it "reflecting." And, indeed, anything and everything is our oyster, er, ingrown toenail. The regular reader of this magazine, and of “Moral Moments” in particular, has perhaps already picked up on that. It's no joke, as I indicated in a previous column; when one worries as a matter of both personality and profession, it can become quite painful.

  Fortunately, there is an alternative method of philosophizing which is almost the exact opposite of worrying. It is so different, in fact, that many so-called WESTERN philosophers do not consider it to be philosophy at all. I am not one of those. For me philosophy is defined as much by its goals (understanding the nature of reality, learning how to live properly) as by its recommended methods of attaining them, so I can be catholic about the latter and consider even apparently antithetical approaches to be kosher.

  The alternative method to which I allude is variously named meditation, yoga, mysticism, or even prayer. The variety I happen to employ is MANTRA meditation. Although I first learned it from the TM organization, i.e., Maharishi Mahesh Yogi's "transcendental meditation," I have not retained any ties to that or any other organization or sect. Instead, I went on to study meditation as a component of Hinduism, Buddhism, and Taoism during my otherwise-analytical philosophical education in graduate school and beyond. Meditation has been for me, then, a kind of oriental philosophic cure for an occidental philosophic disease.

  The method is simplicity itself. You say a word (the MANTRA) -- for example, "Om" or "One" -- over and over in your mind. THAT'S IT. Well, of course that's not all there is to "it." It is infinitely subtle. But if you could really just do that, that WOULD be it. It is amazing how difficult it can be to do something so simple -- that is what you learn straight off. (Although when I say "difficult," I don't mean to imply onerous; MANTRA meditation can be surprisingly relaxing and pleasant, and is surely not boring.) You discover, for example, that your mind is full of junk -- mental chatter, mental clutter -- and it's all competing for your attention with that MANTRA. When that happens, here's the key to the whole thing: You bring your mind back to the MANTRA. But you don't yank it; you just withdraw attention from the distraction and return it to the MANTRA, "gently."

  I have done that in "formal" sessions of twenty minutes at least once a day for thirty years. What do I have to show for it? Until recently, I could not say for sure. But in the last few years, I have certainly experienced a boon: I am able to "detach" from thinking about things in the obsessive manner of my New York Jewish upbringing and Western philosophic training. By withdrawing my attention from the thoughts -- precisely analogously to the method of MANTRA meditation, or perhaps even instantiating it -- I can enervate them so that they cease to press on me.

  What takes the place of the MANTRA in this real-life application? Simply whatever there is to attend to. I take my cue here from the philosophy (or practice) of Zen, a distinctive derivation of Buddhism and Taoism. The essence of wisdom is that there is only the here and now; therefore this is what one should attend to. The present moment and place contain all that is necessary for life; be alert to them and you will know what to do and how to live. In this way the ever-thinking, ever-preoccupied mind is side-stepped, so that there ceases to be an intermediary between the self and the object perceived. It is like the difference between walking as we normally do, which is Zen, and trying to walk by thinking to oneself, “Well, first I should extend this leg, then put down this foot, etc.” This is why Zen is sometimes called the philosophy of “No Mind.” But it is also mindfulness, as when you “Mind the gap” in the London underground.

  How do I know that my temperamental achievement has resulted from meditation? And why now, after thirty years? Maybe I'm just getting older and wiser. But as I have related, this new mental ability seems to mimic the skill I rehearse in my meditative sessions. That it would take as long to "undo" a personality trait as to have acquired it should perhaps come as no surprise about human psychology. Probably, then, there has been a confluence of the two influences (practice and maturity).

  However it came about, what it boils down to is self-control. I now have the hang of holding the upper hand with my own mind. A life-transforming technique, which heretofore I could only endorse as an abstract proposition, is now something I can wield (albeit still imperfectly, to be sure). Thus, while I have been emphasizing NOT thinking about things -- an odd-seeming desire for a philosopher, who is supposed to value the "examined life" -- a personality different from mine might benefit from more thinking rather than less. For me it has been the refraining from thinking so much, or in a particular way, that is appealing, as an antidote to despair, which must be an occupational hazard of those who dwell on the human condition, including their own personal prospects for happiness. But the general point is that one ought to be able to direct one's mind to think or not to think about something, independently of one's tendencies: to become autonomous rather than automatic.



Friday, April 19, 2013

The Dancing Philosopher

by Joel Marks
Published in Philosophy Now, Issue No. 95, March/April 2013, p. 52

Every afternoon at the end of my work day I head out for a walk. The locals can set their clocks by this latter-day Immanuel Kant. Only when rain and cold and wind are absolutely wretched will this philosopher be kept from his appointed rounds. But on those occasions I substitute for my daily constitutional a bout of dancing in my living room to the sounds of music on Pandora. I’ve got a station selected for songs with a fast, heavy beat.

            Thus was I engaged one day when I realized something: I was a marionette. When I’m strutting and shaking and jumping and twisting in the throes of these sounds, it is not by any act of will. “Somebody else” is pulling the strings. Whether it’s Pat Benatar singing “Heartbreaker” or Billy Idol singing “White Wedding” or Steppenwolf playing “Magic Carpet Ride” or The Trammps playing “Disco Inferno,” my motions just happen in response. I would have to exert my will to stop them ... if I could. Similarly when I’m at a club. If the band begins to play rhythm and blues, or my stepson revs up his rock band, I simply cannot remain seated. Partner or no partner, I’m up on the dance floor; and you’d have to drag me off if the band were still playing.

So much for the idea that free will is something we feel. The only way I could accurately describe my feelings and consequent behaviors in these situations is that they are compelled by an outside force. Yet surely my dancing is an expression of me in the purest form. If this is not me acting freely, then what is? Would only my resistance count as truly free? Or my forcing myself to dance if I did not feel like it? My fellow walker (but presumably not dancer!) Kant might have thought so. He wrote, “suppose that, even though no inclination moves him any longer, he nevertheless tears himself from this deadly insensibility and performs the action without any inclination at all, but solely from duty – then for the first time his action has genuine moral worth” (from the First Section of his Grounding for the Metaphysics of Morals). Moral worth, for Kant, derives from acting freely (in accordance with the categorical imperative), but presumably my dancing would count only as acting from “inclination.”

This is not the first time I have noted my own roboticness in this column. In issue no. 77 I reported on my discovery at the kitchen sink. In that case my behavior was the result of thought processes; I was washing the breakfast dishes because I realized that they would just get in the way if I left them unwashed in the sink and furthermore become more difficult to clean as the dirt hardened and the dishes piled up, and I didn’t want any of that to happen. It required self-awareness and inference to figure out that what I was doing was therefore not something I had initiated de novo but rather the result of an ultimately billions-of-years-long chain of causes and effects.

In the present case, quite differently, the realization of roboticness was direct: It just felt that way. And that is because I did not have to become aware of what I was thinking in order to link my circumstances to my behavior. The “circumstances” were simply the music, which caused my dancing. Or even more graphically, the cause was a certain pattern of airwaves hitting my inner ear, and the effect was my body jerking around. The whole event was as physical as a hammer hitting a nail, or as if there really were strings attached to my body being pulled by a very strong puppeteer in the rafters. How could I miss that?

            Meanwhile it is child’s play – or more literally I should say oldster’s amusement, for experience helps – to pick out the automatic behavior of others. At my ripening age it has become downright tedious to observe the completely predictable behavior of people I know, people I read about in the news, as well as of political parties, nation-states, and other groupings of human beings. We are all marching to the beat of some drummer or other, and often the same one. This also makes us liable to manipulation by those who figure out the best beats and strike their drum accordingly. In the literal case of the dance music I like, it’s great to be manipulated in this way. But I, like all of us, have also been the victim countless times of drummers and string-pullers who used their implicit or explicit knowledge of my inner workings to gain some advantage over me. (Although they may not have understood at all what was making them do that.)      

But no matter which way the determinism reveals itself, it is a fact. And it is a fact which fascinates me. Really, what could be more amazing than realizing that one is an automaton? It has a definite science-fiction aura to it, like realizing you are a replicant in Blade Runner, or an alien pod in Invasion of the Body Snatchers. But this is reality, backed up by both science and philosophic reflection. I have long marveled at the implications. And more recently, with these mundane recognitions of my own determinism, I have taken delight in cultivating and compiling a phenomenology of determinism. What is it like to be an android? This is a question anyone can answer on one’s own: Just know thyself.

The Sleeper Wakes

by Joel Marks
Published in Philosophy Now, Issue No. 89, March/April 2012, p. 52

Now I lay me down to sleep,
I pray the Lord my soul to keep,
If I shall die before I wake,
I pray the Lord my soul to take.

Derek Parfit’s discussion of personal identity in his 1984 book Reasons and Persons is a timeless challenge to our deepest intuitions about who and what we are or even whether we are, that is, exist. Although his treatment of it was novel, the thesis is hardly new. Parfit himself realized its relation to Buddhism, drawing parallels in his last appendix; and in another famous appendix (of his Treatise) Hume dabbled with a similar notion. I have also written about the problem in this column (Issue no. 74) as well as in a science-fiction (or philosophy-fiction) story called “Teleporter on Trial” published in SciFiDimensions.

            My own intuition has been quite clear to me but also perplexing, and in both senses of the latter. Thus, suppose you enter a presumed teleporter and are beamed to Mars. In what seems to me the most likely scenario, only the information about you will be transmitted, since sending an electromagnetic signal is far more efficient and swift than transporting your entire body. So on Mars a brand new body, and in particular brain, will be shaped according to the blueprint of that information; and out of the transmission receiver will walk a person who is in every respect identical to the one who walked into the transmitter on Earth, including in his own mind. The person will believe he or she is you, no doubt about it.

            My feeling, however, is that he or she is not you at all but only an exact replica. I won’t repeat all of the arguments but only say why this is perplexing in two ways. First, it is puzzling: This is because we are left to wonder what you (or the I or self) could be, such that your existence depends on the existence of your body or brain and not on its blueprint. After all, even the existence of your body and brain is problematic; that is to say, in what sense is your body the same body over time, given that all of its component cells are replaced every so many years? (It is not clear to what degree this is true of the brain, but even here it seems plausible to imagine that we could replace your entire brain, cell by cell, if the technology were available to do so, while leaving it essentially the same brain.) Second, it is worrisome: This is because the implication is that instead of your having been teleported from Earth to Mars, if we simply disposed of your remaining body on Earth we would in fact be killing you (while bringing a new person into being on Mars).

            I am writing about these things again because all of a sudden I am possessed of a new intuition. It is common to take waking up from deep sleep as the archetype of continuing to exist as oneself. Even though it can seem puzzling that one is still oneself despite an apparent hiatus in consciousness, who would seriously doubt it? Or put it this way: if one did doubt it, then one would be close to doubting the very notion of a continuing self, which is pretty much the same as doubting the existence of the self altogether. For it hardly conforms to our conception of being so-and-so that we exist only for a single day (unless one is a mayfly).

Indeed, if one doubts that one is the same person upon awakening as the person who went to bed the night before, one could begin to doubt that one is the same person now as the person who began to read this article, and so on to the duration of a mere moment. For what do you really know about your own continuity? Right now you recollect that you have continued in existence since reading “Right now you recollect ….” But would this not also be the case if there were a sequence of selves or “you”s, each of which duplicated the mental content of the one immediately preceding it?

I won’t push that particular line of argument because, Parfit-like, I am more interested in implications for what matters than about the ultimate metaphysics. So return to the sleeping/waking case: There is a gap in your consciousness, in your clear sense of yourself existing through time, yet upon arising you (in a few moments if not at once) “collect yourself” back (?) into being. Is this really the same you? Up until now I have considered this not only obvious but the most important fact in the world. One very real application would be as I related in my phi-fi story about teleporters: Any time a person entered one of these contraptions, he would be about to die.

But now for the first time I question that, or anyway that it matters. Instead of entering the teleporter, just put your head down on your pillow tonight. Tomorrow morning someone will awaken on that pillow, believing he or she is you. No one else will have a clue that there might be anything different either. I now ponder and wonder and marvel: What else could matter? Whatever the metaphysics of the situation, if these empirical facts are the case, then could anyone, including the person who went to bed the night before, complain of some loss? Suddenly I am at a loss … to see what has been lost.

Joel Marks is Professor Emeritus of Philosophy at the University of New Haven and a Bioethics Center Scholar at Yale University. He would like to thank Chris Bateman of International Hobo for re-sparking his interest in Parfit, and acknowledge the aid of Thomas Metzinger’s The Ego Tunnel (Basic Books, 2009) in further cutting the cord to himself.

Thursday, August 16, 2012

“A” Is for “Assumption” or Why the World Needs Philosophy

by Joel Marks

Published in issue no. 90 of Philosophy Now, May/June 2012, pp. 52-53

Socrates famously averred that the unexamined life is not worth living. This was part of his “apology” when he was on trial for his life as he tried to explain what it means to be a philosopher. I myself have taken this to heart as a definition: Philosophy is the examination of fundamental assumptions. It occurred to me the other day that I have been putting this conception into practice with a vengeance of late – not meaning to do so as philosophical exercises, mind you, but quite spontaneously as a natural-born philosopher. So perhaps it will help my readers to understand what I have been about in these columns if I review my recent philosophical hobbyhorses in this light. As it happens, like “assumption” (and, for that matter, “apology”), all of them begin with “a”: animals (issues 62, 66, 67, 72, and 85), asteroids (issues 79 and 86), and amorality (issues 80, 81, 82, 84, and 87). Herewith the common thread of my discourses on the lot.

Animals. Human beings treat other animals abominably. (“A” is for “abominably”!) There are some exceptions, such as, in some cultures, pets; but even pets represent an offense against free-living animals in their natural habitats, who have been deliberately bred into dependency and hence dumbed-down as well. And almost all pets are denied the freedom to roam, whether by foot, feather, or fin; instead they are confined to a building or the end of a leash, or kept on display in a cage or a bowl. The condition of the vast majority of nonhuman animals, however, is without even the compensations that may attach to being a pet. Animals in the wild are trapped for their skins or hunted down for pure sport. Animals in captivity (other than pets) are turned into egg or milk machines, or fattened for direct human consumption, or consigned to laboratories for testing and vivisection. All in all, it is not good to be a nonhuman animal in a world controlled by human animals.

 However, many human beings are sensitive to one or another aspect of our “inhumanity” to other animals and therefore strive to better their lot. Thus have arisen numerous societies for the prevention of cruelty to other animals and, more generally, for the promotion of their welfare. One would think, then, that all animal advocates would be “welfarists.” But this is not the case. Why not? Because welfarism is based on an assumption which, if examined, proves untenable … or at least questionable. The assumption is that it is all right to use other animals so long as we do so with an eye to their welfare. Or to put it epigrammatically: It is OK to use animals so long as we do not abuse them.

 But this assumption may be unwarranted. The reason is that use and abuse, while indeed distinct concepts, may only differ in reality under certain conditions, and those conditions may not obtain for other animals. One argument goes like this: So long as x is at an extreme power disadvantage to y, any use of x by y will inevitably deteriorate into abuse. Well, clearly, under present circumstances all other animals are virtually powerless relative to human beings; therefore just about any use we make of them leads inexorably to their abuse. And is this not precisely the situation we observe?

 This is why there has arisen in opposition to welfarism the movement known as (“a” is for) abolitionism, which seeks to abolish all institutions of animal use. Thus, there would be no animal agriculture, no hunting (other than for real need), no animal circuses, no zoos, no pets. The breeding of domestic animals would end, and the preservation of wild habitats be maximized. Abolitionists further maintain that the emphasis on animal welfare actually serves to encourage animal use, since if people believe that the animals they use are being well taken care of, they will lose their main incentive for discontinuing that use; and hence, by the argument above, animal welfarism further entrenches animal abuse, and so is counterproductive even to welfare in the long run. Here again the evidence seems to be in plain sight: For all the growth of welfare organizations – and just about every major animal protection organization is a welfare, as opposed to an abolition, organization – the abuse of animals has only increased and shows no sign even of decelerating. For reasons such as these I have allied myself with abolitionists like Lee Hall and Gary Francione.

Asteroids. Here I have cheated a little bit because (“c” is for) comets are also a major concern. But due to their overwhelming numbers in our vicinity at present, asteroids have taken the lead in the public imagination as a threat to humanity. The more one learns about their potential to do us grave harm should we ever again collide with one of Manhattan-size or larger, the more one finds oneself tossing and turning in bed at night. These bodies number in the thousands up to the trillions, depending on size and distance considered; and the inevitability of another good-sized one striking our planet – unless we prevent it – is denied by no one. Indeed, no one denies that an object the size of the one that wiped out the dinosaurs, and that would wipe out human civilization, will one day bear down upon us. Furthermore, it is now a common occurrence to discover asteroids that are large enough to wreak havoc if they impacted us and that do in fact make a close approach to our planet, such as 2005 YU55, which came closer than the Moon last November 8 (2011), and 99942 Apophis, which will come even closer on April 13, 2029.

 Thus have arisen Spaceguard and other programs, whose mission is to detect all such hazards and devise and implement mitigating strategies. It is not easy, however, to deflect an incoming object of human-extinction size, which would be 10km or larger. Fortunately, as one hears with regularity from the scientists who inform the public on this matter, objects of that size likely to come into Earth’s immediate vicinity are exceedingly rare. In fact there is a power law of size relative to quantity, such that the larger the object, the fewer there are. Therefore, given limited resources, the present de facto policy is to focus on detecting mid-size NEOs (Near-Earth Objects) – ones that could, say, wipe out a city -- and designing and testing means of deflecting them.

 Alas, this seemingly sensible and rational policy is based on an assumption that will not withstand critical scrutiny. The assumption is that the relatively small number of the relatively large objects makes it unlikely that we will be impacted by one any time soon. But this is fallacious. The reason is that these events occur with total randomness. Therefore an extinction-size object could appear on the horizon at any time. The statistics only tell us that this will occur sooner or later, but they do not tell us when. One takes false comfort in their relative rarity in the recent historical record.

 Indeed, this way leads to absurdity. For suppose there were insufficient reason to begin to prepare to prevent (“a” is for) Armageddon by asteroid or comet this year because of the exceedingly low statistical probability of such an occurrence. Therefore there would never be a time when there is sufficient reason to prepare for it, since the statistical probability remains constant (at least until Armageddon occurs … but possibly even then!). But Armageddon will occur unless we prevent it. Therefore it is rational to allow Armageddon to occur. But it is not rational to allow Armageddon to occur. Therefore it is false that there is insufficient reason to begin to prepare to prevent Armageddon by asteroid or comet this year just because of its exceedingly low statistical probability.
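 The point about constant probability can be put numerically. In the Python sketch below the annual figure is purely hypothetical, chosen only to display the shape of the argument, not as any real estimate:

p = 1e-7    # hypothetical chance of an extinction-size impact in any given year

def chance_within(years, annual_p=p):
    # Probability of at least one impact within the horizon, treating each
    # year as an independent trial with the same small probability.
    return 1 - (1 - annual_p) ** years

for horizon in (1, 1_000, 1_000_000, 100_000_000):
    print(horizon, chance_within(horizon))

# The per-year figure never changes, so no particular year supplies
# "sufficient reason" to act; yet the cumulative chance climbs toward 1.
# The statistics say "sooner or later"; they never say "not now".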

 Thus, just as animal protection based on the fallacious policy of welfarism serves to the detriment of animal protection, planetary defense based on the fallacious policy of mid-size impact mitigation serves to the detriment of planetary defense.

Amorality. It was only after I had finished writing the culminating monograph of my career as a so-called normative ethicist that I realized that both the monograph and my career had been based on an assumption that could be seriously questioned, namely, that morality exists. The case against morality is known in the literature of meta-ethics as the argument to the best explanation. Simply stated it is the claim that all moral phenomena, including our occasional tendency to altruism and our beliefs in moral obligation, moral guilt, moral desert, and the like, can plausibly be accounted for by our evolutionary and cultural story (or stories), without need to postulate any actual moral obligation, moral guilt, moral desert, and the like. Thus, morality turns out to be like religion, or theism in particular, in that the more plausible explanation of our belief in God, etc., is that such a belief has served to help us survive rather than that there actually is a God.

 Now this may seem to lead to the conclusion that we are in the peculiar position of needing to cling to a delusion. However, some few of us (including most explicitly at present Richard Garner and myself) maintain that the time is now ripe to expose morality for what it is – an illusion – and thence to eliminate it from our lives. The argument is an empirical one: in a nutshell, that a world without the felt-absolutism and felt-certainty of moral convictions would be less violent, less hypocritical, less egotistical, less fanatical and so forth than our present, moralistic world is, and therefore we would prefer it. Garner makes the case at length in his Beyond Morality (now online in a revised version), and I in my Ethics without Morals (forthcoming from Routledge). (Note: My personal story of “counter-conversion” to amorality is told in Bad Faith: A Philosophical Memoir, which I shall perhaps one day post on the Internet.)

 And observe that this claim is analogous to the two other claims discussed above. For just as animal protection based on the fallacious policy of welfarism serves to the detriment of animal protection, and planetary defense based on the fallacious policy of mid-size impact mitigation serves to the detriment of planetary defense, so, moral abolitionists (not to be confused with animal-use abolitionists, although I happen to be both) argue, an ethics based on morality is both fallacious and self-defeating. The fallacy of morality is the assumption that the strength of our moral convictions (or “intuitions”) warrants our belief in their truth. The self-defeatingness of morality is that a moralist world is (today if not heretofore) more likely to be discordant with our considered desires than an amoralist world.

Assumptions. Thus my catalogue of dangerous assumptions that license (1) the ever-increasing exploitation and slaughter of nonhuman animals by the tens and hundreds of billions, (2) the exposure of humanity to extinction by asteroidal or cometary impact (maybe not a bad deal for some of the animals, though), and (3) the excessively judgmental and even lethal imposition of our preferences on one another. My aim has been to illustrate the utility of philosophy as the critical examiner of our most fundamental and pervasive – and hence, most likely to be mischievous – assumptions. By a curious but inevitable logic, the foundations of our beliefs are the shakiest part of the whole edifice of our knowledge, precisely because they are the most taken for granted – positively buried in the underground of our psyche. Philosophy brings them into the light of day for inspection and possible repair or, if they prove too rotted out, condemnation of the whole structure that has rested upon them.

 I must admit, (“a” is for) alas, that my own philosophical efforts to date have little to show by way of liberating animals, saving humanity, or making society less violent and antagonistic. But perhaps I can at least be given an “A” for effort.

Monday, February 06, 2012

Intellectual Pleasures

by Joel Marks
Published in Reflections (University of New Haven), no. 5, Spring 1989, pp. 1-3.

Human beings have faculties more elevated than the animal appetites and, when once made conscious of them, do not regard anything as happiness which does not include their gratification. -- John Stuart Mill, Utilitarianism

I like to parse arguments. I love to parse arguments. Give me a passage of text which is intended to persuade, and I will apply my powers of analysis to make its premises and conclusion explicit. Even if the argument seems clear to begin with, and is beautifully articulated, I derive pleasure from putting it into this dry mold: "A therefore B" (or "B because A").

For example:

A story (perhaps apocryphal) is told that Abraham Lincoln was once trying to convince a friend that all [people] were prompted by selfishness in doing good. As the coach in which they were riding crossed over a bridge, they saw an old razor backed sow on the bank making a terrible noise because her pigs had fallen into the water and were in danger of drowning. Mr. Lincoln asked the driver to stop, lifted the pigs out of the water, and placed them on the bank. When he returned, his companion remarked, "Now Abe, where does selfishness come in on this little episode?" "Why, bless your soul, Ed, that was the very essence of selfishness. I should have had no peace of mind all day had I gone on and left that suffering old sow worrying over those pigs." (Taken from C.E. Harris, Jr., Applying Moral Theories [Belmont, CA: Wadsworth, 1986], p. 62)

The argument reduces to: "I would have been upset not to do what I did; therefore I did it for selfish reasons."

A rather arid exercise, one might suppose. For me, a sip of pleasure. What exactly is it that I enjoy in this mental activity? Well, there is analysis: getting to the heart of something. I also like to express an idea in its most exact and explicit form. Precision and absence of ambiguity are here the paramount concerns; there is a kind of beauty in this, I find. And then, as well, an enhanced understanding can result, which is intrinsically valuable and satisfying.

I can also state something that is not the explanation of my love of parsing: I do not love it because it is useful. Don't get me wrong: It is useful. It is one of the most useful things in the world! The ability to clarify an argument is an antidote to muddle-headed thinking, of which there is a great deal and which causes much woe. Take a look again at the Lincoln argument. It is so convincing in its narrative form, yet it invites critical analysis in its parsed form. As C.E. Harris points out, the conclusion does not follow from the premise. The fact that one is upset does not tell us anything about the nature of what is causing the upset; but the selfishness or unselfishness of one's motives or reasons depends completely on the nature of what is causing the upset. In the Lincoln case, the cause of the upset is the suffering of the old sow; this determines that Lincoln's motives were unselfish after all.
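
 For readers who share the taste, the parsing exercise can even be cast in code. The following is a toy sketch in Python, entirely an illustrative construction of my own rather than any standard tool: it renders an argument as explicit premises plus a conclusion -- the dry mold described above -- so that the gap between them becomes visible.

from dataclasses import dataclass

# A toy structure (hypothetical, for illustration only): an argument reduced
# to its premises and conclusion, in the spirit of "A therefore B".
@dataclass
class Argument:
    premises: list[str]
    conclusion: str

    def parsed(self) -> str:
        return "; ".join(self.premises) + f"; therefore {self.conclusion}"

lincoln = Argument(
    premises=["I would have been upset not to do what I did"],
    conclusion="I did it for selfish reasons",
)
print(lincoln.parsed())

# Harris's critique, in this mold: the premise says nothing about what causes
# the upset (here, the sow's suffering), yet the selfishness of the motive
# depends entirely on that cause -- so the conclusion does not follow.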

It is my belief that great chunks of scientific psychology and economics, which generally conceive human beings as fundamentally self-interested, rely on the sort of mistaken analysis Lincoln made.1 Nonetheless, I repeat, it is not the usefulness of analysis that explains my special fondness for it. I parse for its own sake. I would pay money to be able to parse arguments. The point I want to stress is that there is a pleasure to be had here. It is one of a set -- a vast set -- of possible intellectual and other cultural pleasures (and of the good kind) that help set human beings apart from other animals.2

So, Julie Andrews, the next time you sing, "These Are a Few of My Favorite Things," take note: Parsing may be one of them!


NOTES


1 Or is purported to have made; I rather think Lincoln was arguing tongue-in-cheek, in an effort at modesty, if this episode occurred at all.
2 Not that I have any disrespect for other animals; but, for better or worse, human fulfillment appears to lie in different directions from theirs.

The Discovery of the Opponym

by Joel Marks
Published in Reflections (University of New Haven), no. 16, Fall 1994, pp. 1-2.

As a wordsmith, I spend a lot of time trying to find that mot juste. (I hope "mot juste" is the mot juste in this case!) It is not always easy to say what you mean -- you know what I mean? The writer or speaker must not only understand the standard definitions of words, but also their special usages in various contexts -- with different audiences, on different occasions, etc. Tone of voice or surrounding sentences can also alter meaning. Ambiguity is ever-present. But of all the linguistic stumbling blocks to comprehension I know of, the most bedeviling is a type of word that has the amazing characteristic of meaning opposite things!

Now, it is certainly not unusual for a word to have multiple meanings. Indeed, this is probably the norm rather than the exception (just as the typical star shines not singly, like our solitary Sol, but as part of a binary system). And this phenomenon blends into another where the same spelling and pronunciation are used for what are considered different words -- so-called "homonyms." It is also not unusual for different words to have opposite meanings -- hence "antonyms." And when they are closely paired to form a phrase, we call the result an "oxymoron" (e.g., "cruelly kind").

But what I have in mind is a sort of one-word oxymoron, or one word that does the work of two antonyms. Alternatively, the situation could be conceived as involving word pairs, which would then be homonymous antonyms, or antonymous homonyms. Furthermore, there seems heretofore to have been no word for this sort of word. I have therefore dubbed it the "opponym."

Herewith follows my personal collection of opponyms, compiled over the years while I was writing about weightier matters.


A Glossary of Opponyms*


argue [transitive verb]: to give reasons for (He argued the point); to give reasons against (She declined to argue the point).

besides: except for (Besides money, we lack for nothing); in addition to (Besides our health, we're fortunate to be rich).

blunt: dull; pointed (blunt remarks).

bracket: include (These figures bracket the whole range); exclude (Let's bracket that issue for now).

cleave: divide (May nothing cleave these newlyweds asunder); adhere (May they cleave unto each other).

confirm: request or receive substantiation (I wish to confirm that the hoped-for event did indeed occur); provide substantiation (ditto!).

consult: to seek advice (She went to the lawyer to consult regarding her upcoming divorce); to give advice (However, the lawyer, who specializes in taxation, was not competent to consult on this matter).

discern: "to detect with the eyes"; "to detect with senses other than vision."

discursive: "moving from topic to topic without order; proceeding coherently from topic to topic."

dust: "to make free of dust"; "to sprinkle with fine particles."

easterly (etc.): from the east; toward the east.

enjoin: command to do; prohibit from doing.

flesh: to cover with flesh; to remove the flesh from.

founder: [noun] one who provides a basis or foundation for existence; [verb] to sink below the surface and cease to exist.

franchiser: "franchisee; franchisor."

guard: to protect from harm or invasion; to prevent from escaping to freedom.

handicap: a natural disadvantage; an artificial advantage.

impression: a vivid imprint; a vague remembrance.

liege: "a vassal bound to feudal service and allegiance; a feudal superior to whom allegiance and service are due."

modify: "to make minor changes in; to make basic or fundamental changes in."

moot: debatable; no longer worth debating.

oversight: watchful care; a failure of same.

paradox: a seeming truth that is self-contradictory; a seeming contradiction that is (perhaps) true.

pride: "inordinate self-esteem"; "reasonable self-respect."

protest: "to make solemn affirmation of" (protest one's innocence); "to make a statement in objection to."

purblind: “wholly blind”; “partly blind” (i.e., not wholly blind).

qualification: something that suits a person (etc.) to a job (etc.); something that limits one's suitability.

sanction [noun]: a penalty for violating a law; official permission.

temper [noun]: "equanimity; proneness to anger." (One loses one’s temper in the sense of equanimity; one has a temper in the sense of proneness to losing it [in the first sense]!)

temper [verb]: "to soften (hardened steel) by reheating at a lower temperature; to harden (steel) by reheating and cooling in oil."

threaten: One and the same event may threaten [to bring about] war and [to eliminate] peace.

trim: remove from; add to (both with respect to trees).

* Quoted definitions are from Webster's Ninth New Collegiate Dictionary (Springfield, MA: Merriam-Webster Inc., 1985).

Sunday, June 19, 2011

Science and Philosophy: Vive la Différence!

by Joel Marks
Originally published in Philosophy Now magazine, no. 33, September/October 2001, page 31

The work of the philosopher consists in assembling reminders for a particular purpose.
- Wittgenstein, Philosophical Investigations (3rd ed., tr. G.E.M. Anscombe), item #127

... seeking and learning are in fact nothing but recollection.
- Socrates in the Meno (tr. W.K.C. Guthrie), 81d

Every once in a while I wonder what I am doing. As a philosopher, that is. I do what I consider to be philosophy as naturally as I breathe, and about as often. I philosophize when I work at my computer, as I am doing now; but I also do it when I am walking, or driving, or swimming, or talking. Sometimes I catch myself: "Shouldn't I be paying full attention to what I am doing and not always cogitating in this way? Shouldn't I be more Zen?" But, then, you see, in asking that question I am philosophizing again!

Maybe it's a sickness. There are all sorts of ways to characterize this thing called philosophy, and to dispute those characterizations. Sometimes the activity of trying to establish what philosophy is, is called "metaphilosophy"; for this is itself a philosophical question. The funny thing is, you can be doing something every minute of your life and still not know what you are doing. But that strikes me as a clue. Therefore, without further ado, let me propose a metaphilosophical thesis that may help to clarify what I am doing right now!

I would say that philosophy is pretty much in the same business as what we today in English refer to as science: Both are methods of inquiry that involve reasoning and empirical confirmation. There is of course much to discuss about what all of that means, and furthermore to distinguish these two from yet other realms, such as religion. But in the short space available to me here I would like to highlight what I think is the chief distinction between philosophy and science, for it strikes me as being very much at the heart of what I do on a day-to-day basis. Furthermore, I believe the distinction matters, since nowadays many people who are not professional philosophers, which includes most scientists, tend to take a dim view of philosophy precisely because they see it as superfluous to science.

My hypothesis is that science undertakes to generate new data to test its hypotheses, while philosophy, by and large, is content to test its hypotheses against already existing data.*

Suppose one wished to investigate the hypothesis that women are more emotional than men. A scientist might then construct an experimental situation intended to arouse an emotion -- for example, showing disturbing photographs or videotapes, or even perpetrating a phony scene, such as introducing a rude actor -- and then measure the resultant responses among the experimental subjects, such as having them fill out a questionnaire or having trained observers count certain behaviors elicited from the subjects by the situation. Alternatively, and more naturalistically, trained interviewers could be sent out into the community to question randomly chosen respondents about episodes in their life. Finally, all data would be tabulated and statistically analyzed to yield possibly significant conclusions.

But a philosopher would approach the same hypothesis quite differently. He or she might simply think about the matter. A typical line of thought could go as follows: "There is a stereotypical assumption that women are more emotional than men. And suppose it is even true that women -- at least in some societies -- tend to, say, cry more than men. Does it follow that they are more emotional, even in those societies? Well, no. For example, it is equally noted -- again, however truly or falsely -- that men tend to get angry more (as well as more angry) than women. But is not anger an emotion, just as much as the sadness or anxiety that prompts crying? So there seems to be some kind of prejudice in the stereotype that is biasing the very notion of emotion. Hmm." And so on ... without end, really.

The philosopher could as well discuss this issue with others, to discover other ideas and observations which happened not to occur to him or her up until that point, but which are, nonetheless, commonplaces that anyone can confirm. Literature and biography are other sources of "data." And even the scientific literature is fair hunting ground, for once something has become a so-called established or scientific fact (putting aside for now that these too can be questioned), then it is considered to be "known," and so is on a par -- from the standpoint of my proposed distinction -- with phenomena that can be ascertained by personal observation, introspection, discussion, reading, and the like.

This is why philosophy may give the impression of being nonempirical, but that is only because it is concerned primarily with "re-collecting" or "assembling reminders" (à la the epigraphs) from which to draw implications or "put 2 and 2 together," not because its subject matter is purely mental. Science has become a legitimate offshoot of philosophy because the means of testing hypotheses have become more technical and elaborate; and hence also, in dialectic with the development of experimental instruments, techniques, and analyses, the hypotheses being proposed have themselves become more technical. Nonetheless, there remain significant hypotheses about the nature of reality which call for careful and sometimes prolonged reflection, yet do not require, indeed would not be elucidated by, experimental or analogously regimented investigation.

Furthermore, science cannot possibly put philosophy out of business, but can only help to expand its inventory. The appearance of a "new fact" in the world of human knowledge is but the beginning of an understanding of that fact. Any fact is related to other facts, perhaps ultimately to all others. It is these relations that philosophy is committed to explore.

NOTE
* Again cf. Wittgenstein: “One might also give the name ‘philosophy’ to what is possible before all new discoveries and inventions” (item #126, ibid.).

Monday, May 10, 2010

Seany Time

By Joel Marks

Sometimes we wonder how we became the way we are. Was it nature or nurture that made me come to love the same music my mother loved? Now that I've had the experience of raising a child of my own – that is to say, my stepson -- some of the mystery has been resolved for me. Sean’s musical talent probably came from his parents, but at least some of his taste will have come from me.

I think it began when my mother died. I retrieved my LP collection and old KLH phonograph that had been stored at home. The KLH was perfect because it did not skip no matter how much you bounced. And bounce we did, Sean, since your introduction to art music was as an athletic activity. I knew you would want to be moving and not just sitting and listening.

Do you remember? -- Every weekday night, after dinner and doing the dishes, it was "Seany Time." These were some of the most blessed moments of my life. Do you know which one was the best for me? -- when you wanted me to carry you in my outstretched arms while we listened to your "flying music." (This was "Something's Coming" from West Side Story.) You were stretched out like Superboy. We “flew” together around and around the living room, looking down at the grain of the carpet as if it were trees or clouds far far below. Sean, you were totally into that music, and so was I. We shared the rhythm of it, the imagination. We were one!

The music you especially liked was music we could run to ... the more frenetic the better. Remember how we would chase each other from the living room to the dining room and through the kitchen and up the landing and back into the living room, faster and faster? That was Prokofiev, either the frenzied first movement of the Third Piano Sonata or the magnificent first movement of the First Piano Concerto. Beethoven's Fifth Symphony (first movement) was another favorite.

You also had a deep and mysterious side. How many times we sat inside the "boat" you made of the couch and pillows, lights off in the room, peering out into the gloomy darkness (sometimes with a flashlight), waiting for the appearance of the sea monsters. (That was Bartok's Music for Strings, Percussion, and Celesta.) Then they would suddenly attack while we tried desperately to fight them off. Other times it would be the approach of a mighty typhoon, roiling the waters, clashing thunder, forcing us to hold on for dear life. (That was Liszt's Totentanz.)

You showed talent on your mother's (actually, her mother’s) somewhat-the-worse-for-wear piano at a very early age. Therefore when my mother died I decided to have her Steinway grand moved to our place. She had been a composer and a pianist. Her spirit seemed at once to enter into you. From upstairs I would hear you picking out tunes by ear and creating your own. "This is a miracle," I would think, holding back the tears. I was determined to have you begin lessons with the perfect teacher I knew. Sean, I'm so proud of you: You've kept it up.

I was also eager to bring you to concerts. You became acquainted with the extraordinary musical resources of nearby Yale University (as my mother used to take me to Carnegie Hall and Lincoln Center). You first heard Amahl and the Night Visitors there, and Peter and the Wolf (that night the high point for you was after the concert when you got to swing on a tire hanging from a tree in the college courtyard).

And do you remember the Yale student at the "Speed and Fear" concert who played the thrilling third movement of Prokofiev's Seventh Piano Sonata, then jumped on to the piano bench and then into the audience right next to you?

We went to several concerts in the large Woolsey Hall, where we would sometimes sit in the very last row in the upper balcony, and other times in the very first row of the orchestra. Do you remember when the incredibly fat lady was playing the breathtaking cadenza of Rachmaninoff's Third Piano Concerto … and we had to slink out right beneath the piano and then up the aisle, facing hundreds of people in the audience … because you had to go to the bathroom!

(Woolsey Hall is also where I took you some evenings to rollerblade in the courtyard with the Yalies. Always music went with moving for you, Sean.)

So when you are an adult, if you find yourself attracted to “classical” music of a certain sort, this is probably why. This is exactly how it began, Sean. I know … I was there.

Saturday, May 01, 2010

A Funny Thing about Consciousness

by Joel Marks

Published in Philosophy Now, issue No. 44, January/February 2004

My local newspaper recently switched from a black-and-white format to color for the daily comics. That is quite an innovation. For my whole half-century-plus of existence, color had been reserved for the Sunday comics section. For a child and, generalizing from personal experience, for many an habituated adult, seeing that journalistic tome wrapped in tinted drawings has been a weekly source of delight. So some marketing genius has now got the idea to spread the joy to the six other days.

I am not amused. Call me a curmudgeon who is set in his ways, but I see no good in the change. The appeal of the daily comics was quite apart from their artwork; instead they -- the good ones -- gave us a daily dose of wit. We are used to seeing through them, as it were; there are characters, jokes, ideas. Now our attention has been drawn to their superficial aspect, and there they are found lacking.

But it hit me with a jolt the other day that a deep metaphysical significance might also be intimated. The materialist project, according to which we are nothing but physical objects of a certain sort, maintains that we can do quite well without consciousness, thank you very much. So wouldn't that suggest that consciousness is just like that superfluous, indeed officious, color that has now been imposed on the funny pages?

I'll take that more slowly. It is obvious that we are physical beings. But are we also more than that? As I related in an earlier column about the psychologists James J. and Eleanor J. Gibson, my own "conversion" into a philosopher came about when I discovered consciousness. I honestly do not know how many of my readers know what I am talking about (and that is relevant to my theme), but simply put I am referring to what I experience when I enter a dark room and turn on the lights. Contrast that to what we would normally imagine to be the interior life of a robot: It could come into a dark room and flick the light switch, then light would fill the room; but there would be no corresponding filling of the robot's being with light. Indeed, there is no darkness in the robot's being either: It is simply not conscious.

When I speak of the light and darkness of interior being, I am speaking metaphorically. But the materialist would caution that I am in danger of taking the metaphor literally, of believing that there is something in existence that is not physical light or darkness, and yet which is not just brain cells either. To me (at least the me who was a nascent philosopher) it was obvious -- really, the most obvious fact in my new philosophical world, and the most marvelous -- that the light of consciousness was neither physical light nor neural matter. After all, I could sometimes still experience it in a darkened room or with my eyes shut (as when dreaming of a lighted room), and there was nothing corresponding to it in appearance beneath my skull (nobody peering inside would have seen a light shining in there ... unless they used a flashlight [Cf. instant water: Just add water]!).

Now, a wizened if not yet wise philosopher, I see so clearly how question-begging that argument is. If the light and darkness, and the interior being itself, are all metaphorical, then their literalness could be anything: even dull grey brain cells!

This business about the new color comics only brings home the point. You see, life went on merrily enough without that color. Indeed, I have suggested that the color is a nuisance, a distraction. Similarly might we not suppose that a robot or android could go about its tasks without a hint of "light" or consciousness? If so, it seems a small stretch to suggest that we ourselves could do so ... in fact, do do so, until some philosophical bozo (or impressionist painter?) happens upon this phantasmic bauble and becomes bedazzled by it.

Furthermore, do we not positively trip over our own feet when we do become aware of consciousness? Who will be the better dancer: the one who moves, or the one who thinks about the moves? Isn't this the meaning of: "You can't learn to ski from a book"? Isn't this what "The Zen of ..." is all about? Become the bow, become the arrow. Do, be, don't think. Do-be-do-be-do.

This is also just what J.J. Gibson said. The title of one of his books -- The Senses Considered as Perceptual Systems -- was intended to convey the idea that "sensation" is irrelevant to perception. The balance sense was a favorite example: Although it is just as essential to our functioning as any of the "five senses," it tends to go about its work without invoking consciousness, without our taking notice of it in feeling or sensation. There does not seem to be anything corresponding to light or color or sound or taste or smell or touch to which we need or even can attend when trying to maintain our orientation to gravity. But if we did not perceive its direction, we could not maintain posture or move in a coordinated fashion. (It is only when the system malfunctions that we sense it, as when we become dizzy.)

Ah yes, I am aware of possible objections to my remarks. For example, it may only be certain types of consciousness that are useless or meddlesome, and even then maybe only in certain types of situations; thus, verbal consciousness may often interfere with playing the piano, but a sensitivity to tone and touch could be essential, and verbal thoughts may similarly inform poetic imagery (and writing good "Moral Moments"!). I could also be confusing self-consciousness with consciousness per se. Meanwhile, the gravity sense may be relatively unobtrusive only because it has one simple job: to ascertain which way is down. Even so, a devotee could probably develop a sensibility to equilibrium and an adept describe its phenomenology. Finally, consciousness could be totally physically constituted but still considered to exist for all that.

But those blasted new comics have at least got me thinking that consciousness could be more epiphenomenal than it is the real deal, and hence we ourselves be ... laughing matter.

Sunday, April 04, 2010

I Sink, Therefore I’m Not

by Joel Marks

Published in Philosophy Now, issue no. 77, February/March 2010, p. 39

I do have revelations at the kitchen sink. Just a few minutes ago I had one. Of the myriad thoughts always racing through my head, one caught hold of my attention as I was washing a breakfast dish. The idea that struck me was that my washing of that dish was as “determined” as determined could be. That is, in a very local sense of metaphysical determinism, I could sense the inevitability of the event’s occurring as a result of the current and immediately preceding circumstances. Specifically, there were dishes from which I had just eaten sitting in the sink, and I wasn’t about to let them continue to sit there indefinitely because they would become encrusted with the food scraps that could now be easily removed, and would take up space I would want for preparing lunch, and would grow into a pile from more than one meal that would be far more onerous to clean.

This was truly a moment of perceiving my inner robot. I do believe we are all robots – natural ones, of course, as opposed to created in some factory. But everything that we do is ultimately a matter of stimulus and response, granted via innumerable mediations of physiology and so forth. As I washed the dishes I began to review in my mind my morning rounds, beginning with the alarm clock waking me up. It soon became apparent that the short period of time between awakening and washing those dishes was filled with a virtual infinity of mechanical encounters.

As an eternity had already passed by the time I came down to the kitchen, let me begin my story just before breakfast. There was a dish rack full of last night’s dishes, so I proceeded to place the various items in their usual places – the better to find them again in future -- while also keeping out those I would be using again for breakfast. This was not an uninterrupted process, however, since in replacing an item I would also come into proximity of something I would need for breakfast, such as the glass I would use for my orange juice that sat in the cupboard where I was replacing last night’s tea cup. Removing the glass would then prompt me to walk over to the refrigerator, where I would grab the juice carton. This would in turn send me to the kitchen counter, where I would pour the juice. Ah, but I’d reached the bottom of this carton and didn’t yet have enough juice in the glass, so I tossed the carton into the waste bin and returned to the refrigerator to pull out a new carton, which brought me back to the counter, where I opened it … with another brief visit to the bin to toss the seal of the new carton … then back to the counter ….

Well, you get the idea. The point is: it never ends. (Nor, for all I know, does it ever begin). My typing these words is all part of the exact same sequence, one thing leading to the next, with perfect reasonableness in most cases, and in others for unknown reasons but without any deep mystery. An example of the latter might be: Why was I thinking about determinism while washing that dish? But even there I can easily sketch an explanation, although of course I am unable to tell you which neurons were firing, etc. It came about, I am sure, because I had attended a fascinating colloquium earlier in the week about determinism and free will! This is already a subject that fascinates me … which, of course, is why I was attending that colloquium! I could well have been pondering my freedom, or lack thereof, even had I not gone to the colloquium; but it seems plausible that the currency of that colloquium is responsible for the particular force with which my dish-washing exemplified the phenomenon for me this morning.

I now just took a glance at the notes I scribbled down when in the throes of the revelation. “& meanwhile all these thoughts coursing thru my head,” I wrote; then, “(& writing down these words!)” with an arrow pointing to the preceding phrase, and then another arrow pointing to itself (the phrase in parentheses). Yes, of course: even my thinking about determinism was determined by factors internal and external to my body and preceding and concurrent with the thinking. Where in any of this is there room for some “free” act of “will”? Nowhere at all. In fact, there isn’t even any place for me – for an agent who does any of this stuff. What there is is a flow of events, including some that we conceive as experiences of our own self initiating various actions on its own behalf. But the way we conceptualize the flow (probably even including conceptualizing it as a flow) is rife with anomalies and gaps, which current science is filling in and rationalizing. The final result will be a story about what is going on that is utterly different from the tale we are accustomed to tell. And the trickiest part may not be the neuroscience and such but the replacement of our everyday vocabulary with a more scientifically informed one. For example, somehow we are going to have to learn how to talk about ourselves without even referring to “me.” Our present language only enables us to utter nonsense, such as “I do not exist.” Says who, right?


Joel Marks is Professor Emeritus of Philosophy at the University of New Haven in West Haven, Connecticut, and a Bioethics Center Scholar at Yale University. He has his greatest ideas upon awakening, while walking, washing dishes, and taking showers (but not all at the same time). The kitchen sink at which his most recent revelation occurred is the very same one pictured on the cover of his book of Moral Moments published a decade ago. He thanks Joshua Greene for being the stimulus of this essay.