My Blog Has Moved!

My blog has relocated to the new address:

http://www.heyvalera.com/

Showing posts with label New Yorker Magazine.

June 7, 2010

Poem du jour


A Maxim

by Carl Dennis

To live each day as if it might be the last
Is an injunction that Marcus Aurelius
Inscribes in his journal to remind himself
That he, too, however privileged, is mortal,
That whatever bounty is destined to reach him
Has reached him already, many times.
But if you take his maxim too literally
And devote your mornings to tinkering with your will,
Your afternoons and evenings to saying farewell
To friends and family, you’ll come to regret it.
Soon your lawyer won’t fit you into his schedule.
Soon your dear ones will hide in a closet
When they hear your heavy step on the porch.
And then your house will slide into disrepair.
If this is my last day, you’ll say to yourself,
Why waste time sealing drafts in the window frames
Or cleaning gutters or patching the driveway?
If you don’t want your heirs to curse the day
You first opened Marcus’s journals,
Take him simply to mean you should find an hour
Each day to pay a debt or forgive one,
Or write a letter of thanks or apology.
No shame in leaving behind some evidence
You were hoping to live beyond the moment.
No shame in a ticket to a concert seven months off,
Or, better yet, two tickets, as if you were hoping
To meet by then someone who’d love to join you,
Two seats near the front so you catch each note.

May 20, 2010

New Yorker on Jesus




What DID Jesus do?
Reading and unreading the Gospels

by Adam Gopnik

When we meet Jesus of Nazareth at the beginning of the Gospel of Mark, almost surely the oldest of the four, he’s a full-grown man. He comes down from Galilee, meets John, an ascetic desert hermit who lives on locusts and wild honey, and is baptized by him in the River Jordan. If one thing seems nearly certain to the people who read and study the Gospels for a living, it’s that this really happened: John the Baptizer—as some like to call him, to give a better sense of the original Greek’s flat-footed active form—baptized Jesus. They believe it because it seems so unlikely, so at odds with the idea that Jesus always played the star in his own show: why would anyone have said it if it weren’t true? This curious criterion governs historical criticism of Gospel texts: the more improbable or “difficult” an episode or remark is, the likelier it is to be a true record, on the assumption that you would edit out all the weird stuff if you could, and keep it in only because the tradition is so strong that it can’t plausibly be excluded. If Jesus says something nice, then someone is probably saying it for him; if he says something nasty, then probably he really did.
So then, the scholars argue, the author of Mark, whoever he was—the familiar disciples’ names conventionally attached to each Gospel come later—added the famous statement of divine favor, descending directly from the heavens as they opened. But what does the voice say? In Mark, the voice says, “You are my Son, whom I love; with you I am well pleased,” seeming to inform a Jesus who doesn’t yet know that this is so. But some early versions of Luke have the voice quoting Psalm 2: “You are my Son; today I have begotten you.” Only in Matthew does it announce Jesus’ divinity to the world as though it were an ancient, fixed agreement, not a new act. In Mark, for that matter, the two miraculous engines that push the story forward at the start and pull it toward Heaven at the end—the Virgin Birth and the Resurrection—make no appearance at all. The story begins with Jesus’ adult baptism, with no hint of a special circumstance at his birth, and there is actually some grumbling by Jesus about his family (“Only in his home town, among his relatives and in his own house, is a prophet without honor,” he complains); it ends with a cry of desolation as he is executed—and then an enigmatic and empty tomb. (It’s left to the Roman centurion to recognize him as the Son of God after he is dead, while the verses in Mark that show him risen were apparently added later.)
The intractable complexities of fact produce the inevitable ambiguities of faith. The more one knows, the less one knows. Was Jesus a carpenter, or even a carpenter’s son? The Greek word tektōn, long taken to mean “carpenter,” could mean something closer to a stoneworker or a day laborer. (One thinks of the similar shadings of a word like “printer,” which could refer to Ben Franklin or to his dogsbody.) If a carpenter, then presumably he was an artisan. If a stoneworker, then presumably he spent his early years as a laborer, schlepping from Nazareth to the grand Greco-Roman city of Sepphoris, nearby, to help build its walls and perhaps visit its theatre and agora. And what of the term “Son of Man,” which he uses again and again in Mark, mysteriously: “The Son of Man is Lord even of the Sabbath.” As Diarmaid MacCulloch points out in his new, immensely ambitious and absorbing history, “Christianity: The First Three Thousand Years” (Viking; $45), the phrase, which occurs in the Gospels “virtually exclusively in the reported words of Jesus,” certainly isn’t at all the same as the later “Son of God,” and may merely be Aramaic for “folks like us.”
Belief remains a bounce, faith a leap. Still, the appetite for historical study of the New Testament remains a publishing constant and a popular craze. Book after book—this year, ten in one month alone—appears, seeking the Truth. Paul Johnson has a sound believer’s life, “Jesus: A Biography from a Believer,” while Paul Verhoeven, the director of “Basic Instinct,” has a new skeptical-scholar’s book, “Jesus of Nazareth” (Seven Stories; $23.95). Verhoeven turns out to be a member of the Jesus Seminar, a collection mostly of scholars devoted to reconstructing the historical Jesus, and much of what he has to say is shrewd and learned. (An odd pull persists between box-office and Biblical study. A few years ago, another big action-film director and producer, James Cameron, put himself at the center of a documentary called “The Lost Tomb of Jesus.”)
What the amateur reader wants, given the thickets of uncertainty that surround the garden, is not what the passionate polemicists want—not so much a verdict on whether Jesus was nasty or nice as a sense of what, if anything, was new in his preaching. Was the cult that changed the world a product of Paul’s evangelism and imperial circumstance and the military embrace of one miracle-mystery cult among many such around? Or was there really something new, something unheard of, that can help explain the scale of what happened later? Did the rise of Christendom take place because historical plates were moving, with a poor martyred prophet caught between, or did one small pebble of parable and preaching start the avalanche that ended the antique world?
Ever since serious scholarly study of the Gospels began, in the nineteenth century, its moods have ranged from the frankly skeptical—including a “mythicist” position that the story is entirely made up—to the credulous, with some archeologists still holding that it is all pretty reliable, and tombs and traces can be found if you study the texts hard enough. The current scholarly tone is, judging from the new books, realist but pessimistic. While accepting a historical Jesus, the scholarship also tends to suggest that the search for him is a little like the search for the historical Sherlock Holmes: there were intellectual-minded detectives around, and Conan Doyle had one in mind in the eighteen-eighties, but the really interesting bits—Watson, Irene Adler, Moriarty, and the Reichenbach Falls—were, even if they all had remote real-life sources, shaped by the needs of storytelling, not by traces of truth. Holmes dies because heroes must, and returns from the dead, like Jesus, because the audience demanded it. (The view that the search for the historical Jesus is like the search for the historical Superman—that there’s nothing there but a hopeful story and a girlfriend with an alliterative name—has by now been marginalized from the seminaries to the Internet; the scholar Earl Doherty defends it on his Web site with grace and tenacity.)
The American scholar Bart Ehrman has been explaining the scholars’ truths for more than a decade now, in a series of sincere, quiet, and successful books. Ehrman is one of those best-selling authors like Richard Dawkins and Robert Ludlum and Peter Mayle, who write the same book over and over—but the basic template is so good that the new version is always worth reading. In his latest installment, “Jesus, Interrupted” (HarperOne; $15.99), Ehrman once again shares with his readers the not entirely good news he found a quarter century ago when, after a fundamentalist youth, he went to graduate school: that all the Gospels were written decades after Jesus’ death; that all were written in Greek, which Jesus and the apostles didn’t speak and couldn’t write (if they could read and write at all); and that they were written as testaments of faith, not chronicles of biography, shaped to fit a prophecy rather than report a profile.
The odd absences in Mark are matched by the unreal presences in the other Gospels. The beautiful Nativity story in Luke, for instance, in which a Roman census forces the Holy Family to go back to its ancestral city of Bethlehem, is an obvious invention, since there was no Empire-wide census at that moment, and no sane Roman bureaucrat would have dreamed of ordering people back to be counted in cities that their families had left hundreds of years before. The author of Luke, whoever he might have been, invented Bethlehem in order to put Jesus in David’s city. (James Tabor, a professor of religious studies, in his 2006 book “The Jesus Dynasty,” takes surprisingly seriously the old Jewish idea that Jesus was known as the illegitimate son of a Roman soldier named Pantera—as well attested a tradition as any, occurring in Jewish texts of the second century, in which a Jesus ben Pantera makes several appearances, and the name is merely descriptive, not derogatory. Tabor has even found, however improbably, a tombstone in Germany for a Roman soldier from Syria-Palestine named Pantera.)
What seems a simple historical truth is that all the Gospels were written after the destruction of Jerusalem and the Temple in the First Jewish-Roman War, in 70 C.E.—a catastrophe so large that it left the entire Jesus movement in a crisis that we can dimly imagine if we think of Jewish attitudes before and after the Holocaust: the scale of the tragedy leads us to see catastrophe as having been built into the circumstance. As L. Michael White’s “Scripting Jesus: The Gospels in Rewrite” (HarperOne; $28.99) explains in daunting scholarly detail, even Mark—which, coming first, might seem to be closest to the truth—was probably written in the ruins of the Temple and spiritually shaped to its desolate moment. Mark’s essential point, he explains, is about secrecy: Jesus keeps telling people to be quiet about his miracles, and confides only to an inner circle of disciples. With the Temple gone, White says, it was necessary to persuade people that the grotesque political failure of Jesus’ messianism wasn’t a real failure. Mark invents the idea that Jesus’ secret was not that he was the “Davidic” messiah, the Arthur-like returning king, but that he was someone even bigger: the Son of God, whose return would signify the end of time and the birth of the Kingdom of God. The literary critic Frank Kermode, in “The Genesis of Secrecy” (1979), a pioneering attempt to read Mark seriously as poetic literature, made a similar point, though his is less historical than interpretative. Kermode considers Mark to be, as the French would say, a text that reads itself: the secret it contains is that its central figure is keeping a secret that we can never really get. It is an intentionally open-ended story, prematurely closed, a mystery without a single solution.
Even if we make allowances for Mark’s cryptic tracery, the human traits of his Jesus are evident: intelligence, short temper, and an ironic, duelling wit. What seems new about Jesus is not his piety or divine detachment but the humanity of his irritability and impatience. He’s no Buddha. He gets annoyed at the stupidity of his followers, their inability to grasp an obvious point. “Do you have eyes but fail to see?” he asks the hapless disciples. The fine English actor Alec McCowen used to do a one-man show in which he recited Mark, complete, and his Jesus came alive instantly as a familiar human type—the Gandhi-Malcolm-Martin kind of charismatic leader of an oppressed people, with a character that clicks into focus as you begin to dramatize it. He’s verbally spry and even a little shifty. He likes defiant, enigmatic paradoxes and pregnant parables that never quite close, perhaps by design. A story about a vineyard whose ungrateful husbandmen keep killing the servants sent to them is an anti-establishment, even an anti-clerical story, but it isn’t so obvious as to get him in trouble. The suspicious priests keep trying to catch him out in a declaration of anti-Roman sentiment: Is it lawful to give tribute to Caesar or not, they ask—that is, do you recognize Roman authority or don’t you? He has a penny brought out, sees the picture of the emperor on it, and, shrugging, says to give to the state everything that rightly belongs to the state. The brilliance of that famous crack is that Jesus turns the question back on the questioner, in mock-innocence. Why, you give the king the king’s things and God God’s. Of course, this leaves open the real question: what is Caesar’s and what is God’s? It’s a tautology designed to evade self-incrimination.
Jesus’ morality has a brash, sidewise indifference to conventional ideas of goodness. His pet style blends the epigrammatic with the enigmatic. When he makes that complaint about the prophet having no honor in his own home town, or says exasperatedly that there is no point in lighting a candle unless you intend to put it in a candlestick, his voice carries a disdain for the props of piety that still feels startling. And so with the tale of the boy who wastes his inheritance but gets a feast from his father, while his dutiful brother doesn’t; or the one about the weeping whore who is worthier than her good, prim onlookers; or about the passionate Mary who is better than her hardworking sister Martha. There is a wild gaiety about Jesus’ moral teachings that still leaps off the page. He is informal in a new way, too, that remains unusual among prophets. MacCulloch points out that he continually addresses God as “Abba,” Father, or even Dad, and that the expression translated in the King James Version as a solemn “Verily I say unto you” is actually a quirky Aramaic throat-clearer, like Dr. Johnson’s “Depend upon it, Sir.”
Some of the sayings do have, in their contempt for material prosperity, the ring of Greek Cynic philosophy, but there is also something neither quite Greek nor quite Jewish about Jesus’ morality that makes it fresh and strange even now. Is there a more miraculous scene in ancient literature than the one in John where Jesus absent-mindedly writes on the ground while his fellow-Jews try to entrap him into approving the stoning of an adulteress, only to ask, wide-eyed, if it wouldn’t be a good idea for the honor of throwing the first stone to be given to the man in the mob who hasn’t sinned himself? Is there a more compressed and charming religious exhortation than the one in the Gospel of Thomas in which Jesus merrily recommends to his disciples, “Be passersby”? Too much fussing about place and home and ritual, and even about where, exactly, you’re going to live, is unnecessary: be wanderers, dharma bums.
This social radicalism still shines through—not a programmatic radicalism of national revolution but one of Kerouac-like satori-seeking-on-the-road. And the social radicalism is highly social. The sharpest opposition in the Gospels, the scholar and former priest John Dominic Crossan points out in his illuminating books—“The Historical Jesus: The Life of a Mediterranean Jewish Peasant” is the best known—is between John the Faster and Jesus the Feaster. Jesus eats and drinks with whores and highwaymen, turns water into wine, and, finally, in one way or another, establishes a mystical union at a feast through its humble instruments of bread and wine.
The table is his altar in every sense. Crossan, the co-founder of the Jesus Seminar, makes a persuasive case that Jesus’ fressing was perhaps the most radical element in his life—that his table manners pointed the way to his heavenly morals. Crossan sees Jesus living within a Mediterranean Jewish peasant culture, a culture of clan and cohort, in which who eats with whom defines who stands where and why. So the way Jesus repeatedly violates the rules on eating, on “commensality,” would have shocked his contemporaries. He dines with people of a different social rank, which would have shocked most Romans, and with people of different tribal allegiance, which would have shocked most Jews. The most forceful of his sayings, still shocking to any pious Jew or Muslim, is “What goes into a man’s mouth does not make him unclean, but what comes out of his mouth, that is what makes him unclean.” Jesus isn’t a hedonist or an epicurean, but he clearly isn’t an ascetic, either: he feeds the multitudes rather than instructing them how to go without. He’s interested in saving people living normal lives, buying and selling what they can, rather than in retreating into the company of those who have already arrived at a moral conclusion about themselves.
To a modern reader, the relaxed egalitarianism of the open road and the open table can seem undermined by the other part of Jesus’ message, a violent and even vengeful prediction of a final judgment and a large-scale damnation. In Mark, Jesus is both a fierce apocalyptic prophet who is preaching the death of the world—he says categorically that the end is near—and a wise philosophical teacher who professes love for his neighbor and supplies advice for living. If the end is near, why give so much sage counsel? If human life is nearly over, why preach in such detail the right way to live? One argument is that a later, perhaps “unpersonified” body of Hellenized wisdom literature was tacked on to an earlier account of a Jewish messianic prophet. Since both kinds of literature—apocalyptic hysterics and stoic sayings—can be found all over the period, perhaps they were merely wrenched together.
And yet a single figure who “projects” two personae at the same time, or in close sequence, one dark and one dreamy, is a commonplace among charismatic prophets. That’s what a charismatic prophet is: someone whose aura of personal conviction manages to reconcile a hard doctrine with a humane manner. The leaders of the African-American community before the civil-rights era, for instance, had to be both prophets and political agitators to an oppressed and persecuted people in a way not unlike that of the real Jesus (and all the other forgotten zealots and rabbis whom the first-century Jewish historian Josephus names and sighs over). They, too, tended to oscillate between the comforting and the catastrophic. Malcolm X was the very model of a modern apocalyptic prophet-politician, unambiguously preaching violence and a doctrine of millennial revenge, all fuelled by a set of cult beliefs—a hovering U.F.O., a strange racial myth. But Malcolm was also a community builder, a moral reformer (genuinely distraught over the sexual sins of his leader), who refused to carry weapons, and who ended, within the constraints of his faith, as some kind of universalist. When he was martyred, he was called a prophet of hate; within three decades of his death—about the time that separates the Gospels from Jesus—he could be the cover subject of a liberal humanist magazine like this one. One can even see how martyrdom and “beatification” draws out more personal detail, almost perfectly on schedule: Alex Haley, Malcolm’s Paul, is long on doctrine and short on details; thirty years on, Spike Lee, his Mark, has a full role for a wife and children, and a universalist message that manages to blend Malcolm into Mandela. (As if to prove this point, just the other week came news of suppressed chapters of Haley’s “Autobiography,” which, according to Malcolm’s daughter, “showed too much of my father’s humanity.”)
As the Bacchae knew, we always tear our Gods to bits, and eat the bits we like. Still, a real, unchangeable difference does exist between what might be called storytelling truths and statement-making truths—between what makes credible, if sweeping, sense in a story and what’s required for a close-knit metaphysical argument. Certain kinds of truths are convincing only in a narrative. The idea, for instance, that the ring of power should be given to two undersized amateurs to throw into a volcano at the very center of the enemy’s camp makes sound and sober sense, of a kind, in Tolkien; but you would never expect to find it as a premise at the Middle Earth Military Academy. Anyone watching Hamlet will find his behavior completely understandable—O.K., I buy it; he’s toying with his uncle—though any critic thinking about it afterward will reflect that this behavior is a little nuts.
In Mark, Jesus’ divinity unfolds without quite making sense intellectually, and without ever needing to. It has the hypnotic flow of dramatic movement. The story is one of self-discovery: he doesn’t know who he is and then he begins to think he does and then he doubts and in pain and glory he dies and is known. The story works. But, as a proposition under scrutiny, it makes intolerable demands on logic. If Jesus is truly one with God, in what sense could he suffer doubt, fear, exasperation, pain, horror, and so on? So we get the Jesus rendered in the Book of John, who doesn’t. But if he doesn’t suffer doubt, fear, exasperation, pain, and horror, in what sense is his death a sacrifice rather than just a theatrical enactment? A lamb whose throat is not cut and does not bleed is not really much of an offering.
None of this is very troubling if one has a pagan idea of divinity: the Son of God might then be half human and half divine, suffering and triumphing and working out his heroic destiny in the half-mortal way of Hercules, for instance. But that’s ruled out by the full weight of the Jewish idea of divinity—omnipresent and omniscient, knowing all and seeing all. If God he was—not some Hindu-ish avatar or offspring of God, but actually one with God—then God once was born and had dirty diapers and took naps. The longer you think about it, the more astounding, or absurd, it becomes. To be really believed at all, it can only be told again.
So the long history of the early Church councils that tried to make the tales into a theology is, in a way, a history of coming out of the movie confused, and turning to someone else to ask what just happened. This is the subject of Philip Jenkins’s “Jesus Wars: How Four Patriarchs, Three Queens, and Two Emperors Decided What Christians Would Believe for the Next 1,500 Years” (HarperOne; $26.99). Jenkins explains what was at stake in the seemingly wacky wars over the Arian heresy—the question of whether Jesus the Son shared an essence with God the Father or merely a substance—which consumed the Western world through the second and third centuries. Was Jesus one with God in the sense that, say, Sean Connery is one with Daniel Craig, different faces of a single role, or in the sense that James Bond is one with Ian Fleming, each so dependent on the other that one cannot talk about the creation apart from its author? The passion with which people argued over apparently trivial word choices was, Jenkins explains, not a sign that they were specially sensitive to theology. People argued that way because they were part of social institutions—cities, schools, clans, networks—in which words are banners and pennants: who pledged to whom was inseparable from who said what in what words. It wasn’t that they really cared about the conceptual difference between the claim that Jesus and the Father were homoousian (same in essence) and the claim that the two were homoiousian (same in substance); they cared about whether the Homoousians or the Homoiousians were going to run the Church.
The effort to seal off the inspiration from the intolerance, nice Jesus from nasty Jesus, is very old. Jefferson compiled his own New Testament, with the ethical teachings left in and the miracles and damnations left out—and that familiar, outraged sense of the ugly duplicity of the Christian heritage is at the heart of Philip Pullman’s new plaint against it, “The Good Man Jesus and the Scoundrel Christ” (Canongate; $24), in which the two aspects are neatly divided into twins borne by Mary. The wise Jesus is brother to the shrewd Christ. One leads to the nice Jewish boy, the other to Paul’s scary punitive God. Pullman, a writer of great skill and feeling, as he has shown in his magical children’s fantasies, feels the betrayal of Jesus by his brother Christ as a fundamental betrayal of humanity. He wants us to forget Christ and return to Jesus alone, to surrender miracles for morals. Pullman’s book, however, is not narrowly polemical; he also retells the parables and acts with a lucid simplicity that strips away the Pauline barnacles. His real achievement is to translate Jesus’ sayings into a simple, almost childlike English that would seem to have much of the sound we are told is present in the artless original Greek: “Those who make peace between enemies, those who solve bitter disputes—they will be blessed. . . . But beware, and remember what I tell you: there are some who will be cursed, who will never inherit the Kingdom of God. D’you want to know who they are? Here goes: Those who are rich will be cursed.”
If one thing seems clear from all the scholarship, though, it’s that Paul’s divine Christ came first, and Jesus the wise rabbi came later. This fixed, steady twoness at the heart of the Christian story can’t be wished away by liberal hope any more than it could be resolved by theological hair-splitting. Its intractability is part of the intoxication of belief. It can be amputated, mystically married, revealed as a fraud, or worshipped as the greatest of mysteries. The two go on, and their twoness is what distinguishes the faith and gives it its discursive dynamism. All faiths have fights, but, as MacCulloch shows at intricate, thousand-page length, few have so many super-subtle shadings of dogma: wine or blood, flesh or wafer, one God in three spirits or three Gods in one; a song of children, stables, psalms, parables, and peacemakers, on the one hand, a threnody of suffering, nails, wild dogs, and damnation and risen God, on the other. The two spin around each other throughout history—the remote Pantocrator of Byzantium giving way to the suffering man of the Renaissance, and on and on.
It is typical of this conundrum that, in the past century, the best Christian poet, W. H. Auden, and the greatest anti-Christian polemicist, William Empson, were exact contemporaries, close friends, and, as slovenly social types, almost perfectly interchangeable Englishmen. Auden chose Christianity for the absolute democracy of its vision—there is, in it, “neither Jew nor German, East nor West, boy nor girl, smart nor dumb, boss nor worker.” Empson, in the same period, beginning in the fatal nineteen-forties, became the most articulate critic of a morality reduced “to keeping the taboos imposed by an infinite malignity,” in which the reintroduction of human sacrifice as a sacred principle left the believer with “no sense either of personal honour or of the public good.” (In this case, though, where Auden saw a nice Christ, Empson saw a nasty Jesus.)
Beyond the words, we still hear that cry. The Passion is still the point. In Mark, Jesus’ arrest and execution feels persuasively less preordained and willed than accidental and horrific. Jesus seems to have an intimation of the circumstance he has found himself in—leading a rebellion against Rome that is not really a rebellion, yet doesn’t really leave any possibility of retreat—and some corner of his soul wants no part of it: “Abba, Father, everything is possible for you. Take away this cup from me.” Mel Gibson was roughed up for roughing up Jesus, in his “Passion of the Christ,” but, though Gibson can fairly be accused of fanaticism, he can’t be accused of unfairness: in the long history of human cruelty, crucifixion, practiced as a mass punishment by the Romans, was uniquely horrible. The victim was stripped, in order to be deprived of dignity, then paraded, then whipped bloody, and then left to die as slowly as possible in as public a manner as conceivable. (In a sign of just how brutal it was, Josephus tells us that he begged the Roman rulers for three of his friends to be taken off the cross after they had spent hours on it; one lived.) The victim’s legs were broken to bring death in a blaze of pain. And the corpse was generally left to be eaten by wild dogs. It was terrifying and ever-present.
Verhoeven, citing Crossan, offers an opening scene for a Jesus bio-pic which neatly underlines this point. He imagines a man being nailed to a cross, cries of agony, two companion crosses in view, and then we crane out to see two hundred crosses and two hundred victims: we are at the beginning of the story, the mass execution of Jewish rebels in 4 B.C., not the end. This was the Roman death waiting for rebels from the outset, and Jesus knew it. Jesus’ cry of desolation—“My God, my God, why have you forsaken me?”—though primly edited out or explained as an apropos quotation from the Psalms by later evangelists, pierces us even now from the pages of Mark, across all the centuries and Church comforts. The shock and pity of failure still resonates.
One thing, at least, the cry assures: the Jesus faith begins with a failure of faith. His father let him down, and the promise wasn’t kept. “Some who are standing here will not taste death before they see the kingdom of God,” Jesus announced; but none of them did. Jesus, and Paul following him, says unambiguously that whatever is coming is coming soon—that the end is very, very near. It wasn’t, and the whole of what follows is built on an apology for what went wrong. The seemingly modern waiver, “Well, I know he said that, but he didn’t really mean it quite the way it sounded,” is built right into the foundation of the cult. The sublime symbolic turn—or the retreat to metaphor, if you prefer—begins with the first words of the faith. If the Kingdom of God proved elusive, he must have meant that the Kingdom of God was inside, or outside, or above, or yet to come, anything other than what the words seem so plainly to have meant.
The argument is the reality, and the absence of certainty the certainty. Authority and fear can circumscribe the argument, or congeal it, but can’t end it. In the beginning was the word: in the beginning, and in the middle, and right there at the close, Word without end, Amen. The impulse of orthodoxy has always been to suppress the wrangling as a sign of weakness; the impulse of more modern theology is to embrace it as a sign of life. The deeper question is whether the uncertainty at the center mimics the plurality of possibilities essential to liberal debate, as the more open-minded theologians like to believe, or is an antique mystery in a story open only as the tomb is open, with a mystery left inside, never to be entirely explored or explained. With so many words over so long a time, perhaps passersby can still hear tones inaudible to the more passionate participants. Somebody seems to have hoped so, once. ♦

April 22, 2010

New Yorker on Terrorism



A few days after the September 11th attacks—which killed seven times as many people as any previous act of terrorism—President George W. Bush declared that the United States was engaged in a global war on terror. September 11th seemed to confirm that we were in a clash of civilizations between modernity and radical Islam. We had a worldwide enemy with a cause that was general, not specific (“They hate our freedoms”), and we now had to take on the vast, long-running mission—equal in scope to the Cold War—of defeating all ambitious terrorist groups everywhere, along with the states that harbored them. The war on terror wasn’t a hollow rhetorical trope. It led to the American conquest and occupation first of Afghanistan, which had sheltered the leaders of Al Qaeda, and then of Iraq, which had no direct connection to September 11th.
Today, few consider the global war on terror to have been a success, either as a conceptual framing device or as an operation. President Obama has pointedly avoided stringing those fateful words together in public. His foreign-policy speech in Cairo, last June, makes an apt bookend with Bush’s war-on-terror speech in Washington, on September 20, 2001. Obama not only didn’t talk about a war; he carefully avoided using the word “terrorism,” preferring “violent extremism.”
But if “global war” isn’t the right approach to terror, what is? Experts on terrorism have produced shelves’ worth of new works on this question. For outsiders, reading this material can be a jarring experience. In the world of terrorism studies, the rhetoric of righteousness gives way to equilibrium equations. Nobody is good and nobody is evil. Terrorists, even suicide bombers, are not psychotics or fanatics; they’re rational actors—that is, what they do is explicable in terms of their beliefs and desires—who respond to the set of incentives that they find before them. The tools of analysis are realism, rational choice, game theory, decision theory: clinical and bloodless modes of thinking.
That approach, along with these scholars’ long immersion in the subject, can produce some surprising observations. In “A Question of Command: Counterinsurgency from the Civil War to Iraq” (Yale; $30), Mark Moyar, who holds the Kim T. Adamson Chair of Insurgency and Terrorism at the Marine Corps University, tells us that, in Afghanistan, the Taliban’s pay scale (financed by the protection payments demanded from opium farmers) is calibrated to be a generous multiple of the pay received by military and police personnel (financed by U.S. aid); no wonder official Afghan forces are no match for the insurgents. Audrey Kurth Cronin, a professor of strategy at the National War College, reminds us, in “How Terrorism Ends: Understanding the Decline and Demise of Terrorist Campaigns” (Princeton; $29.95), that one can find out about Al Qaeda’s policy for coördinating attacks by reading a book called “The Management of Barbarism,” by Abu Bakr Naji, which has been available via Al Qaeda’s online library. (Naji advises that, if jihadis are arrested in one country after an attack, a cell elsewhere should launch an attack as a display of resilience.) In “Radical, Religious, and Violent: The New Economics of Terrorism” (M.I.T.; $24.95), Eli Berman traces the origins of the Taliban to a phenomenon that long preceded the birth of modern radical Islam: they are a direct descendant of the Deobandi movement, which began in nineteenth-century India in opposition to British colonial rule and, among other things, established a system of religious schools.
What is terrorism, anyway? The expert consensus converges on a few key traits. Terrorists have political or ideological objectives (the purpose can’t be mere profiteering). They are “non-state actors,” not part of conventional governments. Their intention is to intimidate an audience larger than their immediate victims, in the hope of generating widespread panic and, often, a response from the enemy so brutal that it ends up backfiring by creating sympathy for the terrorists’ cause. Their targets are often ordinary civilians, and, even when terrorists are trying to kill soldiers, their attacks often don’t take place on the field of battle. The modern age of suicide terrorism can be said to have begun with Hezbollah’s attack, in October of 1983, on U.S. marines who were sleeping in their barracks in Beirut.
Once you take terrorists to be rational actors, you need a theory about their rationale. Robert Pape, a political scientist at the University of Chicago, built a database of three hundred and fifteen suicide attacks between 1980 and 2003, and drew a resoundingly clear conclusion: “What nearly all suicide terrorist attacks have in common is a specific secular and strategic goal: to compel modern democracies to withdraw military forces from territory that the terrorists consider to be their homeland.” As he wrote in “Dying to Win: The Strategic Logic of Suicide Terrorism” (2005), what terrorists want is “to change policy,” often the policy of a faraway major power. Pape asserts that “offensive military action rarely works” against terrorism, so, in his view, the solution to the problem of terrorism couldn’t be simpler: withdraw. Pape’s “nationalist theory of suicide terrorism” applies not just to Hamas and Hezbollah but also to Al Qaeda; its real goal, he says, is the removal of the U.S. military from the Arabian Peninsula and other Muslim countries. Pape says that “American military policy in the Persian Gulf was most likely the pivotal factor leading to September 11”; the only effective way to prevent future Al Qaeda attacks would be for the United States to take all its forces out of the Middle East.
By contrast, Mark Moyar dismisses the idea that “people’s social, political, and economic grievances” are the main cause of popular insurgencies. He regards anti-insurgent campaigns as “a contest between elites.” Of the many historical examples he offers, the best known is L. Paul Bremer’s de-Baathification of Iraq, in the spring of 2003, in which the entire authority structure of Iraq was disbanded at a stroke, creating a leadership cadre for a terrorist campaign against the American occupiers. One of Moyar’s chapters is about the uncontrollably violent American South during Reconstruction—a subject that a number of authors have turned to during the war on terror—and it demonstrates better than his chapter on Iraq the power of his theory to offend contemporary civilian sensibilities. Rather than disempowering the former Confederates and empowering the freed slaves, Moyar says, the victorious Union should have maintained order by leaving the more coöperative elements of the slaveholding, seceding class in control. Effective counterinsurgency, he says, entails selecting the élites you can work with and co-opting them.
In “Talking to Terrorists: Why America Must Engage with Its Enemies” (Basic; $26.95), Mark Perry describes a little-known attempt to apply Moyar’s model in Iraq. The book jacket identifies Perry as “a military, intelligence, and foreign affairs analyst and writer,” but his writing conveys a strong impression that he has not spent his career merely watching the action from a safe seat in the bleachers. Much of the book is devoted to a detailed description, complete with many on-the-record quotes, of a series of meetings in Amman, Jordan, in 2004, between a group of Marine officers based in Anbar province, in western Iraq, and an Iraqi businessman named Talal al-Gaood. Gaood, a Sunni and a former member of Saddam Hussein’s Baath Party, suggested he could broker a deal that would make the horrific, almost daily terrorist attacks in western Iraq go away.
Perry’s tone calls to mind a Tom Clancy novel. Tough, brave, tight-lipped officers do endless battle not just with the enemy in the field but also with cowardly, dissembling political bureaucrats in the Pentagon, the State Department, and the White House. The crux of his story is that a promising negotiation was tragically cut short, just as it was about to bear fruit, when the key negotiator, a Marine colonel, was “PNG’d”—declared persona non grata—by Washington and denied entry to Jordan. Not long after that, Gaood died suddenly, of a heart ailment, at the age of forty-four (according to Perry, he was so beloved that his wake had to be held in a soccer stadium), putting an end to any possibility of further talks. It’s startling to read about American military commanders in the field taking on a freelance diplomatic mission of this magnitude, and to imagine that there was a businessman in Amman who, on the right terms, could have snapped his fingers and ended what we back home thought of as pervasive, wild-eyed jihad.
What dominates the writing of experts about terrorism, however, is a more fine-grained idea of terrorists’ motives—at the level of ethnic group, tribe, village, and even individual calculation. Pape thinks of terrorists as being motivated by policy and strategic concerns; Cronin, of the National War College, shares Pape’s view that most terrorists are, essentially, terroirists—people who want control of land—but she is also attuned to their narrower, more local considerations. The odds are against them, because of the natural forces of entropy and their lack of access to ordinary military power and other resources, but, if they do succeed, they can be counted upon to try to ascend the ladder of legitimacy, first to insurgency, then to some kind of governing status. (Examples of that ultimate kind of success would be the Irgun and the Stern Gang, in Israel, Sinn Fein and the Provisional I.R.A., in Northern Ireland, and the Palestine Liberation Organization, in the West Bank and Gaza.)
Cronin goes through an elaborate menu of techniques for hastening the end of a terrorist campaign. None of them rise to the level of major policy, let alone a war on terror; in general, the smaller their scope the more effective Cronin finds them to be. She believes, for instance, that jailing the celebrated head of a terrorist organization is a more effective countermeasure than killing him. (Abimael Guzmán, the head of the Shining Path, in Peru, was, after his capture in 1992, “displayed in a cage, in a striped uniform, recanting and asking his followers to lay down their arms.” That took the wind out of the Shining Path’s sails. A surprise ambush that martyred him might not have.) Negotiating with terrorists—a practice usually forsworn, often done—can work in the long term, Cronin says, not because it is likely to produce a peace treaty but because it enables a state to gain intelligence about its opponents, exploit differences and hive off factions, and stall while time works its erosive wonders.
Cronin offers a confident prescription, based on her small-bore approach to terrorism, for defeating the apparently intractable Al Qaeda. The idea is to take advantage of the group’s highly decentralized structure by working to alienate its far-flung component parts, getting them to see their local interests as being at odds with Al Qaeda’s global ones. “Bin Laden and Zawahiri have focused on exploiting and displacing the local concerns of the Chechens, the Uighurs, the Islamic Movement of Uzbekistan, the Salafist Group for Call and Combat in Algeria, and many others, and sought to replace them with an international agenda,” Cronin writes. The United States should now try to “sever the connection between Islamism and individualized local contexts for political violence, and then address them separately.” It should work with these local groups, not in an effort to convert them to democracy and love of America but in order to pry them away, one by one, from Al Qaeda. (“Calling the al-Qaeda movement ‘jihadi international,’ as the Israeli intelligence services do,” she writes, “encourages a grouping together of disparate threats that undermines our best counterterrorism. It is exactly the mistake we made when we lumped the Chinese and the Soviets together in the 1950s and early 1960s, calling them ‘international Communists.’ ”)
Eli Berman, an economist who has done field work among ultra-orthodox religious groups in Israel, is even more granular in his view of what terrorists want: he stresses the social services that terror and insurgent groups provide to their members. Berman’s book is an extended application to terrorism of an influential 1994 article by the economist Laurence Iannaccone, called “Why Strict Churches Are Strong.” Trying to answer the question of why religious denominations that impose onerous rules and demand large sacrifices of their members seem to thrive better than those which do not, Iannaccone surmised that strict religions function as economic clubs. They appeal to recruits in part because they are able to offer very high levels of benefits—not just spiritual ones but real services—and this involves high “defection constraints.” In denominations where it’s easy for individual members to opt out of an obligation, it is impossible to maintain such benefits. Among the religious groups Iannaccone has written about, impediments to defection can be emotionally painful, such as expulsion or the promise of eternal damnation; in many terrorist groups, the defection constraints reflect less abstract considerations: this-worldly torture, maiming, and murder.
Berman’s main examples are Hamas, Hezbollah, Moqtada al-Sadr’s Mahdi Army, in Iraq, and the Taliban, whom Berman calls “some of the most accomplished rebels of modern times.” All these organizations, he points out, are effective providers of services in places where there is dire need of them. Their members are also subject to high defection constraints, because their education and their location don’t put them in the way of a lot of opportunity and because they know they will be treated brutally if they do defect.
Like most other terrorism experts, Berman sees no crevasse between insurgents and terrorists. Instead, he considers them to be members of a single category he calls “rebels,” who use a variety of techniques, depending on the circumstances. Suicide bombing represents merely one end of the spectrum; its use is an indication not of the fanaticism or desperation of the individual bomber (most suicide bombers—recall Muhammad Atta’s professional-class background—are not miserably poor and alienated adolescent males) but of the supremely high cohesion of the group. Suicide bombing, Berman notes, increases when the terrorist group begins to encounter hard targets, like American military bases, that are impervious to everything else. The Taliban used traditional guerrilla-warfare techniques when they fought the Northern Alliance in the mountains. When their enemies became Americans and other Westerners operating from protected positions and with advanced equipment, the Taliban were more likely to resort to suicide bombing. How else could a small group make a big impact?
The idea of approaching terrorists as rational actors and defeating them by a cool recalibration of their incentives extends beyond the academic realm. Its most influential published expression is General David Petraeus’s 2006 manual “Counterinsurgency.” Written in dry management-ese, punctuated by charts and tables, the manual stands as a rebuke of the excesses of Bush’s global war on terror.
“Soldiers and Marines are expected to be nation builders as well as warriors,” the introduction to the manual declares. “They must be prepared to help reestablish institutions and local security forces and assist in rebuilding infrastructure and basic services. They must be able to facilitate establishing local governance and the rule of law.” The manual’s most famous formulation is “clear-hold-build,” and its heaviest emphasis is on the third of those projects; the counterinsurgent comes across a bit like a tough but kindhearted nineteen-fifties cop, walking a beat, except that he does more multitasking. He collects garbage, digs wells, starts schools and youth clubs, does media relations, improves the business climate. What he doesn’t do is torture, kill in revenge, or overreact. He’s Gandhi in I.E.D.-proof armor.
Petraeus has clearly absorbed the theory that terrorist and insurgent groups are sustained by their provision of social services. Great swaths of the manual are devoted to elaborating ways in which counterinsurgents must compete for people’s loyalty by providing better services in the villages and tribal encampments of the deep-rural Middle East. It’s hard to think of a service that the manual doesn’t suggest, except maybe yoga classes. And, like Berman, the manual is skeptical about the utility, in fighting terrorism, of big ideas about morality, policy, or even military operations. Here’s a representative passage:
REMEMBER SMALL IS BEAUTIFUL
Another tendency is to attempt large-scale, mass programs. In particular, Soldiers and Marines tend to apply ideas that succeed in one area to another area. They also try to take successful small programs and replicate them on a larger scale. This usually does not work. Often small-scale programs succeed because of local conditions or because their size kept them below the enemy’s notice and helped them flourish unharmed. . . . Small-scale projects rarely proceed smoothly into large programs. Keep programs small.
One problem with such programs is that they can be too small, and too nice, to win the hearts and minds of the populace away from their traditional leaders. The former civil-affairs officer A. Heather Coyne tells the story, recounted in Berman’s book, of a program that offered people in Sadr City ten dollars a day to clean the streets—something right out of the counterinsurgency manual. The American colonel who was running the program went out to talk to people and find out how effective the program was at meeting its larger goal. This is what he heard: “We are so grateful for the program. And we’re so grateful to Muqtada al-Sadr for doing this program.” Evidently, Sadr had simply let it be known that he was behind this instance of social provision, and people believed him. For Berman, the lesson is “a general principle: economic development and governance can be at odds when the territory is not fully controlled by the government.” That’s a pretty discouraging admission—it implies that helping people peacefully in an area where insurgents are well entrenched may only help the insurgents.
One could criticize the manual from a military perspective, as Mark Moyar does, for being too nonviolent and social-worky. Moyar admires General Petraeus personally (Petraeus being the kind of guy who, while recuperating from major surgery at a hospital after taking a bullet during a live-ammunition exercise, had his doctors pull all the tubes out of his arm and did fifty pushups to prove that he should be released early). But Moyar is appalled by the manual’s tendency to downplay the use of force: “The manual repeatedly warned of the danger of alienating the populace through the use of lethal force and insisted that counterinsurgents minimize the use of force, even if in some instances it meant letting enemy combatants escape. . . . As operations in Iraq and elsewhere have shown, aggressive and well-led offensive operations to chase down insurgents have frequently aided the counterinsurgent cause by robbing the insurgents of the initiative, disrupting their activities, and putting them in prison or in the grave.”
Because terrorism is such an enormous problem—it takes place constantly, all over the world, in conflict zones and in big cities, in more and less developed countries—one can find an example of just about every anti-terrorist tactic working (or failing to). One of the most prolific contemporary terrorist groups, the Tamil Tigers, of Sri Lanka, appears to have been defeated by the Sinhalese Buddhist-dominated government, through a conventional, if unusually violent, military campaign, which ended last spring. In that instance, brutal repression seems to have been the key. But the Russians have tried that intermittently in Chechnya, without the same effect; the recent suicide bombing in the Moscow subway by Chechen terrorists prompted an Op-Ed piece in the Times by Robert Pape and two associates, arguing that the answer is for Russia to dial back its “indirect military occupation” of Chechnya.
The point of social science is to be careful, dispassionate, and analytical, to get beyond the lure of anecdote and see what the patterns really are. But in the case of counterterrorism the laboratory approach can’t be made to scan neatly, because there isn’t a logic that can be counted upon to apply in all cases. One could say that the way to reduce a group’s terrorist activity is by reaching a political compromise with it; Northern Ireland seems to be an example. But doing that can make terrorism more attractive to other groups—a particular risk for the United States, which operates in so many places around the world. After the Hezbollah attack on the Marine barracks, in 1983, President Ronald Reagan pulled out of Lebanon, a decision that may have set off more terrorism in the Middle East over the long term. Immediate, savage responses—George W. Bush, rather than Reagan—can work in one contained area and fail more broadly. If the September 11th attacks were meant in part to provoke a response that would make the United States unpopular in the Muslim world, they certainly succeeded.
Even if one could prove that a set of measured responses to specific terrorist acts was effective, or that it’s always a good idea to alter terrorists’ cost-benefit calculations, there’s the problem implied by the tactic’s name: people on the receiving end of terrorism, and not just the immediate victims, do, in fact, enter a state of terror. The emotion—and its companion, thirst for revenge—inevitably figure large in the political life of the targeted country. As Cronin dryly notes, “In the wake of major attacks, officials tend to respond (very humanly) to popular passions and anxiety, resulting in policy made primarily on tactical grounds and undermining their long-term interests. Yet this is not an effective way to gain the upper hand against nonstate actors.” The implication is that somewhere in the world there might be a politician with the skill to get people to calm down about terrorists in their midst, so that a rational policy could be pursued. That’s hard to imagine.
Another fundamental problem in counterterrorism emerges from a point many of the experts agree on: that terrorism, uniquely horrifying as it is, doesn’t belong to an entirely separate and containable realm of human experience, like the one occupied by serial killers. Instead, it’s a tactic whose aims bleed into the larger, endless struggle of people to control land, set up governments, and exercise power. History is about managing that struggle, sometimes successfully, sometimes not, rather than eliminating the impulses that underlie it.
For Americans, the gravest terrorist threat right now is halfway across the world, in Iraq, Afghanistan, and Pakistan. On paper, in all three countries, the experts’ conceptual model works. Lesser terrorist groups remain violent but seem gradually to lose force, and greater ones rise to the level of political participation. At least some elements of the Taliban have been talking with the Afghan government, with the United States looking on approvingly. In Iraq, during the recent elections, some Sunni groups set off bombs near polling places, but others won parliamentary seats. Yet this proof of concept does not solve the United States’ terrorism problem. Iraq, Afghanistan, and Pakistan all have pro-American governments that are weak. They don’t have firm control over the area within their borders, and they lack the sort of legitimacy that would make terrorism untempting. Now that General Petraeus is the head of the Central Command and has authority over American troops in the region, our forces could practice all that he has preached, achieve positive results, and still be unable to leave, because there is no national authority that can be effective against terrorism.
Long ago, great powers that had vital interests far away simply set up colonies. That wound up being one of the leading causes of terrorism. Then, as an alternative to colonialism, great powers supported dictatorial client states. That, too, often led to terrorism. During the Bush Administration, creating democracies (by force if necessary) in the Middle East was supposed to serve American interests, but, once again, the result was to increase terrorism. Even if all terrorism turns out to be local, effective, long-running counterterrorism has to be national. States still matter most. And finding trustworthy partner states in the region of the world where suicide bombers are killing Americans is so hard that it makes fighting terrorism look easy. ♦




April 18, 2010

New Yorker on Turkish Food




LETTER FROM ISTANBUL about Musa Dağdeviren and his restaurants, Çiya Kebap, Çiya Kebap II, and Çiya Sofrasi. Writer describes her first visit to Çiya Sofrasi, which stands on the Asian side of the Bosporus in Istanbul. The place was pleasant but unremarkable. There was a self-service bar with meze priced by weight. Hot dishes were dispensed at a cafeteria-style counter. The first sign of anything unusual was the kisir, a Turkish version of tabouli, which had an indescribable freshness. The stewed eggplant dolmas reminded the writer of her grandmother’s version. The writer notes that food has never played a large role in her mental life, but that night at Çiya, she viscerally understood why someone might use a madeleine dipped in tea as a metaphor for the spiritual content of the material world. The writer’s parents were both born in Turkey, but she hadn’t been back for more than four years. Describes the rest of the meal at Çiya and tells about its proprietor, Musa Dağdeviren. Tapping into a powerful vein of collective food memory, Çiya was producing the kind of Turkish cuisine that Turkey itself, racing toward the West and the future, seemed to have forgotten. Musa has masterminded a project to document, restore, recreate, and reinvent Turkish food culture. Since 2005, Musa and his wife have been publishing a quarterly magazine, Yemek ve Kültür, each issue of which includes a section titled “Seven Forgotten Folk Recipes.” Musa came to Istanbul in 1979 from the south of Turkey. For the next eight years, he worked his way up through various Istanbul kitchens. He opened his first restaurant, Çiya Kebap, in 1987. Musa and his wife now run three Çiya restaurants: Çiya Kebap, Çiya Kebap II, and Çiya Sofrasi. The writer accompanies Musa to Kandira, two hours east of Istanbul, on the Black Sea coast. Tells about the market there. Mentions Carlo Petrini’s Slow Food movement. 
The writer and Musa discuss simit, a pretzel-like ring of bread covered in sesame seeds and the chain restaurant Simit Sarayi. They eat lunch at a fish shop. The writer tells about Musa’s monograph on keşkek, a dish made by boiling well-beaten wheat together with meat. Describes a visit to a turkey farm where Musa purchased four female turkeys, which the farmer killed. The writer and Musa helped pluck the birds. On the return to Istanbul, Musa described his dream of creating a Turkish culinary institute.


March 10, 2010

New Yorker on John Paul Stevens

Supreme Court Justices are remembered for their opinions, but they are revealed by their questions. For many years, Sandra Day O’Connor chose to open the questioning in most cases, and thus show the lawyers—and her colleagues—which way she, as the Court’s swing vote, was leaning. Today, Antonin Scalia often jumps in first, signalling the intentions of the Court’s ascendant conservative wing, and sometimes Chief Justice John G. Roberts, Jr., makes his views, which are usually aligned with Scalia’s, equally clear. New Justices tend to defer to their senior colleagues, but Sonia Sotomayor, in her first year on the Court, has displayed little reluctance to test lawyers on the facts and the procedural posture of their cases; these kinds of questions had generally been the province of Ruth Bader Ginsburg, who, at times, has not seemed entirely pleased by the newcomer’s vigor. Samuel A. Alito, Jr., often says little; Clarence Thomas never says anything. (Thomas has not asked a question at an oral argument since 2006.)

John Paul Stevens, who will celebrate his ninetieth birthday on April 20th, generally bides his time. Stevens is the Court’s senior Justice, in every respect. He is thirteen years older than his closest colleague in age (Ginsburg) and has served eleven years longer than the next most experienced (Scalia). Appointed by President Gerald R. Ford, in 1975, Stevens is the fourth-longest-serving Justice in the Court’s history; the record holder is the man Stevens replaced, William O. Douglas, who retired after thirty-six and a half years on the bench. Stevens is a generation or two removed from most of his colleagues; when Roberts served as a law clerk to William H. Rehnquist, Stevens had already been a Justice for five years. He was the last nominee before the Reagan years, when confirmations became contested territory in the culture wars (and he was also, not coincidentally, the last whose confirmation hearings were not broadcast live on television). In some respects, Stevens comes from another world; in a recent opinion, he noted that contemporary views on marijuana laws were “reminiscent of the opinion that supported the nationwide ban on alcohol consumption when I was a student.”



Ever since last fall, when it emerged that Stevens had hired only one law clerk for the next year, instead of his customary four, there has been growing speculation that he will soon retire. Since 1994, Stevens has been the senior Associate Justice and so has been responsible for assigning opinions when the Chief Justice is not in the majority. He has used that power to build coalitions and has become the undisputed leader of the resistance against the conservatives on the Court. “For those fifteen years, John Stevens has essentially served as the Chief Justice of the Liberal Supreme Court,” Walter Dellinger, who was the acting Solicitor General in the Clinton Administration and is a frequent advocate before the Court, says. In Stevens’s absence, leadership of the Court’s liberals would fall, by seniority, to Ginsburg, but she is also elderly and has suffered from a range of health problems. Even if President Obama appointed a like-minded replacement for Stevens, that person, while taking his seat, would not fill his role.

Stevens is an unlikely liberal icon. When he was appointed, he told me recently, he thought of himself as a Republican and always had—“ever since my father voted for Coolidge and Harding.” He declined to say whether he still does. For many decades, there have been moderate Republicans on the Court—John M. Harlan II and Potter Stewart (appointed by Eisenhower), Lewis F. Powell and Harry Blackmun (Nixon), David H. Souter (Bush I). Stevens is the last of them, and his departure will mark a cultural milestone. The moderate-Republican tradition that he came out of “goes way back,” Stevens said. “But things have changed.”

So has Stevens. His positions have evolved on such issues as civil rights and the death penalty, and he has led the Court’s counteroffensive against the Bush Administration’s treatment of the detainees at Guantánamo Bay. And, as Stevens’s profile has risen, and his views have moved left, so, too, has criticism of him from conservatives reached a higher pitch. “From the beginning of his time as a Justice, you could see Stevens’s roots in the New Deal Court and his willingness to justify an expanding welfare state,” Richard Epstein, a libertarian-leaning law professor at New York University, said. “On these issues, he’s been consistent and consistently wrong about everything—and highly influential.”

Still, Stevens’s views suggest a sensibility more than a philosophy. Many great judicial legacies have a deep theoretical foundation—Oliver Wendell Holmes’s skeptical pragmatism, William J. Brennan’s aggressive liberalism, Scalia’s insistent originalism. Stevens’s lack of one raises questions about the durability of his influence on the Court.

But, more than anything, his career shows how the Court has become a partisan battlefield. In that spirit, Roberts last week denounced President Obama’s criticism of the Court in his State of the Union address, saying that the occasion had “degenerated to a political pep rally.” When Stevens leaves, the Supreme Court will be just another place where Democrats and Republicans fight.

Stevens tends to weigh in at oral argument at around the halfway point, and he does something that none of his colleagues do: he asks permission. “May I ask you a question?” or “May I ask you this?” Frequent advocates find this tic amusing and endearing, a little like the bow ties that he always wears. “However Justice Stevens is going to come out on an issue, he is going to do it in a way that is very friendly and avuncular and good-natured,” Paul Clement, who was George W. Bush’s Solicitor General from 2005 to 2008, says. “He’ll say something like ‘This is probably obvious, but I have this one question. Could you help me with this one point?’ An experienced advocate knows that you have to be on your guard, because he’s probably found the one issue that puts your case on the line.” Jeffrey Fisher, who clerked for Stevens in the 1998-99 term and is now a professor at Stanford, says, “The reason he very rarely speaks first is that he really listens to his colleagues and tries to figure out what is on their minds and tries to figure out what the swing votes care about in the case.”

On September 9th last year, Stevens engaged in a classic version of advocacy-by-interrogation during the argument of Citizens United v. Federal Election Commission. The Court was hearing the case before the first Monday in October, the traditional start of its year—an indication of how important some of the Justices thought it was. In 2008, Citizens United, a right-leaning nonprofit organization, had used some corporate contributions, along with money from individuals, to produce and promote a documentary critical of Hillary Clinton. (“She is steeped in controversy, steeped in sleaze,” the narrator says.) The group planned a video-on-demand broadcast on the eve of several Democratic primaries. But the Bipartisan Campaign Reform Act of 2002 (also known as McCain-Feingold, after its two chief sponsors) forbids political advertisements paid for by corporations in the weeks before a primary. Citizens United challenged the law, asserting that its right to freedom of speech was violated.

The Court had first heard arguments in the case in March, 2009, and the questions raised then were mostly narrow ones—whether McCain-Feingold pertained to video-on-demand technology, for example. Months passed without a decision. But, in June, the Court issued an unsigned order asking for the case to be reargued on new terms. Such an order, which requires a majority, had not been issued since Roberts became Chief Justice, in 2005, and only rarely in earlier years. The Court now told the lawyers to address much broader issues about the relationship of corporations to the First Amendment. Specifically, it asked whether two decisions, from 1990 and 2003, which upheld restrictions on corporate speech, should be overturned.

For a century, Congress and the Supreme Court had been restricting the participation of corporations, and individuals, in elections, mostly through limits on campaign contributions. The Court had come to see campaign spending as a form of speech, but one that clearly could be regulated, especially if the speaker was a business. The notion that corporations did not have the same free-speech rights as human beings had been practically a given of constitutional law for decades, and the 1990 and 2003 decisions (both joined by Stevens) reflected that consensus. Now the Court seemed open to what had been radical notions—that corporations had essentially the same rights as individuals, and could spend potentially unlimited amounts of money in elections.

Stevens never uses his questions to filibuster, and his first query was simple. “Does the First Amendment permit any distinction between corporate speakers and individual speakers?” he asked Theodore B. Olson, the lawyer for Citizens United and a Solicitor General in the second Bush Administration.

Olson hedged, saying, “I am not—I’m not aware of a case that just—”

“I am not asking you that,” Stevens persisted. “I meant in your view does it permit that distinction?”

Finally, Olson said, “I would not rule that out, Justice Stevens. I mean, there may be.”

Stevens was trying to alert his colleagues to the extreme shift in the law the case implied. But Roberts, Scalia, Thomas, and Alito had already made plain that they were seeking just such a change. As has often been the case, Stevens’s only hope appeared to be to get the vote of Anthony M. Kennedy, to make a majority with himself, Ginsburg, Stephen G. Breyer, and Sotomayor. (So far, Sotomayor seems to be voting much like Souter, an ally of Stevens, whom she replaced.) When Elena Kagan, the Solicitor General, rose to defend McCain-Feingold, Stevens had his chance.

Stevens asked Kagan if it would be possible for the Court to rule narrowly. There could, for example, be an exception for nonprofits like Citizens United, or for “ads that are financed exclusively by individuals even though they are sponsored by a corporation.” Kagan, grasping the lifeline that Stevens was throwing her, said, “Yes, that’s exactly right.”

“Nobody has explained why that wouldn’t be a proper solution, not nearly as drastic,” Stevens went on. “Why is that not the wisest narrow solution of the problem before us?”

His strategizing was for naught. In a decision announced on January 21st, Kennedy, joined by the four conservatives, wrote a breathtakingly broad opinion, overturning the 1990 decision and much of the 2003 decision, and establishing, for the first time, that corporations have rights to free speech comparable to those of individuals. In the 1990 case, the Court’s majority opinion cited “the corrosive and distorting effects of immense aggregations of wealth that are accumulated with the help of the corporate form and that have little or no correlation to the public’s support for the corporation’s political ideas.” Kennedy’s opinion simply asserted that “independent expenditures, including those made by corporations, do not give rise to corruption or the appearance of corruption.”

Stevens’s ninety-page dissenting opinion in Citizens United (the longest of his career) was joined in full by Ginsburg, Breyer, and Sotomayor, and was a slashing attack on the majority, laden with sarcastic asides. “Under the majority’s view, I suppose it may be a First Amendment problem that corporations are not permitted to vote, given that voting is, among other things, a form of speech,” he wrote.

To make his displeasure clear, Stevens read his dissent from the bench. Justices usually read pared-down versions of published opinions, but Stevens prepared a twenty-minute stem-winder. When the moment came, however, he stumbled frequently, skipped words, and, at times, was hard to understand. (As when he said, “As the corp, court has long resembled . . .”) For the first time in public, Stevens looked his age.

Stevens charged that the way the majority had handled the case was even worse than the legal outcome. “There were principled, narrower paths that a Court that was serious about judicial restraint could have taken,” he wrote. “Essentially, five justices were unhappy with the limited nature of the case before us, so they changed the case to give themselves an opportunity to change the law.” He added, referring to the Court, “The path it has taken to reach its outcome will, I fear, do damage to this institution.” It suggested that, after thirty-five years on the Supreme Court, John Paul Stevens was about to walk away from a place he no longer recognized.

Several weeks later, I sat with Stevens in his sun-streaked chambers at the Court. He had begun his day with a tennis game (singles), then showered and changed into a white dress shirt, suit, and bow tie at the Court. He wears a hearing aid, but walks at an athlete’s loping pace and shakes hands with a punishing grip; he keeps two well-used putters on hand to practice his short game on the office carpet.

For many years, Stevens, who grew up in Chicago, and his wife have divided their time between Washington and Fort Lauderdale, where they own a condominium. In the nineteen-eighties, Court insiders dubbed Stevens the FedEx Justice, because he spent so much time in Florida and corresponded with his chambers by overnight mail. Stevens still flees Washington at every opportunity, especially in the winter (though he now communicates electronically). He deals with his colleagues mostly by memorandum, occasionally by telephone, and rarely in person, except when the Court is in session. His law clerks report that months go by without another Justice visiting his chambers. Under Chief Justice Rehnquist, most of the Justices kept their distance from one another, and this has continued under Roberts, but Stevens in particular is, while cordial, remote.

Yet in person Stevens is as genial as he appears on the bench. He is ever hopeful about his home-town Cubs, and a devoted player, and fan, of golf—“though I have to confess, I miss Tiger.” His financial-disclosure form lists honorary memberships in four country clubs—near Chicago, near Indianapolis, near Washington, and in Florida. But when, in our conversation, the subject turned to the contemporary Supreme Court, Stevens’s tone darkened.

I asked him if the center of gravity had moved to the right since he became a Justice. “There’s no doubt,” he said. “You don’t have to ask me that. Look at Citizens United.” He added, “If it is not necessary to decide a case on a very broad constitutional ground, when other grounds are available, then doesn’t that create the likelihood that people will think you’re not following the rules?”

Stevens doesn’t pretend that he’s more in tune with the Court than he is. When I asked him if there were any cases he especially regretted, he said, “Dozens. There are a lot I’m very unhappy with.” The first two that came to mind: District of Columbia v. Heller, in which the Court, in 2008, recognized an individual’s right to own weapons under the Second Amendment; and Bush v. Gore, halting the recount that the Florida Supreme Court had ordered in the 2000 Presidential race. He was in the minority in both.

On some subjects, his own views have shifted. Writing on affirmative action, in 1980, he noted, “If the National Government is to make a serious effort to define racial classes by criteria that can be administered objectively, it must study precedents such as the First Regulation to the Reich’s Citizenship Law of November 14, 1935”; yet in 2003 he engineered the preservation of racial preferences in admissions in a case involving the University of Michigan Law School. In 1976, he joined his colleagues in ending a moratorium on the death penalty; in 2008, he wrote that executions are “patently excessive and cruel and unusual punishment violative of the Eighth Amendment.” Stevens has always supported abortion rights and an expansive notion of freedom of speech.

In all areas, Stevens has favored gradual change over sudden lurches and precedent over dramatic overrulings. But, especially since Roberts took over as Chief Justice, Stevens has found himself confronting colleagues who have a very different approach—an aggressive, line-drawing conservatism that appears bent on remaking great swaths of Supreme Court precedent.

On a wall in Stevens’s chambers that is mostly covered with autographed photographs of Chicago sports heroes, from Ernie Banks to Michael Jordan, there is a box score from Game Three of the 1932 World Series, between the Yankees and the Cubs. When Babe Ruth came to bat in the fifth inning, at Wrigley Field, according to a much disputed baseball legend, he pointed to the center-field stands and then proceeded to hit a home run right to that spot. The event is known as “the called shot.”

“My dad took me to see the World Series, and we were sitting behind third base, not too far back,” Stevens, who was twelve years old at the time, told me. He recalled that the Cubs players had been hassling Ruth from the dugout earlier in the game. “Ruth did point to the center-field scoreboard,” Stevens said. “And he did hit the ball out of the park after he pointed with his bat. So it really happened.”

Stevens has a reverence for facts. He mentioned that he vividly recalled Ruth’s shot flying over the center-field scoreboard. But, at a recent conference, a man in the audience said that Ruth’s homer had landed right next to his grandfather, who was sitting far away from the scoreboard. “That makes me warn you that you should be careful about trusting the memory of elderly witnesses,” Stevens said. The box score was a gift from a friend; Stevens noticed that it listed the wrong pitchers for the game, so he crossed them out with a red pen, and wrote in the right names.

This meticulousness is evident in Stevens’s judicial writing. Most Supreme Court Justices, if they write first drafts of their opinions at all, concentrate on the legal analysis, which usually includes the flowery language that gets quoted in newspapers and textbooks; it is for their law clerks to write up the facts of the case, the driest part. Stevens always does the facts himself (and says he does all the other drafting, too). For many years, his was the only chambers to review individually the thousands of petitions for certiorari that come to the Court each year; the others pooled their efforts. (Alito also recently left the cert. pool.)

It was not a surprise that Ernest Stevens, the Justice’s father, got tickets to the World Series. The Stevenses were prominent citizens of Chicago. The Justice’s grandfather James Stevens had gone into the insurance business, and, with the profits, he and his sons Ernest and Raymond bought land on South Michigan Avenue and built what was then the biggest hotel in the world, with three thousand rooms. The Stevens Hotel opened in 1927, and featured a range of luxurious services, a bowling alley, and a pitch-and-putt golf course on the roof. There was a big, stylized “S” over the main entrance. “We stayed at the hotel sometimes, every now and then,” Stevens told me. “I have pleasant memories, and there are also some unpleasant aspects of it, too.”

The Depression hit the family hard. As chronicled in “John Paul Stevens: An Independent Life,” a biography by Bill Barnhart and Gene Schlickman, which will be published in May, questions arose about whether the Stevens family had embezzled funds from the insurance company to prop up the hotel. In January, 1933, three months after Ruth’s called shot, the Chicago Herald-Examiner reported, “The Stevens children were sent to bed so they could not see their father arrested.” After Ernest Stevens was released on bail, according to the new biography, four men brandishing a submachine gun, two shotguns, and a revolver ransacked the Stevens home in search of cash. Ernest and Elizabeth and two of their children, William, age fifteen, and John, age twelve, as well as the family cook and two maids, were herded upstairs and held in a bedroom after one of the boys was forced to open a safe in the first-floor library.
It remains unclear whether the intruders were police officers or gangsters (or both), but they found no secret stash of cash.

Later in 1933, the patriarch, James, had a debilitating stroke. A few days afterward, John’s uncle, Raymond, committed suicide rather than endure the disgrace of a criminal prosecution. Ernest Stevens thus had to go to trial alone, and in the toxic environment of the Depression he was swiftly convicted. He faced ten years in state prison. Deliverance came in 1934, when his appeal reached the Illinois Supreme Court and the justices unanimously reversed his conviction. “In this whole record there is not a scintilla of evidence of any concealment or fraud attempted,” the decision said. Still, the family never recovered its former wealth, and lost control of the hotel. (It is now known as the Chicago Hilton and Towers; the “S” is still there.)

“It was a tough period, no doubt about it,” Stevens told me. Notably, what saved his father was an appellate court. Stevens dismisses the connection as a “coincidence,” adding, “Of course, I respected the decision, but I was pretty young at the time—though I remember the words ‘not a scintilla of evidence.’ ”

The influence may be greater than Stevens acknowledges. His jurisprudence is distinguished by his confidence in the ability of judges to resolve difficult issues. “Generally, he respects the heck out of the profession of which he’s a member,” Deborah Pearlstein, a research scholar at Princeton who clerked for Stevens in 1999-2000, said. “Whether you take the examples from his personal life, or the litany of cases he’s heard in decades on the bench, his reliance on and confidence in judges to find out the truth was pretty unswerving.” Writing for a unanimous Court in 1997, Stevens rejected Bill Clinton’s argument that the Paula Jones case should be postponed until after his Presidency so that it would not interfere with his duties: “If properly managed by the District Court, it appears to us highly unlikely to occupy any substantial amount of [Clinton’s] time.” (“I get razzed a lot for predicting there wouldn’t be anything to come out of the case,” Stevens told me, “because they were, in effect, saying that the opinion is what triggered the impeachment and all the rest of it.” But, he said, “the opinion really had absolutely nothing to do with what followed, because the only issue was when the trial was going to occur, not whether it would occur. And it was agreed by everybody that discovery would go forward. So we are not responsible for the fact that they took the deposition, and the deposition is what got the President in trouble.”)

In Bush v. Gore, Stevens framed his colleagues’ decision as an insult to the judicial role, one that could, he wrote, “only lend credence to the most cynical appraisal of the work of judges throughout the land.” In words that became better known than anything in the collectively written majority decision, he continued:

Although we may never know with complete certainty the identity of the winner of this year’s Presidential election, the identity of the loser is perfectly clear. It is the Nation’s confidence in the judge as an impartial guardian of the rule of law.

John Stevens rallied from the family trauma of his teen-age years and excelled at the Lab School of the University of Chicago. (Sasha and Malia Obama were students there; the Obamas lived about a mile away from where Stevens grew up, on the city’s South Side.) He enrolled at the university in 1937. He was the editor of the newspaper, a stalwart of the tennis team, the head class marshal, a member of Phi Beta Kappa. Toward the end of his undergraduate career, the dean of students, Leon P. Smith, rather mysteriously suggested that he take a correspondence course, and Stevens did. He later learned, he said, that Smith “was an undercover naval officer who had been asked to see if he could get people interested in cryptography. Somewhere toward the end of November of 1941, they sent me a letter that said you’ve completed enough of the assignment, so you’re now eligible to apply for a commission.” He enlisted on December 6, 1941. “The next day, the war started,” he said.

Stevens spent most of 1942 in Washington, learning to analyze enemy transmissions, before being transferred to Pearl Harbor, where he served until 1945. “All of the intercepted Japanese traffic would come over the desk,” he said. “I was responsible for a twenty-four-hour period. The timing was such that when I came on, which would be eight o’clock in the morning, you know, that would correspond to a new day in Japan.” He went on, “I’d write up a report for Captain Layton, who was the intelligence officer for Admiral Nimitz. And we would give a summary of what we could learn from the day’s traffic.”

Like many veterans, Stevens will shed a customary reserve to share a war story. He tries to have lunch with the law clerks from the chambers of each of his colleagues in the course of a year. Thomas Lee, who clerked for Souter in 2001-02, during his lunch with Stevens mentioned that he, too, had been a Navy cryptologist. “I told him that I had served almost exactly fifty years after he did, and in the same place—in the Pacific,” Lee, who is now a professor at Fordham Law School, told me. “He asked me to stop by his chambers so we could continue talking about it.” Lee did, and the Justice told him about a moral dilemma that had haunted him for decades.

In April, 1943, a coded message came across Stevens’s desk—“one eagle and two sparrows, or something like that,” he said. Stevens knew the transmission meant that an operation based on intelligence from his station had been a success. American aviators had tracked and shot down the airplane of Admiral Isoroku Yamamoto, who was the architect of the Japanese attack on Pearl Harbor and the leader of Axis forces at Midway. Stevens was a twenty-three-year-old lieutenant, and the mission, essentially a targeted assassination, troubled him. “Even at the time, it seemed to me kind of strange that you had a mission that was intended to kill a particular individual,” he told me. “And it was an individual who was a friend of some of the Navy officers.” (Before the war, Yamamoto had trained with the U.S. Navy and studied at Harvard.) Ultimately, Stevens concluded that the operation, which was approved by President Roosevelt, was justified, but the moral complexity of such a killing, even in wartime, stayed with him. “It is a little different than your statistics about so many thousands of highway deaths—that doesn’t mean all that much,” he said. “But if somebody you know is killed, you have an entirely different reaction.” The morality of military action became a lifelong preoccupation.

Veterans of the Second World War dominated American public life for decades, but Stevens is practically the last one still holding a position of prominence. He is the only veteran of any kind on the Court. (Kennedy served briefly in the National Guard; Thomas received a student deferment and later failed a medical test during Vietnam.) “Somebody was saying that there ought to be at least one person on the Court who had military experience,” Stevens told me. “I sort of feel that it is important. I have to confess that.” The war helped shape his jurisprudence, and even today shapes his frame of reference. In his dissent in Citizens United, he questioned the majority’s insistence that the United States government could never discriminate on the basis of the identity of a speaker by saying, “Such an assumption would have accorded the propaganda broadcasts to our troops by ‘Tokyo Rose’ during World War II the same protection as speech by Allied commanders.” Since Tokyo Rose is not exactly a contemporary reference, Stevens told me, “my clerks didn’t particularly like that.”

Stevens’s Second World War experience also played a part in perhaps his most anomalous opinion as a Justice. In 1989, he dissented from the decision that protected the right to burn the American flag as a form of protest. “The ideas of liberty and equality have been an irresistible force in motivating leaders like Patrick Henry, Susan B. Anthony, and Abraham Lincoln, schoolteachers like Nathan Hale and Booker T. Washington, the Philippine Scouts who fought at Bataan, and the soldiers who scaled the bluff at Omaha Beach,” he wrote in an unusually lyrical dissent. “If those ideas are worth fighting for—and our history demonstrates that they are—it cannot be true that the flag that uniquely symbolizes their power is not itself worthy of protection.”

“The funny thing about that case is, the only consequence of it—nobody burns flags anymore,” Stevens told me. “It was an important symbolic form of protest at the time. But nobody does it anymore. As long as it’s legal, it’s not a big deal. You just don’t have flag burning.”

The war followed Stevens at the beginning of his legal career, too. After being discharged, in 1945, he raced through Northwestern Law School in two years, winning valedictorian honors. (He also acquired a new name, at least professionally. “I had a professor who said that every lawyer should have something unique about them,” he told me. “Some people sign their names in green ink, some people did other things. I had this very boring name. Who can remember ‘John Stevens’? So I added my middle name. I’ve used it ever since for work, but my friends have always called me John.”) Stevens earned a Supreme Court clerkship with Justice Wiley B. Rutledge, an F.D.R. appointee. In his year at the Court, Stevens worked on a case, Ahrens v. Clark, that had echoes sixty years later.

The matter grew out of the wartime detention of some hundred and twenty German-born U.S. residents, who were still being held at Ellis Island in 1948. The issue was whether these detainees had the right to challenge their incarceration in an American court. In a memo to Rutledge, Stevens wrote, “I should think that even an alien enemy ought to be entitled to a fair hearing on the question whether he is in fact dangerous.” Nevertheless, a six-to-three majority saw it the other way, so Rutledge and his twenty-eight-year-old clerk collaborated on a lengthy dissent, which said that the majority had torn at “the roots of individual freedom.”

Rutledge and Stevens were vindicated in 1973, when the Court effectively overruled its Ahrens precedent in a case involving the Kentucky legal system, but the issue of the rights of enemy aliens in wartime largely disappeared from the Court’s docket for many decades. It returned with a vengeance in the second Bush Administration. As Stevens said of the Ahrens dissent, with typical understatement, “It was relevant in the Guantánamo case.”

After his clerkship, Stevens returned to Chicago and took a job at one of the city’s first religiously integrated law firms. Abner Mikva clerked on the Supreme Court the year after Stevens, then returned to Chicago to start a career in public life. “Those were the days when there was such a thing as a moderate Republican, and that’s what he was,” Mikva said of Stevens. “He was a pretty conservative Republican on economic issues, but he was always a great progressive on civil rights and social rights.”

Stevens’s career resembled that of moderate Republicans like Harlan, Stewart, and Powell. All were successful corporate lawyers who leavened their private practice with periods of public service. Three years after joining the firm, Stevens did another short stint in Washington, this time as a lawyer on the Republican staff of the House Judiciary Committee, where he worked on antitrust issues. Back in Chicago, he became a widely renowned antitrust litigator while enjoying the life of a golf-playing suburban burgher. He and his wife, Betty, had four children, two of them adopted, and he took up flying a private plane as a hobby, which also enabled him to visit clients around the Midwest.

Robert H. Bork, the conservative scholar who was an unsuccessful nominee to the Supreme Court, was also an antitrust lawyer in Chicago in the late fifties, and in one case he and Stevens represented co-defendants. “I found him an amiable man, with conventional views for the time, and he gave no hint that he would become such a liberal in later years,” Bork told me.

Stevens likely would have lived out his life in prosperous obscurity if one of Chicago’s periodic corruption scandals hadn’t intervened. A local character, a wheelchair-bound frequent litigant named Sherman Skolnick, alleged that two justices on the Illinois Supreme Court had taken bribes to sway their votes in a political-corruption case. The court formed a committee to investigate, which appointed Stevens as its counsel. In a series of dramatic hearings in 1969, Stevens established that the two judges had indeed taken bribes. Both resigned, and Stevens became a public figure. The next year, Senator Charles Percy, an Illinois Republican, put Stevens up for a judgeship on the Court of Appeals for the Seventh Circuit. Richard Nixon followed Percy’s advice, and, in 1970, Stevens began his judicial career.

Gerald Ford, coming into office in 1974, sought to demonstrate a renewed commitment to ethics at the Justice Department by naming as Attorney General Edward H. Levi, the dean of the University of Chicago Law School. When, the following year, William O. Douglas left the Supreme Court, Levi pushed for Stevens, his fellow-Chicagoan, whose anti-corruption credentials looked especially desirable in that post-Watergate moment. “Ford’s purpose was not to make a big splash and change the world,” Jack Balkin, a professor at Yale Law School, said. “Ford was still smarting after the pardon of Nixon. He wanted to unite the country. There was no attempt to nominate a strong ideologue. That just wasn’t on the table. They wanted a straight-arrow, middle-of-the-road, normal guy, excellent lawyer—and that’s what they got in Stevens.” Ford nominated Stevens, who was then fifty-five, on November 28, 1975, and the Senate confirmed him just nineteen days later, by a vote of ninety-eight to zero.

Stevens’s corruption investigation had a profound effect on the kind of judge he became. One of the justices on the Illinois Supreme Court had written a draft dissenting opinion in the case in which his colleagues were paid off but at the last minute had decided to remain silent. (Dissents were rare in Illinois.) “If there is disagreement within an appellate court about how a case should be resolved, I firmly believe that the law will be best served by an open disclosure of that fact, not only to the litigants and their lawyers, but to the public as well,” Stevens wrote in the introduction to “Illinois Justice,” a 2001 book about the scandal. As a result, “I do clutter up the U.S. Reports with more separate writing than most lawyers have either time or inclination to read.”

This is true. Especially in his early years, Stevens wrote a lot of opinions, including many short dissents and concurrences. The point of all this writing has not always been clear—he’s not warning of corruption among his colleagues—and initially the number of opinions gave Stevens a reputation for eccentricity. “His early concurrences did not move the ball—they were personal statements,” Mikva said. “They were not stirring, Brandeis-type dissents. It used up a lot of his time.” (Also in his first few years in Washington, Stevens divorced and remarried. His second wife, Maryan Mulholland Simon, an old friend from Chicago, is a dietician, whose ministrations Stevens credits for his longevity.)

At first, Stevens settled into the ideological center of the Court, which at the time was bounded, on the left, by William Brennan and Thurgood Marshall, and, on the right, by Rehnquist, then an Associate Justice, and Chief Justice Warren E. Burger. The turning point came in 1994, when Blackmun retired and Stevens became the senior Associate Justice on the Court. Then, as now, the Court was closely divided between liberals and conservatives, so both sides had at least a chance of cobbling together majorities in important cases. This part of the job requires political deftness, which Stevens, in his Lone Ranger mode, had not often displayed. But he flourished in the role.

“Stevens controlled the assignment of opinions with great skill,” Walter Dellinger said. “Sometimes he has assigned the opinions to himself, but more important are the cases in which he gave up the privilege of writing the opinion in landmark cases in order to secure a shaky majority.” In 2003, Stevens asked O’Connor to write the opinion in Grutter v. Bollinger, the University of Michigan Law School case. The same year, Stevens bestowed on Kennedy the opportunity to write Lawrence v. Texas, the epochal gay-rights case invalidating bans on consensual sex between adults of the same gender.

Decisions like Lawrence, as well as abortion-rights cases, which are based on what are known as “unenumerated rights” in the Constitution, have long drawn the ire of conservatives. “It’s in recent years that Stevens has most become an activist judge, on issues like homosexual rights,” Bork told me. “He finds rights in the Constitution that no plausible reading could find there.”

But such cases also raised his standing with liberals. “It was particularly selfless for Stevens to assign Lawrence to Kennedy,” Dellinger said. “He could have chosen the honor of writing Lawrence for himself. But it seems he wanted to make sure that the tentative vote to strike down the Texas law held up, and assigning the opinion of the Court to Kennedy locked in the majority.”

Still, the summit of Stevens’s achievements on the bench came during the Bush Administration, in the series of decisions about the detention of prisoners at Guantánamo Bay, and he kept for himself the most important of these opinions. In the 2004 case of Rasul v. Bush, among the first major cases to arise from Bush’s war on terror—and the first time that a President ever lost a major civil-liberties case in the Supreme Court during wartime—Stevens wrote for a six-to-three majority that the detainees did have the right to challenge their incarceration in American courts. In his opinion, which was written in an especially understated tone, in notable contrast to the bombastic rhetoric that accompanied the war on terror, he cited Rutledge’s dissent in the Ahrens case—which he himself had helped write, fifty-six years earlier. One of Stevens’s law clerks, Joseph T. Thai, later wrote an article in the Virginia Law Review entitled “The Law Clerk Who Wrote Rasul v. Bush,” which concluded that “Stevens’s work on Ahrens as a law clerk exerted a remarkable influence over the Rasul decision.”

Two years after Rasul, Stevens wrote the opinion for the Court in Hamdan v. Rumsfeld, in which a five-to-three majority rejected the Bush Administration’s plans for military tribunals at Guantánamo, on the ground that they would violate both the Uniform Code of Military Justice and the Geneva conventions. (Roberts did not participate in that case, because as a judge on the D.C. Circuit he had joined the opinion that Stevens overruled.)

Stevens’s repudiation of the Bush Administration’s legal approach to the war on terror was total. First, in Rasul, he opened the door to American courtrooms for the detainees; then, in Hamdan, he rejected the procedures that the Bush Administration had drawn up in response to Rasul; finally, in 2008, in Boumediene v. Bush, Stevens assigned Kennedy to write the opinion vetoing the system that Congress had devised in response to Hamdan.

After the attacks of September 11, 2001, the Bush Administration conducted its war on terror with almost no formal resistance from other parts of the government, until Stevens’s opinions. He was among the first voices, and certainly the most important one, to announce, as he wrote in Hamdan, that “the Executive is bound to comply with the Rule of Law.”

“The Second World War was the defining experience of his life, and he is proud of being a veteran,” Cliff Sloan, a Washington lawyer who clerked for Stevens in the mid-eighties, said. “No one can challenge his patriotism, and that’s why he was the right guy to take on the Bush Administration’s position at that time and in that way.”

Stevens, throughout his years on the Court, has drawn not just on history and precedent but on contemporary values and even on his own experience as a judge. According to Stevens, that approach has its origins in his brief stint as a lawyer on the staff of the House Judiciary Committee. “That was probably one of the most important parts of my education,” Stevens told me. He recalled an incident involving an antitrust law: “I remember explaining one of the tricky problems in the statute to one of the members of the committee. I got all through it, and he said, ‘Well, you know, let’s let the judges figure that one out.’ ”

What that told him was that “the legislature really works with the judges—contrary to the suggestion that the statute is a statute all by itself,” Stevens said. “There is an understanding that there are areas of interpretation that are going to have to be filled in later on, and the legislators rely on that. It’s part of the whole process. And you realize that they’re not totally separate branches of government—they’re working together.”

Andrew Siegel, a Stevens clerk and now a law professor at Seattle University, said, “Stevens believes that constitutional decision-making is conducted through the interpretation of a mix of various sources—a complex balancing act.” He added, “The glue holding it all together is judicial judgment.”

This is the core of Stevens’s disagreement with his great intellectual adversary on the Court, Antonin Scalia. When it comes to interpreting statutes, Scalia believes that the Court should be guided by the words of the law “all by itself,” as Stevens put it. Steven G. Calabresi, a law professor at Northwestern and a co-founder of the conservative Federalist Society, told me, “What makes Stevens a moderate liberal is that he is fundamentally a legal realist, which means that when the text and history of the Constitution point in one direction, and good results and good consequences point in the other, he’ll usually go with what he sees as the good results.” He added, “Scalia sees the role of the judge as to read the text and apply it—period. Stevens thinks the law is more of a living thing, and he takes text and history and applies it in a way that he thinks serves the purposes of the framers, not necessarily their exact words.”

Just about every year, Stevens and Scalia take each other on in one or more cases. These contests reflect the temperaments of the two men—Stevens’s cautious balancings against Scalia’s caustic certainties. One dramatic example came in 2008, in Baze v. Rees, which asked whether execution by lethal injection amounted to cruel and unusual punishment, in violation of the Eighth Amendment. Stevens and Scalia were both part of the seven-member majority, which said that lethal injections were permissible, but wrote separate concurring opinions. Stevens’s showed how his experience on the Court had soured him on the death penalty. “State-sanctioned killing is . . . becoming more and more anachronistic,” he wrote, and he proceeded to show that all of the purported justifications for the death penalty—deterrence, retribution—failed in practice. “I have relied on my own experience in reaching the conclusion that the imposition of the death penalty ‘represents the pointless and needless extinction of life with only marginal contributions to any discernible social or public purposes.’ ” Still, he felt bound by the precedents of the Court to uphold lethal injections.

Scalia framed his own concurrence as a “needed response to Justice Stevens’s separate opinion.” He criticized Stevens’s assertions about the death penalty, but it was Stevens’s invocation of his own “experience” that really outraged Scalia. “Purer expression cannot be found of the principle of rule by judicial fiat. In the face of Justice Stevens’s experience, the experience of all others is, it appears, of little consequence,” Scalia wrote, adding, “It is Justice Stevens’s experience that reigns over all.”

Scalia’s mockery gets to the heart of his critique of Stevens’s jurisprudence—that his variability simply amounts to a judge’s whim. “That flexibility and malleability that Stevens talks about is really just a license for a judge to reach any result he wants,” M. Edward Whelan III, a former Scalia clerk who runs the conservative Ethics and Public Policy Center, said. “Scalia believes in rules.” According to Calabresi, “Stevens gives judges too much freewheeling power, and that’s not the way our system was supposed to work and not the way it works the best.”

True to form, Stevens dismisses doctrinaire originalism, but says that historical evidence does have its uses. “The original intent cannot be the final answer—the world changes,” Stevens told me. “But I think it’s always a part of your job to take a look at what you can find out about the original drafting and all the rest of it.” In Heller, the gun-control case, Scalia invoked his view of original intent to find that the Second Amendment gave individuals a personal right to possess weapons. In his dissent, Stevens looked exhaustively at the same historical evidence and reached an opposite conclusion: that the authors of the Second Amendment intended to create no such right. “I’ve written a lot of opinions in which I’ve looked at the history pretty carefully,” he said. For Stevens, then, original intent is one factor—but only one—that should tell a Justice what the Constitution means.

On September 29, 2005, Stevens administered the oath of office to Roberts in a ceremony at the White House. “I didn’t think that ceremony should have been at the White House,” Stevens told me. “I feel very strongly about that. I think the proper place for that ceremony is at the Court. It has great symbolic importance. After a nominee has been confirmed, he’s a member of the judiciary—he’s not primarily the person who was selected by the President for the Court.” Still, Stevens went ahead with the ceremony, because “I think he was a particularly fine appointment, and I didn’t want anyone to get the misimpression that I didn’t approve of him.”

During Roberts’s tenure, though, Stevens’s view of the Constitution—holistic, gradualist, inclusive, broadly sourced—has most often been on display in dissent in important cases. The replacement of Rehnquist and O’Connor by Roberts and Alito made not only a more conservative Court but also a more aggressive one, with far less regard for precedent. This is evident in areas from abortion law (where the Court upheld for the first time a total ban on a specific medical procedure) to antitrust (where the majority overturned a ninety-six-year-old line of cases). William Rehnquist was no liberal, but he did not lead an attack on the Court’s past.

Stevens believes that even the 1954 landmark, Brown v. Board of Education, which struck down the doctrine of “separate but equal” in education, is under assault. In 2007, when the Court, in an opinion by Roberts, struck down the Seattle school-integration plan, Stevens, in dissent, could only murmur in wonder: “It is my firm conviction that no Member of the Court that I joined in 1975 would have agreed with today’s decision.”

Even Stevens’s manners at oral argument are not entirely the result of Midwestern politeness. “You want to be sure to get it in,” he said. “The bench is a little more active than it was years ago. You’ve got four or five Justices who are very active.” Is that a good thing? “I’m not a Clarence Thomas, but I think a little more permission to the lawyers to develop their own argument would be better than the way it does develop.”

How long will Stevens remain on the Court? Good genes (one of his older brothers practiced law until he was ninety-one), a happy home, plenty of exercise, and even more luck could allow Stevens to keep up the fight into his tenth decade. Last December, he had lunch with Peter Isakoff, a Washington lawyer who was one of his early Supreme Court law clerks. “He had just played tennis that morning—singles!—and I was just kind of amazed,” Isakoff recalled. “And so I asked him, ‘Do you still run?’ And he looked at me and said, ‘Well, how else are you going to get to the ball?’ ”

With the election of Barack Obama, the question of Stevens’s retirement has become more pressing. Even though Stevens was appointed by a Republican President, many assume that he would never willingly have turned his seat over to George W. Bush. I asked Stevens about his plans.

“Well, I still have my options open,” he said. “When I decided to just hire one clerk, three of my four clerks last year said they’d work for me next year if I wanted them to. So I have my options still. And then I’ll have to decide soon.” On March 8th, he told me that he would make up his mind in about a month.

Stevens needs a little more than two years to surpass Douglas for the longest tenure on the Court, and about one year to equal Oliver Wendell Holmes as the oldest serving Justice, but he said that those numbers were irrelevant. “I’ve never felt any interest in trying to break any records,” he said. He has had a closeup view of the complexities of retirement decisions for Supreme Court Justices. William Douglas, whom Stevens replaced, stayed on the Court after a series of strokes that incapacitated him; his colleagues awkwardly forced his resignation. On the other hand, O’Connor left the Court in good health, which continues, and has watched her successor, Alito, undo part of her legacy.

Did it matter which President named his replacement?

“I’d rather not answer that,” Stevens said. The Republican Party may have moved right since 1975, but Ford himself never displayed anything but pride in his choice of Stevens for the Court. In 2005, a year before his death, Ford wrote, in a tribute to Stevens, “For I am prepared to allow history’s judgment of my term in office to rest (if necessary, exclusively) on my nomination thirty years ago of John Paul Stevens to the U.S. Supreme Court.”

As for Obama, Stevens said, “I have a great admiration for him, and certainly think he’s capable of picking successfully, you know, doing a good job of filling vacancies.” He added, “You can say I will retire within the next three years. I’m sure of that.”

He will not be seen again, under any circumstances, at a State of the Union address. “I went to a few of them when I was first on the Court, but I stopped,” Stevens told me. “First, they are political occasions, where I don’t think our attendance is required. But also it comes when I am on a break in Florida. To be honest with you, I’d rather be in Florida than in Washington.”