This Department features reviews and summaries of new books that link history and current events. From time to time we also feature essays that highlight publishing trends in various fields related to history.
If you would like to tell the editors about a new book (even your own) that addresses the concerns of HNN -- current events and history -- or would like to write a review, please send us an email: email@example.com.
SOURCE: Providence Sunday Journal
If sportswriter Jimmy Cannon was right when he self-deprecatingly described his part of the newspaper as “the toy department,” then the New Yorker’s sportswriting is F.A.O. Schwarz: the top of the line. David Remnick, the magazine’s editor (and author of a recent biography of Barack Obama—when does he sleep?), has compiled a collection of 32 articles (plus a goodly number of sports-related cartoons) written between 1930 and 2008, many by familiar authors, some of them classics, all worth savoring.
Classics include Roger Angell’s 1981 essay on the college baseball game featuring pitchers Frank Viola of St. John’s and Ron Darling of Yale, which he watched with the elderly Smokey Joe Wood, who went 34-5 for the 1912 world champion Red Sox. Then there’s John McPhee’s “A Sense of Where You Are” (1965), on Princeton basketball star Bill Bradley. And, inevitably, John Updike’s “Hub Fans Bid Kid Adieu” (1960), on Ted Williams’ memorable last at-bat. Rereading them is like visiting old friends.
Such classics are readily available elsewhere, of course—Updike’s essay has just been published as a small book (with an introduction and afterword) by Library of America. Others, even by well-known writers, are less easily found, so they are especially welcome here. Ring Lardner’s characteristically grumpy take on the juiced-up baseball of the 1930 season is the earliest; Herbert Warren Wind (on golf) and A.J. Liebling (on boxing) were New Yorker regulars, while others, such as Henry Louis Gates (who writes about Michael Jordan), range far afield from where we might expect to find them.
All are distinguished by the quality of their prose, their insight into individual lives, and their depiction of sporting events as social and cultural phenomena with significance beyond the stadium or arena. Malcolm Gladwell’s “The Art of Failure” (2000) begins with Jana Novotna’s memorable collapse in the 1993 Wimbledon final, considers the difference between “choking” and “panic,” uses cognitive science’s distinction between “explicit” and “implicit” learning to analyze the actions that led to John F. Kennedy, Jr.’s fatal plane crash, and concludes with Greg Norman disintegrating on the back nine and handing the 1996 Masters golf tournament to Nick Faldo.
Vivid and varied, often witty and always worthwhile, this collection is a compendium for all seasons.
[This review appeared in slightly different form in the Providence Sunday Journal, August 15, 2010.]
This gripping piece of journalism, published last year by McSweeney's, a San Francisco publishing house founded by the author, has just been issued in paperback by Vintage Books and is slated to be the source of an animated film to be directed by Jonathan Demme in 2011. At the simplest level, the book tells the story of how an American family underwent and survived the catastrophe that was Hurricane Katrina. But it resonates on many levels: as a deeply personal chronicle of a not-quite-natural disaster; as an immigrant saga depicting the limits of the American Dream; and as a sobering, if not harrowing, case study of eroding civil liberties that can't help but trouble every U.S. citizen.
One reason why the book works as well as it does is the beautifully executed artlessness of its narration. Eggers alternates the point of view in brief passages between the recollections of the Syrian immigrant painting contractor Abdulrahman Zeitoun of New Orleans and his native-born wife Kathy, a convert to Islam. This method, interspersed with flashbacks of the couple's respective backgrounds, is the organizing strategy for the run-up to and experience of the hurricane itself. But it diverges once Zeitoun gets swept up in the improvised criminal justice system of the Federal Emergency Management Agency, as we experience Kathy's panic and despair over her lost husband for a stretch and then witness Zeitoun's horrific ordeal, amid declining health, in the section that follows. Through it all, Eggers maintains editorial restraint, which makes their ordeal, simply expressed in their own terms, all the more compelling. So do the occasional moments of wonder, even beauty, in the stillness that follows the storm, and the understated, stunning decency of a man who so savored helping people and animals that he stayed behind in the cataclysm, even as his wife, who fled with their children, begged him in vain to flee as well.
The depiction of Katrina's ravages is upsetting enough. But what may be even worse are the privations the Zeitouns suffer once he's thrown into a jail built by prison inmates in a matter of days -- amid the chaos around them -- and run by an unaccountable FEMA. (Hell of a job, Brownie.) Here we see the crimes of Guantanamo Prison, which, however deplorable, are nevertheless committed on foreign shores, replicated on U.S. soil. Zeitoun is arrested for a crime he didn't commit, incarcerated without the right to make a phone call, and excoriated for even touching the walls of the cage into which he was thrown. Here is a second train wreck that follows the first, with the same sense of awful inevitability. You can't bear it, but you can't stop reading, either. Yes, of course, in this moment of chaos something approaching martial law was necessary. What's disturbing is how badly planned and executed that martial law was -- the sheer senselessness of it, even months and years later. The only really redemptive element in the story is Zeitoun's faith -- not in America, which is broken, probably for good, but in a loving Allah who provides shelter in a man-made storm.
In a way, Zeitoun is less revealing for what it shows about those awful days in August and September of 2005 than as a lightning-illuminated snapshot of the ongoing decay of an egalitarian American democracy. To be sure, the egalitarian strand in American democracy has never been the only or even the dominant one -- American democracy has co-existed with slavery and plutocracy, for example, for hundreds of years, and at times been defined by precisely such boundaries. But the convergence between social, political, and economic equality and democracy, which crested in the middle third of the twentieth century, has been receding ever since. In offering a story of those whom British Petroleum chairman Carl-Henric Svanberg -- whose company has plunged Louisiana into disaster again -- recently called "small people," Zeitoun documents how viscerally the land of the free is shrinking. And how fast.
The decade of the seventies has become a historiographic cottage industry. For a long time, about the only study out there was Peter Carroll's It Seemed Like Nothing Happened; first published in 1982, it has held up surprisingly well. The consensus standard treatment now seems to be Boston University historian Bruce Schulman's 2001 book The Seventies; David Frum gave the decade a puckish -- and pointedly neocon -- reading in How We Got Here in 2000. More recent treatments have tended to focus on aspects of the period, like the Ford and Carter presidencies. In 1973 Nervous Breakdown (2006), Andreas Killen made a compelling case for that year as a synecdoche for the seventies as a whole. And Natasha Zaretsky rendered a persuasive gender reading of the period in No Direction Home: The American Family and Fear of National Decline.
Labor historian Jefferson Cowie, who teaches at the School of Industrial and Labor Relations at Cornell, follows the recent tendency to render portraits of the decade through a particular lens. In Stayin' Alive, that lens is both specific and capacious: that of the American working class. Working-class culture figures prominently in all the above-mentioned works, but Cowie's focus on it gives his book an energy and coherence that will likely make it among the more useful and durable treatments of the period.
Cowie's take on the seventies is tragic: He posits a decade that opened with a sense of possibility, only to end in division and evisceration in which "working people would possess less place and meaningful identity within civic life than any time since the industrial revolution." To build his case, he constructs a framework of notable symmetry and sturdiness, in eight chapters divided into two parts. In the first four, he develops a line of thinking he first unveiled in an essay for Beth Bailey's anthology America in the Seventies, in which sometimes perplexing cross-currents led people like Dewey Burton, the much-interviewed Everyman of the time, to ricochet between George Wallace and George McGovern before finally settling on Ronald Reagan a decade later. Cowie asserts that the Democratic Party of 1968 was essentially a labor party, albeit a divided one. He offers analyses of events like the 1972 strike at the General Motors plant in Lordstown, which was as much a labor action about deadening work conditions as about pay, and depicts the literally deadly internecine warfare within the United Mine Workers of America as a struggle for the soul of the labor movement.
In political terms, many observers have noted the obsessively Machiavellian tenor of Richard Nixon's presidency in its attempt to co-opt the Wallace vote. Cowie traces, with solid research and rich detail, the administration's difficulties in dealing with labor leaders like George Meany, even as the Nixonites captured, with surgical skill, the language and symbolism of the working class without ever actually addressing its material concerns. In the second half of the book, Cowie argues that new institutions like the Business Roundtable did address such material concerns -- by attacking them directly. They were aided by the indifference and hostility of politicians like Jimmy Carter, who, while nominally sympathetic to labor as a Democrat, in effect functioned as the first post-New Deal president.
Cowie shows at least as much facility with cultural history as he does with labor and political history. He offers nuanced readings of figures like Merle Haggard, whose background and musical complexities were obscured by the success of truculent anthems such as "Okie from Muskogee," and suggests that there was less richness than is sometimes supposed in the work of widely hailed independent films like Taxi Driver. Perhaps not surprisingly, country rockers like Crosby, Stills, Nash and Young are exposed as elitists, even as other figures of their ilk, like Jackson Browne, get surprisingly positive appraisals. Bruce Springsteen, of course, looms large here, though Cowie compellingly suggests how cramped his portrait of working-class life has tended to be, more an ordeal to be endured than a vibrant culture in its own right. At the same time, Cowie traces the demographic transformations of the working class that would lead to new subcultures like those of disco, feminist manifestos like 9 to 5, and punk rock (there's a wonderfully nuanced analysis of the Akron-based band Devo).
There is, perhaps, a forgivably romantic air about Stayin' Alive. Although Cowie is scrupulously careful in noting the limits of the McGovern campaign, for example, the gestalt of the book seems to suggest that it had more possibilities than it probably did. Similarly, while Cowie rightly notes a sense of ferment in the racial and gender dimensions of the labor movement, and duly notes the growth of unionization in the public sector, he tends to stint the tectonic plates of the international economy. We do hear a lot about oil shocks; we hear less about the rise of Japan and the tremendous cost pressures it exerted on the auto industry. Such developments cannot single-handedly explain labor's demise, of course; nations like Germany responded to such challenges without dismantling the welfare state. But then Germany never really had a Jacksonian political culture in which a libertarian strain was bred even into the working class. It might have skipped a generation or two after FDR, but it was always there, ready to emerge when the environment was right.
Regardless of what one thinks about the character of the seventies working class, or whether its "end" was inevitable, Cowie foregrounds, with laudable care and clarity, a set of people who too quickly recede in critiques of the New Class, accounts of the rise of "Atari Democrats" like Gary Hart, or the emergence of feminism, developments which pointed toward the future more than the past. As Cowie well knows, class struggle did not end circa 1979. But a particular kind of class struggle did, and its contours are worth noting and remembering so that its successes and failures may yet furnish object lessons in the battles still to come.
Stuart Buck, a Harvard-trained lawyer and now a doctoral candidate in education at the University of Arkansas, wants readers to know that “I believe strongly in integration as a moral ideal. The message that I intend to convey is NOT that desegregation was a bad idea, NOT that the people who pursued desegregation were foolish or misguided, NOT that desegregation is something that we should consider reversing.” That kind of disclaimer, which appears repeatedly throughout his book, gives some indication of how sensitive he believes his topic to be.
I’d like to think Buck worries too much. But, in any case, Buck’s book should go a long way toward settling the increasingly one-sided debate over whether or not accusations of “acting white,” directed by African-American students at other African-American students, really happen, and whether they matter. They do, and they do. Research by scholars including John Ogbu, John McWhorter, Roland Fryer, and many others, as well as individual testimony from people as highly placed as Michelle Obama, have established the reality of the problem. And pointing that out does not make somebody a racist or mean that he is “blaming the victim.”
Buck reviews two decades’ worth of literature on “acting white,” pro and con, with a lawyer’s eye for evidence and argument. But his greatest contribution may be his historical overview, which traces black Americans’ commitment to and involvement in education from before the Civil War to the mid-1960s. Nowhere does he find examples of blacks telling other blacks that their determination and efforts to do well in school meant that they were selling out—“acting white.” Segregated schools may have lacked resources, but their teachers were among the most respected members of the black community, and they generally pushed their students hard to succeed. And their students responded. The “acting white” accusation appeared only with school desegregation and seems to have been most common in the most racially balanced schools, where separation by race within schools (often enforced by tracking) was often more obvious and aggravating than the previous, now unconstitutional, segregation by schools.
Buck begins with the commonplace observation that “humans are tribal creatures,” often with “oppositional” tendencies that define group membership partly by declaring what the group’s members must not do. And, as he says, “It was an ironic byproduct of desegregation that this universal human expectation—‘be loyal to our group, or else’—showed up in schools.” Ironic, but not surprising. Especially given that desegregation, less than sweeping though it was, came along in the mid-1960s, at the same time that “black power” ideologies emerged.
Buck worries that critics may charge that his evidence is “a deluge of anecdotes.” And particularly when dealing with the contemporary scene, that’s how it sometimes reads. But although the plural of anecdote is not data, the deluge washes away any lingering doubts, and the point is made: the issue of “acting white” is real and undoubtedly inhibits efforts to close the stubborn black-white achievement gap.
More debatable is his explanation of “Why Other Theories of ‘Acting White’ Are Not Plausible,” which unfortunately yields to the social scientist’s penchant for trying to isolate a single causal factor. This is particularly true when Buck argues that black popular culture, which “idolizes hip-hop musicians and athletes,” couldn’t have caused the “acting white” phenomenon because such dubious celebrity-making came along later. That’s true enough, but the mid-60s are a long time gone, and there has been plenty of time for other factors, including a popular culture that can be described as at least “anti-intellectual” (and even as downright toxic), to come into play. The same is true of other factors he considers and dismisses, such as hopelessness engendered by continuing employment discrimination or the high concentration of poverty in the black community. Such factors may change in importance over time, and it calls for careful judgment indeed to determine which one(s) mattered most then, and which matter most now.
Now that Buck has cleared the decks, perhaps somebody can work through this complicated web of causation and find the answer that has eluded us all. It won’t be easy—if it were, we’d have made a lot more progress on the achievement gap. But for now Buck deserves great credit for moving the conversation forward in this clearly written, accessible book, and for demonstrating how much culture matters when we try to improve our schools. We live in an anti-intellectual society, where students of all races who are serious about their studies are commonly disparaged as “nerds.” When race is added to the mix, the tangle of cultural pathology (to adapt Daniel Patrick Moynihan’s phrase) becomes even more intractable. Untangling it, as Stuart Buck shows, matters at least as much as setting up curriculum standards and exit exams.
[Mr. Briley is Assistant Headmaster, Sandia Preparatory School. He is the author of The Politics of Baseball: Essays on the Pastime and Power at Home and Abroad.]
Elizabeth Abbott, research associate at Trinity College, University of Toronto, is the author of broad historical studies targeting more general readers. For example, she has written A History of Mistresses and balanced that study of sexual activity with A History of Celibacy. In her current work on the sugar industry, Abbott focuses much of her attention upon the West Indies, an ancestral home for the author. Sugar: A Bittersweet History is a readable account which should remind consumers that the sugar which sweetens our cakes, pies, soda, ice cream, and candy not only contributes to obesity and diabetes but also has a long and troubling history based upon the exploitation of human labor.
Europeans initially relied upon honey to sweeten their food, but the Crusades introduced them to the delights of Mediterranean sugar which became a luxury item for the upper class. The expansion of sugar cultivation and refineries reduced prices and placed the product within the reach of the European working class who used sugar to sweeten their cocoa, coffee, and tea—as well as rum. This growth of sugar consumption was based upon colonialism and the search for a labor supply to harvest sugar cane in the West Indies.
Colonialism proved disastrous to the indigenous populations, and European planters initially relied upon the labor of white indentured servants. This work force proved unstable, and a permanent labor solution was apparently found in the African slave trade. The bulk of Abbott’s book concentrates upon the history of African slavery in the West Indies, as well as efforts by enslaved peoples and their European abolitionist allies to end the slave trade and slavery. Although Abbott discusses the impact of the Haitian Revolution upon slavery and sugar cultivation, her major concern remains with the British West Indies. Condemning the harsh nature of racialized sugar slavery, Abbott writes, “Whites relied on blacks to produce their sugar, counted them as their biggest capital investment, enslaved and mistreated them, vilified their race, sexually assaulted and fell in love with them, and lived dependent on and surrounded by them” (122). The labor and sexual exploitation were made worse, Abbott argues, by the absentee owners who could not abide the climate of the West Indies and left even more unscrupulous overseers in charge of their estates. In London, the West Indies planters lived a life of luxury, and the sugar lobby assured that Parliament would protect the industry from foreign competition.
Yet, the sugar lobby was challenged by the rise of abolitionists, who drew support from East Indian planters whose labor practices were exploitive but not dependent upon the institution of slavery. While Abbott tells her readers little about the East Indian sugar industry, she does emphasize an expanding role for middle-class women in the sugar boycott movement dedicated to ending slavery in the British Empire. Abbott argues, “By boycotting slave-grown sugar, that homemaker could make a moral statement and wield her economic purchasing power as a weapon to bring down the enemy. As her family’s chief food buyer, she and millions of other women could lead the war against sugar slavery” (251).
The British abolition of slavery in 1838, of course, did not end the exploitation of sugar workers. Slavery remained legal in Spanish Cuba and the American South, where Louisiana became a major producer. Meanwhile, desperate planters in the West Indies used “indentureships” to import laborers from India and China. Although technically free labor, these Indian and Chinese workers often suffered the degradation once reserved for African sugar slaves. “Indentureship,” however, failed to restore West Indian sugar to its once dominant position in the sugar trade.
The last two chapters of Abbott’s volume focus upon the twentieth century. She chronicles how a growing consumer dependence upon sugar in the diets of North Americans and Europeans has contributed to health problems as well as a sugar diaspora which has expanded the world production of the commodity. Nevertheless, in a brief examination of sugar cultivation in such diverse areas as Australia, Brazil, and Fiji, Abbott notes that earlier trends of racial division and exploitive labor remain dominant. Nor has the growth of the sugar beet industry altered these conditions, as the harvesting of this crop also calls for cheap labor. In addition to the issues of labor and health, Abbott documents the ecological damage of sugar cultivation in regions such as the Florida Everglades. Abbott also examines the feminization of sugar and chocolate, which she insists has “objectified women and, like cheap abundant sugar, undervalued them” (373).
Abbott, however, does hold out some hope that the potential of sugar cane-based ethanol will allow the sugar industry to become more versatile and reduce dependence upon fossil fuels. She concludes, “Although it will continue to delight and comfort, and be the handmaiden of celebration, sugar will no longer need to rely on promoting grotesquely unhealthy consumption to stay in business” (408). Unfortunately, Abbott’s history of sugar cultivation seems to offer little basis for such an optimistic conclusion.
Abbott’s research is based primarily upon an extensive reading of secondary sources as well as printed primary accounts. Her reading is supplemented by her personal travels and experience. There are gaps in her coverage. For example, she mentions the importance of British East Indian sugar production but goes into little detail on this topic. On the other hand, her survey of twentieth-century production is such a whirlwind journey that it is sometimes difficult for the reader to remember which country one is examining. The real strength of this volume is Abbott’s detailed investigation of African slavery and the sugar industry in the British West Indies. It is a story well known to most professional historians; however, the connection between slavery and sugar may be less obvious to more general readers. If Abbott’s book is able to awaken consumers to the relationship between consumption and global labor exploitation and ecological damage, the author will have made a major contribution. Continued indifference to the sufferings of the planet and the world’s laboring poor will one day reap the whirlwind.
SOURCE: Gay City News
[Doug Ireland, Contributing Editor of Gay City News, can be reached through his blog, DIRELAND, at http://direland.typepad.com/.]
The prolific W. Somerset Maugham (1874-1965), novelist, playwright, and short story writer whose work was frequently compared to that of Guy de Maupassant, was the highest paid author in the world by the 1930s.
More works of his have been adapted for the cinema than those of any other writer in the English language (some 98 films, his nearest rival being Sir Arthur Conan Doyle, with 94 big-screen adaptations of his Sherlock Holmes stories).
Maugham was a man of many contradictions. He achieved fame and fortune as a playwright literally overnight in 1907 with “Lady Frederick,” was lionized by London society, and by 1908 he had four plays running simultaneously in London’s West End; yet he abandoned playwriting in 1933, said he hated the theater, and averred that he considered actors “less than human.” He professed to be a socialist, yet he assiduously collected friendships with aristocrats and royalty. He was obsessed with making money and the luxurious life, yet he was capable of great generosity and gave away large sums. He could be unbelievably kind and unforgivably cruel.
Above all, Maugham was a lifelong practitioner of homosexuality who pretended not to be. An inveterate letter-writer, in his declining years he burned all his extensive personal correspondence that might have revealed his same-sex proclivities, and wrote to his friends begging them to burn all his letters. This request had just the opposite of the desired effect, and a great many of Maugham’s letters repose today in British, American, and French universities and libraries.
In “The Secret Lives of Somerset Maugham,” just published by Random House, the British literary critic Selina Hastings, author of well-regarded biographies of Evelyn Waugh and Nancy Mitford and a fellow of the Royal Society of Literature, has given us a meticulously researched, admirably written, and endlessly revealing portrait of Maugham’s life, loves, and work.
It is one of Hastings’s great merits that she details Maugham’s extensive sex life, from his long-running affairs with men to his addiction to young gigolos and rent-boys, without either sensationalism or prudery. There is not the slightest subliminal tongue-clucking here, for Hastings is resolutely unshockable.
“Throughout his life an appearance of conventionality was of profound importance for Maugham,” she writes, and he “rarely revealed himself except to his closest intimates.”
Maugham was 21 when Oscar Wilde was imprisoned for homosexuality under a law that remained in force until two years after Maugham’s death, and “the exposure of Wilde’s homosexuality and its terrible consequences, the loss of family, of home, of reputation, had made a deep impression on Maugham, who could hardly avoid seeing a number of potential parallels in his own situation.”
Maugham was also traumatized by the suicide in 1904 of his alcoholic older brother Harry, also a promiscuous homosexual, and persuaded himself that it was “because of the life he led.”
As he told a friend in later life, “I tried to persuade myself that I was three-quarters normal and only a quarter of me was queer — whereas really it was the other way round.”
Maugham — “Willie” to his parents and friends — was born in Paris, where his father handled the legal affairs of the British Embassy, and learned French before he did English. The youngest of four children (his older brothers were already in boarding school), Willie was left an orphan at the age of 9 when first his mother (whom he adored) and then his father died in rapid succession. Packed off to live with a cold and cruel uncle who was the vicar of Whitstable in Kent, young Willie had a miserable childhood, especially when he was sent to a boarding school in Canterbury, another purgatory where he was continuously mocked for the inadequacies of his command of the English language and developed the intermittent stammer that stayed with him all his life.
Maugham subsequently was allowed to spend a year studying literature and German at the University of Heidelberg, where at 16 he had a passionate sexual affair with John Ellingham Brooks, a dashing English graduate student ten years his senior and with whom he remained on life-long friendly terms.
Although Willie had his heart set on being a writer, he was sent to medical school by his guardian the vicar (Maugham’s stammer had precluded his becoming a clergyman), and while there he used his experiences doing midwifery in the London slums to write his first novel, “Liza of Lambeth,” which was well-received by the critics and quickly sold out. Maugham left the medical profession and embarked full-time on his 65-year career as a man of letters.
His next nine novels never matched the success of his first, and so the penurious Maugham, after analyzing what he thought would please the play-going public, turned to the theater to make money. “Lady Frederick” was only the first of a series of deft but ephemeral society comedies that gave Maugham the wealth and recognition he so ardently desired.
Maugham’s large income allowed him to play the elegant Edwardian dandy, and his youthful good looks were extremely attractive to both women and men; his dalliances included a brief love affair with Princess Alexandra Kropotkin, the daughter of Prince Peter Kropotkin, the Russian anarchist intellectual then living in London.
Maugham led a dizzying social life in London and then — after the celebrated American theatrical producer Charles Frohman took him up and brought a string of his plays successfully to Broadway — in New York as well.
Willie’s abominable childhood left the adult Maugham rather shy and withdrawn, which, together with his painstaking concealment of his same-sex private life, gave the exquisitely dressed playwright something of an air of mystery. He became an observer rather than a participant, and often relied on the second-hand stories related by others about real people for the plots and sub-plots of his plays and fictions.
Weary of his contrivances for the theater, Maugham embarked on the semi-autobiographical “Of Human Bondage,” the 1915 work on which much of his literary reputation was built. It was made into the 1934 Hollywood film that made Bette Davis a star in the role of the working-class Mildred, with whom “Philip Carey” (Maugham) has a masochistic relationship. Hastings writes, “Maugham’s one-time lover, Harry Philips, who might be expected to know, definitely asserted that ‘she’ was a boy,” and this was hardly the only time that Maugham changed the gender of the real-life people on which he based his characters.
The vivid sequence in the book in which Carey’s life as a shop assistant is convincingly described was based entirely on a 6,000-word account Maugham had commissioned for 30 guineas from a young actor, Gilbert Clark, who had been employed in a Piccadilly department store: “‘Willie used my stuff practically word for word,’ Clark later recalled with satisfaction.”
At the outbreak of World War I, Maugham joined the British Red Cross’ ambulance drivers and saw risky duty at the shell-wracked front. One day when his unit joined up with a group of American Red Cross volunteers, he struck up a conversation with “a slender, handsome youth” of 22, Gerald Haxton, the son of a leading writer and editor at the San Francisco Examiner, who was a “charming and gregarious fellow out to enjoy himself” and spoke French perfectly without an accent. “Maugham said he wanted to write and to travel. What did Gerald want? ‘From you or from life?’ the young man asked provocatively. ‘Perhaps both,’ Maugham replied, ‘They might turn out to be the same thing.’” The two of them retired to Gerald’s billet, where he had a bottle of gin, and thus began his three decades as Maugham’s lover and companion until the younger man’s death.
A year after meeting Maugham, Gerald, then in England, was arrested with another man in a Covent Garden hotel and charged with six counts of gross indecency. “Both were acquitted, but the judge, convinced that Gerald was a bad lot, had him registered as an undesirable alien… and banned from ever setting foot in Britain again.”
Maugham, who had become trapped into a loveless marriage by the pregnant, ambitious American Syrie Wellcome, spent more time with Gerald than he ever did with his wife. Gerald served both as Maugham’s muse and as a collector of unusual acquaintances and stories for him. Maugham’s stories were thinly disguised episodes involving his host or others he and Gerald had met on their voyages — circumstances that occasionally resulted in threats and lawsuits. The ever-libertine Gerald efficiently organized Maugham’s constant travels, across Europe, to the South Seas (where together they collected material for “The Moon and Sixpence,” Maugham’s novel about the painter Paul Gauguin), to the Far East (which produced “The Painted Veil” and “The Letter,” later another Bette Davis vehicle), and to India (“The Razor’s Edge”), voyages that also gave birth to a series of well-regarded travel books and an endless flow of short stories (including his best-known, “Rain,” thrice made into a movie). Gerald was also the organizer of what Maugham termed their “larks,” meaning the procuring of attractive young male bed partners, usually for pay.
By 1927, Maugham had divorced his wife Syrie, and purchased the sumptuous Villa Mauresque at Cap Ferrat on the French Riviera, where Gerald served as the famous writer’s secretary, majordomo (running the house and its large staff of servants), and master of revels, especially when the American fleet dropped anchor at the nearby port of Villefranche and the waterfront bars were awash in attractive sailors.
The Mauresque became one of the great literary and social salons of the 1920s and ’30s, with an endless stream of lavishly entertained guests, from established writers like H.G. Wells to younger (and attractive) queer ones like Glenway Wescott, and an impressive array of the rich and famous, including Winston Churchill and the duke and duchess of Windsor.
While Maugham kept to a rigorous morning routine of writing, Gerald would go sailing, often accompanied in the late ’30s by his cabin boy, Louis Legrand, “known as ‘Loulou,’ a ravishing sixteen-year-old male whore, slender, blond, tanned, with a soft mouth and a sweet smile; he wore gold bangles on both wrists and spent most of the day dressed only in a minute pair of faded swimming trunks. Gerald was infatuated with him, and when not on the boat Loulou passed much of his time at the Mauresque, at the disposal not only of Gerald and Maugham but of any male guest who desired his services, Gerald afterward discreetly settling the bill.”
A frequent visitor to the Mauresque was Maugham’s nephew, Robin, the son of his austere older brother F.H., who in the late 1930s had been named lord chancellor and later was made a hereditary peer. When Robin turned 17, Maugham began to take a particular interest in him; he was, in fact, infatuated with Robin, who was also homosexual and had literary ambitions (later becoming a novelist, playwright, and travel writer himself). Gerald, after an initial pass he’d made at Robin was refused, treated the lad as a younger brother, supervising his sexual education and introducing Robin to the delights of hustler bars and rent-boys, to whom the nephew in turn became addicted. (On his death, Gerald left all his money to Robin and his apartment in Paris to Loulou.)
In 1928, on a trip back to London, Maugham met a young man, Alan Searle, a willowy working-class youth who appeared much younger than his 23 years and “who was well known in certain circles as ‘a modified version of rough trade,’ a common, sexy boy who was also quick-witted, good-natured, and eager to better himself. ‘I was quite a dish,’ as he himself said.” He had circulated as a kept boy among a number of Maugham’s homosexual friends, among them Lytton Strachey, who called Alan “my Bronzino boy.”
Maugham’s first attempt to bed Searle was frustrated because the lad, although willing, had already made a previous date with Ivor Novello, the darling of the West End musicals. But their affair was consummated the following night, and it began an association that would last 40 years.
“There was no question of Alan’s taking Gerald’s place,” Hastings writes, “but Maugham frequently had reason to make short trips within Europe, sometimes to see one of his plays, often just to wander around art galleries, and neither occupation was of particular interest to Gerald. Alan, on the other hand, was an enthusiastic traveling companion and had a passion for paintings.”
In the ensuing years, Maugham wrote every few days to Alan, saw him whenever he was in London, and invited him to the Mauresque as well as on his European jaunts. Gerald had always been a heavy drinker, and as his alcoholism got worse so did his temper, and his rows with Maugham grew ever more frequent. Increasingly, the now aging writer yearned for the undemanding presence of the pliable cockney lad.
Maugham’s two love objects were vastly different: “Gerald was vintage, but Alan was vin ordinaire,” as one friend put it.
Maugham occasionally put his foot down, and Gerald would become sober and his old enjoyable self for extended periods before backsliding into ferocious bouts of drinking and reckless gambling (with Maugham always settling the large debts his muse ran up). When the German invasion of France in 1940 forced Maugham and Gerald to abandon the Mauresque, separating them, Maugham was eventually sent off to America to perform propaganda chores for Britain, which was anxious to see America come into the war.
After a harrowing escape from France and a lengthy enforced stay in neutral Portugal, Gerald was finally able to join Maugham in the States in 1942, but resumed his heavy drinking and began to have horrific bouts of delirium tremens. Maugham, now 70, was emotionally exhausted, and seriously considered ending his relationship with Gerald. He made plans for a post-war life with Alan, then in uniform in England, but his alcoholic lover managed to get a minor job in Washington’s intelligence services, sobered up, and seemed genuinely happy to have a real job for the first time in his life. Sadly, Gerald’s health, weakened by the dissipated life he’d led, gave way; he came down with a severe case of pleurisy, and after a lengthy illness—during which Maugham lovingly tended him in various sanitariums—Gerald died in November 1944.
Maugham was distraught beyond words; old friends found him terribly aged, breaking into tears at the mere mention of Gerald’s name for months thereafter.
To his publisher’s wife, Ellen Doubleday, Maugham wrote of Gerald, “I can only think of those years when his vitality and his gift for making friends were of so much service to me. Without him I should never have written those stories that did so much for my reputation in the world of letters & it was he who helped me to get out of the commonplace life of the ordinary humdrum writer & put me in the way of gaining that wider experience of life which has made me what I am today… I cannot but weep because his long end has been so miserable and so worthless. I don’t know how much I am to blame. If I had been firmer, if I had not tried to force a kind of life on him for which he was temperamentally unsuited, it may have been that he would have made less of a hash of things than he did… just at the moment I am broken.”
A year later, after the war’s end when Alan was finally demobilized, Maugham used his influence in Whitehall and Washington to secure a priority travel permit to America for his second love. The two men had not seen each other for five years, “during which time Maugham had thought of him constantly, and had yearned for the presence of his sweet, sexy Bronzino boy.” But the figure who stepped off the plane was no longer that of the slender youth Maugham remembered but of “a stout, round-faced, middle-aged man… ‘You may have looked like a Bronzino once, but now you look like a depraved Frans Hals,’ Maugham commented sourly. Nonetheless, Maugham was happy and relieved to be reunited with Searle, who was to be his devoted companion for the rest of his life,” and “their sexual compatibility enjoyed a surprising longevity.”
The next year, 1946, Maugham and Alan returned to the French Riviera and reopened and restored the Mauresque to its former glory, Alan taking on the tasks of secretary, majordomo, and procurer formerly occupied by Gerald.
But Maugham never wrote anything worthwhile after Gerald’s death, although the advent of television increased his fame and his income with a series of made-for-TV films that aired in both Britain and America in which he himself presented dramatized versions of some of his earlier stories.
Alan Searle stuck with him to the very end, serving as nanny and body servant as Maugham became increasingly paranoid and demented, finally giving up the ghost in 1965 just a few weeks short of his 92nd birthday. Maugham’s will made Alan a very rich man.
Within a few months of his death, Maugham was outed in a series of handsomely paid articles in a Fleet Street tabloid by his impecunious nephew Robin, who followed up with a series of tell-all memoirs of his queer uncle.
What remains today of Maugham’s literary reputation? The old man could be quite lucid about himself at times. Late in life he said he belonged “in the very first row of the second-raters.” Hastings is astute in this book about much of Maugham’s writing, but overly generous about some of it. I personally have never had much of a taste for Maugham, and agree with the great American critic Edmund Wilson that his plain prose is “such a tissue of clichés that one’s wonder is finally aroused at the writer’s ability to assemble so many and at his unfailing inability to put anything in an individual way.”
Still, Maugham’s work continues to be turned into films, like the 2004 “Being Julia” (based on his novel “Theater”) starring Annette Bening and Jeremy Irons, and the 2006 “The Painted Veil,” with Edward Norton and Naomi Watts.
If Maugham had been able to live his loves openly and write about them without disguising them, rather than having to devote so much energy to dissimulation, if he could have been a more honest writer, might he have become a great one? That’s a question that cannot be answered.
In any event, “The Secret Lives of Somerset Maugham” will hereafter be considered the definitive biography, and it is a wonderful read.
In May 2010, a young man in China jumped to his death from a factory building in the sprawling Foxconn compound in the southern Chinese city of Shenzhen. Foxconn, a Taiwanese-owned contract manufacturer, makes iPhones, iPads, Dell computers, and other technology products at its Chinese plants. According to Time magazine, Foxconn is “a place where distraught workers regularly throw themselves to their deaths.”
One thinks of such events when reading the following from Karl Marx, writing in the first volume of Capital: “Accumulation of wealth at one pole is, therefore, at the same time accumulation of misery, agony of toil, slavery, ignorance, brutality, mental degradation, at the opposite pole.”
As the triumphalist howling in the wake of the fall of the Berlin Wall has receded, the world we have entered seems much more like the world Marx tried to apprehend nearly one hundred forty years ago. It is that understanding that anthropologist and geographer David Harvey illuminates in his just-published A Companion to Marx’s Capital. Harvey, who has taught a course on Capital for forty years, has written a reader’s guide to Capital with the stated aim of getting us to read this work. In doing so, he makes accessible the at-times challenging but fundamental insights of Marx, and shines new light on his path-breaking methodology.
One of the striking things in reading both Marx and Harvey is the realization that we live in a time when there is an awful lot of misunderstanding—and worse—of basic things. Harvey writes: “Conventional economics has in practice a hard time measuring (valuing) the factor of production that is capital. So they just label it K and put it into their equations. But actually, if you ask ‘what is K and how do you measure it,’ the whole of contemporary economic theory is [shown to be] dangerously close to being founded on a tautology: the monetary value of K in physical asset-form is determined by what it is supposed to explain, viz. the value of the commodities produced.” What he’s getting at is that in order to create capital (or profit, if you will) you actually have to generate a surplus from something. And that something is human labor power. In fact, it is the only commodity that can create surplus value.
The failure to grasp this goes a ways toward explaining why there can be feverish excitement about “The New Economy,” only to see it disappear with the bursting of the tech bubble. It shines light on why companies like Enron profited—for a while—on pure ether, but in the end collapsed. And of course it speaks to the meltdown of 2008 and the collapse of companies like Lehman Brothers, AIG, and GM. There is, in short, in the dominant culture a sense that money and commodities are somehow magical. This is part of what Marx was getting at when he talked about commodity fetishism, the mistaking of social relations for relations between things.
Those social relations, as Harvey points out, are particularly ugly these days. He tells us the way surplus value is being squeezed out of workers in places like China is akin to capitalism resurrecting its period of “primitive accumulation”—when it grabbed hold of whatever wealth, by whatever means, to be able to jump start capitalism (think of slavery, the acquiring of gold, the outright theft of land). Harvey calls its modern day equivalent “accumulation by dispossession,” and it is a concept worth pondering.
Also worth pondering is Harvey’s bitterly ironic point that “only cranks, misfits and weird utopians think that endless growth, no matter what the environmental, economic, social and political consequences, might be bad. To be sure, problems deriving from growth need to be addressed, but rarely is it said that the answer to the problem is to stop growth altogether.” In the wake of the unending oil spill in the Gulf of Mexico, the acceleration of all the dire signs of global warming, the acres of smog hanging over China on any given day, etc., etc., it seems we ought to be talking quite a bit more about stopping such growth.
In that regard, what Harvey says in explaining why he wrote this book—and why people should read it—resonates. He wants to “open up a space of dialogue and discussion in such a way as to bring the Marxian vision of the world back onto center stage, both intellectually and politically. Marx’s works have far too much to tell us regarding the perils of our times to consign them to the dustbin of history.”
SOURCE: Democracy: A Journal of Ideas
[Jim Sleeper is a lecturer in political science at Yale University and the author of Closest of Enemies: Liberalism and the Politics of Race in New York. He wrote this article for Democracy: A Journal of Ideas.]
Affluent Western democracies have “difficulty maintaining popular support for costly counterinsurgency wars,” laments Victor Davis Hanson, the accomplished historian of ancient Greek wars and fanatical insurgent in his own right, against what he considers the beleaguered American imperium’s fickle liberal elites. He means to restore the legitimacy of the unilateralist U.S. hegemony envisioned in George W. Bush’s National Security Strategy of 2002; writing in The American Enterprise Magazine in 2006, he attributed a dearth of popular support for that project to “ignorance of military history.” Now, in Makers of Ancient Strategy, he assembles ten military historians of classical Greece and Rome (including himself) to rectify that ignorance by showing how Athenians, Romans, and, even before them, Persians extended their sway and coped with challenges to it in ways that American grand strategists can learn from.
At the same time, though, Hanson is a geyser of vituperations in National Review, the conservative Pajamas Media website, and beyond, against challenges to America’s missions abroad from our liberal governing and cultural cliques, “the mindset of the faculty lounge,” and, naturally, the media. As Iraqi casualties rose in 2006, he accused journalists of sensationalizing setbacks in Iraq thusly: “I deeply love [California], but . . . imagine what the reaction would be if the world awoke each morning to be told that once again there were six more murders, 27 rapes . . . and 360 instances of assault in California. . . . I wonder if the headlines would scream about ‘Nearly 200 poor Californians butchered again this month!’ ” Here he’s blaming the messenger instead of reckoning with different kinds of carnage and their causes, but Hanson writes like this day and night–indeed, several times a day.
Even in Makers, a scholarly anthology, he claims grandly that today’s “problems of unification, civil war, expansion abroad, colonization, nation-building, and counterinsurgency all have clear and well-documented precedents in both Greek and Roman culture.” But many of the book’s precedents point in directions Hanson doesn’t want to go, and he ends his introduction by advising cagily that “[r]ather than offering political assessments of modern military leaders’ policies, we instead hope that knowledge of the ancient world will remind us of all of the parameters of available choices–and their consequences.” This, after years spent invoking ancient precedents for decisive American action:
People wonder how Rome could conquer all of northwest Europe with . . . four or five legions. The answer is the Romans had a very similar policy to our own: They looked at the most retrograde, bloodthirsty, nationalist leaders–the bin Ladens of the ancient world–and took them out, but with precision and with a lesson.
Ancient histories, epics, tragedies, and disputations do make clear that at some point in public deliberations, there’s no substitute for decisive action driven mainly by what the nineteenth-century military strategist Karl von Clausewitz, a student of the classics himself, called “the silken thread of imagination.” Before all facts can be known, leaders must act decisively on intuitions about the interplay of their own and others’ traditions, moral structures, and economic practices. The study of classical history and literature revivifies the inevitability of that silken thread, even if also its elusiveness.
But some conservatives seem to go further. Feeling trapped in neoliberal postmodernity, they think that emulating the ancients opens opportunities to shed the mincing Christian moralism, political correctness, and secular revolutionary fantasies of our time. In their view, ancient Greeks and Romans, unburdened by otherworldly preoccupations or the secular nostrums of today’s reigning but empty neoliberal relativism, were more realistic, brave, and exultant in breasting the terrors and felicities of the human condition than are technocrats and bottom-liners or the apostles of progressive groupthink who react against them. The ancients expected not to escape the human condition through science, personal salvation, or historically redemptive Hebraic or Protestant missions, but to bear it nobly through character nourished in a civic culture far stronger than a slippery web of contracts and rights.
“The Greeks accepted the idea that we all get old, there’s certain things that we can’t change, human nature is constant throughout the ages and therefore certain things will always be with us–war, pestilence, the fact that individuals are capable of pretty awful things without civilization and culture,” Hanson told The Boston Globe in 2003, when he was becoming infamous for turning folksy insights into bludgeons against critics of the Iraq War. A fifth-generation California raisin farmer, self-styled Jeffersonian republican, and best-selling historian of ancient wars, Hanson pleased Bush and Dick Cheney with his Carnage and Culture, which cited nine historic battles to attribute the supposed superiority of Western war-making to its rooting in Greek and Roman values. Hanson reserves his deepest scorn for leftist academics, who he claims prefer a politics of moral (or amoral) posturing to taking real responsibility, and for progressive activists who think they can improve the world rather than affirm some dignity amid deprivation, moral depravity, and capricious fate.
The irony–dare one call it the Greek tragedy?–in Hanson’s mission begins with the unlikely truth that anyone who cares about the republic will find much to like in him. He isn’t a red-tooth-and-claw decisionist like Friedrich Nietzsche or Carl Schmitt; he’s an angry civic republican who doesn’t know where to turn. Sitting in his nineteenth-century San Joaquin Valley family farmhouse for the 2003 Globe interview, Hanson impressed interviewer Laura Secor with his rustic, self-deprecating charm: “Every dime I ever lost was in farming in the wealthiest agricultural area in the world,” he told her. “And every money I ever made was in classics, in the most culturally desolate area in the world.” Secor recounts how he “got a job in town,” establishing a classics program at California State University at Fresno in 1984 and teaching there for 20 years, assiduously cultivating minority and working-class students. Hanson, she notes, likes “ ‘keen-eyed,’ egalitarian, hard-working, and largely self-governing” small land-owners, whether they’re American or ancient Greek. “So if we now object to the view of Plato and Aristotle,” he wrote in his book The Other Greeks, “it may be because we have lost empathy with the horny-handed farmer himself and his cargo of self-reliance, hard work, and a peculiar distrust of rich and poor alike.”
Hanson seems to be channeling Christopher Lasch here, and while he voted for Bush in 2000 and 2004, he remains a registered Democrat who claims that he disdains “golf club” conservatives and even think-tank and academic ones. Of the conservative political theorist Leo Strauss’s followers, he says, “I don’t think they understand the brutality of life that I grew up with. I don’t think any of them’s gone out and pruned vines for 30 days on end . . . and nobody’s been in a fight, or nobody’s had to run a business.”
This staging of Hanson’s rusticity has been going on for some time, but his mother was a judge and his father a college administrator. And his affinity for civic republicans, ancient and current, is hard to square with his role as a favorite of those horny-handed sons of the soil George W. Bush and Dick Cheney, or with his $250,000 award from the conservative Bradley Foundation, or with his fellowship at the conservative Hoover Institution (so much for disdaining think-tank conservatives).
Hanson was in the White House in January 2005, working with the Cold War historian and would-be grand strategist John Lewis Gaddis to help craft Bush’s second inaugural address (both men received National Humanities Medals from Bush). The ideologue in him keeps tugging at his scholarly sleeve: In an interview in 2008, he insisted that our Iraq blunders are minor compared to those of other wars, and in Makers, he writes that “the more things change, the more they remain the same.” But the truth in that chestnut owes more to the ancient world’s own historians, such as Thucydides, than it does to those like Hanson who now imagine that studying that world will make us more like Pericles or Epaminondas. Thucydides, who fought the Peloponnesian War of the fifth century B.C. on the side of Athens, wrote the great history of that long conflict not as a cheerleader or lesson-giver but as a bearer of his society’s collective experience. A keen observer of conflict, he recounted how demagogues such as Cleon, an Athenian leader of that time, subtly altered cherished words’ meanings in public discourse to try to dispel Athenians’ ambivalence about their imperialism. Reading Hanson on the Greeks feels more like reading Cleon on Thucydides–in other words, like rewriting history, not because any new facts have come to light but because Hanson, unlike Thucydides (and unlike some of the contributors to his anthology), is willing to compromise the writing of history in order to make it.
Our distance from the buzz and hum of ordinary ancient people who aren’t featured in the great classical narratives makes it easy to cast such stories as precedents for today’s ideological projections. Hanson sighs copiously about the limits to our knowledge and the inevitability of loose interpretations, but that doesn’t keep him from making ancient history a Trojan Horse for his certitudes, stretching what we little know of the past to serve what he wants us to think about the present.
Citing the book’s first chapter by Thomas Holland, the British historian of ancient Persia, Hanson tells us: “Imperial powers . . . create an entire mythology about the morality, necessity, or inevitability of conquest. Their narratives are every bit as important to military planning as men and matériel in the field.” Fair enough, but one can’t help ruing Hanson’s own efforts to help Bush craft a grand narrative. Holland seems to tweak Hanson about this when he contrasts the sixth century B.C. Persian emperor Cyrus’s supple, tolerant handling of conquered populations–by encouraging Jews who’d been exiled to Babylon to return to Jerusalem, Cyrus got himself written into the Old Testament as a great servant of Yahweh–with his successor Darius’s conviction “that there was no stronghold of [falsehood] so remote that it might not ultimately be purged and redeemed. . . . After all, if it was the destiny of the King of Kings [here, Darius] to bring peace to a bleeding world, then what were those who defied him to be ranked as if not the agents of anarchy and darkness, of an axis of evil?”
Hanson wouldn’t get such tweaking from the Yale classicist and fellow Iraq War zealot Donald Kagan, another contributor to the volume. “Today, we assume that empire is an entirely negative notion,” Hanson advises us. “But as Donald Kagan shows . . . rare individuals [in this case, the fifth century Athenian statesman Pericles] occasionally do make a difference. Empire . . . was not doomed to failure, if moderate and sober leaders like Pericles understood its function and utility.” Kagan claims that “the Greeks were free from the modern prejudice against power and the security and glory it could bring” not because they were “a free, autonomous polis,” but because they had a strong leader, Pericles, to rouse them to their imperial obligations. Kagan’s non-academic, political pronouncements have made clear his wish that someone similar would convince Americans that their hegemony is good for everyone and for their own historical glory. His account of Athens reads like an advisory on American hegemony from the Cold War through Vietnam and up to the present. One need only substitute contemporary cases for his ancient ones to sense this chapter’s didactic intent.
Hanson’s own chapter examines a preventive war waged by the Boeotian leader Epaminondas against Sparta in the fourth century B.C. in a way that supposedly clarifies the plausibility if not the wisdom of our venture in Iraq. Commenting on this chapter in the book’s introduction, he notes:
Preemption, coercive democratization, and unilateralism in the post-Iraq world are felt recently to be either singularly American notions or by their very nature pernicious concepts. . . . In fact, these ideas have been around since the beginning of Western civilization and have proven both effective and of dubious utility.
Hanson’s account of Epaminondas’s doughty assault on the mighty Sparta, which had occupied his own country but whose subordinate city-states he liberated from slavery, bears a dubious relation to America’s “preventive” war with a comparatively much weaker, distant Iraq. He nevertheless insists that just as the Spartan Peloponnese emerged from a long and expensive war “largely democratic . . . and the Greek city-states to the north . . . free from Spartan attack,” so the Iraq War, although it “had tragically cost more than 4,200 American dead, along with hundreds of allied casualties, nearly a trillion dollars, and thousands more wounded,” had by 2008 led to a “relatively quiet and democratic” Iraq.
This is such a stretch that even Hanson has to conclude that “history alone will judge, in the modern instance, as it has in the ancient, whether such an expensive preemptive gamble ever justified the cost.” But deferring to history’s judgment doesn’t square with lambasting the media for reporting the plight of Iraqis who’ve been not liberated but murdered, or with ignoring the 2.5 million Iraqis who’ve left their “relatively quiet and democratic” country, thanks to misjudgments that any serious study of ancient strategy might have foreseen.
In an interesting chapter, “Counterinsurgency and the Enemies of Rome,” Susan Mattern, an associate professor of history at the University of Georgia and author of Galen and the Rhetoric of Healing, describes how the empire kept order in many provinces only through “a variety of insidious ‘hearts and minds’ mechanisms,” including “social aid, citizenship grants, a uniform law code, and the indigenous integration and assimilation into Roman life that won over or co-opted local populations.” She portrays the fragility and fluidity of what many assume was a stable civil society but notes:
[W]hen I am asked to comment on the practical lessons of Roman history, my response . . . focuses on the critical role of social institutions. . . . The nearest modern parallel may be the ‘global village’ created by telecommunications technology, financial institutions, free trade, and the consumer tastes and interests that link international communities today. A focus on shared economic and cultural interests rather than on ideology is a promising direction for foreign policy in the future.
Maybe Mattern has our dim prospects in Afghanistan on her mind, but whatever the reason, she declines to do what I suspect Hanson hoped she would–and what Barry Strauss, a neoconservative professor of classics at Cornell, does in his chapter on slave rebellions. He likens these ancient revolts to Afghan tribal insurgencies, and he cites Rome’s overdetermined victories to assure us that “successful insurgencies are the exception” and that “states usually hold all the cards.” The analogies seem too flimsy to invite serious comment.
Adrian Goldsworthy, a biographer of Julius Caesar, shows that grand strategy involves not only what imperial leaders think and do but what “barbarians” do. He analogizes competition among tribes in Caesar’s time, and their bargaining with Caesar himself, to Afghan inter-tribal competition and bargaining with Americans. But Goldsworthy notes that while we are trying to create a democracy and build a nation, Caesar was not: “Personal interest more than anything else dictated whether leaders supported Rome or resisted Caesar.” Caesar’s personal diplomacy, not Roman messianism, made the difference, and Goldsworthy may well endorse Americans’ talking to Taliban leaders without pretending to uplift them.
In Rome’s declining years, notes Peter Heather, who has studied the frontiers of the declining Roman Empire, its grand strategists forgot they weren’t the only deciders. Barbarians were reacting “with intelligence and determination to the opportunities and dangers that imperial policies presented,” including the negative factor of aggressive exploitation. Heather has the last sentence of the book, and he uses it to posit a kind of Newton’s third law of empires: “The exercise of imperial political dominance and economic exploitation will in the long run stimulate a series of reactions that turns initially weaker neighbors into societies much more capable of resisting or even overturning the aggressive imperialism that set those reactions in train.”
This collection makes Hanson look good partly because it transcends him, and it would be pleasant to think that its best contributors have summoned the better angels of his nature. But he keeps on raging at liberals–“America is now a campus, and Obama is our dean,” reads the sardonic title of one of his many recent blog posts at Pajamas Media. In Makers, he warns that today’s radically evolving technology “fools many into thinking that war itself is reinvented with the novel tools of each age.” Why didn’t he tell that to Donald Rumsfeld, a hero of his, when it mattered?
“Since war is and will always be conducted by men and women, who reason–or react emotionally–in somewhat expected ways, there is a certain predictability to war,” Hanson writes in the introduction. But when the conservative online magazine FrontPage asked him in 2008 what lessons Iraq would teach future historians, he answered, “It’s a reminder that . . . no war turns out as one predicts.” Well, sure, and, a few decades ago, he mightn’t have predicted that women would conduct wars or that seismic technological changes would enable lone suicide bombers to destroy thousands of non-combatants in attacks with murky “return addresses.” He seems not to have noticed one of the most “unpredictable” consequences of our time’s immense shifts in communications and in public moral awareness: Huge, armed regimes–of the British in India, segregationists in the American South, Afrikaners in South Africa, and Communists in the Soviet Union and Eastern Europe–have been brought down by acts of moral witness backed by unarmed, non-violent, disciplined mass movements. Nothing like this happened to regimes in ancient Greece and Rome; only the early Christians come close, and, by then, the Roman Empire was already in trouble.
Hanson might counter that the British and Soviet empires were exhausted when these new kinds of dissidence challenged them and that segregation had become problematic for Washington with Africa’s decolonization during the Cold War. But Hannah Arendt, a historian of classical philosophy in her own right, and Jonathan Schell, who reported on the Vietnam War and integrates Arendt’s insights into his The Unconquerable World, show that immense changes in technology and in beliefs about power, legitimacy, and non-violent disobedience are altering the relationship between states’ use of force to assert their authority and others’ capacity to challenge their legitimacy.
No, human nature hasn’t changed. Historians of the ancients perform an important service when they remind liberals of that by making vivid the endurance of force, fraud, fate, and humans’ noble if doomed attempts to defy them. But that doesn’t license historians like Hanson to use the classics as a cudgel to denigrate liberalism as a carrier of unprecedented options. Liberalism has fractured “organic” Aristotelian and medieval Christian understandings of social order irreversibly by separating church and state and by elevating personal autonomy. It has also made possible, though not inevitable, the politics of moral witness and disciplined, non-violent coercion that brought down the vast, national-security states just mentioned, virtually without firing a shot. Another “liberal” irony that only Susan Mattern seems to anticipate is a darker one: The capitalism of John Locke and Adam Smith that arrived with liberalism and modernity has metastasized into a casino-finance and corporate-welfare regime that is dissolving the imperial assumptions about war-making emphasized in Makers and in Hanson’s polemics. Liberalism’s prospects can’t be charted by the conservative minority of classicists who spin ancient history to justify imperial orchestrations of power and to scourge their sometimes feckless critics. Historians who do that will have plenty of “friends” and tactical rewards, but little of the prescience or moral dignity that Thucydides recognized and achieved.
This is a novel that's much easier to admire than it is to like. By just about any meaningful critical criterion -- plot, character, dialogue, description, a sense of place, a sense of history -- Jonathan Franzen has long since proved himself to be a master, and in his latest novel he is at the height of his powers. But as a reading experience, Freedom is as emotionally exhausting as it is impossible to put down.
As with his 2001 novel The Corrections, with which it has strong affinities, Franzen's great subject in Freedom is the tumultuous inner life of the American family, and the indirect but unmistakable way in which that tumult is connected to looming imperial decline. Generationally speaking, the earlier novel located that tumult in the Greatest Generation and the Baby Boom; this time it's the Baby Boom and Generation Y. The core of Freedom is a love triangle formed by the life trajectories of Walter and Patty Berglund (whom we meet indirectly through their former neighbors in St. Paul; this is a story of multiple narrations) and Walter's best friend Richard Katz, a misanthropic rock musician who meets Walter in college and stays in touch in the decades that follow. Along the way we also meet the two Berglund children, in particular Walter and Patty's son Joey, whose iconoclastic rejection of his parents' progressive values -- Patty's permissiveness and Walter's environmentalism -- leads him in a number of unanticipated directions, among them an early marriage and a career as an international arms merchant during the Iraq War. Actually, all the major characters in this book undergo metamorphoses of one form or another; part of the book's excellence is the way in which very smart people all too credibly find themselves in truly ridiculous, if not paradoxical or even hypocritical, situations.
Yet the novel's narrative energy, and its satisfying resolution, may well be among its secondary pleasures. As with the best recent fiction, this is a book in which you really learn things about the way the world works: how companies like Halliburton game the system; how environmentalists strike Faustian bargains with companies like Halliburton; how the indie rock world has adapted to the end of the traditional record business; how Title IX changed the lives of female athletes (Patty was a college basketball star); and so on. Franzen's ability to fully imagine the lives of his characters also turns their frequent arguments into lively ping-pong matches of dialogue, in which each side makes compelling points that resist easy villainy or pigeonholing.
So what's the problem? For lack of a better term, it's Franzen's relentlessness. He bores into these people, anatomizing their pettiness in ways that are real enough -- and recognizable enough -- but that finally feel like a form of aggression that he takes out on them. Take, for example, this quintessential Franzian sentence in which Patty assesses her mother, a small-time Westchester politician: "Paradise for Joyce is an open space where poor children can go and do Arts at state expense." There's something painfully exquisite about this masterpiece of compression: the crudeness of Joyce's verb ("do"), the effortless abstraction of the limousine liberal ("open space," "poor children") and the vindictive quality of her altruism ("state expense"). You laugh out loud when the collegiate Joey muses that "the really attractive girls he'd met in Virginia all seemed to have been sprayed with Teflon, encased in suspicion of his motives." And you nod with grim amusement as you listen to Richard rationalize seducing a fan's girlfriend because "rather than thwarting his father's vicarious ambitions by pursuing entomology or interesting himself in financial derivatives, Zachary dutifully aped Jimi Hendrix. Somewhere there had been a failure of imagination."
But 500+ pages of this can really wear you down. There are few writers who can allow you to really experience what (someone else's) depression feels like, and Franzen's virtuosity in evoking this in multiple characters is admirable, enervating, and addictive. It's hard to get halfway through this book without sensing that writing is above all else a therapeutic act for Franzen, even as he's one of the very few people who can actually succeed artistically in doing this -- and even as one of the things that you suspect most depresses him is people who read Jonathan Franzen novels (because their motives, like this one being the Big Novel of Fall 2010, are suspect). You end up feeling weirdly implicated.
That said, Franzen does have a larger point to make here, a point of real historical, political, and psychological importance. It's right there in the title: freedom, a term which pops up with subtle regularity. "It's all circling around the problem of personal liberties," Walter says at one point. "People came to this country for either money or freedom [money of course is another form of freedom]. If you don't have money, you cling to your freedoms all the more angrily." Walter is making a critique of working-class libertarianism here -- his elitism is something he has an increasingly difficult time hiding or resisting -- but the point applies to Americans generally. Our love of freedom, a love unmoored from any larger goal or value, is killing us.
Freedom is not exactly a fun book, readable as it is. But it's an important one, if for no other reason than as a vivid document of our time. It shows us a republic that's dying from within, and how, amid very considerable difficulties, a decent life may yet be lived within it. And how the antidote for freedom is love.
No: It was not a matter of overwhelming numbers. Nor was it the outcome of particular battles. Or the vision of statesmen (sorry, Mr. Lincoln). Professor Stephanie McCurry of the University of Pennsylvania doesn't deny these things made a difference. But in the end, the Confederate States of America was doomed from the start because the people who weren't consulted about its creation -- principally white women and black people -- exerted their overlooked power and destroyed it from within. This is what happens, she says, when your vision of politics, and your notion of who counts, gets too narrow.
Confederate Reckoning lies at the confluence of three streams of recent scholarship: studies of secession explored by William Freehling, the pioneering work of Drew Gilpin Faust on Southern women, and Ira Berlin and company's massive body of work documenting the saga of emancipation. There is also a tributary of comparative slavery scholarship (think George Fredrickson), which surfaces periodically to demonstrate that the closely linked political and military dynamics of the Confederacy were not unique in the 19th-century western hemisphere or the wider western world, from Cuba to Russia. But the integration of these bodies of discourse into one forceful and elegantly written volume makes this book a landmark piece of Civil War historiography.
Part of what makes it so is McCurry's ability to make truly surprising points along the way. For example, she shows in the opening chapters of the book that even white male voters were effectively disenfranchised in many Southern states during the pivotal months of 1860-61 when the Confederacy first took form. The actions of the plantation elite in states like South Carolina and Mississippi give the lie to an oft-invoked ideal of herrenvolk democracy, as resolutions were rushed into approval on dubious grounds, the results of voting were suppressed, and widespread intimidation was practiced. Even in Alabama, the very heart of Dixie, opposition to secession never dropped below 39%. These widespread efforts to railroad through secession in the face of more obvious resistance would bear bitter fruit in places like Virginia, whose western residents would ultimately secede from the seceders. But passive as well as active resistance would be widespread from North Carolina to Texas. The so-called "Slave Power" invoked by Northern politicians in the 1850s was no myth, and its power was nowhere more evident than in the South of the 1860s.
But that power, while real, was destroyed because those who wielded it failed to consider people they considered beneath notice in their deliberations. Confederate women, assumed by government policymakers to be merely ancillary, quickly became a force in their own right. Ironically, the first way women's power became apparent was through their presumed dependency. Since the defense of home and hearth was endlessly invoked as the basis of Confederate independence, anything that even appeared to undercut that defense -- like the death or prolonged absence of men unable to protect families increasingly subject to invading armies and hostile slaves -- became the subject of insistent appeals and, eventually, demands. As McCurry shows, women, especially non-elite women, were increasingly direct in addressing government leaders. By 1863, they began taking matters into their own hands; McCurry emphasizes that the well-known Richmond food riot that spring was only one of a number of highly organized, female-led political actions. In their wake, Confederate leaders were forced to make systematic efforts to address the well-being of wives and widows by allocating precious resources in response to their demands. Which brings us to another surprising finding: McCurry's suggestion that the modern welfare state actually has its origins in the increasingly desperate statist behavior of C.S.A. state and federal governments. While she would never put it that baldly, principally because the women in question did not really use a language of citizenship and explicit political assertion we tend to think of as central to the modern liberal tradition, she makes a compelling case not only for rethinking Confederate history, but American history as well.
To a great extent, the last generation of Civil War scholarship has focused great attention on the African American experience, with a special emphasis on the agency of slaves in achieving their emancipation. This book is broadly consonant with that disposition, but situates it less in terms of a liberation that swept down from the North than in the degree to which slave resistance emerged from the very heart of Confederate society. Once again, this hugely damaging power was a direct result of slaveholder inability to grapple with the implications of simply assuming that black people were property, an asset to be deployed to serve their own political ends. For this particular form of property had a mind and a will of its own, and its total exclusion from any rights or privileges meant that slaves had little if any reason, incentive, or loyalty to help advance those ends (and indeed powerful motives to subvert them). McCurry says that slaveholders could not confront this reality, because their commitment to property rights trumped their patriotism. When a besieged Confederacy sought to utilize that property, the planters balked: the relationship between slave and master mattered more than the relationship between citizen and state. Challenging historians who argue the Confederates were willing to sacrifice slavery for independence, she shows that even in its death throes, slaveholders could not bring themselves to allow the conscription or arming of slaves until it was far too late, and even then in a hopelessly illogical and useless way. They were just too addicted to their peculiar institution.
Confederate Reckoning is not a perfect book. The last third seems a bit labored, even overdetermined. McCurry's moral fervor animates her analysis, but her zeal sometimes gets the best of her, as when she asserts, in the closing pages of the book, that "The Confederate political project had been tried before the eyes of the world and it had failed. The poverty of Confederates' proslavery political vision had been proved once and for all time." For once, certainly. But not all time: the past may belong to the historian, but the future is beyond her reach. We cannot escape history, but we can hope, however dimly, it can light our way. This book helps.
It's a truism that has now vexed generations of historians: that in the United States, the people who have tended to have the most egalitarian class politics have also tended to be the most racist, while those with the most pluralistic vision have tended to be the most elitist. In American Slavery, American Freedom (1975), Edmund Morgan found the locus of this contradiction in the colonial slaveholding south. In The Rise and Fall of the White Republic (1990), Alexander Saxton found it in the culture of the nineteenth century urban working class (as did labor historian David Roediger, who collaborated on a recent second edition of Saxton's book). In Two Faces of American Freedom, a sweeping reinterpretation of American history from the seventeenth century to the present, Cornell University Law professor Aziz Rana locates this American dilemma on the (shifting) western frontier in an ideology he calls "settlerism." Not surprisingly, Rana laments it. But he appears to be even unhappier with what's replaced it.
Rana begins his study by challenging the premises of American exceptionalism, arguing that in many respects the United States was a fairly typical empire from the beginning -- except for the fact that it tended to expel or exclude conquered people rather than directly subjugate them. (Yet even this was not unique; he notes a similar pattern can be discerned in the development of South Africa.) What did set the United States apart was its ability to successfully gain autonomy from the imperial metropole -- an aspiration that crystallized when Great Britain sought to reorganize a sprawling empire after 1763 by homogenizing privilege and limiting expansion in ways that colonial settlers considered antithetical to their errand in the wilderness.
It is this dynamic of local autonomy and imperial expansion -- the two faces of the title -- that Rana says historians like Gordon Wood overlook: the democratization of American society that Wood and others have traced depended on the exploitation of Others. Settlerism relied on the fuel of (white) immigration to sustain it, which is why certain kinds of Europeans enjoyed surprising rights like suffrage even before they became citizens, and why non-Europeans have had a devilishly difficult time procuring them. With rare exceptions like Orestes Brownson, or the workingmen's parties of New York and Philadelphia, this freedom was understood in negative rather than positive terms: central government was the problem, not the solution. So it was that Jefferson, rather than Hamilton, dominated early American politics, and the farmer trumped the merchant in the collective imagination. Abraham Lincoln? A big government guy itching to seize billions of dollars worth of property and destroy centuries of freedom (i.e., the right to own other people).
The irony, as Rana and others have noted, is that what would become a characteristically Jacksonian approach to freedom jealously checked government power but left Americans defenseless against the depredations of corporate power. This new form of tyranny cloaked itself in a free labor ideology presented as the logical heir of Jeffersonian yeomanry. After the Civil War, populist leaders like Tom Watson and labor organizers like Eugene Debs challenged this substitution by arguing for a modified form of producerism that emphasized meaningful work, not mindless drudgery. In a departure from the earlier generation of continental expansionists, many of these people also began to challenge imperial expansion overseas, for reasons that ranged from moral principle to racism. And as the frontier closed, they began to lose their enthusiasm for immigration.
But the charms of empire proved too great to resist. Meanwhile, in the decades after 1896, progressive elites decided that the best way to combat corporate power lay in erecting a government apparatus that promoted the common good as something to be delivered via administrative expertise. By the time of the New Deal, "freedom no longer amounted to collective control over the basic sites of decision making; rather, it comprised security from economic want." The presidency was increasingly seen as the barometer of popular will, which justified its ever-growing power. Yet as we all know, that power has also become increasingly less accountable. It has typically been exercised in foreign adventures that are embarked upon in the name of preserving freedom, but which end up actually sacrificing it, both in terms of local power and human life.
Rana renders this story in a sturdily constructed narrative girded by illustrations from an array of Supreme Court cases, some well-known, others obscure, and still others, like the Dred Scott case, analyzed in a fresh light. He identifies a strand running from Thomas Skidmore through Randolph Bourne to Martin Luther King that he believes offers an alternative America of universal egalitarianism, one that emphasizes the broad distribution of freedom and power. It does not, as the Civil Rights movement increasingly has, define social progress in terms of creating opportunities for minorities to join elites rather than challenging the premise of elitism itself. Rana places his future hopes, as improbable as he knows they are, on immigrant protest against second-class citizenship.
This strikes me as an intelligent, honest, and decent critique of American society. I do have reservations. As a matter of style, I wish Rana would wean himself off his tendency to use the phrase "in other words," which is at best tedious and at worst engenders suspicions of rhetorical legerdemain. I think he creates a misleading impression that late nineteenth century farmers and labor ever achieved anything resembling real symbiosis in their challenge to industrial capitalism (he fails to note the mutual suspicion, and antithetical interests, like food pricing, that characterized their relationship), and I think he underestimates the degree to which managerial elites of the New Deal order have been challenged in the decades since (Ronald Reagan isn't even in the index!).
My biggest concern, though, is that there's an oddly abstract air to this lament for a vanishing, albeit flawed, American freedom. Actually, I don't really know what freedom finally means to Rana. I might have a better sense if he actually took us to what he regarded as an effective New England town meeting, or visited a western town in which a real-life Jimmy Stewart was hard at work, so that we could see just what it was that he values. (T. H. Breen does this brilliantly in his new book American Insurgents, American Patriots, in which he peoples his analysis with individuals who took liberty into their own hands and made a new nation.)
For freedom is a means, not an end. And so I feel compelled to ask: What does Rana want? He is sorry that Americans have been so ready to settle for mere security. But what else is there? One answer, of course, is the esteem that comes with publishing books with high-profile presses and teaching at an Ivy League university. But of course not all of us are as smart, talented, and lucky as he is. I myself have another answer, which, among other things, takes the form of paying absurdly high local taxes for what some administrators would plausibly consider an absurdly inefficient (read: small) school district. That gives me the right to be a helicopter parent and to vote on an annual school budget. On Sundays, I can nod to a couple local policemen from my pew at church, which I hope will make a difference on a future bad day. I'm not sure how many brown neighbors I have (a few), but none of them are cleaning my house or mowing my lawn. (I do both; Rana regrets to observe at one point that one thing feminism has come to mean is a career premised on low-wage help.) No one would call this utopia. But does it count as an authentic form of freedom, albeit underwritten by the prerogatives of empire? (Really: Can freedom ever not be?) I don't imagine it would satisfy Thomas Jefferson -- I've got too much attachment, literally and figuratively, to the city. But how about Aziz Rana? If this isn't good enough, what is?
These are not rhetorical questions. However he might answer them, now or later, I honor Rana on a fine debut -- and provisionally recommend the pleasures, and maybe even the virtues, of settling for suburban living as one face of American freedom.