After the Bleeding Stopped

Writing about an event you were involved in half a century ago is dicey business.  This is especially true when there was no contemporary press coverage of your piece of the affair to check against, and you’ve either drifted out of contact with the principal participants long, long ago or they are, well, rather dead.  

 

It’s not as if the event itself, the disastrous Rolling Stones free concert at California’s Altamont Speedway, was not extensively covered in all its painful detail at the time. It was even documented in the iconic film Gimme Shelter.  Multiple cameras captured the chaos on and in front of the stage as Hells Angels “security guards” scuffled with band members and charged into the crowd with pool cues to pummel anyone they perceived as threatening them or otherwise “out of line.”  A man was stabbed to death by a trio of Angels during the Rolling Stones’ climactic performance.  Altamont immediately became the media counterpoint to Woodstock’s “3 Days of Peace and Music,” held four months earlier near New York City. 

 

Though I hesitated to sort through my memories and write about the concert for its 50th anniversary because I’m missing more details than I’m comfortable with, underground press chronicler Ken Wachsberger encouraged me.  “Just say what you remember,” he wisely advised.  So here goes. 

 

My role at Altamont was to coordinate the post-show grounds cleanup. In truth, it was my own big mouth that entangled me with the whole mess. In 1969, I got involved in the music and countercultural scene in Kansas City, successfully negotiating the Mother Love Tribe’s weekly Sunday park concerts with the Parks and Recreation Department while working on the local “underground” paper, the Screw (later renamed Westport Trucker). 

 

It had been an absurdly busy year. I was also, as time and circumstances allowed, helping out a local rock band; romancing a lady; finishing high school a year early; and successfully obtaining the objective of a lawsuit against a local school district on behalf of some of the inmates left behind. In the midst of this, I was called out at the very last minute to lend some minor assistance at the Woodstock Music and Arts Fair north of New York City. I arranged for a colleague to handle my last two scheduled park concerts and caught a plane to LaGuardia.

 

After helping at Woodstock, I went to San Francisco to work with a local production company, the Family Dog, but with the real objective of relaxing and getting myself “grounded” between concert seasons in Kansas City. This was my third brief stint with the Dog since 1966, and I did whatever odd tasks their revered founder, Chet Helms, had for me. Chet put me up in their relatively unused green school bus parked below the sand dunes and beside the Dog’s music hall on the Pacific coast.  The bus’s exterior was well faded from the sun and sea air and the interior required quite a bit of cleaning and reorganizing, but it couldn’t have been a sweeter set-up.  

 

That fall, I’d proudly recounted to Chet, the Grateful Dead’s Ron McKernan and Phil Lesh, It’s a Beautiful Day’s David LaFlamme, and others what a marvelous job Mother Love had done keeping KC’s Volker Park in shape despite the crowds.  Since everyone wanted to hear about Woodstock (it was kind of a mix of curiosity and San Francisco “Woodstock envy”), I’d also relayed my experience working it. These stories put me on the radar. 

 

Originally, the Rolling Stones concert was supposed to be held at Golden Gate Park. The second planned location, the Sears Point Raceway near the Dead’s ranch in Novato, also fell through. In the middle of this location turmoil, I got a call from Rock Scully, one of the Grateful Dead’s managers. He said he heard grrr-aaat things about my work and would like to know if I’d be willing to coordinate the grounds cleanup at the new Altamont Speedway location. 

 

I explained to Scully that I had nothing to do with any grounds work at Woodstock; that my principal function, shared with two other fellows (one of whom had disappeared almost immediately), was keeping an eye on the unused lighting equipment under the stage and shooing people off the elevator frame when equipment had to be moved. With the exception of just one lengthy stint when I helped get helicoptered-in food supplies up to the far crest of the bowl, where Hugh Romney’s diligent Hog Farmers and other volunteers were preparing food for the masses, I’d been outside of “the citadel”* very briefly, perhaps only four or five times.  I also informed him that only two or three of the 15 Volker Park shows I did before Woodstock were attended by more than 10,000 people (which, in truth, was an exaggeration because I should have said 5,000).  

 

But Scully literally wouldn’t take “no” for an answer and pressed hard.  He even put Grateful Dead front-man “Pigpen” Ron McKernan on the line.  His brother had done me some favors “way back” in ‘66-‘67 (no, they weren’t drug related) and Ron brought it up in an effort to push me into agreeing!  Good cop - bad cop?  I liked Ron but this seemed more like bad cop and bad cop. 

 

I knew that, due to the last-minute problems associated with moving the production roughly a hundred miles from Sears Point, it would be a big task. Others at the Dog were asked to assist and, like me, reluctantly agreed only because it was the Grateful Dead doing the asking.  One good friend, ace sound man Lee Brinkman, had mixed feelings but saw it almost as a duty.  I consoled myself with the thought that my work would not truly start until the day after the show and looked forward to, for once, just being a spectator.

 

“OK.  OK.  I’ll do it,” I told Scully. 

 

*The citadel was what Woodstock’s red-shirted security personnel called the fenced-in area that they had withdrawn to when the festival’s outer perimeter was abandoned and the event was declared to be free to all.  It encompassed the area immediately in front of the stage; the conglomeration of trailers and facilities to the rear; and wide expanses to both sides that were used as helicopter landing fields.

 

The Concert

The barren brown hills that marked much of the final drive to the speedway were a far cry from the lush green of the bus ride into Bethel, New York, months earlier, but the crowd at the site was in good spirits, almost festive.  

 

After setting up camp with some friends about a hundred yards from the stage, I went around back to “check in.” Unlike Woodstock, there was no “citadel,” and the backstage area was a disorganized and tightly packed jumble of rental trucks, school and metro-type buses, tents, and trailers.  I quickly located the Dog’s green school bus, which would serve as my post-show office and storage, but the expected tools were nowhere to be found.  They must be somewhere else or coming later, I figured.

 

The most unsettling difference between the two events, however, was that the stage at Altamont was frightfully low, about three feet. Gravity matters. This design feature had been perfectly appropriate for the elevated ground of the Sears Point site but invited trouble when situated in the low area here, as people would tend to be pressed toward it instead of backing away. I also didn’t like that the Hells Angels’ San Francisco chapter, which frequently worked security for the Grateful Dead, did not seem to be there in force.  Nor did I see club president Sonny Barger, who had supplied a steadying hand when things began to escalate between the Angels and those waiting in line outside the Family Dog to see the Dead’s side band, New Riders of the Purple Sage, just a couple of weeks before.  Instead, Angels from the San Jose chapter and many bikers of undetermined origin hugged the stage area as a considerable amount of beer was passed around. 

 

A fellow with the band It’s a Beautiful Day who was helping with the staging (it wasn’t singer-violinist David LaFlamme) confirmed that Sonny wasn’t there yet, and one of the SF Angels said he was dealing with “legal stuff” back in the city but that he’d definitely be coming.  

 

As the concert started, there were immediate problems. The free Stones concert also featured about a half dozen of San Francisco’s top bands, and the first one up, Santana, leaped to a bouncing start but quickly came to a jarring halt.  There was some kind of trouble at the stage, but I thought it was probably just a problem with the sound system and wondered if my friend Lee was pulling his hair out. I couldn’t have been more wrong.

 

The gruesome events that went on until well after dark have been recounted over and over elsewhere. I went around behind the stage three times before the show ended, in futile attempts to see if I could be of any assistance.  It was on my first trip back, as the trouble escalated, that I got the shock of my young life. I was carrying out some simple task with another Family Dog Productions guy (whose name is now forgotten) when we were told that the Grateful Dead were bugging out. They were fleeing the scene and leaving everybody else to deal with the mess they’d organized on behalf of the Rolling Stones!  We immediately turned to each other.  His jaw had practically dropped to the ground and his eyes were popped wide. I must have looked the same to him.  The only reason we were there was because the Dead had asked us.  

 

Ultimately, I remember only one band --- and one band only --- making it all the way through their set with no interruptions: The Flying Burrito Brothers.  Because bands shortened their sets and because the Stones wanted their performance to be held after dark for dramatic effect, the absence of the Dead created a gap of more than two hours before the Stones went on stage.

 

At the time, when I learned that the Stones would not go on until it was dark, I naively assumed that there was a bright side to the unexpected moratorium. I believed that the gap would actually provide a “cooling off” period and that the bikers could be delicately gotten under control, off the stage, and reorganized into specific security duties. 

 

Tragically, the people who were in the best position to carry this off, the Grateful Dead, had run for the tall grass, and by the time Sonny finally roared up with an escort of more SF Angels, things were far beyond even his ability to restore order. Instead of cooling off, tensions mounted during the long, long wait, and beatings resumed as the Stones started up after nightfall and played on, oblivious for a time that the death right in front of them had even happened. 

 

The Aftermath

Later that night, after the bleeding stopped, everyone involved in the organization and staging of the show worked at record speed to “get the heck out of Dodge.”  By first light, there were only two abandoned vehicles in the immediate area, a few cars in the far distance, and the Dog’s faded green bus, my “office” for the clean-up.  

 

Thankfully, the massive throng had come and gone in less than 24 hours and there’d been no rain so the grounds were in nowhere near as bad a shape as after Woodstock.  Nevertheless, the flood of humanity, receding hurriedly in the dead of night, had left enough refuse in its wake to keep a crew fully employed for a couple days. Unfortunately, there was no crew.  

 

When the Grateful Dead’s manager Scully asked me what I needed for the cleanup, I asked how many people were expected. He’d guessed there would be “50-, maybe 100-thousand” attendees, but it was actually about 300,000. Despite my tender age I’d already been involved --- to varying degrees --- in stuff like this for several years. I told him I would need willing workers, cash for 50 yard rakes plus 20 each of garden rakes and shovels, a truck with drinking water, three-days’ meals catered for 50 people, five porta potties in the work area, tents and cots for 50, and for the Speedway to agree to make their phone(s) available. I also said that we would have to hire a firm with the proper equipment to scoop our piles into trucks that would haul the stuff off for disposal. 

 

Scully promised he would personally ensure that it was on site. As for the workers, Scully said that could be coordinated with the current version of the old Haight-Ashbury Switchboard and that he’d already talked with them --- he said --- and told me who to contact.  

 

To make a long story short, I received almost none of the required support. The folks at the Switchboard said that they’d never heard from him or anybody about this. I found it hard to believe that Scully had simply lied to get an agreement out of me, and rationalized that he’d intended to get that ball rolling and had gotten sidetracked.  Though apprehensive, I pressed on and the Switchboard made a genuine effort to enlist at least some volunteers, but at the site itself, there wasn’t one lousy rake or any of the other things Scully promised. 

 

Extensive improvisation was required to get the job done. Rakes were fashioned from the wooden slats inserted diagonally in the track’s perimeter chain-link fence that lined the top of the ridge to the right of the stage. The impromptu clean-up crew, which fluctuated from about 15 to 45-50 people depending on the time of day, comprised principally stragglers who were willing to be put to work.  These were primarily kids who had become separated from their rides. The rest of the stragglers were either those who wanted to linger a little longer or those simply too blasted to arrange a hook-up with others heading back to the city.  There were as many as 250 of them that morning, and I’d begun my recruitment campaign the night before.  

 

We faced an enormous task. First, we gathered the glass and heavy items like abandoned ice chests and a surprising number of shoes into irregularly spaced piles sorted by type --- glass, cans, and miscellaneous --- which included some 30,000 wine bottles (roughly one for every 10 to 12 in attendance).  Then followed the attack on the paper waste. Working individually and in skirmish lines of up to a dozen, we had consolidated much of the refuse into piles by the middle of the third post-show day, and luckily, at least while I was there, no significant winds came up to rescatter the paper.

 

Though we had to walk a pretty fair distance to the porta potties near the stage, especially when working at the far ends of the grounds, the facilities were clean and not overflowing.  Water could be obtained from the speedway office, which was a pretty good hike up and around the track, but there was no shortage of jugs and bottles to put it in.  What we didn’t have was food.  

 

I would have brought a whole list of contact numbers if I’d known that things would work out this way but, as it was, I had just two: the drummer Mickey Hart’s at the Grateful Dead’s ranch and the Family Dog’s office.  The former was constantly busy (off the hook?) and the latter alternately busy or not answering.  I was, however, able to get the Switchboard’s number from the operator and, true to form, they were apparently able to put out the word on our plight, and relief came in the form of a perfectly timed arrival of fruits, vegetables, and hard-boiled eggs for dinner.  Fruits, veggies, baked goods and some desperately needed extra rakes were brought the next morning by a small party of ecology-minded students from either UC Berkeley or San Francisco State.

 

The wooden slats in the track’s perimeter fence were steadily disappearing, as the stragglers had recognized them early on as an excellent source of firewood on the cold nights.  As for myself, and later on an injured kid and his girlfriend, I was able to take advantage of the butane heaters for the stage that had been stored in the Family Dog’s bus. I also put up the Dog’s very distressed soundman, Lee, for two nights since he’d accidentally been left behind during the confused nighttime exit after the show. The stragglers, including my volunteer workers, understandably dwindled at an escalating pace.

 

Scully showed up at the speedway office three days after the event with a load of itsy-bitsy, triangle-cut ham and cheese sandwiches, a gaggle of reporters and, of course, no tools.  From atop the hill by the track he gathered the reporters and camera crews and waved his arm over the expanse and the --- from a distance --- neat piles of trash dotting the grounds from the abandoned stage and towers and extending out perhaps a quarter mile. 

 

Scully spoke proudly of the cleanup yet made no effort at all to speak with the remaining kids who had now been at it for several days.  As for me, it was only by luck that I’d been at the speedway office waiting for my ride out when he arrived.  We’d never met each other face to face --- and he made no effort to find me either --- so I used my anonymity to hold back and observe the circus. I made one last trek down to the stage area to tell people that additional food had arrived at the office.  Then I was outta there.

 

If the Grateful Dead and their management had followed up on supplying the technical support they’d promised, the cleanup would have been a pretty straightforward task. However, the speedway employees I dealt with firmly believed that all the pushing and the promises by Scully to get me to say “yes” were just so that he could tell the site’s owners during the negotiations that he had things all arranged to clean up the inevitable mess.  After all this time, who really knows?  What I can say is that while many features on Altamont’s 50th anniversary will focus on the violence, I’ll always remember what happened after the bleeding stopped: the chaos, the broken promises, and what the willing volunteers --- unsupported and unknown --- ultimately accomplished there.

A Wealth Tax? Two Framers Weigh In

 

Wealth taxes are on the current political table and hotly debated. All taxation was on the framers’ table as they considered a new constitution. What would they make of the measures we are considering now? And more to the point: does the Constitution they drafted allow Congress to tax a person’s overall wealth? 

 

The short answer: yes and no. The longer answer requires historical context.

 

In the six months preceding the Federal Convention of 1787, Congress received from the separate states, which alone possessed powers of taxation, a grand total of $663—hardly enough to run the nation. Little wonder that the framers’ proposal, what is now our Constitution, granted Congress sweeping authority to levy taxes: “The Congress shall have Power to lay and collect Taxes, Duties, Imposts and Excises …”

 

Although taxing authority was broad, the framers delineated only five specific types, each with a qualification:

 

“Duties, Imposts, and Excises shall be uniform throughout the United States.” 

“No Capitation, or other direct, Tax shall be laid, unless in proportion to the Census or enumeration.”

“No Tax or Duty shall be laid on any Articles exported from any State.” 

 

Presumably any taxes not mentioned would have to meet qualifications as well. The underlying principle was fairness, but mechanisms to achieve that goal differed. Exports were ruled out so Congress wouldn’t handicap Virginia, for instance, by taxing tobacco. Most other taxes would be uniform. Direct taxes, on the other hand, had to be apportioned according to state populations. 

 

But what, exactly, was a direct tax? And how might that differ from its opposite, an indirect tax, a term that does not appear in the Constitution? 

 

That’s what Rufus King, delegate from Massachusetts, asked his colleagues at the end of the day on August 20, 1787, after some twelve weeks of deliberations. From James Madison’s Notes of Debates: “Mr. KING asked what was the precise meaning of direct taxation. No one answered.” We have no indication that the framers ever defined direct and indirect taxation, either then or at any other time during their proceedings. 

 

Fast forward to the mid-1790s, when two former delegates weighed in. Alexander Hamilton, while Secretary of the Treasury, recommended a federal tax on carriages, an item that only rich people could afford. Congress levied that tax, but Daniel Hylton, a carriage owner, challenged the measure on constitutional grounds. It was a direct tax, he argued, and Congress had failed to apportion it amongst the states. The case found its way to the Supreme Court. There, Associate Justice William Paterson, who had introduced the New Jersey plan at the Federal Convention, offered a coherent explanation of the framers’ treatment of taxation:

 

“It was … obviously the intention of the framers of the Constitution, that Congress should possess full power over every species of taxable property, except exports. The term taxes, is generical, and was made use of to vest in Congress plenary authority in all cases of taxation… All taxes on expences or consumption are indirect taxes. … Indirect taxes are circuitous modes of reaching the revenue of individuals, who generally live according to their income. In many cases of this nature the individual may be said to tax himself.”

 

An individual taxes himself because he chooses to participate in an activity that is taxed; he can circumvent the tax by not purchasing or consuming the item in question. That’s what makes it “indirect.” A “Capitation, or other direct, Tax” is not optional in this sense; both capitation taxes and property taxes were widespread in America, and a person does not choose to be a person (a capitation tax), nor willingly decide to own no property. But why didn’t the framers apply the simple rule of uniformity to such taxes? Paterson recalled that peculiar politics, not abstract reasoning, led to the Federal Convention’s approach:

 

“I never entertained a doubt, that the principal, I will not say, the only, objects, that the framers of the Constitution contemplated as falling within the rule of apportionment, were a capitation tax and a tax on land. … The provision was made in favor of the southern States. They possessed a large number of slaves; they had extensive tracts of territory, thinly settled, and not very productive. A majority of the states had but few slaves, and several of them a limited territory, well settled and in a high state of cultivation. The Southern states, if no provision had been introduced in the Constitution, would have been wholly at the mercy of the other states. Congress in such case, might tax slaves, at discretion or arbitrarily, and land in every part of the Union after the same rate or measure: so much a head in the first instance, and so much an acre in the second. To guard against imposition in these particulars, was the reason of introducing the clause in the Constitution, which directs that representatives and direct taxes shall be apportioned among the states, according to their respective numbers.”  

 

In the end, Paterson and his fellow justices concluded that the tax on chariots must be considered indirect. To apportion the tax among the states would be absurd, they argued; if there was only one chariot owner in a state, he would have to assume his state’s entire liability. And if there were no chariots, how could that state ever meet its apportioned share? But to strike down a tax on chariots because it was unworkable would seriously undermine Congress’s critical authority “to lay and collect Taxes,” which buttressed the entire governmental apparatus. The only alternative was to declare Hamilton’s chariot tax indirect—and thereby constitutional.

 

What might Paterson and his colleagues have concluded if Congress had levied a tax on all wealth, not just one particular luxury item? That depends. They might have applied the same reasoning: treating a wealth tax as “direct” would be hopelessly impractical, but if seen as an excise, it would fall within Congress’s plenary powers of taxation. Indeed, any tax (except one on exports) that the framers had not categorized as “direct” would be constitutional, if applied uniformly. So it all boils down to one question: what taxes, exactly, are to be considered direct?

 

While Paterson enumerated “a capitation tax and a tax on land” as the “principal” taxes “falling within the rule of apportionment,” he hinted they might not be the “only” ones. Perhaps he could envision others, taxes that were not addressed at the Federal Convention. New England governments in those times relied heavily on property taxes that included not only raw land but also improvements and livestock, which made the land more valuable. Such taxes were not based on “expences or consumption,” activities that might be avoided—Paterson’s criteria for indirect taxes. They taxed who a person was, economically—the apparent (although unstated) standard for a direct tax. 

 

Alexander Hamilton made this explicit. Within a brief filed for Hylton v. United States, he added a third category to the list of direct taxes:

 

“What is the distinction between direct and indirect taxes? It is a matter of regret that terms so uncertain and vague in so important a point are to be found in the Constitution. We shall seek in vain for any antecedent settled legal meaning to the respective terms—there is none.

 

“But how is the meaning of the Constitution to be determined? It has been affirmed, and so it will be found, that there is no general principle which can indicate the boundary between the two. That boundary, then, must be fixed by a species of arbitration, and ought to be such as will involve neither absurdity nor inconvenience.

 

“The following are presumed to be the only direct taxes.

Capitation or poll taxes.

Taxes on lands and buildings.

General assessments, whether on the whole property of individuals, or on their whole real or personal estate; all else must of necessity be considered as indirect taxes.”

 

In Hamilton’s estimation, any “general assessment” of a person’s “whole” property or estate—what we call today a wealth tax—was one of those “other” direct taxes that must be apportioned amongst the states. But to apportion a wealth tax would be absurd; today, a handful of wealthy individuals in West Virginia or Mississippi, to account for their state’s quota, would have to pay several times as much as those residing in states with numerous rich taxpayers. In Hylton v. United States, the Supreme Court used the unfairness of apportionment to declare the chariot tax indirect, and therefore constitutional, but according to Hamilton, that line of reasoning is off the table. A wealth tax is a direct tax, pure and simple, he reckoned; unless apportioned, it will be unconstitutional. 

 

Of course Hamilton was not on the Supreme Court, so his statement has no official bearing. Legal scholars might disagree with his assessment of “general assessments,” but politically, that is beside the point. No wealth tax based on apportionment among the states will make its way out of committee, and any wealth tax claiming to be “indirect” will inevitably wind up before the Supreme Court. There, originalists who idolize the framers will look no farther than Hamilton’s testimony to justify striking down the measure. 

 

And all conservatives, likely the majority for years to come, will weaponize Hamilton to support their preconceived aversion to wealth taxes. They will not be swayed, as some scholars contend, by Knowlton v. Moore, which in 1900 upheld an inheritance tax by treating it as indirect. There’s plenty of wiggle room between taxing a person’s “whole real or personal estate”—a wealth tax—and taxing how a person chooses to dispense with that estate. When conservative justices weigh an obliquely relevant case from 1900 versus Hamilton’s forceful pronouncement that yields the result they prefer, there is little doubt as to which they will favor. 

 

Might Chief Justice Roberts be a swing vote? When upholding the Affordable Care Act as within the taxing authority of Congress, he declared: “A tax on going without health insurance does not fall within any recognized category of direct tax.” Because it was “plainly not a tax on the ownership of land or personal property,” it did not have to be “apportioned among the several States.” A wealth tax, on the other hand, would target “ownership of land or personal property.” It must be apportioned, Roberts will conclude, or else it’s unconstitutional.  

 

We have been through this before. In Pollock v. Farmers’ Loan and Trust Company (1895), the Supreme Court declared that taxing income derived from wealth (rents, interest, and dividends) was a direct tax and therefore had to be apportioned, while taxing income derived from labor (wages and salaries) was indirect and therefore did not have to be apportioned. This meant that Congress could tax working people readily, while taxing wealthy people would be unworkable. Workers cried foul. They pushed for, and got, the Sixteenth Amendment, which repudiated Pollock by lifting the apportionment requirement from all income taxes.

 

Today, facing rampant inequality, we can cry foul again—but we remain saddled with a provision of the Constitution geared to protect the slave-owning interests of Southern states in 1787. Even so, taxing income rather than wealth is always possible. There is no constitutional limit on the tax rate, so long as it is “uniform throughout the United States.”

 

 

Roundup Top 10!  

China Isn’t the Soviet Union. Confusing the Two Is Dangerous.

by Melvyn P. Leffler

An unusual confluence of events after World War II led to America’s bitter rivalry with the U.S.S.R. That pattern is not repeating.

 

The Forgotten Origins of Paid Family Leave

by Mona L. Siegel

In 1919, activists from around the world pressed governments to adopt policies to help working mothers.

 

 

A Historic Crime in the Making

by Rebecca Gordon

400 years of history leading up to Donald Trump.

 

 

Donald Trump, Meet Your Precursor

by Manisha Sinha

Andrew Johnson pioneered the recalcitrant racism and impeachment-worthy subterfuge the president is fond of.

 

 

Pelosi did what no one else could

by Julian Zelizer

From the perspective of presidential history, this will become a major part of how we remember the term.

 

 

Why Did U.N.C. Give Millions to a Neo-Confederate Group?

by William Sturkey

The University of North Carolina’s settlement over a controversial statue is a subsidy for white nationalism.

 

 

Trump’s border wall threatens an Arizona oasis with a long, diverse history

by Jared Orsi

Heavy machinery grinds up the earth and removes vegetation as construction of President Trump’s vaunted border wall advances toward the oasis.

 

 

Calling Trump ‘the chosen one’ is a political act — not a theological statement

by Wallace Best

Claims about God’s plans for the United States often morph into justifications for wrongdoing.

 

 

The History That Happened: Setting the Record Straight on the Armenian Genocide

by Ryan Gingeras

For a brief moment this fall, world interest fixed its attention on an event of the past. News that the U.S. Congress approved a formal resolution recognizing the Armenian Genocide was carried as a leading story by media outlets worldwide.

 

 

 

Spinster, old maid or self-partnered – why words for single women have changed through time

by Amy Froide

Attitudes toward single women have repeatedly shifted – and part of that attitude shift is reflected in the names given to unwed women.

How did November become the Mizrahi Heritage Month? And what’s Mizrahi anyhow?

A Yemenite family walks through the desert to a reception camp 

 

Recently, a growing number of Jewish American organizations began marking November as "Sephardic/Mizrahi Heritage Month." In the American context, awareness months are meant to illuminate the histories of marginalized communities whose stories are overshadowed and underrepresented in official curricula and memory. The Mizrahi heritage month, by contrast, is not a local, grassroots initiative that emerged in response to experiences of discrimination or marginalization. Instead, it is a transatlantic importation of recent attempts by the Israeli government to commemorate the forced expulsion of Jews from the Arab and Muslim world in the wake of the establishment of Israel. Nor is November a month that has any particular significance in the histories or rituals of any of the dozen Jewish minority communities that resided in Northern Africa and the Middle East. Instead, the specific date, November 30th, was chosen by Israeli lawmakers as a symbolic birth date of the mass exodus of Jews from Arabic-speaking lands, triggered by the UN Partition Resolution of November 29th, 1947.

 

Erroneously, in the North American Jewish world, the terms “Sephardi” and “Mizrahi” are often treated as synonymous. Yet, unlike the term “Sephardi,” which originates in the expulsion of Jews from the Iberian Peninsula in 1492 (Sepharad is the Hebrew term for Spain), Mizrahi is a category that is not only far more recent in historical terms but is also politically charged and rooted in a specific Israeli context. Mizrahi, literally meaning “eastern” or “Oriental” in Hebrew, was an adjective-turned-term that was coined in pre-statehood Palestine and later used in Israel to denote any non-Ashkenazi Jew. In Israel’s early years of statehood, a considerable degree of patronizing attitude towards “Oriental Jews,” who were regarded as less civilized, ill-educated, and lacking sufficient ideological commitment, resulted in discrimination. During the 1950s, Mizrahi Jews were sent to frontier settlements and to newly established "development towns" in the country's peripheral regions. Soon enough, these towns transformed into conspicuous pockets of deprivation and poverty, and their Mizrahi residents became a discernible low-status blue-collar class, deprived of the same employment and education opportunities as their Ashkenazi peers. In that process, the adjective "Mizrahi" became a highly contested and politically charged term, and not a neutral sociological category.  

 

Years of a persistent civic struggle for equal rights by Mizrahi activists and scholars in Israel, accompanied by demands for recognition of their full history, did not solve all social problems and inequality, nor did they erase past scars. During the 1970s, the Likud Party’s leadership reappropriated the Mizrahi struggle to claim a stake in the Israeli national story. Other political parties, such as Shas, the non-Ashkenazi ultra-Orthodox party that was established during the mid-1980s, also tried to harness the Mizrahi struggle with a considerable level of success. The 2014 legislation that created a new day of commemoration for Mizrahi Jews is yet another attempt to divert the Mizrahi call for equality in Israel to a political cause. In particular, it uses the politics of memory to create a false equation between "Jewish refugees from the Arab World" and the Palestinian refugees. Jewish communities in the Middle East and North Africa region had undergone different experiences, not just between different countries, but even between various communities in the same country. 1948 was undoubtedly a turning point for a great number of them, and the Israel-Palestine conflict loomed over much of it. But casting these rich histories into one single-dimensional narrative is, in fact, a cynical strategy employed by the Israeli Right to avoid the need to address Palestinian claims for compensation on behalf of the Palestinian refugees. 

 

Right-leaning Jewish organizations in the US, such as the San Francisco-based JIMENA (Jews Indigenous to the Middle East and North Africa), were quick to adopt the Israeli ready-made mold of “Mizrahi commemoration” and to blend it with the American practice of “awareness months.” Their website describes them as a non-profit “committed to achieving universal recognition to the heritage and history of the 850,000 indigenous Jewish refugees from Arab countries.” Similar ideas are expressed by Hen Mazzig, a charismatic yet controversial "Hasbarah" (pro-Israeli advocacy) speaker who tours North American campuses to speak to students about his family's immigration from Tunisia and Morocco, his experiences as a gay officer in the IDF, and ways of combating anti-Israeli critics. Critics of Israel on US campuses are described by him as silencing Middle Eastern and North African Jews. Almost simultaneously, a call upon Jews to join a mass kaddish (a prayer traditionally recited in memory of the dead) on November 30th appeared on the pages of the Jerusalem Post. It remains unclear if these are grassroots initiatives or a well-orchestrated state-funded campaign. As the Jewish daily The Forward revealed, Mazzig is most probably a contractor paid by the Israeli government. 

 

While JIMENA asserts correctly that the heritage of Middle Eastern Jews does not receive equal space in the American-Jewish establishment, the kind of heritage it wishes to promote is equally superficial and shallow, comprised mostly of stories of persecution and harassment followed by a final expulsion and a Zionist redemption. The historical narrative they are offering has less to do with the particular heritage and histories of these diverse communities and more to do with a politics of competitive victimhood and a “quid pro quo” argument about the nature of the Israeli-Palestinian conflict, in which Jewish refugees from the Middle East are cast as mirror images of the roughly 750,000 Palestinians who were expelled during the 1948 War.

 

As historians who dedicate their careers to researching modern Jewish history, and who believe in the importance of studying the histories of Middle Eastern Jewish communities alongside Ashkenazi communities, we welcome the intent to deepen and broaden our understanding of the place of these communities. Eurocentric assumptions, including our tendency to understand Jewish modernity writ large as coming out of the experiences of European Jews, provide a very narrow prism that fails to capture the Jewish historical experience in all its richness and diversity. It is about time that academic Jewish Studies programs expanded their curricula and educated students and the wider public about the culture and history of Mizrahi Jews, alongside other non-Ashkenazi communities such as the Yemenite Jews, Iranian (Persian) Jews, Greek and Balkan Jews, Caucasus Jews, Bukharan Jews and more. We also believe in the value and the importance of heritage months to raise awareness of and help advance our understanding of marginalized social groups. Notably, per the Library of Congress website, American-Jewish Heritage Month is celebrated in May, as part of an effort by Jews to be part of the “big American Jewish tent.” We raise our eyebrows, however, at what might be an attempt to hijack this noble cause for a partisan issue and a state-sponsored invented tradition. Jewish communities in the US should pay greater attention to the non-Ashkenazi stories alongside the Ashkenazi saga. But we wonder if a day of commemoration that is copy-pasted mechanically rather than reflectively by Jewish diaspora communities would serve that purpose.

William Barr’s Upside-Down Constitution

 

Attorney General William Barr’s November 15 speech before the Federalist Society, delivered at its annual National Lawyers Convention, received considerable attention. Barr attacked what he views as progressives’ unscrupulous and relentless attacks on President Trump and Senate Democrats’ “abuse of the advice-and-consent process.” Ironies notwithstanding, the core analysis of his speech is a full-throated defense of the Unitary theory of executive power, which purports to be an Originalist view of the Founders’ intent. 

 

This defense, however, reveals the two fundamental flaws of the Unitary view: first, that it is built on a fictional reading of constitutional design; and second, that its precepts attack the fundamental tenets of the checks and balances system that the Founders did create. 

 

Barr’s speech begins with his complaint that presidential power has been weakened in recent decades by the other branches’ “steady encroachment” on executive powers. Even allowing for congressional resurgence in the post-Watergate era of the 1970s, no sane analysis of the Reagan era forward could buttress Barr’s ahistorical claim. Ironically, the presidents in this time period who suffered political reversals—Bill Clinton’s impeachment and the thwarting of Barack Obama’s agenda by congressional Republicans in his final six years of office—nevertheless emerged from their terms with the office intact in powers and prestige. 

 

Attorney General Barr’s reading of colonial history claims that the Founders’ chief antagonist during the Revolutionary period was not the British monarchy (which, he claims, had been “neutered” by this time) but an overbearing Parliament. Had Barr bothered to consult the definitive statement of American grievances, the Declaration of Independence, he would have found the document to direct virtually all of its ire against “the present King of Great Britain.” The lengthy list of grievances detailed in the document charge “He,” George III, with despotism and tyranny, not Parliament (some of whose members expressed sympathy for the American cause). Barr’s message? Legislatures bad, executives not so much. 

 

Barr insists that by the time of the Constitutional Convention there was “general agreement” on the nature of executive power and that those powers conformed to the Unitary vision—complete and exclusive control over the Executive branch, foreign policy preeminence, and no sharing of powers among the branches. Barr dismisses the idea of inter-branch power-sharing as “mushy thinking.” Yet the essence of checks and balances is power-sharing. As the political scientist Richard Neustadt once noted, the Founders did not create separate institutions with separate powers, but “separate institutions sharing powers.” 

 

And as if to reassure himself and other adherents, Barr insists that the Unitary view is neither “new”—even though it was cooked up in the 1980s by the Meese Justice Department and the Federalist Society—nor a “theory.” Barr says, “It is a description of what the Framers unquestionably did in Article II of the Constitution.” Yet aside from the veto power, he fails to discuss any actual Article II powers. And in the case of the veto, he fails to note that this power is found in Article I, and is generally understood as a legislative power exercised by the executive. Shouldn’t an Originalist take a passing interest in original text? Nor does he explain why Article II is brief and vague, compared to Congress’s lengthy and detailed Article I powers. What we know about that brevity and vagueness is that it reflected two facts: the Founders’ difficulty and disagreement in defining presidential powers, and the wish of a few Founders who favored a strong executive to leave that door open, hoping that future presidents might help solidify the office. That wish, of course, came true. 

 

Most of the latter part of Barr’s speech is devoted to a condemnation of the judiciary, which has not only set itself up as the “ultimate arbiter” of interbranch disputes but, worse, has “usurped Presidential authority” by the very act of hearing cases and ruling against asserted presidential powers. Underlying these complaints is the Unitary tenet that the courts have no right to rule in any area of claimed executive power. Barr vents his frustration at the extent to which Trump administration decisions and actions have found themselves tied up in court. Experts continue to debate what issues and controversies are or are not justiciable. But to assert by Unitary theory fiat that the courts cannot rule is to make an assertion found nowhere in the Constitution. And Barr also misses the fact that court rulings historically have most often favored executive powers. 

 

The Trump administration’s many questionable actions have raised both new and old concerns about the extent and reach of executive power. There is plenty of blame for abuses of power to spread around, most certainly including to Congress. But the Unitary theory offers no remedy to the power problems of the present era. And the idea that it somehow is an Originalist reading of constitutional powers would be laughable if it didn’t have so many adherents in the seats of power.  

Impeachment Has Always Been a Purely Political Process

 

What exactly is impeachment? A common view is that the House of Representatives judges that the president has committed a crime, then the Senate tries him. But the Constitution’s criterion of “high crimes and misdemeanors” does not correspond to anything specific in the U.S. criminal code. Even if it did, legal precedent has already established that a sitting President cannot be indicted. Hence, when special prosecutor Leon Jaworski presented indictments over the Watergate break-in, he listed President Nixon as an “unindicted” co-conspirator. Even if a prosecutor did manage to indict a sitting president, he would not face a trial. Only after being impeached, convicted in the Senate and expelled from office would he be liable to legal prosecution like other citizens.

 

Impeachability is therefore very much in the eye of the beholder. The Mueller probe was sometimes likened to a prosecutor returning his findings to a grand jury made up of Congress. But that was a false analogy. The Constitution is silent about what process is to be followed before articles of impeachment are voted on. No investigation is explicitly mandated. Strictly speaking, a newly elected House of Representatives could vote articles of impeachment on its first day, without any investigation. The reason this does not happen is because those favoring impeachment naturally prefer to be supported in the court of public opinion. But they needn’t rely on the outcome of an investigation if they are willing to gamble on the public’s support regardless. The Mueller probe disappointed House Democrats because it found no evidence of collusion with Russia and no evidence of obstruction of justice that could stand up in court. But it made no difference to Trump’s opponents. Adam Schiff and Eric Swalwell maintained that Trump was guilty of both before the Mueller probe was completed, and they maintain it now that it has come and gone. At the end of the day, they are banking on public opinion sharing their conviction that the president has committed a crime. Even if impeachment articles are voted in, however, the prospects of the Republican majority in the Senate convicting Trump are virtually zero.

 

Why doesn’t the Constitution make a president’s impeachment synonymous with a prosecution for breaking the law? In Germany, for example, the President can be impeached by either of the two legislative chambers. At that point, the case goes to a federal court, which decides if he is guilty and whether to remove him from office. To clarify why the American process is so complicated and indirect, we have to ask ourselves what the Founders meant impeachment for in the first place.

 

As I wrote in TYRANTS: POWER, INJUSTICE AND TERROR, one of the Constitution’s fundamental aims, according to Alexander Hamilton, was to forestall the emergence of an American tyrant — a “Catiline or Caesar.” Tyranny in the 18th century context need not connote full-blown monsters like Hitler. If a ruler violated Americans’ rights, like King George III taxing them without granting them representation in Parliament, that was tyranny. Ancient democracies like Athens, according to Hamilton, veered between the extremes of tyranny and anarchy. They could only rely on a virtuous statesman like Pericles to break this cycle. Such statesmen are very rare, and could prove to be tyrants in disguise — it’s too chancy. The “new” political science of the Enlightenment, Hamilton says, relies instead on the institutional division of powers, preventing one branch of government from tyrannizing over the others. Just as the president having the power of the purse would violate Congress’s jurisdiction, Congress’s ability to try a president like a law court would violate that of both the Executive and the Judicial branches. Rooted in Machiavelli’s analysis of the Roman Republic, the American constitution was designed to promote a peaceable political warfare among the three branches of government that would forestall the actual violence of civil strife resulting from one branch reigning supreme. As James Madison put it, the causes of vice are sewn into the human soul. We can’t remove those causes, but through the correct institutional mechanisms, we can impede their harmful effects.

 

In sum, the removal of a President from office through impeachment was designed as a purely political process. It is value neutral with respect to legal guilt or innocence. The Founders stacked the deck against it happening so that it would not be trivially invoked, which is why it’s happened only three times. In the case of Donald Trump, at the end of the day, all that matters is how the House and Senate vote. This may not be as morally satisfying to either side as outright condemnation or exoneration. But the Founders were wary of the power of moral outrage to spark the extremes of tyranny and anarchy characteristic of democracy in the past. Whatever the verdict regarding President Trump’s impeachment, the reaction will be confined to screaming pundits, not armed mobs. Or so we hope.

Neville Chamberlain, Sir Horace Wilson, & Britain's Plight of Appeasement

 

Adapted from Fighting Churchill, Appeasing Hitler by Adrian Phillips, published by Pegasus Books. Reprinted with permission. All other rights reserved.

 

In 1941, as his time in office drew to a close, the head of the British Civil Service, Sir Horace Wilson, sat down to write an account of the government policy with which he had been most closely associated. It was also the defining policy of Neville Chamberlain, the Prime Minister whom Wilson had served as his closest adviser throughout his time in office. It had brought Chamberlain immense prestige, but this had been followed very shortly afterwards by near-universal criticism. Under the title ‘Munich, 1938’, Wilson gave his version of the events leading up to the Munich conference of 30 September 1938, which had prevented – or, as proved to be the case, delayed – the outbreak of another world war at the cost of the dismemberment of Czechoslovakia. By then the word ‘appeasement’ had acquired a thoroughly derogatory meaning. Chamberlain had died in 1940, leaving Wilson to defend their joint reputation. Both men had been driven by the highest of motivations: the desire to prevent war. Both had been completely convinced that their policy was the correct one at the time and neither ever admitted afterwards that they might have been wrong.

 

After he had completed his draft, Wilson spotted that he could lay the blame for appeasement on someone else’s shoulders. Better still, it was someone who now passed as an opponent of appeasement. In an amendment to the typescript, he pointed out that in 1936, well before Chamberlain became Prime Minister, Anthony Eden, the then Foreign Secretary, had stated publicly that appeasement was the government’s policy. The point seemed all the more telling as Eden had been edged out of government by Chamberlain and Wilson in early 1938 after a disagreement over foreign policy. Eden had gone on to become a poster-boy for the opponents of appeasement, reaping his reward in 1940 when Chamberlain fell. Chamberlain’s successor, Winston Churchill, had appointed Eden once again as Foreign Secretary. Wilson was so pleased to have found reason to blame appeasement on Eden that he pointed it out a few years later to the first of Chamberlain’s Cabinet colleagues to write his memoirs.

 

Wilson’s statement was perfectly accurate, but it entirely distorted the truth, because it ignored how rapidly and completely the meaning of the word ‘appeasement’ had changed. When Eden first used the word, it had no hostile sense. It meant simply bringing peace and was in common use this way. ‘Appease’ also meant to calm someone who was angry, again as a positive act, but Eden never said that Britain’s policy was to ‘appease’ Hitler, Nazi Germany, Mussolini or Fascist Italy. Nor, for that matter, did Chamberlain use the word in that way. The hostile sense of the word only developed in late 1938 or 1939, blending these two uses of the word to create the modern sense of making shameful concessions to someone who is behaving unacceptably. The word ‘appeasement’ has also become a shorthand for any aspect of British foreign policy of the 1930s that did not amount to resistance to the dictator states. This is a very broad definition, and it should not mask the fact that the word is being used here in its modern and not its contemporary sense. The foreign policy that gave the term a bad name was a distinct and clearly identifiable strategy that was consciously pursued by Chamberlain and Wilson. 

 

When Chamberlain became Prime Minister in May 1937, he was confronted by a dilemma. The peace of Europe was threatened by the ambitions of the two aggressive fascist dictators, Hitler in Germany and Mussolini in Italy. Britain did not have the military strength to face Germany down; it had only just begun to rearm after cutting its armed forces to the bone in the wake of the First World War and was at the last gasp of strategic over-reach with its vast global empire. Chamberlain chose to solve the problem by setting out to develop a constructive dialogue with Hitler and Mussolini. He hoped to build a relationship of trust which would allow the grievances of the dictator states to be settled by negotiation and to avoid the nightmare of another war. In other words, Chamberlain sought to appease Europe through discussion and engagement. In Chamberlain’s eyes this was a positive policy and quite distinct from what he castigated as the policy of ‘drift’ that his predecessors in office, Ramsay MacDonald and Stanley Baldwin, had pursued. Under their control, progressive stages in aggression by the dictators had been met with nothing more than ineffectual protests, which had antagonised them without deterring them. 

 

Chamberlain’s positive approach to policy was the hallmark of his diplomacy. He wanted to take the initiative at every turn, most famously in his decision to fly to see Hitler at the height of the Sudeten crisis. Often his initiatives rested on quite false analyses; quite often the dictators pre-empted him. But Chamberlain was determined that no opportunity for him to do good should be allowed to escape. The gravest sin possible was the sin of omission. At first his moves were overwhelmingly aimed at satisfying the dictators. Only after Hitler’s seizure of Prague in March 1939 did deterring them from further aggression become a major policy goal. Here, external pressures drove him to make moves that ran counter to his instincts, but they were still usually his active choices. Moreover, the deterrent moves were balanced in a dual policy in which Hitler was repeatedly given fresh opportunities to negotiate a settlement of his claims, implicitly on generous terms. 

 

Appeasement reached its apogee in the Czech crisis of 1938. Chamberlain was the driving force behind the peaceful settlement of German claims on the Sudetenland. He was rewarded with great, albeit short-lived, kudos for having prevented a war that had seemed almost inevitable. He also secured an entirely illusory reward, when he tried to transform the pragmatic and unattractive diplomatic achievement of buying peace with the independence of the Sudetenland into something far more idealistic. Chamberlain bounced Hitler into signing a bilateral Anglo-German declaration that the two countries would never go to war. Chamberlain saw this as the first building block in creating a lasting relationship of trust between the two countries. It was this declaration, rather than the dismemberment of Czechoslovakia under the four-power treaty signed by Britain, France, Germany and Italy, that Chamberlain believed would bring ‘peace for our time’, the true appeasement of Europe. At the start of his premiership, Chamberlain had yearned to get ‘onto terms with the Germans’; he thought that he had done just that. 

 

Appeasing Europe through friendship with the dictators also required the rejection of anything that threatened this friendship. One of the most conspicuous threats was a single individual: Winston Churchill. Almost from the beginning of Hitler’s dictatorship Churchill had argued that it was vital to Britain’s interests to oppose Nazi Germany by force, chiefly by rearming. Unlike most other British statesmen, Churchill recognised in Hitler an implacable enemy and he deployed the formidable power of his rhetoric to bring this home in Parliament and in the press. But Churchill was a lone voice. When he had opposed granting India a small measure of autonomy in the early 1930s, he had moved into internal opposition within the Conservative Party. Only a handful of MPs remained loyal to him. Churchill was also handicapped by a widespread bad reputation that sprang from numerous examples of his poor judgement and political opportunism.

Chamberlain was determined on a policy utterly opposed to Churchill’s view of the world. He enjoyed a very large majority in Parliament and faced no serious challenge in his own Cabinet. Chamberlain and Wilson were so convinced that their policy was correct that they saw opposition as dangerously irresponsible and had no hesitation in using the full powers at their disposal to crush it. Churchill never had a real chance of altering this policy. Bringing him back into the Cabinet would have sent a signal of resolve to Hitler, but this was precisely the kind of gesture that Chamberlain was desperate to avoid. Moreover, Chamberlain and Wilson each had personal reasons to be suspicious of Churchill as well as sharing the prevalent hostile view of him that dominated the political classes. Wilson and Churchill had clashed at a very early stage in their careers and Chamberlain had had a miserable time as Churchill’s Cabinet colleague under Prime Minister Stanley Baldwin. Chamberlain and Wilson had worked closely to fight a – largely imaginary and wildly exaggerated – threat from Churchill’s support for Edward VIII in the abdication crisis of 1936.

 

Churchill was right about Hitler and Chamberlain was wrong. The history of appeasement is intertwined with the history of Churchill. According to legend Churchill said, ‘Alas, poor Chamberlain. History will not be kind to him. And I shall make sure of that, for I shall write that history.’ Whatever Churchill might actually have said on the point barely matters; the witticism expresses a mindset that some subsequent historians have striven to reverse. The low opinion of Chamberlain is the mirror image of the near idolatry of Churchill. In some cases, historians appear to have been motivated as much by dislike of Churchill – and he had many flaws – as by positive enthusiasm for Chamberlain. Steering the historical debate away from contemporary polemic and later hagiography has sometimes had the perverse effect of polarising the discussion rather than shifting it onto emotionally neutral territory. Defending appeasement provides perfect material for the ebb and flow of academic debate, often focused on narrow aspects of the question. At the last count, the school of ‘counter-revisionism’ was being challenged by a more sympathetic view of Chamberlain.

 

Chamberlain’s policy failed from the start. The dictators were happy to take what was on offer but gave next to nothing in return, and Chamberlain entirely failed to build worthwhile relationships. His advocates face the challenge that the policy failed completely. His defenders advance variants of the thesis that Wilson embodied in ‘MUNICH, 1941’: that there was no realistic alternative to appeasement given British military weakness. This argument masks the fact that it is practically impossible to imagine a worse situation than the one that confronted Churchill when he succeeded Chamberlain as Prime Minister in May 1940. The German land attack in the west was poised to destroy France, exposing Britain to a German invasion. It also ducks the fact that securing peace by seeking friendship with the dictators was an active policy, pursued as a conscious choice and not imposed by circumstances.

Chamberlain’s foreign policy is by far the most important aspect of his premiership and the attention that it demands has rather crowded out the examination of other aspects of his time at Downing Street. Discussion of his style of government has focused on the accusation that he imposed his view of appeasement on a reluctant Cabinet, which has been debated with nearly the same vigour as the merits or otherwise of the policy itself. In the midst of this, little attention has been paid to Wilson, even though Chamberlain’s latest major biographer – who is broadly favourable to his subject – concedes he was ‘the éminence grise of the Chamberlain regime … gatekeeper, fixer and trusted sounding board’. Martin Gilbert, one of Chamberlain’s most trenchant critics, made a start on uncovering Wilson’s full role in 1982 with an article in History Today, but few have followed him. There have been an academic examination of his Civil Service career and an academic defence of his involvement in appeasement. Otherwise, writers across the spectrum of opinions on appeasement have contented themselves with the unsupported assertion that Wilson was no more than a civil servant. Wilson does, though, appear as a prominent villain along with Chamberlain’s shadowy political adviser, Sir Joseph Ball, in Michael Dobbs’s novel about appeasement, Winston’s War.

 

Dismissing Wilson as merely a civil servant begs a number of questions. The British Civil Service has a proud tradition and ethos of political neutrality, but it strains credulity to expect that this has invariably been fully respected. Moreover, at the period when Wilson was active, the top level of the Civil Service was still evolving, with many of its tasks and responsibilities being fixed by accident of personality or initiative from the Civil Service side. Wilson’s own position as adviser to the Prime Minister with no formal job title or remit was unprecedented and has never been repeated. Chamberlain valued his political sense highly and Wilson did not believe that his position as a civil servant should restrict what he advised on political tactics or appointments. Even leaving the debate over appeasement aside, Wilson deserves attention. 

 

Wilson was so close to Chamberlain that it is impossible to understand Chamberlain’s premiership fully without looking at what Wilson did. The two men functioned as a partnership, practically as a unit. Even under the extreme analysis of the ‘mere civil servant’ school whereby Wilson was never more than an obedient, unreflecting executor of Chamberlain’s wishes, his acts should be treated as Chamberlain’s own acts and thus as part of the story of his premiership. It is practically impossible to measure Wilson’s own autonomous and distinctive input compared to Chamberlain’s, but there can be no argument that he represented the topmost level of government.

 

Wilson’s hand is visible in every major aspect of Chamberlain’s premiership and examining what he did throws new light almost everywhere. Wilson’s influence on preparations for war – in rearming the Royal Air Force and developing a propaganda machine – makes plain that neither he nor Chamberlain truly expected war to break out. Among the most shameful aspects of appeasement were the measures willingly undertaken to avoid offending the dictators, either by government action or by comment in the media; Wilson carries a heavy responsibility here.

 

Above all it was Wilson’s role in foreign policy that defined his partnership with Chamberlain and the Chamberlain premiership as a whole. He was also the key figure in the back-channel diplomacy pursued with Germany that showed the true face of appeasement. Wilson carries much of the responsibility for the estrangement between Chamberlain and the Foreign Office, which was only temporarily checked when its political and professional leaderships were changed. Chamberlain and Wilson shared almost to the end a golden vision of an appeased Europe, anchored on friendship between Britain and Germany, which was increasingly at odds with the brutal reality of conducting diplomacy with Hitler. The shift to a two-man foreign policy machine culminated in the back-channel attempts in the summer of 1939 intended to keep the door open to a negotiated settlement of the Polish crisis with Hitler, but which served merely to convince him that the British feared war so much that they would not stand by Poland. Chamberlain and Wilson had aimed to prevent war entirely; instead they made it almost inevitable.

Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173728
How Tony Kushner’s A Bright Room Called Day Can Help Us Understand Our Political Moment

Tony Kushner’s A Bright Room Called Day, which opened at the Public Theater in New York on October 29, is a rare bird—a revival (with a substantial re-write) that proves to be more timely and incisive than the original was. In my opinion, the play does not work terribly well theatrically—I was not moved by any of the characters—but it is a good play to think about. 

 

The main question that the play poses is one asked by the nineteenth-century Russian thinker Nikolai Chernyshevsky and later Lenin: “What Is to be Done?” More specifically, in this case, “How should we respond to kleptocratic authoritarianism?” 

 

The success of the play as a thought experiment depends on one’s willingness to conceive of the parallels between historical periods. In its first avatar, performed in 1985, the play focuses on a group of friends in Berlin in 1932-33, and draws parallels between that time and the early years of Reagan’s second term, in order to register what it regarded as the incipient fascism of the mid-1980s United States. Breaking the realistic framework, a woman from Reagan’s time unaccountably appears, urging the young Berliners to flee, or at least to do something. The play was panned by critics like Frank Rich, who, writing in the New York Times in 1991, found it “fatuous” and “infuriating.” He felt that Kushner had gone too far in making a simplistic and reductive comparison of Reagan’s America to Hitler’s Germany. 

 

In 2019, however, after a revision in which what is new makes up about 40% of the play, the comparison of the thirties with the later period appears more firmly based and even prescient. In the revised and expanded version, Kushner includes a second character from the future of the Berlin characters—the author, who speaks in our present with the emissary from the eighties. Uncertain about the value of what he is doing and has done—writing plays—the author asks, “Can theater make any [political] difference?” His willingness to examine his own choices and to ask meta-theatrical questions in the theater makes him an appealing figure. 

 

In defense of the parallels he draws between the 30s, the 80s, and the late 2010s, he argues that if one set of events (such as the Holocaust or Shoah) is established as the standard against which all others are to be judged, and yet no others are allowed to be comparable to it, then it is not useful as a point of reference. One cannot, he maintains, exempt one period from the realm of historical comparisons, although, I would add, one must be careful and responsible when drawing them.

 

If one allows this argument, then Kushner’s central political insight in the play is strong: Trumpism was not a sudden anomaly; rather, decades under recent Republican presidents prepared the way for the current embrace of unconstitutional reactionary authoritarianism by eight or nine of every ten Republicans. Having been proudly anti-intellectual, Presidents Reagan and G. W. Bush denied, as Trump denies, that reason should play a role in the conduct of the country’s affairs. This elevation of irrationality, of going with the gut, is linked with a dangerous animus against the federal government that threatens the Enlightenment basis of the American republic, as Jon Meacham has recently argued. 

 

Reagan and Trump made blatant appeals to the racial resentments of whites, the former by inveighing against “welfare queens and cheats” and by declaring for “states’ rights” in Philadelphia, Mississippi, where three young civil rights workers were murdered two decades earlier. Trump similarly found “very fine people” among both neo-Nazis and anti-fascist demonstrators. Presidents Reagan and G. W. Bush, like Trump, routinely made statements contrary to the truth. Reagan told so many falsehoods, confusing what happened in movies with what happened outside of films, that the press and media stopped reporting them. George W. Bush deceived the country about warrantless, illegal surveillance of millions of Americans, about the grounds for an unjustified war of aggression against Iraq, and about his authorization of the use of torture by the US. Both helped prepare a major party to accept the barrage of falsehoods that Donald Trump launches every day.  

 

To return to the central question Kushner’s play raises—how we can respond to kleptocratic authoritarianism—it is worth remembering the maxim that the first casualty in war is truth, and, as Thucydides observes, this is especially the case in civil war. As Masha Gessen writes, language means something until it doesn’t. Take the use under G.W. Bush of “regime change” instead of “invasion,” or the replacement of “torture” with “enhanced interrogation techniques”—as though prohibiting the word makes the thing go away.    

 

One of the prime examples of the deformation of language in the current moment comes from the language used to describe the fractious state of the polity itself—the language of “polarization,” and the supposed diagnosis that we are being “tribal” when we occupy one side or the other of an issue. In fact, however, the allowed spectrum of opinions—the Overton window—has moved far to the right in the last forty years. Ethnic nationalism can now be overtly advocated by a nominee for one of the second-highest courts in the land, while Democrats have been moving toward more centrist, not more extreme positions during the same decades.  

 

I do not know how to move the public discourse back toward the old center in, say, the late 70s, but writers can raise the question of what should be done, as Tony Kushner does in A Bright Room Called Day, and they can point out and bear witness to the corruption of language in their own time, as George Orwell did in his 1946 essay “Politics and the English Language.”  

Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173731
A Review of Amazon Prime’s Series Dostoevsky

 

The great Russian writer Fyodor Dostoevsky has influenced many writers, such as William Faulkner, as well as countless other readers around the world. Americans unfamiliar with his life, and perhaps even with some of his greatest works like Crime and Punishment and The Brothers Karamazov, can now get to know him via Amazon Prime’s 8-part subtitled series Dostoevsky, directed by the Russian Vladimir Khotinenko.

 

The series first appeared in 2011 (in 7 parts) on Rossiia 1 television channel, and a Western expert on Russian literature and film, Peter Rollberg, then wrote, “In scope and quality, Khotinenko’s 7-part biopic can be compared to the best HBO and Showtime history dramas, such as John Adams (2008) and The Tudors (2007-2010).”

 

Indeed, the series has much to recommend it: good acting (especially by Evgenii Mironov as Dostoevsky), picturesque scenery (e.g., in St. Petersburg and foreign sites such as Baden-Baden), and a fascinating story that, despite taking some artistic liberties, depicts well the tumultuous and eventful life of one of Russia’s greatest writers. Each episode begins with Dostoevsky sitting for the famous portrait of him painted by V. Perov in 1872. 

 

As we watch the almost eight hours of the series, we witness some of the main events of his adult life beginning with his traumatic experience on a December morning in 1849 when he and other prisoners stood on a St. Petersburg square, heard their death sentences read out, and expected to be shot by a firing squad. In his late twenties by then, Dostoevsky had already gained some fame as a writer, but became involved with dissidents whom the reactionary government of Tsar Nicholas I considered treasonous. Only at the last minute did a representative of the tsar bring word that Nicholas I was going to spare the lives of the condemned, and Dostoevsky spent the next four years in a Siberian prison camp in Omsk, which he later described in his novel "The House of the Dead."

 

The first episode of the series is set mainly in this camp, and the gloomy existence of the prisoners may be off-putting to some viewers. But the experience was important to Dostoevsky. Himself the son of a serf-owning Moscow doctor, he was forced to mix with less educated common criminals, but came to appreciate their Russian Orthodoxy, their religion of Christ, of sin and suffering, of resurrection and redemption. He came to regret his earlier rebellious ideas, influenced by Western European utopian thinkers. His prison experiences convinced him that the only path for Russian intellectuals to follow was one that united them with the common people and their religious beliefs.

 

Through a variety of techniques, usually by having Dostoevsky state his convictions or argue with someone like the writer Turgenev, the series conveys his post-prison populism and Russian nationalism. In one scene in Episode 3, a young man at a dinner table tells Dostoevsky that he left St. Petersburg for prison camp “a dissenter and socialist, and you returned a defender of The Throne and Orthodoxy.” Toward the beginning of Episode 6, Dostoevsky tells the painter Perov, “The ones seeking freedom without God will lose their souls . . . . Only the simple-hearted Russian nation . . . is on the right way to God.” 

 

But the series is more concerned with depicting his personality and love life, which begins to manifest itself during Episode 2, set mainly in the Central Asian-Siberian border town of Semipalatinsk (present-day Semey). Dostoevsky served in the army there for five years (1854-1859) before finally being allowed to return to European Russia. But his service allowed him sufficient time for writing and mixing with some of the town’s people, including Maria Isaeva, a somewhat sickly, high-strung, strong-willed woman in her late twenties. 

 

Episodes 2 and 3 depict the writer’s stormy relations with her in Siberia and then in their early days in St. Petersburg. After she leaves Semipalatinsk to accompany her husband, who has taken a new job in the distant town of Kuznetsk (today Novokuznetsk), he soon dies. Dostoevsky makes a secret and unlawful trip to this Siberian city, but has to contend with a younger rival, a schoolteacher, for Maria’s affection. Finally, after much agonizing by both Maria and Dostoevsky and another trip to Kuznetsk, the two marry there in February 1857.

 

While in Semipalatinsk, Dostoevsky makes a written appeal to his brother, Mikhail, and to an aunt for money. The writer’s financial difficulties, later exacerbated by gambling losses, will remain a consistent theme for most of the rest of the series.

 

After finally being allowed to settle in St. Petersburg in late 1859, Dostoevsky renews acquaintance with Stepan Yanovskiy, a doctor friend, and is introduced to his wife, the actress Alexandra Schubert. She and Dostoevsky soon become romantically involved, while Maria shows increasing signs of having consumption (tuberculosis); she died of it in 1864. Dostoevsky’s own main health problem was epilepsy, and occasionally, as at the end of Episode 3, we see him having a seizure.

 

In Episode 4, we are introduced to a young woman, Apollinaria Suslova, who for several years became Dostoevsky’s chief passion. Young enough to be his daughter, she reflected some of the youthful radicalism of the Russian 1860s. An aspiring writer herself, she was fascinated by the older author, and eventually had sexual relations with him. But their relations were stormy, often mutually tormenting, and while traveling in Western Europe together, she sometimes denied him any intimacy. A fictionalized portrait of her can be found in the character of Polina in Dostoevsky’s The Gambler (1866).    

 

In Episodes 4 through 8, we sometimes see Dostoevsky at the roulette tables from 1863 to 1871 in such places as Wiesbaden, Baden-Baden, and Saxon-les-Bains, usually losing, and from 1867 to 1871 most often travelling with his second wife, Anna (nee Snitkina), whom he first met when she came to him to work as a stenographer in 1866 to help him complete The Gambler and Crime and Punishment. 

 

But Anna does not appear until Episode 6, and only after Dostoevsky’s infatuation with two very young sisters, Anya and Sofia (Sonya) Korvin-Krukovskaya, the latter of whom later became a famous mathematician. Nevertheless, once Anna appears she remains prominent for the remainder of the series, first as his stenographer, then as his wife and the mother of his children. In Episode 7, they travel to Western Europe, where they remain in places like Baden-Baden and Geneva until 1871, when they return to Russia. Throughout their marriage, Anna remains the level-headed, common-sense wife who tolerates and loves her much older mercurial husband. But the couple had their up-and-down moments, including the deaths of two children. The first to die, Sofia (Sonya), does so in May 1868, at the end of Episode 7.

 

Thus, to deal with the rest of Dostoevsky’s life (he died in January 1881), the Russian filmmakers left themselves only one episode, number 8. And much happened in those dozen years, including the couple’s return to Russia; the birth of more children (two boys and a girl, but the youngest, Alyosha, died in 1878); trips to Bad Ems for emphysema treatments; and major writings (the novels The Idiot, The Possessed, The Adolescent, and The Brothers Karamazov and his collection of fictional and non-fictional writings in A Writer’s Diary). There are brief mentions and/or allusions to these writings, but not much. 

 

Dostoevsky often encounters people whose last names are never given. In Episode 8, for example, he visits the dying poet and editor Nikolai Nekrasov, but only those already familiar with Dostoevsky’s biography might realize who he is. The final scene in that last episode shows Dostoevsky and a young bearded man, whom he addresses as Vladimir Sergeevich, sitting on hay behind a horse-and-carriage driver on their way to the famous Optina Monastery.

 

This young man, not further identified in the series, although only in his mid-twenties, was in fact the already well-known philosopher Vladimir Soloviev, son of Sergei Soloviev, who by his death in 1879 had completed 29 volumes of his History of Russia from the Earliest Times. Dostoevsky had read some of his history and earlier that year had attended Vladimir’s "Lectures on Godmanhood." At one of these talks attended by Dostoevsky, Leo Tolstoy was present, but the two famous writers never met each other. During the remaining 22 years of his life, Soloviev went on to develop many philosophic and theological ideas and to influence later religious thinkers including Dorothy Day and Thomas Merton.

 

At Optina, Dostoevsky sought consolation for the death of his son Alyosha by talking to the monk Ambrose, who became the model for Father Zossima in The Brothers Karamazov. And the brothers Alyosha and Ivan Karamazov, in different ways, reflect the influence of the young Soloviev.

 

Of the novel itself, one of Dostoevsky’s most influential, little is said in the series except when, in the final episode, Dostoevsky tells a police official that he plans to write a work about a hero who goes through many phases and struggles with the question of the “existence of God.” The Brothers Karamazov deals with that question, but also, of course, with much more. And Dostoevsky did not live long enough to complete The Life of a Great Sinner, a work he had long contemplated but only managed to include portions of in some of his great novels. 

 

Despite the many positive aspects of the 8-part series, it only hints at Dostoevsky’s relevance for our times. Just a few examples are his significance for understanding 1) Vladimir Putin and his appeal to Russians, 2) terrorism, and 3) whether or not to accept the existence of God and the implications of faith vs. agnosticism. 

 

Regarding his influence on Putin, an excellent article by Russian-expert Paul Robinson thoroughly examines the question. He begins his essay by writing, “I’ve spent the last week ploughing through the 1,400 pages of Fyodor Dostoevsky’s Writer’s Diary. . . . The experience has left me pretty well acquainted with the writer’s views on the Russian People (with a capital ‘P’), Europe, the Eastern Question, and Russia’s universal mission. I’ve also just finished writing an academic article which discusses, among other things, references to Dostoevsky in Vladimir Putin’s speeches.”  

 

In novels such as Notes from the Underground, Crime and Punishment, and The Possessed, Dostoevsky reflected on and provided insight into the thinking of many a terrorist. As one essay on his insight into terrorism indicates, Theodore Kaczynski, the Unabomber, “was an avid reader of Dostoevsky.” Freud wrote on the great Russian writer and appreciated some of his insights into what is sometimes referred to as “abnormal psychology.” Some even claim that Dostoevsky “ought to be regarded as the founder of modern psychology.”

 

Regarding the existence of God, it is The Brothers Karamazov that is most often cited, especially its chapters on “Rebellion” and “The Grand Inquisitor,” where the brothers Ivan and Alyosha discuss whether to accept or reject God. Ivan rejects God because he cannot accept any God that would allow innocent suffering, especially that of little children. In The Rebel, the agnostic Camus devotes his chapter “The Rejection of Salvation” to Ivan’s stance.  

 

In summary, this reviewer’s advice: enjoy Amazon’s Dostoevsky but then go on to read more by and about him. You can even download his great novels and many of his other works at the Project-Gutenberg-Dostoevsky site.     

 

Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173727
Losing Sight of Jefferson and Falling into Plato

The Death of Socrates, by Jacques-Louis David (1787)

 

Many professors at higher-level academic institutions profess to be practitioners of a Socratic method of teaching, in which students arrive at understanding through a teacher “pestering” them with probing questions that lead to self-searching. Many, if not most, such practitioners presume that the method facilitates true learning. Socratic teaching and learning are not reducible, in John Dewey’s words, to pouring information into the heads of students; they are a matter of drawing out what is, in some sense, already there.

Socrates (469–399 B.C.) was one of Ancient Athens’ most unusual citizens. He professed a profound love of Athens insofar as he claimed he could never offer a return to Athens (or its citizens) as valuable as what he had, throughout his life, gained from the cosmopolitan polis.

 

What was unique about Socrates was his zetetic (seeking) manner of living. He renounced all pleasures other than the pleasure he experienced in searching for knowledge, which he said he never possessed. He claimed to be wise only inasmuch as he recognized that he, unlike other prominent Athenians, understood that he knew nothing, and thus, that human wisdom counted for nothing next to divine wisdom.

 

Never claiming to have any (real) knowledge—that is, knowledge of substantive things such as the virtues of piety, wisdom, justice, self-control, and courage—he spent his days throughout his life in pursuit of knowledge. His daily zetetic activity showed that he did not consider that activity fatuous, which demonstrates that he, at least, thought that the acquisition of knowledge was humanly possible. Otherwise, his life would have been as pointless as searching for great-tailed grackles, birds endemic to hot places like Texas, in Anchorage, Alaska.

 

Socrates’ method of pursuit was elenchus—a method of dialectical exchange in which the one chiefly in pursuit of knowledge, usually Socrates, asked an interlocutor a number of questions, pointedly articulated to get at the nature of a particular virtue, or even virtue in general. After an initial definition was offered in response to an initial question—e.g., “What is virtue?”—further questions would be crafted to expose insufficiencies in the proposed definition—e.g., “Whatever is just is virtuous” (Meno)—with the expectation of either refinement of the proposed definition or the proposal of a different one, in keeping with the flaws exposed by those later questions.

 

The end of all such elenctic Socratic dialogs was aporia—a state of puzzlement or confusion which characterizes an interlocutor who came to recognize that he did not or might not know what he had thought he knew. In Plato’s Socratic dialogs, interlocutors in aporia often walk away in anger from Socrates, but sometimes walk away accepting that they must now seek the knowledge they hitherto thought that they had.

 

Socrates, Plato tells us in Apology, made few friends through his practice as he showed prominent Athenian politicians, poets, and craftsmen that they did not know the things they thought that they knew (things such as justice, piety, and beauty). He doubly irritated many, as the youth often imitated his methods. Hence, he was ultimately sentenced to death because of numerous charges, each reducible to corruption of the young.

 

Socrates was not supposed to be sentenced to death, for Athenian democracy was in the main largely tolerant of differences of opinion—even somewhat disruptive differences of opinion (consider Epicurus’ philosophical Garden, which, situated just outside Athens, preached a minimalist sort of hedonism through freedom from mental disquiet, ataraxia, and social withdrawal). Socrates merely could not promise to stop doing what was considered by many to be so disruptive of daily Athenian affairs—his daily dialectic.

 

There are too few genuine Socrateses in today’s higher education and even fewer students willing to be challenged dialectically. Anyone who has taught in higher education for more than two decades has doubtless come to recognize that the “New Millennials” and even the generation beyond them, the students born in or after 2000, are difficult, if not impossible, to teach. They are, in the words of Professor Elayne Clift, “devoid of originality, analytical ability, [and] intellectual curiosity.” She continues, “Having passed through a deeply flawed education system in which no one is paying attention to critical thinking and writing skills, they just want to know what they have to do to make their teachers tick the box that says ‘pass.’” All the other teachers do that.

 

Why? Students have an effective tool in rage, fueled by a sense of academic entitlement, which many use as a security blanket. When they perform poorly, as they often do, they blame their poor performance on poor teaching. Teachers, under pressure to earn strong evaluations, seldom challenge this blame. They are too afraid of poor evaluations and direct complaints, which may readily result in loss of adjunct work or failure to attain tenure. Many teachers, I believe, now sheepishly accept the notion that they, not the students, are the problem. Moreover, students are also in a position of power because they are viewed as consumers, and institutions are in the market to attract as many students each year as they can. Education is increasingly following the pattern of successful businesses, which follow the mantras: The more customers, the more money, and the customer is always right.

 

Socratic teaching in such a milieu is impossible, because it is designed principally to expose ignorance. Today’s students are entitled to a passing grade without working because they already know everything they really need to know. Each is a sun of his own solar system—no binary stellar systems here!—and the orbiting bodies orbit for the sake of that sun. Says Daniel Mendelsohn, “Perhaps because they have received more attention than any generation in the history of Homo sapiens, millennials seem to be convinced that every aspect of their existence, from their love lives to their struggles with reverse peristalsis, is of interest not just to their parents but to everyone else as well.”

 

Plato millennia ago saw the problem as one of pure democracy (demokratia). In a democracy, according to Plato in Republic, each person thinks of himself as the equal of all others in all ways. The city is full of freedom and free speech and everyone is free to do what he wants to do when he wishes to do it. Democracy, constitutionally, is a “supermarket of constitutions,” as it embraces all persons and all rules. “[A democrat], always surrendering himself to whichever desire comes along, lives as if it were chosen by lot” (557a–561b). 

 

Thomas Jefferson, of course, was aware of the pitfalls of democracy. There can be no such thing as a pure democracy over a large expanse of land, he avers, but only in a small parcel, such as a ward, where smallness of political space enables all to have an equal share in political matters. Hence, Jefferson, while in France, speaks to James Madison (30 Jan. 1787) of the need of representative government, where “the will of every one has a just [and not a direct] influence,” for affairs of state and country. Even with representative government, he continues to Madison, “the mass of mankind … enjoys a precious degree of liberty & happiness.” Nonetheless, “it has it’s evils too: the principal of which is the turbulence to which it is subject,” because of its embrace of freedom of expression.

 

Yet the pitfall of turbulence is often massively misconstrued or hyperbolized by scholars—e.g., Conor Cruise O’Brien. Jefferson writes thus in a letter to Madison (20 Dec. 1787) of Shays’ Rebellion, an event which horrified many, especially New Englanders. “The late rebellion in Massachusetts had given more alarm than I think it should have done. Calculate that one rebellion in 13 states in the course of 11 years, is but one for each state in a century & an half. No country should be so long without one.” Earlier in the same year (Jan. 30), he writes to Madison, “A little rebellion now and then is a good thing & as necessary in the political world as storms in the physical.” Some six years later (3 Jan. 1793), he writes to William Short of the sanguinary effects of the French Revolution: “My own affections have been deeply wounded by some of the martyrs to this cause, but rather than it should have failed, I would have seen half the earth desolated. Were there but an Adam and an Eve left in every country, and left free, it would be better than as it now is.” The bloodshed of a rebellion is, of course, abominable in the short term, but “this evil is productive of good. It prevents the degeneracy of government, and nourishes a general attention to the public affairs.” In sum, what appears as a pitfall of democracy on a large scale is really its great strength.

 

Jefferson did have an antidote—a means of preventing too many rebellions. That was periodic constitutional reform, effected through robust discussions by the people themselves. John Stuart Mill agrees. The strength of a vital democracy, Mill notes, is not only its tolerance of difference of opinions, but also the vitality with which it aims to iron out those differences through respectful, progressive debate. Robert Healey agrees: “Democracy is a means of determining courses of action through use of open and admitted conflict of opinion. Its ideal is not the achievement of a homogeneous society, but true cooperation, the working together of different people and groups who have deliberated with each other.” Thus, the aim of democratic thriving is citizens’ engagement with the institution through collisions of ideas. Those collisions are not purposeless, but aim at truth or at least heightened understanding of problems or issues in a solemn effort toward resolution.

 

Robust discussion is what is missing in the educative climate of the twenty-first century which worships a new, radical liberalism—toleration of diversity of opinion as an end, not as a means. The teaching milieu in the twenty-first century embraces tolerance of differences, but not progress with the aim of ironing out those differences through respectful debate—the Jeffersonian ideal behind periodic constitutional renewal. Thus, with disavowal of the Jeffersonian ideal, there is avowal of Plato’s concept of democracy in which freedom has become an end, and a deteriorative end, and not a means to an end—human flourishing.

 

We ought to strive today to recognize and aim at the Jeffersonian democratic ideal, not the Platonic degenerative conception. Within the ideal of Jeffersonian republicanism, there is not only room for Socratic methods of teaching but also a need for them. Why? It is, as Jefferson noted, better to have turbulent liberty than quiet servitude.

Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173729
The Myth of the First Thanksgiving is a Buttress of White Nationalism and Needs to Go

 

Most Americans assume that the Thanksgiving holiday has always been associated with the Pilgrims, Indians, and their famous feast. Yet that connection is barely 150 years old and is the result of white Protestant New Englanders asserting their cultural authority over an increasingly diverse country. Since then, the Thanksgiving myth has served to reinforce white Christian dominance in the United States. It is well past time to dispense with the myth and its white nationalist connotations. 

 

Throughout the colonial era, Thanksgiving had no association whatsoever with Pilgrims and Indians. It was a regional holiday, observed only in the New England states or in the Midwestern areas to which New Englanders had migrated. No one thought of the event as originating from a poorly documented 1621 feast shared by the English colonists of Plymouth and neighboring Wampanoag Indians. Ironically, Thanksgiving celebrations had emerged out of the English puritan practice of holding fast days of prayer to mark some special mercy or judgment from God, after which the community would break bread. Over the generations, these days of Thanksgiving began to take place annually instead of episodically and the fasting became less strictly observed. 

 

The modern character of the holiday only began to emerge during the mid to late 1800s.  In 1863, President Abraham Lincoln declared that the last Thursday of November should be held as a national day of Thanksgiving to foster unity amid the horrors of the Civil War. Afterward, it became a tradition, with some modifications to the date, and spread to the South too. Around the same time, Americans began to trace the holiday  back to Pilgrims and Indians. The start of this trend appears to have been the Reverend Alexander Young’s 1841 publication  of the Chronicles of the Pilgrim Fathers, which contained the only primary source account of the great meal, consisting of a mere four lines. To it, Young added a footnote stating that “This was the first Thanksgiving, the harvest festival of New England.” Over the next fifty years, various New England authors, artists, and lecturers disseminated Young’s idea until Americans took it for granted. Surely, few footnotes in history have been so influential.

 

For the rest of the nation to go along with New England’s idea that a dinner between Pilgrims and Indians was the template for a national holiday, the United States first had to finish its subjugation of the tribes of the Great Plains and far West. Only then could its people stop vilifying Indians as bloodthirsty savages and give them an unthreatening role in a national founding myth. The Pilgrim saga also had utility in the nation’s culture wars. It was no coincidence that authorities began trumpeting the Pilgrims as national founders amid widespread anxiety that the country was being overrun by Catholic and then Jewish immigrants unappreciative of America’s Protestant, democratic origins and values. Depicting the Pilgrims as the epitome of colonial America also served to minimize the country’s longstanding history of racial oppression at a time when Jim Crow was working to return blacks in the South to as close to a state of slavery as possible and racial segregation was becoming the norm nearly everywhere else. Focusing on the Pilgrims’ noble religious and democratic principles in treatments of colonial history, instead of on the shameful Indian wars and systems of slavery more typical of the colonies, enabled whites to think of the so-called black and Indian problems as southern and western exceptions to an otherwise inspiring national heritage. 

 

Americans tend to view the Thanksgiving myth as harmless, but it is loaded with fraught ideological meaning. In it, the Indians of Cape Cod and the adjacent coast (rarely identified as Wampanoags) overcome their initial trepidation and prove to be “friendly” (requiring no explanation), led by the translators Samoset and Squanto (with no mention of how they learned English) and the chief, Massasoit. They feed the starving English and teach them how to plant corn and where to fish, whereupon the colony begins to thrive. The two parties then seal their friendship with the feast of the First Thanksgiving. The peace that follows permits colonial New England and, by extension, modern America, to become seats of freedom, democracy, Christianity and plenty. As for what happens to the Indians next, this myth has nothing to say. The Indians’ legacy is to present America as a gift to others or, in other words, to concede to colonialism. Like Pocahontas and Sacajawea (the other most famous Indians of Early American history) they help the colonizers then move offstage. 

 

Literally. Since the early twentieth century, American elementary schools have widely held annual Thanksgiving pageants in which students dress up as Pilgrims and Indians and reenact this drama. I myself remember participating in such a pageant which closed with the song, “My Country Tis of Thee.” The first verse of it goes: My country tis of thee/ Sweet land of liberty/ Of thee I sing./ Land where my fathers died!/ Land of the Pilgrim’s pride!/ From every mountain side,/ Let freedom ring!” Having a diverse group of schoolchildren sing about the Pilgrims as “my fathers” was designed to teach them about who we, as Americans, are, or at least who we’re supposed to be. Even students from ethnic backgrounds would be instilled with the principles of representative government, liberty, and Christianity, while learning to identify with English colonists from four hundred years ago as fellow whites. Leaving Indians out of the category of “my fathers” also carried important lessons. It was yet another reminder about which race ran the country and whose values mattered. 

 

Lest we dismiss the impact of these messages, consider the experience of a young Wampanoag woman who told this author that when she was in grade school, her teacher cast her, the lone Indian in the class, as Chief Massasoit in one of these pageants and had her sing with her classmates “This Land is Your Land, This Land is My Land.” At the time, she was just embarrassed. As an adult, she sees the cruel irony in it. Other Wampanoags commonly tell of their parents objecting to these pageants and to associated history lessons claiming that the New England Indians were all gone, only to have school officials respond with puzzlement at their claims to be Indian. The only authentic Indians were supposed to be primitive relics, not modern people, so what were they doing in school, speaking English, wearing contemporary clothing, and returning home to adults who had jobs and drove cars?

 

Even today, the Thanksgiving myth is one of the few cameos Native people make in many schools’ curriculum. Most history lessons still pay little to no heed to the civilizations Native Americans had created over thousands of years before the arrival of Europeans or how indigenous people have suffered under and resisted colonization. Even less common is any treatment of how they have managed to survive, adapt, and become part of modern society while maintaining their Indian identities and defending their indigenous rights. Units on American government almost never address the sovereignty of Indian tribes as a basic feature of American federalism, or ratified Indian treaties as “the supreme law of the land” under the Constitution. Native people certainly bear the brunt of this neglect, ignorance, and racial hostility, but the rest of the country suffers in its own ways too.   

 

The current American struggle with white nationalism is not just a moment in time. It is the product of centuries of political, social, cultural, and economic developments that have convinced a critical mass of white Christians that the country has always belonged to them and always should. The myth of Thanksgiving is one of the many buttresses of that ideology. That myth is not about who we were but about how past generations wanted us to be. It is not true. The truth exposes the Thanksgiving myth as a myth rather than history, and so let us declare it dead except as a subject for the study of nineteenth- and twentieth-century American cultural history. What we replace it with will tell future Americans about how we envision ourselves and the path of our society. 

 

Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173689
Bodhisattvas and Saints

A Christian depiction of Josaphat, 12th century manuscript

 

On a side of the baptistry of the Piazza Duomo in the northern Italian city of Parma, there is a portal designed and constructed in the late twelfth century and into the early thirteenth by the architect Benedetto Antelami. Above an entrance for catechumens – that is, converts to Catholicism – Antelami depicted two saints whose story was popular in the Middle Ages but who have become more obscure in subsequent centuries. St. Josaphat and St. Barlaam, the former an Indian prince and the latter the wandering holy man who converted him, are depicted in pink Verona marble at the side of the Romanesque church. Generations of Parmesans who worshiped and were baptized inside the structure passed underneath an entrance that told the tale of the “country of the Indians, as it is called… vast and populous, lying far beyond Egypt,” as described by St. John of Damascus in his seventh-century account of the two men. 

 

According to hagiographies, St. Josaphat had been the son of a powerful Indian ruler who’d taken to persecuting the Christians in his kingdom who had been converted by the Apostle St. Thomas. Tortured by a prediction that his son would also convert, the king sequestered young Josaphat away in a palace to prevent him from experiencing the suffering of the world. The prince escapes, however, and during his sojourns, he encounters an aged man, a leper, a dead body, and finally the mendicant monk Barlaam, who brings Josaphat to enlightenment. For many of you reading, this story may sound familiar, though for a medieval Italian it would have been simply another beloved saint’s tale. But St. Josaphat is a particularly remarkable Roman Catholic saint, for he’s normally known by a rather different title – the Buddha. 

 

The Buddha, as in the Christian legend which clearly developed from his story in a centuries-long game of telephone stretching across Eurasia, had been sequestered away within a luxurious palace, only to leave and encounter an elderly man, a leper, a corpse, and finally a wandering ascetic. In the European legend, St. Josaphat converts to Christianity and, for Christian believers, thus achieves salvation. For Buddhists, Siddhartha ultimately reached enlightenment and taught other sentient beings how to overcome the suffering which marks our existence. These are different things, of course, yet there is a certain congruence between the Buddha’s call in the Dhammapada to “Conquer anger with love, evil with good, meanness with generosity, and lies with truth” and Christ’s teaching in John 8:32 that the “truth shall set you free.” In the legend of St. Josaphat and St. Barlaam those congruences are made a little more obvious, even for all of the legend’s differences from its source material.  

 

Author Pankaj Mishra explains that long before the Western vogue for Buddhism associated with the 1960’s counterculture and the Beats, before the appropriation of Siddhartha’s story by Transcendentalists and Romantics, the Buddha “himself reached the West in the form of a garbled story of two Christian saints.” The origin of that story’s route into Christendom is obscure, even while the congruencies between the two narratives show a clear relationship. How Siddhartha Gautama, the historical Buddha venerated by half-a-billion Buddhists, and whose life pre-dates Jesus Christ by five centuries, became a popular medieval Christian saint is a circuitous, obscure, and enigmatic story which tells us something about the ways in which religions are porous countries, endlessly regenerative, continually borrowing from another, and generating shared stories of meaning.

 

So how did the Buddha end up carved on a baptistry portal in Parma, what scholar Gauranga Nath Banerjee describes as being among the “most curious thing borrowed by the Roman and Greek churches”? The etymological genesis of the name “Josaphat” is relatively straightforward, as the saint’s Latin name derived from the Greek “Ioasaph,” itself from the Arabic “Yudasaf,” back to the Sanskrit “Bodhisattva,” the Buddhist honorific for a person who has achieved enlightenment, but rather than extinguishing their suffering in nirvana opts to be reborn so as to help their fellow humans achieve peace. 

 

The gradual development of “Bodhisattva” into “Josaphat” provides a rough genealogy of the way in which the central narrative of Buddhism ended up in a Christian hagiography. Historian Lawrence Sutin explains that it was in the “mid-nineteenth century that scholars first concurred that the origins of the Barlaam and Josaphat legend lay in the traditional life story of the Buddha.” He goes on to explain how the narrative found its way into Western Europe from Greece, and before that Georgia, where it had in turn arrived from Persian sources associated with Manicheanism, a once influential and now extinct religion which venerated both Christ and the Buddha. Sutin takes pains to emphasize that narrative similarity doesn’t imply theological congruence, for it “cannot be demonstrated that any distinct Buddhist teaching had survived in the oft-mutated… legend.”

 

An important point, even as there is something beautiful and strange about the thought of medieval Christians worshiping underneath the mantle of the Buddha without even knowing it. Religions aren’t easily reducible into one another, to convert Buddhism into Christianity is to violate that which is singular about both. A certain strain of well-meaning comparative religious studies from the middle of the twentieth-century, often associated with Huston Smith’s classic textbook The Religions of Man, has a tendency to remake all of the tremendous diversity of world religions into versions of liberal Protestantism. There is an ecumenical tolerance implicit in the argument that all religions are basically the same, that they all share certain deep truths that can be easily translated into one another, but as scholar Stephen Prothero argues this is a “lovely sentiment but it is dangerous, disrespectful, and untrue.” When it comes to consilience between Buddhism and Christianity, a fair and honest accounting which is respectful to both traditions must admit that “salvation” and “enlightenment” are not the same thing, nor is “karma” equivalent to “sin,” and that “nirvana” is not another word for “heaven.” These things are different, and there is a significance and power in that. 

 

However, we still have St. Josaphat and St. Barlaam, Orthodox and Catholic saints rather than Bodhisattvas, but characters whose story still derives from those forgotten Buddhist sources. They may not demonstrate that all religions teach the same thing deep down; they aren’t examples of how faiths can be converted into one another or of teachings being easily translated. But they do demonstrate that in that ineffable domain between beliefs, in that meeting point where the mysteries of different religions can touch, there is a place for communication. Sutin writes that in the Roman martyrology, St. Josaphat and St. Barlaam’s “joint feast day is observed on November 27, a date that has been ignored by present-day Western Buddhists but might well serve as a time for celebration of longtime affinities between the two paths.” Religions may not be reducible to each other, but the example of the Parma baptistry is a reminder that faiths are ever-shifting countries whose borders are more porous than can be assumed. Something in the Buddhist story appealed to person after person in a chain that led from India to Italy and beyond, and even as that story was altered, it was the power of those characters and their narrative that exemplified a certain shared understanding. Within the space of that portal, there is room for meeting, for mutual understanding, for empathy, for reciprocity, for faith. For mystery.  

Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173707
Curing Ourselves with Fred Rogers

 

I find it hard not to be upset all the time about American politics, and the American society underneath. For me, things have been getting worse for a long time. Often, I find out that things had been even worse than I thought earlier, but I didn’t know the facts until they were uncovered by some journalist prying into our secretive government. The good news that honest investigations can reveal what powerful people want to keep secret doesn’t quite outweigh the bad news that these investigations reveal.

 

I don’t mean that I am upset all the time. At many times every day, I rejoice at my grandchildren and the children who are raising them, I root for some team on TV, I puzzle over a murder mystery, I accomplish do-it-yourself things all around our old house trying to recapture the past, or I kneel in the gardens pulling up weeds. Thoughts about America as a nation are swamped by the joys of one person’s everyday life.

 

But when those thoughts peek through, or take up all the air when we watch the news at night, they are unhappy ones. In the 15 years I have been writing columns about politics, I have identified all the big problems we face now. The names have changed, but the political ideas and underhanded methods persist. What is new is that those problems all seem more upsetting to me lately. I can identify when this condition began four years ago, as Trump came down the escalator to announce that he was campaigning for President.

 

I think the diagnosis is evident: I suffer from T.I.A.D., Trump-Induced Anxiety Disorder. There may be some help; please see this video for the wonder drug Impeachara.

 

Even if the drug doesn’t work, or exist, just thinking about it provides some temporary relief.

 

I think a longer-lasting cure may be available, one that’s been in front of us all the time. Liz and I joined lots of local baby boomers to see “It’s a Beautiful Day in the Neighborhood”. Fred Rogers was news to me. I had never watched his program and I only knew of his reputation, not him.

 

All the evidence I can find says that Mr. Rogers was just as he was portrayed by Tom Hanks: a lover and inspired teacher of children; impossibly nice to everyone around him; willing to talk to children about the most difficult subjects, like divorce and nuclear war; clever but transparent about using television to spread his message of love and tolerance.

 

Less well known is that he was a determined advocate for public television, that he was an ordained Presbyterian minister, and that he wrote all the songs for Mister Rogers’ Neighborhood. However far you dig into Fred Rogers’s life, he was a remarkably good man who spread goodness all around him.

 

Instead of stressing about Trump’s latest idiocy or the decline of American politics, about which we can do very little, we could try to emulate Mr. Rogers. We could see the world as an opportunity to make a difference in people’s lives and devote our energies to doing that.

 

I’m no Fred Rogers. Coming from New York, I could never talk that slowly. The rest of us are just not so good so much of the time. But that doesn’t matter. We can all inch our way toward goodness by thinking more about the real people right in front of us and less about the personalities we see on the screen and the news we get from people we don’t know.

 

That is really the message of my whole collection of articles. The way to take back our lives is to focus more on the immediate, to practice the principles we believe in, to wrest more control by being intentional whenever we can.

 

Mr. Rogers can’t save us, even though Esquire did put him on the cover of an issue about heroes. He wasn’t trying to save the world himself. He was doing his part as less than a billionth of humanity. If we want to be cured of T.I.A.D. without danger of remission, we all have to do our parts, for our own lives and for others.

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/blog/154283 https://historynewsnetwork.org/blog/154283 0
Roundup Top 10!  

Thanksgiving is a good time to lose our illusions about U.S. history

by Nick Alexandrov

We misread the past each November, when we consider our country’s earliest phase. We like to think tolerance, a love of liberty and a democratic impulse motivated English colonists. But history tells a different story.

 

Queer Like Pete

by Jim Downs

Buttigieg is getting slammed for being a type of gay man America doesn’t understand.

 

 

How to Talk About the Truth and Trump at Thanksgiving

by Ibram X. Kendi

If we are serious about bringing Americans together, the work has to start with our own families.

 

 

Trump's Toadies Should Take Note: Watergate Says Everyone Goes Down

by Kevin M. Kruse

The lesson Nixon imparts to today’s POTUS loyalists is that courts of law and of public opinion will judge them harshly.

 

 

Contrary to conservative claims, the ERA would help families — but it’s not enough

by Alison Lefkovitz

Decades after its introduction, the Equal Rights Amendment is still urgently needed, and passing it may soon be possible.

 

 

The apocalyptic myth that helps explain evangelical support for Trump

by Thomas Lecaque

Implicit is a vision of the president as a triumphantly apocalyptic figure, one who evokes the medieval legend of the Last World Emperor.

 

 

A 1970 Law Led to the Mass Sterilization of Native American Women. That History Still Matters

by Brianna Theobald

The fight against involuntary sterilization was one of many intertwined injustices rooted in a much longer history of U.S. colonialism. And that history continues to this day.

 

 

It’s Easy to Dismiss Debutante Balls, But Their History Can Help Us Understand Women’s Lives

by Kristen Richardson

The debutante ritual flourished roughly from 1780 to 1914—beginning with the first debutante ball in London and ending with the outbreak of World War I.

 

 

Après Moi, le Déluge...

by Tom Engelhardt

The Age of Trump, the End of What?

 

 

 

Trump’s xenophobia is an American tradition — but it doesn’t have to be

by Erika Lee

Some have always pushed to keep out immigrants, but people have always fought back, too.

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173726 https://historynewsnetwork.org/article/173726 0
The History Behind the Rocket Used in the Latest Attack Against Israel

Israel's Ze'ev rocket, c. 1969 (photo: Israel Defense Forces)

 

A shorter version of this article was posted in The Times of Israel.

It took a few days for the Israeli military to disclose that during last week's round of rocketing from Gaza by Islamic Jihad, a new weapon was introduced: a projectile packing a far larger explosive charge than those hitherto fired. Residents of a settlement in the Northern Negev related that it had caused a terrifying blast when it landed late at night, and in the morning they discovered that it had completely demolished one of their greenhouses, leaving an enormous crater. This pit was incomparably wider and deeper than the relative pockmarks that were made in roads and fields by the Qassam and Grad rockets of previous salvos, destructive as those had been when they scored a direct hit on a residential or industrial building. Imagine, some residents said, what would have happened if this "mega-rocket" had struck one of their houses; none of the reinforced shelters that have been built in residences of the region could have withstood it. If this threat persists and becomes routine, the residents feared, they might have to move away. Expectations of such a response, as well as operational considerations, were presumably part of the military's motivation to delay disclosing the matter.

 

Photos that the Palestinians posted of their new weapon looked eerily familiar to my co-researcher at the Hebrew University's Truman Institute, Isabella Ginor, and myself. The stubby rocket and its primitive-looking pipe-frame launcher, as well as its scary effect, closely resembled what we documented for our recent book The Soviet-Israeli War, 1967-1973. But then it was an Israeli development that figured centrally in the War of Attrition against Egypt along the Suez Canal – and presented a challenge to the Egyptians' Soviet advisers. Their accounts provided extensive detail about the Israeli rocket, which had remained top secret on the Israeli side for years afterward and whose role in the course of the war was therefore largely overlooked.

 

For the Palestinians now, as for the Israelis then, the purpose of wielding this blunderbuss was to counter the adversary's overwhelming advantage in firepower. The rapid Soviet resupply of Egypt's army after its devastating defeat in the Six-Day War soon had the small Israeli garrison east of the canal hopelessly outnumbered – by an estimated 13 to one – in artillery pieces as well as manpower to fire them. Israeli engineers got to work on a makeshift counterbalance, based on the heaviest variant of the Soviet Katyusha that had been captured in the June war.

 

A year later, a Soviet artillery adviser to the Egyptian II Army Corps on the canal, G. V. Karpov, was summoned to inspect the fragments of an Israeli rocket of a heavy and hitherto unfamiliar model, which had “left a big crater” when it was first used. As the canal front was still relatively quiet, this appears to have been a test firing. But Karpov got a better look when, on 8 September 1968, the Egyptians – encouraged by the Soviet advisers – unleashed the first massive shelling on the still-flimsy Israeli positions on the east bank of the canal. The Israeli response included, among other weapons, a number of Ze'evs.

 

The Israeli “flying bomb” was inaccurate, and Israeli operators would soon learn that it was prone to boomerang. Still, at virtually point-blank range it could cause a good deal of damage to positions that were hardened only against smaller shells. The intended, and successful, effect of its blast was what would be called, a half-century later, “shock and awe.” Although UN observers reported at least three such rockets fired on 8 September, Egypt – like Israel today – was in no hurry to publicize this, evidently for fear of sowing panic among residents of the towns along the canal's west bank. 

 

Israeli soldiers, from whose outposts the Ze’ev was launched by specialists, were not permitted to handle the top-secret weapon themselves. I and my fellow paratroop reservists were likewise warned when two big crates were installed in our strongpoint overlooking the Jordan, pointed at Jordanian or Palestinian targets across the river. They were described to us only, mysteriously, as "Ze'evim." We never got to witness a launch, but our counterparts on the canal front judged by the rocket's visible impact that it must deliver a half ton of high explosive.

 

Drawing on his expertise, Karpov calculated correctly that the rocket's warhead was actually less than one-fifth as big, and this too at the expense of very short range – 4km – which would put the launch sites within easy reach of Egypt's new, Soviet-supplied 130mm cannon. He began working out countermeasures, which were partly implemented on 26 October when the next artillery barrage was initiated.

 

The Egyptians claimed (but UN observers denied) that this round was provoked by Israel's firing of two 216mm rockets which destroyed houses in Port Tawfik, at the southern end of the canal. Out of the 14 rockets the Egyptians accused Israel of launching at civilian targets, they exhibited (and presumably turned over to Karpov for further study) one unexploded specimen, which they claimed was shot down by their anti-aircraft guns. If this was true, and the rocket wasn't simply a dud, it was quite a feat given the missile’s short trajectory. Israel's infinitely more sophisticated anti-missile array did not accomplish the same against the Palestinian "mega-rocket" last week.

 

Cairo also claimed that its big guns – clearly following Karpov's instructions – destroyed 10 “newly constructed” Israeli rocket-launch sites. The IDF, as before, denied using any missiles at all. Soviet advisers' memoirs confirmed years later that Egyptian firepower was “concentrated on the Israelis’ 216mm [rockets].” Unofficial Israeli accounts later admitted surprising hits on rear-line positions that had been considered out of range (and rumors even spread about female soldiers fleeing naked from a shower shed). In Egypt the entire engagement was henceforth referred to as “the missile incident” and would later be described as one of the Egyptians' major achievements.

 

Israel, which had sustained relatively heavy losses in those first two bombardments, took advantage of the lull that followed to construct the cannon-proof bunkers of the Bar-Lev Line. Egypt was slower to do the same, and even Karpov's guidelines did not put the Ze'ev entirely out of action. On 9 March 1969, the day after Egypt began the consecutive shelling and commando raids that would become the War of Attrition, a Ze'ev killed Egyptian Chief of Staff Abdel Moneim Riad and some of his officers while they inspected a frontline position.

 

Was the Ze'ev a game-changer? Besides the Soviet-devised riposte, the Egyptians did ultimately improve their fortifications. The appearance of Israel's heavy rocket hastened the USSR's agreement to provide Egypt with the longer-range Luna (Frog) tactical missile and other weapons it had hitherto withheld. The balance in the War of Attrition was tipped in Israel's favor that summer only by the introduction of its fighter jets as "flying artillery." Then the balance was reversed when the Soviets sent in their own SAM batteries, in their largest direct intervention overseas since the start of the Cold War. Although the Israel Air Force won a famous dogfight against Soviet-piloted MiGs in July 1970, its unsustainable losses to the Soviet missiles forced it to accept an unfavorable ceasefire which created the preconditions for Egypt's cross-canal offensive on Yom Kippur, 1973.

 

Israel's improvised heavy rocket did provide at least a temporary stopgap in military terms. But perhaps most relevant for today's confrontation with Gaza-based Palestinian organizations and the exposure of Israeli civilians to a similar weapon, it is worth recalling the role of the Ze'ev's big bang in terrifying the populace of Egypt's civilian communities west of the canal – which emptied out soon after. I'll leave it to Israel's military experts to draw the lessons of this precedent for the present case of asymmetrical warfare – now that the "mega-rocket" shoe is on the other foot.

 

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173706 https://historynewsnetwork.org/article/173706 0
Legalize Torture? It’s Tortured Logic

Kathryn Bigelow’s Zero Dark Thirty (2012) starred Jessica Chastain as Maya, a tough, brilliant and single-minded CIA agent who is prepared to use torture in the interrogation of suspected terrorists. There was nothing sadistic about her character, and she comes to doubt the efficacy of torture – though in the end she is able to learn the whereabouts of Osama bin Laden, which she could not have done, the film suggests, had she been unwilling to employ “enhanced interrogation techniques.” This assertion, that the use of torture did in fact produce useful intelligence that helped lead the U.S. to bin Laden, sparked debate as well as outrage.

The Report (2019) is, among other things, writer-director Scott Z. Burns’ answer to Zero Dark Thirty. It is largely about another single-minded individual, Daniel J. Jones (Adam Driver), lead investigator of the Senate Intelligence Committee, who spent five arduous years doggedly uncovering the CIA’s suspect detention and interrogation program following the 9/11 terrorist attacks. His investigation eventually culminated in a 6,700-page report, a damning exposé of the CIA’s methods of “enhanced interrogation” – methods which included walling, cramped confinement, stress positions, waterboarding, the use of insects, and mock burial – and of the psychologists who helped design them despite having no interrogation experience. Like Jones, the film is unwavering not only in its moral condemnation of torture, but in its claim that torture is not effective and never produces real, actionable intelligence.

Torture is admittedly an extremely difficult issue to confront. It is so morally reprehensible that we are understandably reluctant to even consider the possibility that it could ever be justified, under any circumstances. The problem is that the world is a messy place – it isn’t morally tidy – and sometimes the right thing to do is not available to us.

According to the U.S. Army Field Manual, the rulebook of military interrogators, “The use of force is a poor technique, as it yields unreliable results, may damage subsequent collection efforts and can induce the source to say whatever he thinks the interrogator wants to hear.” However, if we are to deal honestly with this issue, we must recognize the fact that there is substantial evidence that sometimes torture is effective in eliciting information and, indeed, it has been known to save innocent lives. In Why Terrorism Works (2002), Alan Dershowitz writes, “There can be no doubt that torture sometimes works. Jordan apparently broke the notorious terrorist of the 1980s, Abu Nidal, by threatening his mother. Philippine police reportedly helped crack the 1993 World Trade Center bombings by torturing a suspect.”

If, in certain dire situations, something like nonlethal torture may be justifiable, then it appears we should at least consider Dershowitz’s suggestion that, if and when torture is practiced, it be done in accordance with law and with some kind of warrant issued by a judge. “I’m not in favor of torture,” Dershowitz writes, “but if you’re going to have it, it should damn well have court approval.” His claim is that if we are, in fact, going to torture then it ought to be done in accordance with law: for tolerating torture while pronouncing it illegal is hypocritical. In other words, democratic liberalism ought to own up to its own activities, according to Dershowitz. If torture is, indeed, a reality then it should be done with accountability.

There are, however, significant problems with the reasoning behind torture-warrants. For one, the legalization of torture would significantly distort our moral experience of the world, corroding the very notion of law itself, which does not rule through abject terror: law is, after all, meant to replace sheer brutality as a way of getting people to do things. Indeed, the rule against torture is paradigmatic of what we mean by law itself. In short, to have torture as law undermines what we take the very rule of law to signify.

Such considerations are closely connected with the following concern, which is addressed in The Report: namely, what are the consequences of institutionalizing torture? That is clearly what the introduction of torture-warrants would imply – and once you institutionalize torture you then have to elaborate on all aspects, including the training not only of would-be torturers but also of medical personnel. In other words, the legalization of interrogational torture would apparently require the professionalization of torture; that is, the acceptance of torture as a profession. This normalization is especially disquieting when we stop to consider in particular the role of doctors and medical professionals in torture: for nothing is more antagonistic to what we mean by medicine than its utilization in the prolongation of a person’s agony and brutalization.

Sadly, the participation of medical practitioners in torture is nothing new; and we would do well to remind ourselves of that history, for we are now most certainly part of it. In his book Torture, Edward Peters observes that it was under the Third Reich that torture was “transformed into a medical specialty, a transformation which was to have great consequences in the second half of the twentieth century.” Medical involvement in torture first came to world attention with the disclosure of practices in Nazi concentration camps. The Nuremberg trials revealed that physicians had, for example, placed prisoners in low-pressure tanks simulating high altitude, immersed them in near-freezing water, and had them injected with live typhus organisms. It is likely that hundreds of doctors and nurses participated in these experiments, although only twenty-one German physicians were charged with medical crimes.

What needs to be emphasized is a point that Robert Jay Lifton, M.D. makes with what he calls an “atrocity-producing situation” – by which he refers to an environment “so structured, psychologically and militarily, that ordinary people can readily engage in atrocities.” As Lifton observes, many Nazi doctors were engaged not in cruel medical experiments but were directly involved in killing. To get to that point, however, they had to undergo a process of socialization; first to the medical profession, then to the military, and finally to the concentration camps: “The great majority of these doctors were ordinary people who had killed no one before joining murderous Nazi institutions. They were corruptible and certainly responsible for what they did, but they became murderers mainly in atrocity-producing settings.”

Referring to the CIA program, Atul Gawande, a surgeon and author, observed that “The torture could not proceed without medical supervision. The medical profession was deeply embedded in this inhumanity.” In fact, the program was developed by two psychologists, Jim Mitchell and Bruce Jessen, who – as the film relates – based their recommendations on the theory of “learned helplessness,” which essentially describes a condition in which an individual, repeatedly subjected to negative, painful stimuli, comes to view their situation as beyond their control and themselves as powerless to effect any change. The crucial point is that medical professionals were an integral part of the program. Referring to American doctors who were involved in the torture at Abu Ghraib, Robert Lifton points out, “Even without directly participating in the abuse, doctors may have become socialized to an environment of torture and by virtue of their medical authority helped sustain it.”

We can hardly overestimate the significance of the process of socialization in facilitating participation in torture. Certain factors are decisive in terms of weakening the moral restraints against performing acts that individuals would normally find unacceptable. Following Harvard University professor of social ethics Herbert Kelman, we can identify three forces that are particularly important. Kelman was particularly interested in what he described as “sanctioned massacres” – such as occurred at My Lai during the Vietnam War – but his observations are relevant to the torture setting as well. The first factor is authorization: rather than recognizing oneself as an independent moral agent, the individual feels that they are participating in a mission that relieves them of the responsibility to make their own moral choices. The presence of medical professionals helps to lend a sense of legitimacy to the enterprise. Routinization is another factor, which speaks directly to the establishment of torture as a profession – so that the torturer perceives the process not as the brutal treatment of another human being but simply as the routine application of a set of specialized skills; or as Kelman puts it, “a series of discrete steps most of them carried out in automatic, regularized fashion.” Finally, there is dehumanization, whereby the victim is deprived of identity and systematically excluded from the moral community to which the torturer belongs: it becomes unnecessary for the agents to regard their relationship to the victim as ethically significant – in short, the victim is denied any inherent worth and therefore any moral consideration.

Medical personnel who act as advisors, as it were, on torture techniques are directly implicated in the practice of torture. But if we were to follow Dershowitz’s suggestion and effectively institutionalize torture, this medical involvement would be an inevitable result – for it was present already when torture was being practiced clandestinely. It seems strange that Dershowitz, who finds the current hypocrisy so outrageous, would attempt to remedy the situation not by eliminating the hypocrisy but rather by legitimizing it. For what could be more hypocritical than doctors, sworn to do no harm, taking a more or less active role in the systematic and scientific brutalization of another human being? But such would be the unavoidable outcome of legalizing torture through “torture-warrants.”

In closing, institutionalizing torture would have very bad consequences – far worse than the hypocrisy that so troubles Dershowitz. Not only would the practice of torture likely metastasize – instead of being limited to one-off cases – but its professionalization would contribute to the formation of “atrocity-producing situations,” and we have seen how this relates in particular to the complicity of doctors in the torture situation. Physicians, nurses and the medical establishment itself would be severely ethically compromised by the institutionalization of torture. All of which is to say that the legalization of torture should be avoided. Best, then, to uphold the absolute ban on torture, even if that ban will be subject to violation under extraordinary circumstances.

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173709 https://historynewsnetwork.org/article/173709 0
Cinderella, Whose History Goes Back to the First Century, Is Still a Delight, Glass Slippers and All

 

Who does not know the story of Cinderella, one of the world’s most beloved fairy tales?

 

The story: Lovable peasant girl Cinderella’s evil stepmother treats her very poorly while showering love and affection on her two idiot sisters. The Fairy Godmother arrives, gets Cinderella dressed to kill and sends her off to the Prince’s Ball in glittering glass slippers, a wonderful carriage and instructions to leave the ball by midnight because that is when her lovely carriage turns into a pumpkin. At the ball, she meets the incredibly good-looking Prince and they fall madly in love but, OMG, she did not tell him her name (or Facebook page). She has to flee at midnight and, running away, leaves a glass slipper behind. The love-sick Prince tours the Kingdom looking for the owner of the slipper. He puts the shoe on the feet of hundreds of young women (all hopelessly pathetic) and then finds Cinderella. The slipper fits. They get married, feed the hungry, house the homeless and fix the Kingdom’s economy (all of this in two minutes) and live happily ever after.

 

The latest version of Cinderella is a revision of the 1957 television play with music and lyrics by Richard Rodgers and Oscar Hammerstein. It was based on the 1950 Walt Disney animated film of the fairy tale. This new version opened Sunday at the Paper Mill Playhouse in Millburn, N.J., and it is as wonderful as musicals about good-looking Princes and lost Princesses can be. The new play features superb acting, memorable choreography, very good music and one crackerjack Fairy Godmother.

 

I went to the Paper Mill Playhouse thinking to myself that the play was going to be mediocre at best. Throughout my life I have seen most of the Cinderella plays and movies. What could possibly be left?

 

Well, I was hooked from the first moment of the play, when adorable peasant girl Cinderella wanders through the woods and, what ho!, accidentally meets the Prince. He likes her right away, despite her peasant girl status on the Kingdom social rung. He continually loses her though (should have paid the extra for caller ID).

 

Then we meet the God-awful stepmother, a real social-climbing shrew. She makes Cinderella do all of the dirty work in the household while the other two daughters relax. For them life is fun, fun, fun, while for Cinderella it is work, work, work.

 

The stepmother prods the two sisters to go to a ball the Prince is throwing to find a wife. They are outrageous and make total fools of themselves at the ball, as does everybody there except, of course, Cinderella. At times, the way the women converge on the Prince is like The Bachelor television show.

 

Anyway, as you all know, the slipper is dropped and the biggest woman hunt in fairy tale history follows. The scene where the Prince gives up after trying the slipper on hundreds of women is wonderful. Then Cinderella slowly steps out of the crowd and he slips the slipper on her easily. All the other women groan.

 

Every generation has its own take on Cinderella. This one is pretty heavy on the woes of working class Americans, the problems of families, the inexperienced Prince ignoring his trusted advisor on the issues and making his own, wise, decision. There is a lot of material on insanely jealous women and how they act, but let’s not go there.

 

The Kingdom long ago was not much different from the U.S., or any other country, today. All the Prince really needed was a good wide-screen TV, an iPhone and some Taylor Swift CDs.

 

Mark Hoebee does a sensational job as director. He brings back the old fairy tale, but adds lots of new wrinkles, too. He gets fine work from his cast. Ashley Blanchet is a revelation as Cinderella. She plays the role wonderfully and has a majestic singing voice. Her marvelous Prince is well played by the equally talented Billy Harrigan Tighe. Other fine performances are from Michael Wayne Wordly as the Prince’s audacious advisor, Donna English as the stepmother and Rose Hemingway and Angel Lin as the stepdaughters.

 

One of the reasons the musical succeeds is the sprightly choreography of Joann M. Hunter. It is just dazzling.

 

Cinderella is not only one of the world’s most beloved fairy tales, but one of the oldest. There are debates about when and where the story was invented, but it narrows down to two places.

 

The tale seems to have first appeared in Egypt around 100 A.D. That story featured a lost Greek girl who stumbled into a party hosted by the Pharaoh. Some of the Cinderella elements – bad parents and foolish sisters – were in it. The next version was produced around 700 A.D. in the T’ang dynasty in China. It, too, had some of the later elements of the story. Between 100 A.D. and the late 17th century there were over 300 Cinderella-type stories in dozens of countries. In one, poor Cindy had to eat her big toe (uggh!).

 

The popular Cinderella tale that we all know and love, called Cendrillon, was written by Frenchman Charles Perrault in 1697. It had the mean stepmom, the two dolt sisters, the charming Prince, the carriage and the slippers. It stood for centuries. Walt Disney came along in 1950 with probably the most successful Cinderella tale as an animated film. It did well in theaters and then was shown over and over again on Disney’s television shows. It cemented the Cinderella legend. Over the last twenty years there have been a few more live-action films about Cinderella. This latest show has a brand-new book, very up to date, by Douglas Carter Beane.

 

PRODUCTION: The musical is produced by The Paper Mill Playhouse. Sets: Anna Louizos, Costumes: William Ivey Long, Lighting: Charlie Morrison, Sound: Matt Kraus. Choreography is by Joann M. Hunter. The play is directed by Mark Hoebee. It runs until December 29.

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173708 https://historynewsnetwork.org/article/173708 0
We Cannot Forget About Acid Rain

In the 1960s, ecologists started to record the detrimental effects of acid rain. While acid rain damaged many areas in America, the Adirondack Park (located in upstate New York) endured the worst consequences of any area in the nation.

 

Acid rain is created when nitrogen oxides (NOx) and sulfur dioxide (SO2) mix and combine with water to create sulfuric and nitric acids. These acids can be carried through the air for hundreds of miles and return to the earth in a number of ways, including rain and snow. Numerous sources create these acids, but burning coal is one of the biggest creators of acid rain. When coal became the primary fuel used to generate electricity in America in 1961, acid rain became a significant problem. The Adirondacks and other downwind areas suffered from the consequences of acid rain even though very little coal was burnt there. This was an alarming indication that pollution like acid rain was not just a local issue, but a national threat.

 

Acid rain destroys forests, harms wildlife, degrades buildings, pollutes water supplies, creates caustic fog, and threatens the lives of humans. If this list is not bad enough, acid rain also increases the number of black flies—arguably the peskiest of all pests. If anything is certain, it is that nobody wins with acid rain (except the black flies, of course). Acid rain is a dangerous problem, and history can teach us an important lesson about it that we cannot afford to ignore.

 

In the Adirondacks alone, the effects of acid rain were astounding. During the 1980s acid rain scare, a third of the red spruce trees died and over a fourth of the lakes were so acidic that they could not support fish. For perspective, the Adirondack Park covers six million acres, with 2,800 lakes and millions of trees. Acid rain not only destroyed many of these lakes and trees but endangered livelihoods as well. Fishermen at the time were so desperate to save the fish population that they would dump truck beds full of lime into the lakes to try to counter the acidity (to no avail). The beautiful landscape of the Adirondacks (which draws tourists from all over the world) was degraded and fog obscured the unique Adirondack views. The beautiful park was being destroyed by coal burning hundreds of miles away.

 

After the terrible consequences of acid rain were recognized, legislation and regulations were enacted to help save the park. In 1990, Congress passed amendments to the Clean Air Act to help control acid rain. The effectiveness of this law is debated, but it was an important start. As the 1990s progressed, more lawsuits and settlements were brought against polluters (through the work of New York Attorneys General Eliot Spitzer, Andrew Cuomo, and Eric Schneiderman), which improved the conditions of the Adirondacks. These lawsuits, paired with the Clean Air Interstate Rule and the eventual Cross-State Air Pollution Rule, provided desperately needed help to the Adirondacks. After these actions were taken, it took years for fish to return and trees to recover. Yet these actions turned a bad situation into a true environmental success story.

 

While major improvements have occurred in the last few decades, the Adirondacks are still not safe from acid rain. In 2018, President Trump repealed the Clean Power Plan. This plan, which was adopted during President Obama’s administration, was designed to further reduce emissions and coal burning, and it would have continued to protect the Adirondacks from the harmful effects of acid rain. In addition to repealing the Clean Power Plan, President Trump has rolled back other environmental regulations, removing further layers of critical protection. These actions threaten us with the same dangers present during the 1970s.

 

We cannot afford to ignore the history of acid rain. President Trump’s deregulations and repeals could harm many areas of the country, including places like the Adirondacks. Too much is at risk (as history has shown us) to allow acid rain to reoccur. We must remember the dreadful history of the Adirondacks when deciding the environmental future of our nation.

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173619 https://historynewsnetwork.org/article/173619 0
Trump's Official Withdrawal from the Paris Climate Agreement Mirrors George W. Bush's Exit from Kyoto Protocol

Earlier this month, President Donald Trump made good on a campaign promise when he officially notified the United Nations of the United States’ intent to withdraw from the Paris Agreement. While the President has repeatedly criticized the Agreement, last week was the first possible day the withdrawal process could legally begin, per the language of the agreement that the United States helped craft in 2015.

 

Secretary of State Mike Pompeo announced the initiation of the withdrawal process via a statement and on Twitter. Using a justification popular with Trump voters, Pompeo’s statement claimed the Agreement will hurt the U.S. economy:

"President Trump made the decision to withdraw from the Paris Agreement because of the unfair economic burden imposed on American workers, businesses, and taxpayers by U.S. pledges made under the Agreement," Pompeo said… The United States has reduced all types of emissions, even as we grow our economy and ensure our citizens’ access to affordable energy."

 

The United States has a long history of being hesitant to match other Western nations' commitment to the climate. In fact, Trump’s decision to withdraw from the Paris Agreement marks the second time that the United States has not only entered, but helped craft, a climate agreement and then exited it. The pro-economy, America-first rhetoric used by the Trump White House regarding Paris is eerily similar to that used by President George W. Bush in 2001 to justify withdrawing from the 1997 Kyoto Protocol.

 

In the 1990s, the international community realized that greenhouse gas emissions were damaging the climate as global temperatures rose. In response, international leaders committed to substantial emission reduction targets via the Kyoto Protocol, an extension of the 1992 United Nations Framework Convention on Climate Change (UNFCCC). Under President Bill Clinton, the United States agreed, along with 40 other countries and the European Union, to reduce emissions 5.2 percent below 1990 levels during the target period of 2008 to 2012. Much like President Trump, then-candidate George W. Bush distinguished himself from opponent Al Gore (a man instrumental in the United States’ adoption of Kyoto) by campaigning against Kyoto:

“The Kyoto Treaty would affect our economy in a negative way,” Bush said during his 2000 presidential campaign. “We do not know how much our climate could or will change in the future. We do not know how fast change will occur, or even how some of our actions could impact it.”

 

Bush officially withdrew from Kyoto in 2001, putting the United States far behind its European counterparts in efforts to control climate change. Despite some backlash at the time, only a slim majority of Americans believed the effects of climate change were immediate: 58% of Americans either agreed with Bush’s withdrawal from Kyoto or had no opinion at all. Today, however, climate policy is more important to a majority of Americans. Two-thirds of Americans believe they actively witness the effects of climate change, and only a slim minority favor Trump’s decision to withdraw from Paris.

 

This raises the question: Why aren’t Democratic candidates talking more about the climate? Yes, it is a fair assumption that any individual attempting to secure the 2020 Democratic nomination possesses a significantly more activist stance than Donald Trump regarding the climate, and many do consider re-entry into the Paris Agreement a necessary condition for legitimate candidacy. No "frontrunner" candidate, however, has discussed the fact that Trump’s withdrawal from Paris won’t go into effect until November 4th, 2020 - a day after the 2020 general election. The timing of the withdrawal process would allow for a near-seamless re-entry into the agreement by a newly elected Democratic president, while Trump’s re-election would all but secure the death of the Paris Agreement in the United States.

 

Public support for action on climate change is higher now than ever, and the potential to reverse Trump’s Paris decision could carry significant weight for undecided voters. Despite this, the climate was mentioned only 10 times in the October Democratic primary debate. General election season is fast approaching, and the Democratic Party as a whole would do well to shift its rhetoric toward more broadly popular topics, like environmental activism and re-entry into the Paris Agreement, so as to establish an early rapport with the undecided voters who will decide the 2020 election.

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173558 https://historynewsnetwork.org/article/173558 0
Historians criticize Trump after he calls impeachment inquiry a ‘lynching’

On Tuesday, October 22, 2019, President Donald Trump described the House’s impeachment proceedings against him as a “lynching.” He tweeted, “So some day, if a Democrat becomes President and the Republicans win the House, even by a tiny margin, they can impeach the President, without due process or fairness or any legal rights. All Republicans must remember what they are witnessing here – a lynching. But we will WIN!”

 

Trump evoked one of the darkest chapters of American history. Concentrated in the 19th and early 20th centuries, lynchings were extrajudicial executions of African Americans. They were often public events used to enforce racial subordination and segregation in the South. Trump's use of the term to describe his political predicament provoked significant outrage from many historians and politicians. Let’s take a look at how historians denounced his use of the term.

 

Lawrence B. Glickman, a history professor at Cornell University, wrote a Washington Post article about the long history of politicians claiming to be victims of lynching and racial violence. The article explains a type of conservative rhetoric described as “elite victimization.” Glickman argues Trump’s use of the term is a mode of speech typically used by wealthy, powerful elite men who employ such language of enslavement to claim to be victims. Glickman’s article gives the reader a key insight into how wealthy White men have appropriated the language of minority rights in order to depict themselves as precarious and weak.

 

In the Washington Post article, Glickman provides examples of previous politicians who used images of racialized subjection, including slavery and lynching, to describe their plight. For example, on December 2, 1954, the Senate voted to censure Senator Joseph McCarthy, who led the fight in Congress to root out suspected Communists from the Federal Government. Sen. McCarthy (R-Wis.) complained the “special sessions amounted to a lynch party.” Glickman also highlighted the 1987 ad campaign from the National Conservative Political Action Committee, which condemned the “liberal lynch mob” for criticizing President Ronald Reagan during the Iran-Contra scandal. Like Trump, these politicians conceived of themselves as a persecuted minority. Instead of embracing their elite position of power, some conservative men have appropriated victimhood, distorting the history of lynching.

 

Seth Kotch, a history professor at UNC–Chapel Hill and an expert on lynching, tweeted that “lynching is not something that can be appropriated by a billionaire president who wants to do crimes without consequences. But victimhood apparently can be.” In a follow-up tweet, Kotch said the President’s complaint “is really revealing [how] lynching was about the perverse and enduring idea of white male victimhood.” The idea of white male victimhood is a topic Kotch mentions in his latest book Lethal State: A History of the Death Penalty in North Carolina. In an interview with The INDY newspaper, Kotch detailed that lynchings after slavery targeted African American men to preserve white supremacy and capital punishment. Mob murders in North Carolina disrupted Black communities, stole Black wealth, and destroyed Black-owned property. White men who joined lynch mobs did so “because maintaining White dominance was materially and symbolically important to them… as part of their racial inheritance.” Kotch’s historical references are significant because they teach others how to acknowledge and memorialize the victims of the lynch mobs.

 

Kevin Kruse, a history professor at Princeton University, delivered a thorough takedown of President Donald Trump’s claim that the impeachment inquiry represents a lynching. In a past tweet that was reposted in an article by AlterNet, Kruse stated, “I’m not sure what ‘legal rights’ he thinks he’s entitled to in the current stage of the impeachment process – which are akin to a grand jury investigation and indictment – but whatever rights he imagines he has will apply in the Senate trial.” Kruse convincingly argued that the constitutional mechanics of the impeachment process in the House only require a simple majority of lawmakers in order to advance. In the same thread, Kruse says “comparing impeachment proceeding to a lynching is even more insulting when you’ve cozied up to the very forces of White supremacy that historically have used lynching as a tool to terrorize racial minorities.” Kruse’s Twitter thread helps us understand the ways that the impeachment process is being properly conducted, destroying the president's assertion that it is unfair. Kruse also historicized the inappropriate metaphor by informing readers that the first time impeachment proceedings were described as a "lynching" was when conservatives tried to defend Richard Nixon during the Watergate investigation.

Historians have made it clear that the term "lynching" should not be applied to situations like impeachment inquiries. They say metaphorical use of the term is problematic because it erases the history of the racist violence once practiced in the United States.

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173557 https://historynewsnetwork.org/article/173557 0
The History Briefing on "Quid Pro Quo:" The Evolution and History of Quid Pro Quo

Quid Pro Quo: A favor or advantage granted or expected in return for something.

 

Over the past two months, the impeachment inquiry has sparked intense debate over the alleged quid pro quo agreement between President Donald Trump and the Ukrainian government to investigate Joe Biden in exchange for releasing halted military aid.

 

In the rough transcript of the phone call between President Trump and President Zelensky released by the White House, President Trump tells President Zelensky, “I would like you to do us a favor though,” while discussing the United States providing military support for Ukraine. President Trump has fiercely defended himself, claiming that there was “no quid pro quo” in his “perfect” phone call to the Ukrainian President. As Congress holds public hearings in the impeachment inquiry, it is important to understand exactly what “quid pro quo” means to determine if it applies to this phone call. Historians have provided an important perspective on how our understanding of “quid pro quo” has changed over time.

 

In an interview with NPR, the Wall Street Journal’s language columnist Ben Zimmer discussed the definition and past understanding of the term. Quid pro quo means “something for something” in Latin. Zimmer explained that in the 16th century, apothecaries would substitute one medication (quid) with a similar one that often did not work as well or may have even been harmful (quo). It was a “practice people were scared of,” Zimmer stated. Once the term quid pro quo was used in a legal context, it retained its initial negative connotation although it should seem neutral. Even though “quid pro quo” has been used in the English language for over 500 years, “the political situation can't help but reform the way that we're going to understand this particular phrase.” History demonstrates that the use of this term has evolved based on the ways it has been utilized over time, the most recent being its legal use in the impeachment inquiry.

 

Today, lawyers evaluate quid pro quos in cases involving bribery, extortion, and sexual harassment, Columbia Law School professor Richard Briffault explained in a New York Times article. He highlighted that while not all instances are illegal, in politics the term is often used to describe corruption. The Washington Post’s video "Quid pro quo, explained" noted that quid pro quo is usually very hard to prove because it is rare that there is an explicit demonstration of trading one thing for another. The initial deal does not have to be successfully completed to be considered quid pro quo; an attempt is sufficient.

 

Doug Rossinow, a history professor at Metropolitan State University, compared the Ukrainian quid pro quo with the Iran-Contra affair in a Washington Post article. In 1984, Congress passed a law that essentially barred President Ronald Reagan from using a proxy army, known as the Contras, to destabilize the socialist government in Nicaragua. In an attempt to secretly keep supplying the Contras with weapons and money, Reagan engaged in illegal efforts to have other governments supply the Contras on behalf of the United States in return for U.S. military aid. The article explains that what made this the Iran-Contra affair was the fact that once this scandal came to light in 1986, after a supply plane was shot down by Nicaraguans, it was also discovered that Reagan’s team had authorized the sale of weapons to Iran, which was labeled a terrorist state. Even when this scandal emerged publicly, Reagan avoided impeachment. Rossinow emphasized a key difference between these two situations: “Like Reagan, Trump has played fast and loose with American assets and security policy… Reagan committed impeachable acts out of zealotry, Trump played the Ukraine card in what seems a crassly political gambit.” Reagan was able to avoid impeachment because his motives did not seem to be for his own personal political gain.

 

While the use and meaning of “quid pro quo” have evolved over time, its history demonstrates how it has come to be associated with corruption and abuse of power. Understanding the history of this phrase is important because it allows for an informed opinion on the current impeachment inquiry.

 

 

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173583 https://historynewsnetwork.org/article/173583 0
Democrats Should Welcome Michael Bloomberg Into the Primary Race

 

Soon after Michael Bloomberg filed for the presidential primary in Alabama, politicos rushed to criticize the former mayor of New York City. They berated Bloomberg for trying to enter the presidential race without doing the hard work that long engaged other Democratic candidates. Some dismissed Bloomberg as an ambitious billionaire. Voters wanted leaders who attacked Wall Street, critics asserted, not someone who made a fortune there. Others warned that Bloomberg, a New Yorker, could not appeal to voters in the heartland. These arguments received considerable attention in the national media, but they do not hold under scrutiny. 

 

There are good reasons for the late filing. Bloomberg hinted months ago that he would not consider a run unless moderates, especially Joe Biden, slipped. Biden has been losing ground to Elizabeth Warren in the early primary states of Iowa and New Hampshire. Perhaps Bloomberg worried about recent polls that indicate Warren and other Democratic candidates might struggle against Trump in the general election. A New York Times Upshot/Siena College poll shows that Trump is running close to or ahead of current leaders in the Democratic field in six states that may decide the 2020 presidential election. Time is running out for Bloomberg or other potential candidates. Many Democratic strategists think additional choices could prove helpful. 

 

They recognize that the party’s four current leaders – Joe Biden, Elizabeth Warren, Bernie Sanders, and Pete Buttigieg – bring distinct skills, yet each has vulnerabilities. Biden is likeable and experienced but seems unsteady in the debates. Warren offers well-researched plans for reform, but some voters think her proposals are too costly, and polls indicate she is behind Trump in most tossup states. Sanders impresses numerous followers with strong challenges to inequality, but some characterize him as a cranky socialist. Buttigieg is a brilliant communicator, but some think he is too young and inexperienced to be elected president in 2020. 

 

David Axelrod, formerly a key adviser to Barack Obama, summed up the worries. Axelrod pointed to “nervousness about Warren as a general election candidate, nervousness about Biden as a primary candidate . . . and fundamental nervousness about Trump and somehow the party will blow the race.” The major reason some Democrats welcome an entry from Bloomberg is that no other Democrat presently appears comfortably positioned to defeat Donald Trump. 

 

Like all the top Democratic contenders, Michael Bloomberg brings strengths and vulnerabilities. He was very effective as three-term mayor of New York City. Historian David Greenberg noted in the New York Times that under Bloomberg, “Crime plummeted, schools improved, racial tensions eased, the arts flourished, tourism boomed, and city coffers swelled.” There were controversies, of course, especially over “stop and frisk” anti-crime measures that disproportionately affected black and brown citizens. Bloomberg will need to address this important issue if he campaigns for president. African Americans and Hispanics want just treatment, and they have a substantial presence in the party. Other criticisms focus on Bloomberg’s personal characteristics rather than issues related to his terms as mayor. Some say Bloomberg is too old or too short or that Americans are not ready for a Jewish president.

 

A more frequently articulated criticism is that voters do not want to see another rich man in the White House. Bernie Sanders warned, “Sorry, you ain’t going to buy this election.” A billionaire like Bloomberg cannot be counted on “to end the grotesque level of income and wealth inequality which exists in America today,” argued Sanders. Critics who echo Sanders’s attack complain that Democrats have given too much power over the years to wealthy candidates, officials, and benefactors.

 

Michael Bloomberg is, indeed, one of the wealthiest Americans, but it is worth noting that several of America’s best presidents also ranked among the country’s richest. George Washington and Thomas Jefferson possessed fortunes in land and slaves. Theodore Roosevelt and John F. Kennedy benefited from the success of wealthy, business-oriented fathers. Franklin D. Roosevelt, born of the manor, probably did more for America’s unemployed and poor than any other president. 

 

Self-made achievement in business and the professions does not guarantee success in politics. Herbert Hoover, an orphan at age 10, achieved multi-millionaire status as a mining engineer but fared poorly at the White House. Donald Trump claims to be a high-achieving billionaire, yet scholars rate him the worst or one of the worst presidents.

 

We cannot judge Michael Bloomberg’s potential for effective presidential leadership in terms of his record in business. Nevertheless, a Bloomberg candidacy provides distinct opportunities for Democrats. Michael Bloomberg could become a Democratic asset if the U.S. economy continues humming along in 2020. Studies reveal that economic growth and low unemployment frequently benefit an incumbent in presidential races. Trump says his policies ignited a business boom. The claim is incorrect. Markets tumbled when Trump weaponized trade wars and threatened to shut down the government. Current Democratic candidates have not been persuasive when responding to Trump’s bogus claims about brilliant economic management.

 

More effectively than any other Democrat currently in the presidential lineup, Michael Bloomberg can counter Trump’s boasts about business acumen by demonstrating greater expertise in financial affairs. Unlike Donald Trump, who received a $60.7 million loan from his father to launch a business career, Michael Bloomberg started without a strong initial boost. Through brilliant planning and investing, Bloomberg emerged as the fourteenth richest person in the world, according to Forbes. Also, after three terms as mayor, he left New York City’s finances in excellent shape. After retiring from city politics, he committed much of his money to progressive causes. Bloomberg funded global health programs and supported political candidates that challenged the National Rifle Association, protected the environment, and worked to limit climate change. Bloomberg also took the Giving Pledge. Nearly all his net worth will be given away in the years ahead or left to his philanthropic foundation. 

 

Bloomberg’s critics are focusing on irrelevant matters when denouncing his candidacy. It is not particularly important that Bloomberg is a senior citizen or a diminutive New Yorker or that he entered the primary race long after other candidates began campaigning. Nor are facts about Bloomberg’s enormous wealth likely to turn off millions of voters (the image of business success benefited Trump in 2016). 

 

Economist and columnist Paul Krugman, usually insightful, recently joined the chorus of complaints about irrelevant matters. He mocked the idea “that America is just waiting for a billionaire businessman to save the day by riding in on a white horse.” But whether a rescuer is rich or middle class is not especially important. The significant question for Democrats is: who will be available to the party if polls reveal in 2020 that Donald Trump is competitive or dominant in battleground states against leading Democratic contenders? If the 2020 surveys indicate that America and the world are in danger of experiencing four more years of deeply flawed presidential leadership, Michael Bloomberg’s candidacy may look promising. 

 

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173699 https://historynewsnetwork.org/article/173699 0
Investigating Technology and the Remaking of America

 

An agricultural and ranching valley in Northern California, the “Valley of Heart’s Delight,” became the cradle for technological innovation and manufacture that reshaped America in the decades following the Second World War and led to the Information Age. By the 1970s, with the upsurge in silicon chip makers there, a writer labeled the area Silicon Valley, and the name stuck. Integrated circuits, microprocessors and microcomputers were among the technologies developed in the Valley.

 

As a result of this work, we now carry supercomputers—smartphones—that have more power than the computers that made possible the American journey to the moon a half century ago. And it’s possible now to access a wealth of information through these devices and our computers, a realization of the vision of legendary MIT professor, engineer, and computational pioneer Vannevar Bush. In 1945, Bush wrote of his dream for “the memex,” an office machine that would organize and hold all human knowledge. Now we have the Internet.

 

Many of the major innovators of Silicon Valley dreamt of making the world better through technology by connecting people and making available a world of information. In recent years, however, the optimism about technology has faded with increased concerns about privacy, monopolization, disinformation, toxic social media platforms, and other issues. 

 

In her lively and extensively researched book The Code: Silicon Valley and the Remaking of America (Penguin Press, 2019), acclaimed history professor Margaret O’Mara chronicles the story of the Valley from the wartime era of Vannevar Bush to Steve Jobs and Bill Gates to more recent innovators, such as Mark Zuckerberg, as she tackles the origins of today’s emerging questions about the dark side of the technology.

 

In this comprehensive history of Silicon Valley, Professor O’Mara lays out the political and historical context of technological advances over the past seven decades. She shares engaging profiles of many of the leading figures in technology from engineers and scientists to venture capitalists who made many of the achievements possible. Her writing is based on rigorous archival research as well as dozens of interviews, original research of company and personal records, and many other materials.

 

Before graduate school, Professor O’Mara worked in the Clinton-Gore White House as a policy analyst specializing in public-private partnerships. She brings that expertise to The Code as she details the often overlooked but critical role of massive federal government funding of technology in the wake of the Second World War, driven by the nuclear arms race, the Cold War, and the space race, an infusion of funding that continues to the present. She pays particular attention to the politicians and lobbyists who were often enthralled by high tech and made possible special government treatment for this unique industry with generous funding as well as tax breaks, lack of regulation, trade deals, and more.

Written for scholars and general readers alike, The Code puts a human face on the development of our technology today as it chronicles major developments and illuminates the personalities who made our high-tech world possible. The book will serve as an important reference for all who study the history of technology and politics, and for those who want to understand how we got to the questions about omnipresent technology that we grapple with today.

 

Margaret O’Mara is the Howard & Frances Keller Endowed Professor of History at the University of Washington where she teaches undergraduate and graduate courses in U.S. political and economic history, urban and metropolitan history, and the history of technology. Her other books include Cities of Knowledge (Princeton, 2005) and Pivotal Tuesdays (Penn Press, 2015). She has also taught history at Stanford University. She earned her doctorate in history at the University of Pennsylvania.

 

Professor O’Mara is also a contributing opinion writer at The New York Times, and her writing has also appeared in The Washington Post, Newsweek, Foreign Policy, American Prospect, and Pacific Standard, among others. In addition to teaching, she speaks regularly to academic, civic, and business audiences. She lives near Seattle with her husband and two daughters.

 

Professor O’Mara generously discussed her background as a historian and The Code at her office at the University of Washington in Seattle. 

 

Robin Lindley: Thank you for meeting with me, Professor O’Mara, and congratulations on your groundbreaking history of technology, The Code. Before getting to your book, I wanted to ask first about how you came to be a history professor. When you were young, did you think about being a historian? 

Professor Margaret O’Mara: No, that wasn’t the first sort of thing I saw myself doing. The first time I thought about it, I wanted to be an astronaut or a doctor or other things that five-year-olds want to do. 

I was really involved in theater when I was a young teen and I was a theater kid, so I wanted to be an actor. My mother was a professional actor. So many members of my family have done things that involve standing up in front of people and performing in some way. We have actors, we have musicians. My brother's a professional rock musician, and he's way cooler than me. My father is a retired clergyman who stood in front of people. My grandfather was an elected official and civil servant in the U.K. My grandmother was a concert pianist. 

When I think about the larger ecosystem of my two families, what I do as a professor seems [natural] because I stand up in front of people and teach or I speak to groups of people. I came into graduate school from the very beginning with a very public facing history in mind. My goal was to be a historian who was speaking to policymaking and public audiences because I'd come from policymaking. I think that's animated everything I've done ever since.  

 

Robin Lindley: Thanks for sharing that background. Did you get your undergraduate degree in history then? 

Professor Margaret O’Mara: I did. I went to Northwestern partially because it was on my radar screen when I was a teenage theater kid and wanted to be a theater major and they've got a renowned theater program. But by the time I was applying to college, I realized I was going to do something else. I didn't know what, but I originally was an English major and then I added history as a second major. In little ways, I realized that so much of what I loved about literature was its history and I saw these texts as historical and material culture of the past, and reflections and depictions of the past. And that's really what interested me the most.      

And, reflecting back on it, you realize when you're young, you sometimes have experiences and don't realize how formative they are. I went to Little Rock Central High School, the site of the famous desegregation crisis in 1957 in which the governor called out the Arkansas National Guard troops to prevent the integration of this Southern high school. And then Eisenhower had to call in federal troops to enforce the integration order. This was a seminal moment in the struggle to integrate public schooling in the South. 

My graduating class was exactly 30 years after the crisis at Central High. The Little Rock Nine, the nine African American students who integrated the school, came back as a group for the first time that year to visit the school. I remember vividly their visit. They walked down the hallways and were welcomed and celebrated at this place, in the space that had been so incredibly hostile to them. 

 

Robin Lindley: Were there many black students in the school when you graduated?

Professor Margaret O’Mara: Yes. It was majority minority by then. It spanned the socioeconomic spectrum, was multiracial, and produced many national merit scholars every year, and also had a lot of kids who were living in poverty, and then every stop in between. It was a really fantastic, amazing high school.

 

Robin Lindley: You were raised in Little Rock then?

Professor Margaret O’Mara: Yes. I grew up in Little Rock, Arkansas. And so Bill Clinton, the person who was governor when I was growing up, was someone I knew because Little Rock is tiny. It may be 200,000 people now, but it was about 150,000 people when I was growing up there, and yet it was the biggest city in the state. There was a kind of intimacy and familiarity with which everyone knew one another, including the Clintons. That cannot be overstated. It was really a very small stage. 

 

Robin Lindley: So Bill and Hillary Clinton were a real presence as you grew up? 

Professor Margaret O’Mara: Oh, absolutely. We lived in the same neighborhood. They were younger than my parents and Chelsea is younger than me, so we didn't socialize, but we knew lots of friends in common. It was just a small town. 

As a side note, I will say that Bill Clinton is an extraordinary figure and so is Hillary. From a very early point, even when he was the governor of this tiny state, he was so extraordinarily charismatic. He just glows with charisma. You always remember every interaction with Bill Clinton, even when he was governor of Arkansas. 

 

Robin Lindley: We were at a rally for him in Seattle in 1992, and he just happened to walk by and we shook his hand. He was extremely enthusiastic and my wife Betsy said she felt this electricity emanating from him.

Professor Margaret O’Mara: Exactly. He's a remarkable politician. When I tell people how he’s mesmerizing and magnetic, they think about his problematic record with women, but it's something that transcends that. He’s always in campaign mode, seeking your vote. He always expresses this intense interest in you and he wants to know exactly what you are up to and that makes you feel so incredibly important. That’s a talent. 

So all this is related to the question of growing up in this place that was very historically resonant. I was part of that experience and then someone I knew became president.

 

Robin Lindley: How did you get involved in politics?

 Professor Margaret O’Mara: After I graduated from Northwestern, I worked on the campaign for Clinton. That was partially because I was a history major and didn't have a job. I tried to do this corporate recruiting and hadn't gotten hired mostly because I had my intellectual passions but hadn't really figured out my professional ones. 

If you had asked me then if I wanted to be a historian, no, I didn't really. I didn't know I wanted to be a historian until I applied to grad school. And even when I was in grad school, I thought I'm not going to be a professor. I was going to get my history degree and then go back to Washington to work on policy and have a career there. This was exactly my plan.

I never thought I'd be doing what I'm doing. With the broader restructuring of the academic job market, where so few jobs like mine continue to exist, the fact that I'm sitting here as a full professor talking to you just continually blows my mind. Before I went to grad school, I had thought that becoming an academic was walling oneself off from public conversation, which is why I wanted to take my PhD and use it in a different way. Then I realized that I could actually craft a career for myself where I could do both. And that's why I've always consistently stayed doing things in public throughout my academic career.

 

Robin Lindley: What did you do in the Clinton campaign? 

Professor Margaret O’Mara: I started off in the mail room as all great careers start. Well, technically the correspondence office. As a note to all striving young people out there who are trying to get in on the entry level like I did, I started in the most unglamorous position ever. I was operating the autopen, which is this crazy machine that essentially forges the candidate’s signature. It has this mechanical arm that would write Bill Clinton in Sharpie on letters. You sit at it and operate it with a foot pedal. 

The great thing about politics is it's a young person's game. If you're young and you're motivated and you hook up with the right mentors, then you can rise pretty fast. So, I started with the autopen and then I moved into field operations. I started at the headquarters in Little Rock and was doing get out the vote and worked for the campaign in Michigan for the last month before the election. 

Then I went back to Little Rock to work on the transition team on economic policy as a staff assistant. I was a junior person doing proofreading and copy editing, and I was right there in the heart of everything. Clinton and Gore had an economic summit in December 1992 in Little Rock where they brought down all these leading business leaders and economists to talk about what to do about the economy, and I helped put the content for that together. I began to understand suddenly this landscape of power and business that I didn't know, and who was who and what was what. 

 

Robin Lindley: And then you moved on to work at the White House.

Professor Margaret O’Mara: I ended up working in the West Wing of the White House on economic policy. And then I moved to Health and Human Services as a policy aide. 

You realize as a young person too that there's a tradeoff between high glamour and substance in these political appointments. The high glamour is definitely the White House, right? So you're kicking around the West Wing and you're going to Rose Garden ceremonies and you get the cool badge that you show when you walk in every day. It's pretty trippy.

But you're rarely doing anything substantive. You're answering the phone and running memos from one place to another. I really wanted to do something with more substance, so I went to the agencies. You're going to these giant concrete block buildings that are so unglamorous, right? But then you go and you learn about public policy and you learn how these programs work and you can learn the operations and what it really means to be in the executive branch and to execute the laws. 

The glamour quotient goes down significantly, but the substance goes up. And I was fortunate that I was working for someone who was an extraordinary mentor and boss who's still a good friend of mine who was directing intergovernmental affairs at HHS. That’s one of the most important jobs at that agency because it deals with states and localities, and the programs run by HHS at the time were all state-federal cooperative programs such as Medicare, Medicaid, and then AFDC which was turned into something else. You’re with the states and the states are the ones implementing the programs. So that was an incredible education in how policymaking works.

Then I went back to the White House and worked for Al Gore, but not on tech policy.

 

Robin Lindley: What were you doing for Vice President Gore?

Professor Margaret O’Mara: It was urban-focused economic policy. I worked on the Empowerment Zone program, which was this program to recapitalize urban neighborhoods that had been redlined. That was a centerpiece of Clinton's urban policy, and it was a program given to Gore for his portfolio. It was really interesting because it involved a whole different set of domestic programs and agencies. 

There was all kinds of targeting of communities for Empowerment Zones. There was one in Harlem. Local coalitions would apply for and get these zone [designations], and then get access to special benefits such as tax breaks and incentives and programmatic support from a whole host of different agencies. It was supposed to both provide more social capital to the local organizations on the ground who were trying to rebuild the social infrastructure of these communities and also create incentives for private sector capital to invest in real estate development and infrastructure development and all sorts of other things. 

 

Robin Lindley: That was a very important program, especially for inner cities.

Professor Margaret O’Mara: The verdict on how well that worked is still out. Historians are turning their attention to these programs and finding rather problematic and mixed results. Timothy Weaver’s Blazing the Neoliberal Trail is one example. We're still trying to figure out how to thread that needle of uneven capital investment and if capital investment in a poor area also means gentrification and displacement. So that again was another education. 

I think cumulatively the experience gave me an appreciation of not only how politics works, but also how power works, and an appreciation for the essential humanity of people who are in very powerful positions, who are simply human beings trying to figure things out and sometimes they make good decisions and sometimes they make wrongheaded decisions. Generally speaking, presidents and political leaders are trying to do the best they can in terms of implementing agendas that they think are important. There have been exceptions, but [this experience] continues to shape the way that I write about history and the way I teach history. 

I worked with Gore for a couple of years and I decided during that time that I didn't want to work in Washington or work in the hurly-burly of political life for my career. I loved writing and research and I wanted time to reflect on how these policies got to be the way they are and how the political landscape grew. 

 

Robin Lindley: And then you went to graduate school in history at the University of Pennsylvania.

Professor Margaret O’Mara: Yes. The thing about Washington DC is it's all reactive. By necessity, you're just ricocheting from one thing to another. The wonderful thing about the scholarly world is you have an opportunity to be reflective and proactive, so you can sit back, you can read lots of books, you can think about how these pieces fit together. And then you can produce scholarship, right? You're not reacting to the news of the day. You’re thinking in a more measured and long-term way. That's how I made the rather strange decision to transition from politics to grad school. 

 

Robin Lindley: It seems that urban history was your primary focus in grad school. Your doctoral dissertation was award-winning and published as a book, Cities of Knowledge. 

Professor Margaret O’Mara: Urban history was my interest, and, at first, high tech was not at all on my radar screen. I'd come from working chiefly on programs that served poor people and were seeking to address poverty.  I came to grad school assuming that I was going to continue my work on that. I went to work with the late Michael B. Katz who was an extraordinary scholar of poverty and social inequality. 

When I embarked on the dissertation project, I knew I wanted to look at the American economy of the 1950s and suburbanization and poverty, as well as look at the world and economic geography of the US before the War on Poverty. Then, I started thinking about the role of federal economic policy. What was federal economic development public policy during this time? There were certainly things like the Area Redevelopment Act and efforts targeted toward poorer parts of the country that were designed to remedy their economic situation. But really the Big Kahuna was not an economic development policy at all. It was the military industrial complex. Then I knew what I wanted to do. 

 

Robin Lindley: And universities were at the center of your research for your dissertation. 

Professor Margaret O’Mara: The great lesson I learned early on was don't ever have too many preconceived ideas about what your dissertation is going to be about and what it's going to discover and what it's going to conclude. It's very tempting to say, I'm going to show that X happens. 

I learned from that first project that the questions I was asking were not all the questions I needed to ask, and that the archives told me things and led me in places I hadn't expected. So, it became a book about universities as economic engines. It became a book about the transformation of American higher education. It became a book that was about the West Coast of the United States, a part of the country that I had not lived in, and I had not really spent much time in before I started writing about it. And it became about the origins of the technology industry, and I was not a historian of technology. I was a political historian. I was someone who was interested in policy.

 

Robin Lindley: How do you prefer to be seen as a historian now? You have a background in urban history, political history, presidential history, and now tech history.

Professor Margaret O’Mara: I'm a historian of modern America. I'm interested in how the private and public sectors interact across time and space. I'm a political and economic historian and, in doing that, a historian of cities as sites of particular forms of economic production. I think they're all intertwined. I find my home in sub-disciplines. 

The different playgrounds I play in are political history, urban history and technology history, although I should be quite clear that I'm not a historian of technology in the way of historians trained in history of science and technology who have a much deeper, more granular sense of the technological dynamics and the science itself.  I'm a science, technology and society person, broadly defined. 

I'm going to continue to resist being just one thing. For a while I felt I needed to choose a lane. I wrote a book about presidents and I followed up with a book about high tech and it seemed like they were disparate subjects, but really they're all tightly connected. 

 

Robin Lindley: You certainly provide historical context and illuminate interconnections between politics, culture and economics in The Code.

Professor Margaret O’Mara: I wrote The Code the way I did to show that when we interlace political history, social history, business history and technology history, new insights emerge about each of those domains because we understand the relationship of each with the others. To look particularly at the phenomenon of the modern American technology industry and Silicon Valley as a place and an industry without considering the broader political and cultural currents is too limiting. 

To make the tech industry a sidebar in the world of 2019 seems absurd. It’s central. And the way it got to be so central was because it has been intertwined all along. It's never been separated. It's never been a sidebar. It's never been a bunch of wacky guys out in California doing their thing to be different. They weren't that different. They were different in distinctive ways, but their differences were constructed by and enabled by American culture, the broader currents in American culture. I think people who are students of cultural history, intellectual history, social history, political history, and urban history can all gain from this understanding of the history of the technology industry and of Silicon Valley in particular. 

 

Robin Lindley: How did The Code evolve from your initial plan? Did you envision this comprehensive history of Silicon Valley or did you have something else in mind?

Professor Margaret O’Mara: I went into this book because, ever since I wrote Cities of Knowledge, I was asked what's Silicon Valley's magic formula? How did it come to be? Only one part of that book was about Silicon Valley, and that narrative ends around 1970. I set out to answer those two questions. Initially I was going to focus on the seventies through the dot-com boom. And then I was thinking very much in terms of writing a political history of that era and the role of politics and policy in the growth of the tech industry. And as soon as I set out, I realized I was going to have to first go further back in time for the story to make sense.

There were a lot of things that I had reflected on since the publication of Cities of Knowledge that had broadened and deepened my analysis of the origins of the Valley, and I wanted to bring that in. And you can't really start in 1970 without explaining how all these players got there. And, as I kept on going, I was encouraged to push it to our present day because one of the things that has happened, and I think that the book really makes clear, is how the scale and the scope and the speed of tech went into hyperspace after 2000. After the dot-com bust, you see the growth of new companies and new industries that are of a different order of magnitude. Yet the culture has some of the same persistent patterns. 

What I realized when I was finishing the book and about to send off the manuscript in October 2018 was that this was an explicit explanation of how we got to now with big tech and how we now have these big five companies: Apple, Amazon, Facebook, Google and Microsoft. And this book was not only for people inside the technology industry, and not only scholars, but it was for everyone who uses these technologies and these platforms, which are pretty inescapable. 

It's very hard to navigate life in modern America without in some way using one of the products of the big five. Even if you choose to turn everything off, this stuff is touching you whether you know it or not. And it may not be obvious to the reader but, from the very first page, when I start the narrative in the 1940s, I wanted to make sure that there were continuing threads of ideas and processes that take us all the way to the present. Take the idea about connecting people and making the world more open and connected, a mantra repeatedly invoked by Mark Zuckerberg of Facebook. It has its origins deep in the past. I also wanted to show where a particularly important element of the tech story, the practice of high-tech venture capital, began and how the venture capital industry shaped what was possible in tech, including who got to be a technologist. 

 

Robin Lindley: Thanks for explaining that process. A major theme in your book is how Silicon Valley grew because of a flood of government money and other public support such as tax breaks, favorable trade deals, etc. You offer a counterpoint to a popular perception that individual entrepreneurs such as Steve Jobs alone created the flourishing tech industry. 

Professor Margaret O’Mara: Here is where you've had the presence of government and politics and policy all along. It’s never gone away, and not just with the Defense Department and NASA, but with other matters. The government nudged the tax code favorably in the tech industry’s direction. 

There is a reason that this industry rose so high and for so long. It was treated politically as a golden child. Every city wanted a high-tech employer. And every lawmaker in Washington thought these companies, until recently, were the prime example of great American companies that they held up and celebrated. 

And now, that mood has shifted dramatically. So, it's been so interesting. When I started this book, everyone was still pretty rah-rah on tech. It was still the golden years of the Obama era, when Obama was doing town halls at Facebook and all seemed so great and so hopeful. And now it's so dark. 

For every scholar, if you're doing your job, you are a gentle critic. If you're deconstructing myths that people like to tell about themselves, you're speaking truth to power to some degree. Now I sometimes find myself saying, slow down a minute, and let's think about this. We're using these devices and there have been extraordinary technological advances that have made [some situations] better for humankind. At the same time, they also have brought these other very serious consequences. Let's take a more measured and historical view of it. 

 

Robin Lindley: I appreciate that you planned to write for a general audience. Frankly, I was somewhat intimidated by this big book on tech. Thank you for making this history so lively and engaging. 

Professor Margaret O’Mara: Thanks. I went into this project knowing I wanted to write a trade book, not an academic book. I wanted it to be for a general audience because I felt that there was a need for a comprehensive history of Silicon Valley that connected the deeper past to the present. 

This was the book that I wished existed in 1999 when I first embarked on my dissertation research and moved out to California and felt like the blind man and the elephant. I was getting little pieces of this history, but I couldn’t put it all together. I didn’t quite understand how all these things were connected. And after 20 years, I decided to write it myself, and I think there's an important need for it now. 

I like to think I was writing a book about technology for people like me, meaning people who are not technologists, and for people like the me of 1999: non-technologists interested in history and policy, interested in social history, interested more broadly in the past. People who are technology users but don't really understand how it works. 

To help readers, I wanted to write something that was neither cheerleading (aren't these guys great?) nor condemnation (isn't it terrible, burn it all down). I hope that I got the tone right. It's a work of history and, as historians, we aren't supposed to be writing jeremiads. Our job is to do the best we can to build an archive and write from that. That's really what I did. 

 

Robin Lindley: Speaking of building an archive, what was your research process? I imagine that you had to start from scratch just to find many materials. Did you find archives on this relatively recent history of technology?

Professor Margaret O’Mara: I had to build my own archive. One of the challenges is that it's such recent history and another challenge is that companies that are busy building the future aren't really big on archives. They don't get that they should be saving stuff. And when you get to a big company that actually has resourced it out and has an archive, in many cases, they are closed to the public or they are extremely restricted in what they show people and what you can use. So there's limited utility there.  

At the same time, I was fortunate in that people around tech funded and participated in a number of really robust oral history projects. There is one on venture capitalists at the University of California, Berkeley, that was funded by venture capitalists. It has a lot of interviews and oral histories with VCs performed by a trained oral historian, which I am not. I was so grateful for that archive. The Computer History Museum in Mountain View, California, has an extensive and growing oral history collection. And the professional organization, the IEEE, has a lot of oral histories that they have both recorded and transcribed. Many of these are available digitally. So those [archives] are incredible resources for anyone doing this. 

I will say that a lot of the questions being asked in the oral histories were, rightfully so, about the technology itself and the development of the technology. I was really interested in understanding more about the social conditions. Hey, what was it like for a woman in tech? What was it like living in Palo Alto in 1965? Tell me about what you were doing after hours.

I wanted to know about the things that were often less visible, about business operations and organization, which the venture capitalist oral history project is very useful for. There you're talking about financing from banks. And that was very helpful. 

 

Robin Lindley: And you interviewed dozens of people as part of your research.

Professor Margaret O’Mara: The interviews helped me better understand the network itself. Like who's friends with whom?  I would interview someone, they would add, Oh, you should talk to my friend so-and-so. And I'm like, how do you guys know each other? Oh, we've known each other since 1972, and this is how you reach him or her. I was also interested in talking with people whose voices had not been represented in the archive. 

In these conversations, I also asked about politics. There was almost nothing I could find about lobbying trips that electronics executives took to Washington and who they met with, so I talked with former politicians and with people involved in the lobbying alongside the executives. By and large, the CEOs themselves would go and lobby, and that was part of their power. A group of high-tech CEOs went to DC in the early eighties and lobbied for changes in trade policy because they were getting slaughtered by Japan in the chip market. 

They were doing personal, one-on-one lobbying. So very interesting. For that, I relied a lot on newspaper and magazine reporting. I think I've read every issue of Business Week between 1978 and 1982. I'm overstating it, but I did call up back issues from the library and they were not digitized, so I had a giant stack of volumes. It was actually quite useful to sort through pages of the magazines and see what's proximate to what, and then what’s on the front page. 

 

Robin Lindley: You probably saw the faces of rising tech luminaries on many older magazine covers.

Professor Margaret O’Mara: That’s right. So you can see how it was growing and who was reporting on it and why and when. 

Then I had tons of books from the period, like journalism books about economic competition and technology. I was spending a lot of time on sites like Powell’s and Amazon just finding used books that you can buy for a penny because that was the only place I could find them. There also were some obscure journalistic books about the trade war with Japan and stuff like that. I have boxes full of books, actually, that I’ll give to a foundation for other historians. 

 

Robin Lindley: The Code is sure to become a major reference for other historians who research technology and economics.

Professor Margaret O’Mara: I really wanted to put a trail of breadcrumbs in this book for future historians to pick up on because there's a lot more to be written. 

 

Robin Lindley: Why did technology flourish in Silicon Valley? I understand from The Code and Cities of Knowledge that Stanford was a hub for this development, particularly because of its innovative engineering school. What else attracted people to the area initially in the post Second World War era? 

Professor Margaret O’Mara: They come out for jobs in electronics. So first, you have this agricultural valley. Stanford is there and it's pretty good, but it wants to be really good. Fred Terman, who is a student of Vannevar Bush, comes back home to the Stanford faculty. He says, all this federal money will be flooding us and it’s best to spend this money on science research and we need to be ready for it. He completely reworks the curriculum and builds up physics and engineering and builds up these big labs. He gets these big federal contracts and is also at the same time courting industry. 

And there was great weather, lots of open space, as well as ongoing aeronautics and military projects in the vicinity. And also, as I wrote about in Cities of Knowledge, the Defense Department was incentivizing these big defense contractors to decentralize and not have all their operations in one place so, if a Soviet bomb came, it wouldn't wipe out the whole joint. That’s why Lockheed moved its missile and space division to Sunnyvale. 

There were these twin magnets in the Valley. You had Stanford, which was on the make and working really hard to bring in federal money and upgrade its place in the hierarchy of universities. And you had Lockheed, which was hiring thousands of electrical engineers to work on missile and space projects. 

And from the get-go, the nascent tech industry was already there before the war, specializing in oscillators and communications technologies like radar and microwave radio. Those are the building blocks of the modern computer revolution, with the miniaturization of electronics, right? Once the transistor was invented (not in Silicon Valley but at Bell Labs in 1947), the capacity for electronics to get smaller and smaller and more powerful starts amping up.

Then you have communications technology. First there was time sharing and then there was the internet. 

At this time, Seattle was building airplanes and Boston was the hub of computing. There was no computing industry in the Valley for a long time. It was all East Coast. But once you had these twin magnets in this agricultural Valley, you start seeing East Coast electronics firms opening labs and satellite facilities in and around there to take advantage of all these smart young men coming out of Stanford or the offshoots of Lockheed. 

By the end of the fifties, the Valley was not Silicon Valley yet, but it was known as a hub of small electronics. If you wanted to look for electrical engineers, you needed to go to California. There was this new symbiosis. The young men who came out there arrived when it was still remote. It wasn’t that close to San Francisco. There was nothing going on there. Just two bars, and it was just boring. By and large, they were not people with connections or rich fathers or guys with Ivy League degrees. If they could have gotten a job at a Fortune 500 company or worked their way up in their father's law firm or bank, they wouldn't have gone all the way out to California. The guys who came out were middle-class boys. Many of them were scholarship students and smart engineers who didn't have family connections and didn't have personal wealth, even though many of them became very wealthy later. 

And they didn't come into the game with money, but they were lucky. They were privileged. They were white, they were male, they were native born. They were middle or lower middle class but they were college educated. So that set them apart. And they were coming out when all of the winds were blowing in their direction. 

If you were a smart MIT- or Stanford-trained engineer in the fifties, the world was your oyster. The Cold War was creating this huge demand for people just like them. They've got their pick of where to work. They worked in companies like Sylvania or Litton or other companies that no one remembers anymore. 

Some went to Lockheed, which was the biggest employer in the Valley through the 1980s. This is not really recognized, partly because almost everything they did was top secret and no one could talk about it. They couldn’t write magazine cover stories about top secret missile research, so there wasn't buzz about it like there was on the commercial side.

So that's the beginning. That's how all these people started.     

 

Robin Lindley: You vividly bring to life the daily activities of the workforce of mostly white male tech experts. You also mention some outstanding women in tech. What would you like readers to know about the role of women in Silicon Valley? 

Professor Margaret O’Mara: That there have been women there all along. The early Valley was a manufacturing region, filled with microchip fabrication plants and the rest, and that workforce was heavily feminized as well as being disproportionately Asian-American and Latina. Women who started their careers picking and canning fruit when the Valley was mostly an agricultural region then shifted over into electronics production as the industry grew. (The fiercely anti-union stance of the tech companies, however, meant that these jobs were not unionized, nor did workers often share in the benefits given to white-collar workers, like stock options.)

The early days of computer programming involved a heavily female workforce, in good measure because coding was seen as something rote, simple, unskilled. All the art, and all the money, was in the hardware. Even as a software business started to bloom in the 1970s and early 80s, however, a good number of technical women remained in the industry simply because the pool of trained people was smaller and a growing industry was desperate for programmers. 

I should also add that another critically important group of women in the Valley were the wives of the male engineers and executives who pulled long hours in semiconductor firms and other companies. The work-hard, play-hard atmosphere of the industry was made possible by the fact that most of these men had wives at home who were keeping everything else running, caring for children and household, so that the husbands could throw themselves into their work. In short, women have always been part of the tech story. They just haven’t gotten much of the glory.

 

Robin Lindley: And Silicon Valley eventually eclipsed the traditional research hub in Boston. Was that because of the very different cultures?

Professor Margaret O’Mara: They had different cultures, but something I came to appreciate in the process of writing this book was the symbiotic relationship between Boston and the Bay Area. It’s similar in some ways to the symbiotic relationship now between Seattle and the Bay Area where these two [tech centers] now are. They are competitors, but they share people who ping back and forth and money that goes back and forth. And the same thing with Boston and the Bay Area. You see people going from MIT to Stanford and back to MIT. 

Stanford and the Bay Area had the weather advantage, so people tended to move West and not go back East. But the capital was still East Coast centered until the eighties, when you had tech venture capital, the money guys, out West. And there was a lot of venture capital investing in high tech. These guys were all over. Some were investing in Chicago, on the East Coast, in the Midwest. 

The decisive move West was not really until the late eighties with the death of the minicomputer industry and the swift decline of Digital and Wang, which were two big players in Boston. At the same time, the end of the Cold War shook the Boston economy more. It was more defense dependent than the Bay Area by then. Both areas were shaken by the end of the Cold War, but Boston didn't recover, and it didn't have a second act after minicomputers. It didn't have high tech venture capitalists or entrepreneurs that were then going on to found other companies. It didn't have that multigenerational dimension. It just had one big act, and that was it, although there's still plenty going on there now in biotech. 

So Boston's still very much an important tech hub, but it's not what you have in the Valley. I think that's where the culture comes in. What develops in the Valley develops partially in isolation. I call it an “entrepreneurial Galapagos” because of the isolation. You have these strange species, such as law firms specializing in high-tech startups, like Wilson Sonsini, that are figuring out how you structure a corporation founded by a couple of 22-year-olds who have no experience [as in the case of Apple]. You have high-tech venture capitalists that are not just providing money, but are providing very hands-on mentorship and executive direction to these companies. And in fact, they staff them up. Basically, the VCs swoop in and bring in the rest of the executive team and the adult supervision. They connect these new companies into the network, and that becomes this multigenerational thing.

And then you have the fact that the Valley has been specializing from day one in small electronics and communications devices. At the beginning of the commercial internet, it's perfectly poised to be the dominant place in that space even though the internet was not invented in the Valley, but was a Department of Defense creation. But the Valley researchers and technologists were at the forefront of miniaturization of digital technology and digital communication and software and hardware that enabled communication since the very beginning. 

 

Robin Lindley: Speaking of the internet, we know that Al Gore didn’t invent the internet, but wasn’t he largely responsible for bringing this technology from the military and academia to consumers?

Professor Margaret O’Mara: Al Gore was one of the few politicians in Washington in the 1980s and 1990s who really took the time to learn about and understand the industry and where it was going. Newt Gingrich was another. And Gore’s great contribution was pushing forward the commercialization of the Internet in the early 1990s, opening it up to all kinds of users and allowing it to become a place of buying and selling. 

The Internet had been around for over 20 years by then, but it was a noncommercial space, restricted for most of its existence to academics and defense-sector government employees. As a senator, Gore sponsored legislation that gave the Internet backbone the computing power it needed to scale up into a commercial network and supported the opening of the Internet to commercial enterprises. As Vice President, he led the push to write the rules of the road for the network, which resulted in the protocols and standards that govern its use today, as well as in Internet companies being quite loosely regulated. This allowed the dot-com boom and the social media and search platforms that followed but, as we now see, it had consequences that few could have anticipated in the early 1990s. 

 

Robin Lindley: You write that Seattle and Silicon Valley are part of the same whole. How do you see that relationship?

Professor Margaret O’Mara: I talk a lot in the book about Amazon and Microsoft and the evolution of those companies because they're very important now. One reason I do that is you see how, from the very beginning, both companies had very close ties to the Bay Area and that every element of the Seattle innovation ecosystem has connections here that are really important. 

You have very early venture capital money from the Valley that capitalized Microsoft. You have the same for Amazon. And then it goes the other way. Jeff Bezos personally invested in Google at a very early stage. And you have this crisscrossing of people and capital and expertise that’s just a two-hour flight away. 

My theory is that one reason the venture capital community hasn't gotten bigger in Seattle than one would expect is partially because it's easy to fly down and raise money. And now Seattle is getting some benefit from the overcrowding and saturation of the Bay Area because it’s harder and harder to live there. So people are coming up to Seattle. We'll see what happens. 

 

Robin Lindley: That’s an illustration of the importance of the free movement of people in America, as you stress. You also discuss how immigration shaped the tech industry, especially after the 1965 Immigration and Nationality Act. How was immigration important to the development of Silicon Valley?

Professor Margaret O’Mara: Critically important. In the book, I highlight the 1965 Hart-Celler Act, the immigration reform that ended more than 40 years of racially restrictive quotas on foreign immigration and made possible whole new streams of immigration from Asia, Latin America, and the rest of the world. Many of them, particularly immigrants from East and South Asia, came to Silicon Valley.

Even before that reform, immigrants and refugees were critical parts of the tech story. Take Andy Grove, legendary CEO of Intel, who came here as a 20-year-old refugee from Hungary, speaking little English and undoubtedly doing little to impress the immigration officials processing his entry paperwork when he arrived in 1956. Or Yahoo founder Jerry Yang, the California-raised son of a Taiwanese single mom. Or Sergey Brin, son of refugees from Soviet Russia. The list goes on and on.

 

Robin Lindley: What are your thoughts on regulation or other measures to address big tech as concerns deepen about monopolization, disinformation, privacy, and other issues?

Professor Margaret O’Mara: It’s up to lawmakers to decide the best path going forward, but this history is critical to helping them make informed decisions about how to do so. And American history, more broadly, provides instructive insight into understanding this moment.  

Over a century ago, Washington DC and the states were beset by similar debates about how to rein in the power of giant corporations and their billionaire CEOs. Then the industries in question were railroads, oil, and steel. Now it’s social media and search and e-commerce and cloud computing. But the basic questions of fairness, competition, and finding the right balance between capitalist enterprise and government guardrails remain.  

 

Robin Lindley: I wanted to close with your perspective as a historian. You have said that history makes you an optimist. That may be an unusual posture for a historian in view of the innumerable accounts of disaster, war, and injustice that you study. 

Professor Margaret O’Mara: I think history makes you a realist and it can make you an optimist. And a real, very important thing for historians who teach history and who care about history is that we need to interrogate and deconstruct narratives that don't actually align with historical truth. And we must discuss times when we didn’t live up to our ideals, and people who've long been marginalized, and disempowered voices, and the privileging of some voices over others. That’s what we call being a realist. We must be real. 

Particularly now, in thinking about American democracy and global democracy, you need to have realism, but you also have to give the people who are reading your history or listening to you in class help in understanding where they can find grounds for optimism as well as a realistic sense of the past.

 Facts can be empowering. Knowledge is power. We can use that power to think about and give tangible examples of how people spoke truth to power. There are examples of collective mobilization or individual actions that have had significant societal consequences. 

There are examples in American history of dark, violent, horrible, horrible moments in our past, and so many times in which America did not live up to the ideals it purports to stand for. And yet these are ideals that were laid out in the first place that we are asked to aspire to. There are examples of particular people who have been excluded by the way that these ideals have been executed in practice, and they fought against that exclusion and for having a voice and their rights.

I go back to the fact that a descendant of slaves was our last First Lady. And that tells us there's some progress, right? And here I am, a senior tenured female history professor at the University of Washington. You go back to the era of my great-grandmother and I would not even have been given a job. And, if I had been given a job, I certainly wouldn't have been given tenure or job security or the authority to speak in the way I now can with this platform. And I feel that's an incredible privilege I have.

So what can I do to use that in a way that lifts up as many other people as possible and inspires people to change the world that they see and make it a truly better place? I have spent a lot of time writing about people who yammered about making the world a better place. I think they believed that. There’s a desire that lies within the human heart to make the world a better place. I ask how society can be arranged in a way that is as fair and as just as possible, to advance that desire and to allow that human potential to be realized.

 

Robin Lindley: Thank you for your thoughtful and inspiring remarks Professor O’Mara and congratulations on your groundbreaking new book on Silicon Valley, The Code.

The History of Black Incarceration Is Longer Than You May Think

 

The United States contains less than 5 percent of the world’s population but incarcerates one-quarter of all prisoners across the globe. Statistics have long shown that persons of color make up a disproportionate share of the U.S. inmate population. African Americans are five times more likely than whites to serve time in prison. For drug offenses alone, they are imprisoned at rates ten times higher. 

 

Recent scholarship has explored the roots of modern mass incarceration. Launched in the 1980s, the war on drugs and the emergence of private, for-profit prison systems led to the imprisonment of many minorities. Other scholarship has shown that the modern mass incarceration of black Americans was preceded by a nineteenth-century surge in black imprisonment during the Reconstruction era. With the abolition of slavery in 1865, southern whites used the legal system and the carceral state to impose racial, social, and economic control over the newly liberated black population. The consequences were stark. In Louisiana, for example, two-thirds of the inmates in the state penitentiary in 1860 were white; just eight years later, two-thirds were black.

 

The incarceration of African Americans did not begin suddenly with the end of the Civil War, however. Confinement functioned as a punishment during bondage as well. Masters were the law on their own plantations and routinely administered their own brand of justice. Although they usually relied on the whip, countless enslavers also chained their human property in plantation dungeons below the main dwelling house or in a barn. Some locked enslaved persons in a hot box under the scorching southern sun. The more formal legal system, too, sometimes deposited enslaved individuals in state or local incarceration facilities. 

 

Charlotte, an enslaved woman from northern Virginia, experienced several of these institutions firsthand over a seventeen-year period. Using court records to trace her life illustrates the many official, lawful forms of imprisonment that the enslaved might encounter in the antebellum era.

 

In 1840, Charlotte was held in bondage in Clarke County, Virginia, west of Washington, D.C. She was only sixteen or eighteen years old, a dark-skinned, diminutive young woman, standing just four feet eleven inches tall. Legally, she was the property of Eliza Pine, a white woman whom Charlotte despised. Reportedly thinking that committing a crime would prompt Pine to sell her, on March 10, Charlotte set fire to a house in the town of Berryville. She was arrested for starting the blaze and placed in the local jail as she awaited trial.

 

Enslaved people were imprisoned briefly in local public jails or workhouses under a variety of circumstances. Masters sometimes made use of such facilities to punish bondpeople deemed troublesome or, if needed, to store them securely. Enslaved individuals apprehended as runaways or awaiting trial or sale at auction also saw the inside of city or county jail cells. In all of these instances, the enslaved usually measured their terms of incarceration in just days or weeks.

 

Even that was too long for most slave owners. Local jails were notoriously overcrowded, damp, and disease-ridden. The deplorable conditions inside endangered inmates’ health and imperiled their lives. Consequently, most masters preferred to keep their valuable human property out of jail.

 

Charlotte was taken out of her cell for trial on Monday, March 23, 1840. Although she pleaded not guilty, the five Clarke County justices who heard her case convicted her of arson – a capital crime – and sentenced her to hang. They scheduled Charlotte’s date with the gallows for Friday, June 26, between the hours of 10 a.m. and 2 p.m. They valued her at $500, which represented the amount her owner would receive from the commonwealth of Virginia as compensation for the loss of the valuable young bondwoman. As customary, after the trial, authorities escorted Charlotte back to her cell in the Clarke County jail. There she would bide her remaining days until her planned execution.

 

Meanwhile, whites in Clarke County labored to prevent Charlotte’s impending doom. The five justices who had convicted her, in fact, recommended at the time of the verdict that Virginia governor David Campbell commute Charlotte’s punishment to sale and transportation outside the limits of the United States – a lawful alternative to hanging – due to her “Youth and evident Simplicity.” They sent the governor a separate petition as well, also signed by the prosecuting attorney at Charlotte’s trial. Two other petitions from dozens of citizens of Berryville and the surrounding area likewise reached the Virginia governor. Citing Charlotte’s youthful age and purported deficiency in intellect, they begged for executive mercy on her behalf.

 

Newly inaugurated Virginia governor Thomas Walker Gilmer viewed Charlotte’s case sympathetically and issued the desired reprieve. Since Charlotte would now be sent outside of the United States, authorities transferred her to the Virginia State Penitentiary in Richmond, where she and other enslaved convicts awaited purchase by a slave trader willing to carry them out of the country for sale. She was admitted on April 15. In her new prison world, Charlotte listened for her name at roll call each morning, wore prison garb, swept her cell daily, ate carefully doled out rations, and labored for the commonwealth, all the while struggling to avoid punishment and disease.

 

Enslaved people like Charlotte rarely saw the inside of a penitentiary in the pre-Civil War South. Maryland sentenced bondpeople to the penitentiary from its opening in 1812 until 1819, taking in some sixty slaves during those years. Arkansas permitted the imprisonment of enslaved convicts in the state penitentiary for certain, specified crimes only briefly, before changing the law in 1858. After 1819, only the state of Louisiana habitually punished enslaved criminals with prolonged sentences in the penitentiary, usually for life. Virginia courts did not sentence enslaved people directly to confinement in the penitentiary, although the commonwealth did house on a temporary basis those individuals such as Charlotte as the process of sale and transportation outside of the United States unfolded. Virginia bondpeople typically spent only months to a year or two in the penitentiary before being purchased by a slave trader.

 

Charlotte remained in the Virginia State Penitentiary for five months before she was bought by a slave trader willing to carry her out of the country. On September 16, Rudolph Littlejohn, an agent for Washington, D.C., slave dealer William H. Williams, took delivery of her and twenty-six other enslaved captives. Altogether, Williams and a partner paid the commonwealth $12,500 for the lot.

 

Charlotte and the other enslaved transports first made their way to Williams’ private slave jail in Washington, D.C., known as the Yellow House. This was the same establishment, just south of the National Mall and within easy sight of the U.S. Capitol, where the kidnapped free black man Solomon Northup would find himself enchained in a basement dungeon the following year. Williams and his agents purchased enslaved people from throughout the Chesapeake and stored them in the Yellow House until they had assembled and prepared a full shipment for sale in the Deep South, where enslaved people were in high demand and attracted high prices. New Orleans was the usual port of destination.

 

After less than a month, William H. Williams had gathered enough enslaved men and women to fill a ship. His slaving voyage set sail from Alexandria aboard the brig Uncas on October 10, with sixty-eight total captives on board, including the enslaved convicts purchased in Richmond. On November 1, Williams and his human cargo arrived in New Orleans.

 

Authorities in New Orleans had been warned, however, that Williams might appear in their city with enslaved convicts in tow, a violation of a Louisiana state law passed in 1817 that prohibited the introduction of enslaved criminals. When officials spotted Williams, they confirmed the criminal pasts of the transports, some of whom had been convicted of violent offenses against whites. Concerned for the safety of Louisiana’s citizens, the New Orleans Day Police confiscated the convict bondpeople and carried them to the Watch House at city hall for safekeeping. Charlotte and the other convicts entered yet another jail.

 

Williams protested that he was merely passing through Louisiana en route to Texas, in 1840 a foreign country eligible to receive enslaved convicts. As Williams launched his defense in the Louisiana court system, Charlotte and the other transports were transferred from the Watch House to the recently completed Orleans Parish prison. Several of the convicted bondpeople from Virginia remained there for years as Williams pursued his case; others ended up in various other incarceration facilities within Louisiana. Litigation continued, and years elapsed before the Louisiana Supreme Court ultimately ruled against the slave trader.

 

Soon thereafter, on March 13, 1845, Charlotte and nine of her fellow transports were transferred to the Louisiana State Penitentiary in Baton Rouge. Listed as “forfeited to the state,” their new master was the state of Louisiana. Some two hundred enslaved people were held in the Louisiana State Penitentiary in the antebellum decades. While enslaved male inmates toiled in the brickyard or cotton factory for the penitentiary lessees, Charlotte and the other female convicts did the washing and mending in the prison laundry. Prisoners at the penitentiary donned the convict’s uniform, which included an iron ring around the leg, linked by an iron chain to a belt around the waist. The penitentiary itself consisted of a three-story brick structure. Prison guards deposited inmates in cramped, individual cells, three and one-half feet wide and seven feet deep, secured by an iron door, poorly ventilated, and unheated in the winter. Prisoners slept on mattresses placed on the floor and, at mealtime, ate mush and molasses from a tin plate in their cell, in the dark and alone. Eventually, overcrowding at the institution forced inmates to share the space with another prisoner, although separate accommodations were built for female convicts in 1856.

 

Segregation by sex or race was never perfect during the antebellum decades. Imprisoned bondwomen routinely bore offspring more than nine months after they entered the penitentiary. Charlotte gave birth while in prison to three children – John, Mary Ann, and Harriet – before January 1855. The identity and race of the father or fathers are unknown, the circumstances surrounding conception uncertain. With both black and white men among the prison population, enslaved women may have willingly participated, in spite of vigilant officials, in loving relationships or clandestine affairs with fellow prisoners. At least as likely, female convicts proved captive, convenient, and vulnerable targets for the unwanted advances of inmates, coercive white guards, or other penitentiary authorities who wielded power over them. The prospect of rape was ever-present. At the same time, it is possible that the relatively few enslaved women in the Louisiana State Penitentiary were able to leverage their sexuality to extract various favors from those in charge or from inmates able to smuggle in goods from the outside. Given the range of possible encounters, Charlotte’s son and daughters may have been the products of consensual acts, forced sex, coercion, or some combination thereof.

 

A Louisiana law of 1848, unique among the slaveholding states, declared that children born to enslaved female prisoners confined in the penitentiary belonged to the state. An act of 1829 forbade the sale of enslaved children under the age of ten away from their mothers, however, so the state was legally obligated to keep them together until the child’s tenth birthday. At that time, the state could seize the youngster as state property and auction him or her off to the highest bidder. The proceeds of such sales went to the free school fund, to finance the education of Louisiana’s white schoolchildren.

 

Charlotte and her children met a different fate. Slave trader William H. Williams spent years lobbying the Louisiana state legislature for the return of the enslaved convicts confiscated from him in November 1840. Finally, in 1855 and 1856, lawmakers passed a pair of individual acts for his relief. By the terms of these agreements, Williams regained possession of the surviving enslaved transports from Virginia as well as the “issue” – the children – born to the enslaved women of that shipment. On February 7, 1857, the Louisiana governor discharged Charlotte, her son and two daughters, and the other Virginia convicts from the penitentiary and restored them to Williams for sale to new owners. At that point, Charlotte can no longer be tracked in the historical record. Presumably, the only lingering form of imprisonment she suffered prior to emancipation was the institution of slavery itself.

 

Over the course of her lifetime, Charlotte was incarcerated, sequentially, in at least six different facilities: a local Clarke County jail, the Virginia State Penitentiary, the Yellow House slave pen, the New Orleans Watch House, the Orleans Parish prison, and a second state penitentiary, in Louisiana. Although a few bondwomen in Louisiana served prison terms in excess of two decades prior to abolition, Charlotte’s seventeen years in confinement ranked her among the longest-serving felons, black or white, in antebellum U.S. prison history.

 

Individually, Charlotte’s experience was unusual among the enslaved. Masters wanted their human property working profitably, not imprisoned, except perhaps briefly as a punishment that owners themselves determined. Charlotte’s story is nevertheless significant in demonstrating the range of carceral institutions to which black people were subjected even during slavery. With the exception of William H. Williams’ private slave jail, designed specifically to accommodate enslaved captives bound for sale, whites outnumbered blacks detained in all of these facilities in the antebellum decades, but African Americans were nevertheless present in them. By the outbreak of the Civil War, the seeds for the later mass incarceration of black people were already planted, the institutional structures already in place, and the precedents for black imprisonment already set. With the end of slavery, prisons were well positioned to transition from a secondary to a primary form of black oppression.

 

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173701 https://historynewsnetwork.org/article/173701 0
A Marvelous Christmas Carol

 

The Christmas season opened before you could even smell the turkeys on Thanksgiving Day and, thanks to New York’s new and wonderful play A Christmas Carol, it is a joyous season to behold.

 

This new A Christmas Carol, based on Charles Dickens’ novel, has a different look to it, a different musical score, a different Scrooge and different ghosts. But it is the same heart-warming story of cheap old Ebenezer Scrooge and how three ghosts appear from nowhere and help him change his miserable life. And, of course, it has Tiny Tim in all of his working-class glory.

 

God Rest Ye Merry Gentlemen, and God thank thee, Charles Dickens.

 

The fun in this new production at the Lyceum Theater, on W. 45th Street, that opened last week, starts before the show begins when more than a dozen men and women, dressed in mid-nineteenth-century London clothing, scatter through the audience handing out oranges and bags of cookies to whoever they can find. You didn’t get any? Don’t worry. The folks scramble up on to the stage and toss them out to different spots in the crowd (one 30ish woman was throwing perfect strikes to people way up in the balcony. The New York Yankees should sign her as a pitcher).

 

Then, solemnly, the Brits all re-appear with bells and play Christmas carols. Then, at long last, cranky old Ebenezer, the Cratchits and the old gang appear on stage and plunge into this delicious chestnut of a play with glee and joy.

 

The story is simple and just about everybody knows it. Scrooge is an old creep who hates everybody and dismisses Christmas with a big “humbug.” He is mean to his employee, hard-working Bob Cratchit, the father of fragile, crippled young Tiny Tim, who is going to die because his mom and dad have no money for doctors.

 

Scrooge is awakened from his sleep that night by his former partner, Marley, dead these long seven years now. Marley tells him that he will be visited by three ghosts representing Christmas past, Christmas present and, ominously, Christmas future. The ghosts, all women in this play, absolutely terrify Scrooge. They take him around London and remind him that in his youth he was a normal human being. He loved a woman, Belle, worked for a witty and warm man, Fezziwig, and had a fine nephew, Fred. Then he began to chase the almighty dollar and gave up his relationships with everybody. Oh, he became very rich. What did he have for all his money?  Well, not much. The ghosts point that out.

 

Can he be saved on this long and cold Christmas Eve or will he fly into the arms of Satan well below upon his death? He sees himself dead in the play and shivers.  

 

In ghost round three, Scrooge struggles to keep his senses as he finally sees his life as a real heartless, soulless tragedy, and not a trip to the bank.

 

Playwright Jack Thorne has kept most of the legendary Scrooge story intact, but he has taken out some parts and added others. Example: most productions of A Christmas Carol are heavy on mid-nineteenth-century sets, large casts and plenty of onstage parties. Thorne keeps it simple and streamlines the story. He works hand in hand with gifted director Matthew Warchus to create a handsome new version of the classic play. A Christmas Carol has got to be the most produced play in the world during the holidays. It seems like everybody presents a version and the different film productions of it made in Hollywood are on television constantly. This new one on Broadway is clearly one of the best, and has all those bags of chocolate chip cookies flying through the air, too (I’ll bet that old Scrooge saw no “humbug” in chocolate chip cookies).

 

If you like history, you’ll enjoy this play about the 1850s era in England, and the movies, too. Dickens wrote it after observing numerous incidents of discrimination and persecution against poor people in London and studying the lives of children who persevered in the numerous workhouses of the day. A Christmas Carol is Christmas for the lower classes, but, really, Christmas for everybody.

 

Much of the success of the play must be attributed to the sterling performance of Campbell Scott as Scrooge. He is happy and sad and joyful and miserable. He has a forlorn movement to his step in much of the play, but a bouncy one at the end. Most importantly, he plays Scrooge as a 60ish man and not the very old grouch seen in most productions of the play. He is young enough to be saved, and young enough to save himself. Scott is just wonderful.

 

Director Warchus, who does a fine job recreating 1850s London, gets other fine performances from Andrea Martin and Lachanze as the ghosts, Dashiell Eaves as Bob Cratchit, Brandon Gill as nephew Fred, Evan Harrington as Fezziwig, Sarah Hunt as Belle, Dan Piering as young Ebenezer and Sebastian Ortiz as lovable little Tiny Tim.

 

So, a Merry Christmas to all of Scrooge’s friends in merry olde London town and a merry Christmas to all in America, too.

 

PRODUCTION: The play is produced by the Old Vic Theater, in London. Sets and Costumes: Rob Howell, Lighting: Hugh Vanstone, Sound:  Simon Baker. The play is directed by Matthew Warchus. It runs through January 2, 2020.

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173702 https://historynewsnetwork.org/article/173702 0
Fake News and the Founders: Get Used to It!

 

“American Nation Debauched by WASHINGTON!” screamed a newspaper headline before charging the Father of Our Country with “the foulest designs against the liberties of a people.” 

President Donald Trump would call it “fake news” and George Washington most certainly would agree. 

 

After he read Philadelphia’s Aurora in December 1796, President Washington blasted the story as “indecent … devoid of truth and fairness,” and most of America’s Founding Fathers concurred. Indeed, when South Carolina’s Charles Pinckney and Elbridge Gerry, of Massachusetts, proposed to the 1787 Constitutional Convention “that the liberty of the press should be inviolably observed,” Connecticut’s Roger Sherman responded angrily, “It is unnecessary!” Most delegates agreed, rejecting the proposal by a vote of seven to four. 

 

Four years later, however, the First Congress included free-press guarantees in the first of ten constitutional amendments, collectively called the Bill of Rights. Freed from government constraints, many newspapers used First Amendment rights to uproot government corruption, but others used them as licenses for libel, giving birth to “fake news” in America. 

 

After Washington left office, the press assailed his successor, President John Adams, with fake news that he was “a warmonger,” “insane,” and possibly “a hermaphrodite.” Adams’s successor Thomas Jefferson fared no better, as opposition newspapers tarred him as “an atheist, radical, and libertine” and “son of a half-breed Indian squaw sired by a Virginia mulatto.”

 

Jefferson had championed press freedom until fake news changed his thinking: “Nothing can now be believed which is seen in a newspaper,” he charged. “Truth itself becomes suspicious in that polluted vehicle.” The press had the last word, however, publishing the not-so-fake news of Jefferson’s sexual relationship with Sally Hemings, a slave girl Jefferson inherited. 

 

Fake news did not diminish as the nation matured. Indeed, it became entwined in the nation’s literary fabric. In the run-up to the 1828 presidential election, the Cincinnati Gazette “exposed” candidate Andrew Jackson, the hero of the Battle of New Orleans in the War of 1812, as a “murderer, swindler, adulterer, and traitor…. 

 

General Jackson’s mother was a COMMON PROSTITUTE, brought to this country by the British Soldiers! She afterwards married a MULATTO MAN, with whom she had several children, of which number General JACKSON IS ONE!!!

 

Americans ignored the fake news and elected Jackson President, but Rachel Jackson, the new President’s wife, suffered a heart attack and died before his inauguration.

 

Twenty years later, in 1848, fake news that “Canada’s woods are full of Chinese…ready to make a break for the United States” provoked American ‘49ers to run thousands of Chinese ‘49ers off the gold-laden Sierra Nevada mountains at gunpoint and steal their claims. As fake news of a “yellow menace” intensified, Congress passed the 1882 Chinese Exclusion Act barring Chinese entry into the United States for the next 60 years. 

 

Fake news about Asians grew more virulent after Japan’s December 1941 attack on Pearl Harbor. Newspapers across America clamored for expulsion of everyone of Japanese ancestry, insisting, “They are not to be trusted.” The Argus in Seattle predicted: “If any Japs are allowed to remain in this country, it might spell the greatest disaster in history.” The Bakersfield Californian concurred, claiming, “We have had enough experiences with Japs.” 

 

President Franklin D. Roosevelt responded with an executive order that sent anyone with at least 1/16th Japanese ancestry into concentration camps without trial or due process--120,000 in all, including 17,000 American children.

 

Since its first appearance during the early days of the republic, fake news has come in such a wide variety of forms that it is often difficult to identify. Some is born of innocent misreporting or failure to uncover all facets of a story, but as much or more results from bias in the form of misstatement, misreporting, or misinterpretation. Deliberate placement of a story on the front or inside page--or omission of a story--can also reflect bias by shaping the story’s impact on readers.


Aside from its ill effects on American politics, fake news can have dangerous consequences, as in 2011, when newspapers published a rogue scientist’s contention that vaccinations caused the diseases they were meant to prevent. So many parents responded by refusing to allow their children to be vaccinated that a nationwide epidemic of measles followed--years after compulsory universal vaccination had eradicated the disease.

 

When President Washington ended his second term in 1797, he had so tired of fake news by “infamous scribblers,” he rejected pleas to remain in office. “I am become a private citizen,” he wrote with joy, “under my own vine and fig tree, free from the intrigues of court” -- and, he might have added, “fake news.” 

 

© Harlow Giles Unger 2019

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173695 https://historynewsnetwork.org/article/173695 0
Russian Victories in the Post-Cold War Era

 

For over four decades following World War II, the Soviet Union engaged in a global Cold War with the U.S., aiming to destabilize America’s status as a world power. By any measure, the U.S.S.R. lost that war. But years later -- after the fall of the Berlin Wall and the dissolution of the Soviet bloc -- Russia, under the leadership of Vladimir Putin, sought a different approach to continue its battle against the United States. Unlike the previous outcome, Russia is clearly winning this new war! 

 

The original goal to chip away at American global dominance was fairly simple, but old Cold War tactics were mostly obsolete in the 21st century. Combining sophisticated misinformation and hacking initiatives with the artful use of old Cold War methods of espionage (especially targeting Americans who could be compromised), Russia under Putin has tallied some remarkable achievements beyond the wildest dreams of his predecessors, who tried unsuccessfully to undermine the United States. Russia’s timing was perfect, as Putin and his oligarchs put in place the pieces of a puzzle that have been wildly successful in weakening their Western nemesis.

 

Robert S. Mueller’s Report On The Investigation Into Russian Interference In The 2016 Presidential Election documents extensively the coordinated Russian cyber attacks intended to influence the American electorate with a disinformation campaign and to sow long-standing racial and other discord. This effort could only have its intended impact by aligning with a U.S. presidential candidate -- who by any calculation had little chance of being elected but who was obviously already compromised -- and hoping that somehow he could pull off an upset victory.

 

When Donald Trump was elected as U.S. President in November 2016 (to the great surprise of most Americans and probably to Putin’s astonishment as well), Russia achieved what no regime had ever achieved before. The golden prize was an American president, perhaps compromised far beyond what U.S. intelligence has revealed thus far – the leader of the Free World who has consistently advocated a pro-Russian agenda. This was a remarkable feat on the heels of an equally successful campaign to lure and reel in several close associates to Trump to do Russia’s bidding with the new president and his administration. 

 

Putin’s plan worked like magic: a U.S. president who at every step supports Russia’s international agenda and publicly advocates pro-Russian positions. The list grows every month of Trump’s efforts to bolster Russia: from inciting divisions within NATO, to the recent G7 Summit where the president tried to argue on Russia’s behalf for it to be readmitted to the group, and most recently the departure of U.S. armed forces from northern Syria, clearing the path for Russian dominance in the region. We can only wonder what information Putin has on Trump to make the president an ardent defender and enthusiastic pro-Russian advocate.

 

Putin’s Russia is winning battles to destabilize the U.S. that former USSR leaders such as Joseph Stalin, Nikita Khrushchev, and Leonid Brezhnev tried but failed to win. Russia’s “new” war against the U.S., weaponized with a president who every day undermines American democratic institutions, has opened an unprecedented front in the ongoing battles between the two nations. Who will win this war remains uncertain.

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173694 https://historynewsnetwork.org/article/173694 0
Overcoming Cold War Narratives: Remembering the Progressive Politics of Louis Adamic

 

From the 1930s through the 1950s, Louis Adamic was one of the best known journalists and immigrant rights activists in the United States. His editorials and columns championing immigrant and African American rights, workplace justice, and anti-colonialism appeared in the New York Times, The Nation, Harper’s Weekly, and the Saturday Evening Post. His advocacy of ethnic equality changed middle school, high school, and college curricula by encouraging teachers to recognize cultural differences as an asset for students to discuss not an obstacle to be overcome. Eleanor Roosevelt praised his efforts to fight nativist and racist policies in her syndicated newspaper column, My Day. In fact, her appreciation for his work resulted in Adamic and his wife being invited to dinner at the White House.

 

During World War II Adamic’s writings, speeches, and radio interviews about politics in his native Yugoslavia helped convince Americans to support communist guerrilla leader Josip Broz Tito’s Partisan fighters there. However, Adamic’s support of Tito led Federal Bureau of Investigation agents to surveil him until his death in 1951 because they labeled anyone who praised a Communist as a subversive. Simultaneously, Americans’ growing admiration for the Partisans convinced the Treasury Department to make Adamic one of its leading spokespeople for selling war bonds.

 

Adamic’s nuanced positions have been forgotten today because the binary nature of anticommunism in the mid-twentieth century made his progressive politics incomprehensible to a majority of Americans. Adamic urged Americans to embrace pluralism. The US, he argued, was not an Anglo Protestant nation, but a land of migrants and refugees continuously redefining themselves. Adamic called for a global embrace of pluralism. He pointed to Tito’s promise to his Serb, Slovene, Croat, Montenegrin, Bosnian Muslim, and Macedonian soldiers that victory over their Axis occupiers would result in a Yugoslavia based on ethnic equality. Adamic continued to support Tito after the Yugoslav leader broke from the Soviets in 1948. He hoped that the new Yugoslavia’s pluralist roots would ultimately lead Communist Yugoslavia to evolve into a true democratic republic. He believed both the Soviets and the Western powers were imperialists, and he predicted, correctly, that Yugoslavia would join with colonial nations to champion a world where countries did not have to choose an alliance with either the US or the USSR. During the early Cold War period Adamic’s thinking presented such a conundrum to the FBI that the agency created a special category of subversive for him, “a pro-Tito Communist.” At the same time the US poured millions of dollars into Yugoslavia hoping to exploit divisions between the Soviets and Tito.

 

As my research shows, most scholars have failed to grasp Adamic’s politics. He called himself a progressive, which for him meant supporting racial and ethnic equality, workers’ rights, and a foreign policy that granted nations the right to self-determination. He criticized both liberals and Communists as unreliable. Liberals, he charged, were “too wishy washy,” and Communists always followed the dictates of Moscow. He wanted to advance a globally conscious anti-colonial left. His views threatened the emerging Cold War consensus, and anticommunists of all stripes purposefully mischaracterized his positions in order to silence him. Anticommunist activists feared that the growing anti-colonial sentiment he and African American activists, non-Communist labor leftists, and peace activists advocated would challenge the growing contingent of Cold War liberals. 

 

Adamic’s quest to convince Americans to find an alternative to a Cold War ended on September 4, 1951. His local coroner ruled his death a suicide despite the fact that New Jersey State Police detectives suspected foul play. Prior to his death, Adamic had been moving from hotel to hotel in New York City while he worked on his thirteenth book, an account of his 1949 visit to Tito’s Yugoslavia. He went into hiding because he believed that former Croatian fascist soldiers (Ustaše), who came to the US as Displaced Persons, would follow through on their threats to kill him. The Nazis allowed the Ustaše, under the leadership of Ante Pavelić, to set up the Independent State of Croatia during World War II. Living in exile in Argentina after the war, Pavelić ordered assassins still loyal to his vision of Croatian nationalism to kill his enemies all over the world. According to interviews with Adamic’s friends and associates conducted by both FBI agents and journalists, Adamic believed they wanted to kill him for his role in convincing the American public to support Tito during World War II. The FBI agents who monitored Adamic noted that although a Soviet agent also threatened him, he only feared the Croatian fascists. By early September, he thought the danger had passed. He was clearly wrong. 

 

Adamic’s anticommunist critics used his death to portray him as an agent of Stalin who broke with the Kremlin by supporting Tito and suffered death as a consequence. Smearing Adamic as a Communist made him toxic. Teachers and college professors stopped assigning his books, and he and his progressive politics, rooted in his opposition to fascism, faded into obscurity.

 

Death to Fascism demonstrates that the reemergence of progressive politics today, and its links to antifascism, has a long history. For Adamic, fascism, at its root, was an ethos uniting disparate strands of conservative and reactionary thinking into an anti-Enlightenment counterrevolution that sought to destroy democracy by appealing to beliefs in racial superiority and glorifying violence. For democracy to truly be fascism’s antithesis, those claiming to fight for democracy then and now need to commit to his progressive agenda of racial and ethnic equality, workers’ rights, and the rights of nations to self-determination.     

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173691 https://historynewsnetwork.org/article/173691 0
Trump Skips ASEAN Summit, Continuing a Presidential Tradition

 

The Association of Southeast Asian Nations (ASEAN) held its 35th Summit in Thailand in early November 2019. ASEAN holds two summits annually. The second this year included meetings with Dialogue Partners including the United States. The 14th East Asia Summit (EAS), a gathering of countries initiated by ASEAN in 2005 to nurture an East Asian community, which the United States belatedly joined in 2011, was held back to back with it. 

 

On 29 October, the White House announced that President Trump would not attend the summit and that Robert O’Brien, the new National Security Advisor, and Secretary of Commerce Wilbur Ross would go instead. Trump also invited the leaders of ASEAN to meet in the United States for a “special summit” at “a time of mutual convenience in the first quarter of 2020”. The invitation reminds us of two previous invitations in the recent past.  

 

In May 2007, it was announced that then-US President George W. Bush would visit Singapore in September to attend the ASEAN-US commemorative summit marking 30 years of relations. However, in July it was reported that Bush would not be coming to Singapore after all and that the meeting of ASEAN leaders would be rescheduled “for a later date”. Compounding the disappointment, Secretary of State Condoleezza Rice also decided to skip the ASEAN Regional Forum (ARF) because of developments in the Middle East that required her attention. Her deputy, John Negroponte, represented her. Rice’s absence was certainly a “dampener”. 

 

Not surprisingly, the Southeast Asian countries felt that they were, to quote the late Surin Pitsuwan who would assume the post of Secretary-General of ASEAN the following year, “marginalised, ignored and given little attention” while Washington and other allies were “moving firmly and systematically to cultivate a closer and stronger relationship in the Asean region”. President Bush attempted to make up for the aborted meeting with ASEAN leaders in Singapore by inviting them to a meeting at his Texas ranch on a date convenient for all. Bush apparently reserved such invitations “as a diplomatic plum for close allies”, but in the end the meeting did not take place because of scheduling difficulties and disagreements over Myanmar. Bush also announced that Washington would appoint an ambassador to ASEAN “so that we can make sure that the ties we’ve established over the past years remain firmly entrenched”. However, there remained the feeling that these actions were an afterthought. 

After Barack Obama became president, many in Southeast Asia hoped that the United States under Obama would pay attention to their region. To a certain degree, Obama delivered,  but the Obama administration was also distracted by domestic politics and the financial crisis (beginning in 2008). President Obama made a final sprint to shore up US-Southeast Asia relations when he hosted the Sunnylands Special Summit in February 2016 - the first such summit between the US and ASEAN held in the United States. In a short statement issued at the end of two days of relatively informal talks, everyone reiterated their “firm adherence to a rules-based regional and international order” and “a shared commitment to peaceful resolution of disputes, including full respect for legal and diplomatic processes, without resorting to threat or use of force”. ISEAS Senior Fellow Malcolm Cook commented that “it was late-term exercise in symbolism over substance, lacking any clear affirmation of future U.S. commitment to Washington’s Asian rebalance policy”. 

President Trump’s decision to skip the meetings in Southeast Asia this month must feel like déjà vu to the leaders in the region. It is too early to tell whether the proposed summit will take place given the small window of opportunity before the US election season gets into full swing. If it materializes, it is uncertain how productive it will be given the many distractions facing President Trump. The Southeast Asian states critically need a countervailing force against China, both in form and substance, which only the United States can provide. It looks like the region will have to wait with bated breath until late 2020 before we can tell how US-Southeast Asia relations will further develop. 

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173698 https://historynewsnetwork.org/article/173698 0
Lincoln – not Pilgrims – responsible for Thanksgiving holiday

 

Most Americans believe that the Thanksgiving holiday originated with New England’s Pilgrims in the early autumn of 1621 when they invited the Wampanoag Indians to a feast to celebrate their first harvest.                  

 

However, the Pilgrims’ Thanksgiving was actually a continuation of a European agricultural tradition in which feasts and ceremonies were held during harvest time.                            

 

In fact, President Abraham Lincoln established the holiday in 1863 as a permanent fixture on the calendar to celebrate Union victories in the Civil War and to pray to God to heal a divided nation.

 

Prior to 1863, the U.S. government informally recognized periodic days of thanksgiving. In 1777, for example, Congress declared a day of thanksgiving to celebrate the Continental Army’s victory over the British at Saratoga. Similarly, President George Washington, in 1789, declared a day of thanksgiving and prayer to honor the new Federal Constitution.  But it took the national trauma of a Civil War to make Thanksgiving a formal, annual holiday.                                                

 

With the war raging in the autumn of 1863, Lincoln had very little for which to be thankful.  The Union victory at Gettysburg the previous July had come at the dreadful human cost of 51,000 estimated casualties, including nearly 8,000 dead.  Draft riots were breaking out in northern cities as many young men, both native and immigrant, refused to go to war. There was personal tragedy, too.                        

 

Lincoln and his wife, Mary, were still mourning the loss of their 11-year-old son, Willie, who had died of typhoid fever the year before. In addition, Mary, who was battling mental illness, created tremendous emotional angst for her husband.           

 

Despite - or perhaps because of - the bloody carnage, civil unrest and personal tragedy, Lincoln searched for a silver lining. Sarah Josepha Hale, editor of Godey's Lady's Book, provided the necessary inspiration.                                                 

 

Hale, who had been campaigning for a national Thanksgiving holiday for nearly two decades, wrote to the president on September 23 and asked him to create the holiday “as a permanent American custom and institution.”                           

 

Only days after receiving Hale’s letter at the White House, Lincoln asked his Secretary of State William Seward to draft a proclamation that would “set the last Thursday of November as a day of Thanksgiving and Praise.”                                             

 

On October 3, the president issued the proclamation, which gave “thanks and praise” to God that “peace has been preserved with all nations, order has been maintained, the laws have been respected and obeyed, and harmony has prevailed everywhere, except in the theater of military conflict.”                                 

 

Unlike other wartime presidents, Lincoln did not have the arrogance to presume that God favored the Union side. Instead, he acknowledged that these “gracious gifts” were given by God, who, while dealing with us in anger for our sins, hath nevertheless remembered mercy.                                         

 

Lincoln also asked all Americans to express thanks to God and to “commend to His tender care all those who have become widows, orphans, mourners or sufferers in the lamentable civil strife,” to “heal the wounds of the nation,” and to restore it “as soon as may be consistent with Divine purposes to the full enjoyment of peace, harmony, tranquility and Union.”                                                                     

 

Since 1863, Thanksgiving has been observed annually in the United States. Congress ensured that tradition by codifying the holiday into law in 1941, days after the U.S. entered World War II.                             

 

At a time when we are struggling with the volatile issues of race, immigration and the impeachment of a president who has divided the nation along partisan lines, Lincoln’s Thanksgiving proclamation reminds us of the necessity to put aside our differences, if only for a day, and celebrate the good fortune that unites us as a people regardless of ethnicity, race or creed.                           

 

Perhaps then we can do justice to the virtuous example set by Lincoln, who urged us to act on the “better angels of our nature.”    

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173700 https://historynewsnetwork.org/article/173700 0
American Exceptionalism and Why We Must Impeach Trump

 

It has become an article of faith for many, even among those with no faith, that the idea of American exceptionalism is at best outmoded or at worst a delusional construct of the elite to bully the rest of the world.   

   

This attitude, perhaps formed by years of unmet expectations, is as shortsighted as it is unfortunate.  People lucky enough to be native born or those who earn naturalization are the most fortunate fraction of the human race. America is the wealthiest, most powerful and on the whole the freest nation in all history. The benefits of geography, the abundance of resources, the fluidity of society and above all our commitment to self-government under the rule of law have made America exceptional. Exceptional does not mean flawless. America, like any human endeavor, is imperfect. The fact that we can see this in our country and work to remedy it does not mark us as deficient, but rather people committed to the more perfect union cited in the preamble of the constitution. 

 

To recognize the blessing of exceptionalism does not mark Americans as braggarts, but rather grateful heirs of history’s gift with the duty to maintain and improve it for future generations. Nor is it an excuse for nationalism. Our alliances and engagement with the rest of the world are a source of strength, not, as the demagogues rant, a system of weakness and lost prestige. 

 

American exceptionalism has come under its most serious attack from the presidency, what would have seemed in normal times the least likely source. The current occupant of the White House has done more to destroy this country and injure its place in the world than any foreign army has ever achieved. Under the cynical and empty slogan “Make America Great Again,” he has made this country far worse. Behind the façade of the bumper stickers and the baseball caps, the president has trashed NATO, which more than anything prevented WW III. Our allies around the world are throwing aside decades of hard-earned trust. He perverted our fragile diplomatic relations by extorting Ukraine to obtain nonexistent dirt on a potential political opponent. Without warning, he brutally betrayed our Kurdish allies in the field of combat, an act of treachery so brazen and despicable as to stain America for centuries. He has consistently alienated leaders of democratic countries while getting in bed with the authoritarian and despotic leaders of Russia, Turkey, North Korea, and the Philippines. He has embraced the phony populism of Poland, Hungary, Italy and most regrettably the United Kingdom, where Prime Minister Boris Johnson is his clone experiment gone awry. As the evidence of the Mueller report clearly showed, the right wing howling aside, he colluded with Russia to win office. 

 

On the home front he has defended white supremacists, openly ordered the obstruction of Congress and the courts, discriminated against transgender service members with absolutely no justification other than bigotry, violated campaign finance laws to silence a mistress, thumbed his nose at the constitution-based ban on emoluments, and allegedly committed forcible rape in a department store dressing room.  Even under the narrowest definition of high crimes and misdemeanors, he has violated the letter and spirit of his oath of office. For this and for other reasons too numerous to name, he must be impeached, convicted and removed immediately.  

 

American exceptionalism has been wounded but it is still alive. The surviving remnant of its tattered spirit must rally itself once again to excise the malignancy that has crept into our country. The weight of our heritage and the promise of our future demand that we act in the present to restore our exceptional place in the world and in our hearts.

 

© Greg Bailey 2019

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173693 https://historynewsnetwork.org/article/173693 0
“You furnish the pictures and I’ll furnish the war”

 

In this and like communities, public sentiment is everything. With public sentiment, nothing can fail; without it, nothing can succeed. Consequently he who moulds public sentiment, goes deeper than he who enacts statutes or pronounces decisions. He makes statutes and decisions possible or impossible to be executed.

-Abraham Lincoln, Ottawa, Illinois, August 21, 1858, Debate with Stephen Douglas

 

Whatever is right can be achieved through the irresistible power of awakened and informed public opinion. Our object, therefore, is not to inquire whether a thing can be done, but whether it ought to be done, and if it ought to be done, to so exert the forces of publicity that public opinion will compel it to be done.

-William Randolph Hearst, unpublished editorial memorandum, date unknown

 

 

William Randolph Hearst’s journalistic credo reflected Abraham Lincoln’s wisdom, applied most famously in his January 1897 cable to the artist Frederic Remington at Havana: “Please remain [in Cuba]. You furnish the pictures and I’ll furnish the war.”

             

For the past two decades, journalism professor W. Joseph Campbell has argued in labored academese that the story of Hearst’s telegram is a myth. In an online monograph I have refuted Campbell’s inaccurate and misleading assertions, and have countered his analytical approach. Here I shall address the historiographical aspects of the debate: How can one write credible history from incomplete, ambiguous, and at times contradictory evidence? How can one test the reliability of witnesses?

 

In the absence of the actual telegram, or of confirmation by the sender or the recipient or both, historians must rely on second-hand reports. Each report must be independently and conjunctively evaluated in its respective context. Authors’ motives need to be understood, if any might influence their declarations. 

 

Most primary sources that quote or paraphrase Hearst’s cable to Remington are excerpts from personal memoirs. The passage of time often corrodes and corrupts recollections. Authors of integrity sometimes succumb to exaggeration and embellishment. With those cautionary concerns in mind, let us proceed to explicate two pertinent records.

 

Charles Michelson’s Reminiscence

 

In his 1944 autobiography The Ghost Talks, President Franklin D. Roosevelt’s press agent Charles Michelson recounted his youthful ordeal as Hearst’s New York Journal and San Francisco Examiner correspondent in Havana in 1895 and 1896:

 

All this time Hearst was plugging for war to free Cuba from the Spaniards. Fiery editorials and flaming cartoons came out daily picturing Weyler the Butcher, Weyler being the new governor-general of the island. One day the paper came in with a two-page illustration of Weyler flourishing a blood-dripping sword over the female figure supposed to represent Cuba. Just before this I had gone to a little town in western Cuba where a battle was being fought, according to reports. There wasn’t any battle. A rebel troop had marched in, had drilled in the public square, and the men had marched away again. A couple of hours later Spanish troops appeared, and there was some shooting; but as far as I could learn, the casualties were only among the civilian population. I had to show my credentials to the Spanish authorities, so they knew my identity. The night of the day on which the ghastly picture appeared, my door flew open and a Spanish secret service official told me that I was under arrest. They took me down to the water’s edge, put me in a boat, and took me over to Morro Castle, where I was locked up.

 

Through a combination of diplomacy and bribery, Michelson was released.

 

And I took the first boat I could get to Key West.

 

Later I was sent back to the Caribbean with Richard Harding Davis and Frederick Remington to join the rebels. Mr. Hearst kindly furnished us with his yacht Vamoose. That craft was a hundred and ten feet long with a ten-foot beam. It was a grand thing in the Hudson River, but I never could find a captain who would take us across the Gulf Stream to Cuba. Whenever we got to the turbulent current, something would go wrong with the machinery and the captain would insist that we go limping back to Key West. So the jeweled sword I was to present to General Gomez of the rebel forces was not delivered. It was in the course of this incident that a famous telegraphed correspondence between Remington and Mr. Hearst was supposed to have taken place. According to the story, Remington wired that he was returning, as he did not think there was going to be any war involving the United States; and Hearst is reported to have replied, “Go ahead, you furnish the pictures and I’ll furnish the war.”

 

When Hearst’s war with Spain belatedly began in 1898, he dispatched Michelson to cover it, along with Stephen Crane. Michelson continued to work for Hearst until 1918. He disagreed with Hearst’s editorial stance that opposed United States involvement in World War I, so he switched to a paper that favored entry.

 

Michelson’s memoir falls short of affirming that Hearst had sent the telegram to Remington. But he had served as a close associate of Hearst for more than twenty years — during which national media had published the undisputed story in 1901, 1906, and 1912. It seems likely to me that if Hearst had credibly repudiated its essence, Michelson would have heard and reported that also. 

 

More than that, the story was consistent with Hearst’s editorial stance and with his assignments to Michelson. The excerpt I quoted from his book doesn’t prove the telegram’s authenticity, but it supplies a morsel of positive evidence.

 

Jimmy Breslin’s Version

 

On page 4 of the New York Daily News for Sunday, February 20, 1983, legendary columnist and feature writer Jimmy Breslin wrote:

 

In my past business, I sat one night in the sports department of the old Hearst paper, the Journal-American, checking the horse-race charts. On the other side of the space there was an old man from the wire room who one night showed me ancient copies, or maybe they were facsimiles, I couldn’t tell, of telegrams that were sent to and from the Hearst paper in 1898.

 

One was from Frederic Remington, the Western artist, who was at the Hotel Inglaterra in Havana. His wire was addressed to Mr. William Randolph Hearst Sr., and it read:

 

“Everything is quiet. There is no trouble here. There will be no war. I wish to return.”

 

And the wire sent back to him from the New York Journal, as it was known then, read:

 

“The Chief says: Please remain. You furnish the pictures and I’ll furnish the war.

“Signed Willicombe, Secretary to Mr. Hearst”

 

The lead article on page 2 of the paper was headlined “Libyans score U.S.” over a report about the Libyan government’s threatened reprisals if the aircraft carrier USS Nimitz were to enter Libyan waters as President Ronald Reagan had commanded. In bold italic type above the article appeared this note, which established the context for Breslin’s report of his encounter with the exchange of telegrams between Remington and Hearst: “Jimmy Breslin remembers the Maine and hopes he won’t have to remember the Nimitz. Page 4.”

 

From mid-1959 to early 1962, Breslin worked as a sports reporter for Hearst’s New York Journal American. His published recollection came at least 20 years later, long enough for memory to fray and fade. Not surprisingly, he got at least two details wrong. The occasion for the telegrams was in January 1897, not 1898. Joseph P. Willicombe did not become Hearst’s private secretary until the 1920s, but he held that position for the rest of his career, and his was the only name widely associated with that title. 

 

Neither error is reason to distrust the essential truth of Breslin’s statement, which otherwise rings true. It is congruent with credible reports going back to July of 1901, and it adds information that previous authors omitted. Attributing authorship to Hearst’s secretary seems likely, but to my knowledge has not been reported by earlier writers who probably had not examined copies of the actual documents. 

 

(As one who collects artifacts of postal history and written communication, including some 19th century telegrams, I would add that written messages and signatures are often puzzling to read. An unintelligible scrawl might have appeared to be Willicombe’s name to a reader predisposed to identify his name with the title Breslin knew he had held.)

 

Conclusion

 

The sentiment Hearst expressed in his telegram to Remington represented a consistent application of his philosophy, which implemented Lincoln’s axiom as a business principle. No writer who knew Hearst personally seems to have doubted it. I can see no ulterior motive in either Michelson’s or Breslin’s report that would cause me to distrust or to disbelieve either one. If these were my only sources, they would be insufficient to press my case, but in the absence of credible contrary sources, they enhance my earlier exposition.

]]>
Fri, 06 Dec 2019 08:25:14 +0000 https://historynewsnetwork.org/article/173692 https://historynewsnetwork.org/article/173692 0
The Mysterious Assassination That Unleashed Jihadism

 

At quarter past noon on 24 November 1989, a red Chevrolet Vega approached the Sab’ al-Layl mosque in Peshawar, Pakistan. As the crowd prepared to greet the arrival, a roadside bomb ripped through the car, killing everyone inside. Peshawar at this time was plagued with violence, but this was no ordinary assassination.

 

The victim in the passenger seat was Abdallah Azzam, the spiritual leader of the so-called Afghan Arabs, the foreign fighters who travelled in the thousands to fight the Soviets in Afghanistan in the 1980s. The struggle to rid Afghanistan of Russian occupation was viewed in most of the Muslim world as a legitimate case of military jihad, or religiously sanctioned resistance war. A Palestinian cleric, Azzam had joined the Afghan jihad in 1981 and spent the decade recruiting internationally for the war. By 1989, he was a living legend and the world’s most influential jihadi ideologue. 

 

Azzam is not well known in the West today, but he is arguably the father of the jihadi movement, the cluster of transnational violent Islamist groups, such as al-Qaida and Islamic State, who describe their own activities as jihad. Azzam led the mobilization of foreign fighters to Afghanistan, thereby creating the community from which al-Qaida and other radical groups later emerged. His Islamic scholarly credentials, international contacts, and personal charisma made him a uniquely effective recruiter. Without him, the Afghan Arabs would not have been nearly as numerous.

 

He also articulated influential ideas. He notably argued that Muslims have a responsibility to defend each other, so that if one part of the Muslim world is under attack, all believers should rush to its defence. This is the ideological basis of Islamist foreign fighting, a phenomenon which later manifested itself in most conflicts in the Muslim world, from Bosnia and Chechnya in the 1990s, via Iraq and Somalia in the 2000s, to Syria in the 2010s. Moreover, he urged Islamists – people involved in activism in the name of Islam – to shift attention from domestic to international politics, thus preparing the grounds ideologically for the rise of anti-Western jihadism in the 1990s. Earlier militants had focused on toppling Muslim rulers, such as in Egypt, where the group Egyptian Islamic Jihad killed President Anwar al-Sadat in 1981, or in Syria, where a militant faction of the Muslim Brotherhood launched a failed insurgency against the Assad regime in the late 1970s. Azzam, by contrast, said it was more important to fight non-Muslim aggressors. Azzam himself never advocated international terrorism, but his insistence on the need to repel Islam’s external enemies later became a central premise in al-Qaida’s justification for attacking America.

 

Azzam’s most fateful contribution, however, was the idea that Muslims should disregard traditional authorities – be they governments, religious scholars, tribal leaders, or parents - in matters of jihad. In Azzam’s view, Islamic Law was clear: if Muslim territory is infringed upon, all Muslims have to mobilize militarily for its defence, and all ifs and buts are to be considered misguided. This opened a Pandora’s box of radicalism, creating a movement that could not be controlled. After all, how do you get people to listen to you once you have told them not to listen to anyone? 

 

Azzam felt the early effects of this problem in his lifetime, but managed to keep order in the ranks through his immense status in the community. After his assassination, however, there was nobody left to rein in youthful hotheads, and the jihadi movement entered a downward spiral of fragmentation and brutality. While the Afghan Arabs in the 1980s had only used guerrilla tactics, some of their successors turned to international terrorism, suicide bombings, and beheadings. The ultra-violence of Islamic State in recent years is only the latest iteration of this process.

 

So who killed him? Some have suggested a fellow Afghan Arab, such as Osama bin Laden or Ayman al-Zawahiri, but this seems unlikely given the reputational cost to anyone caught red-handed trying to kill the godfather of jihad. Others have blamed foreign intelligence services such as the CIA or the Mossad, but Azzam was not important enough for them. Yet others have mentioned Saudi or Jordanian intelligence, but these countries had no habit of assassinating Islamists at this time. Afghan intelligence (KhAD) had reason to kill Azzam earlier in the war, but not in late 1989. Many have accused the Afghan warlord Gulbuddin Hekmatyar, who resented Azzam’s growing affection for his archenemy Ahmed Shah Massoud, but new evidence shows that Hekmatyar and Azzam were actually close personal friends. 

 

We are left with the Pakistani Inter-Services Intelligence (ISI), which had both the capability and a motive. In the late 1980s, the Afghan Arabs had become a nuisance, criticizing Pakistan more openly and meddling in Afghan Mujahidin politics. No hard evidence pins ISI to the crime, but the circumstantial evidence is compelling. The operation was sophisticated and required personnel movement around the site before, during, and after the attack. The location and timing suggest a desire to shock the Arabs, because Azzam could easily have been liquidated quietly in a drive-by shooting. Still, we cannot settle the question definitively, and the Azzam assassination remains the biggest murder mystery in the history of Islamism. 

 

The story of Abdallah Azzam suggests that a root cause of modern jihadism was the collapse in respect for religious authority among young Islamists in the late 1980s. Azzam initiated it, his disappearance accelerated it, and the repercussions have been devastating. This is also one of History’s many lessons in unintended consequences, for it is a fair bet that neither Azzam himself nor his assassins intended for things to turn out this way. 

 

The question now is whether the genie can be put back in the bottle and religious authority reasserted over young people seduced by jihadism. There are some signs that the excesses of Islamic State have undermined the movement’s appeal, but it is too early to tell whether it is possible to undo the damage of that mysterious blast thirty years ago.

The Best Work in History Illuminates Life Now: An Interview with Angela Woollacott

 

Angela Woollacott is the Manning Clark Professor of History at the Australian National University, an elected Fellow of the Royal Historical Society, the Academy of the Social Sciences in Australia, and the Australian Academy of Humanities, and a former president of the Australian Historical Association. Her new book Don Dunstan: The visionary politician who changed Australia (Sydney: Allen and Unwin, 2019) was supported by an Australian Research Council Discovery grant. She has published widely in the fields of Australian and British Empire history; women’s history; colonialism, race and gender; biography, transnational and political history. She is currently on the editorial advisory board for the Historical Research and Publications Unit at the Australian Department of Foreign Affairs and Trade; on the editorial advisory boards of three academic journals and has recently served on an advisory panel at the Reserve Bank of Australia for its new generation of banknotes.

What books are you reading now?

 

Daily life as an academic necessitates becoming a promiscuous and, to some extent, cursory reader. It seems that I always have several books part-read, despite my natural inclination being to finish one before starting another. The idea of reading an entire book in a leisurely way, in a comfortable armchair, often seems remote. For the course that I am currently teaching on 19th century Australian history, right now I’m dipping into Iain McCalman, Alexander Cook and Andrew Reeves (eds.), Gold: Forgotten Histories and Lost Objects of Australia (Cambridge UP, 2001). In order to develop my ideas about my next research project, I am in the midst of Tracey Banivanua Mar, Decolonisation and the Pacific: Indigenous Globalisation and the Ends of Empire (Cambridge UP, 2016). In my pile of upcoming deadlines, the book that I am reviewing for an Australian literary magazine is Margaret Simons, Penny Wong: Passion and Principle (Carlton, Vic.: Black Inc., 2019), a biography of the current Leader of the Opposition in the Australian Senate. And, of course, there is always a novel for which I wish I had more time. At the moment, it is Andrew Sean Greer, Less (Abacus, 2017), the 2018 Winner of the Pulitzer Prize for Fiction.

 

What is your favorite history book?

 

I hate to pick just one favourite, because there are so many that I admire. But, if it must be just one, a book I often tell students about is Judith C. Brown, Immodest Acts: The Life of a Lesbian Nun in Renaissance Italy (Oxford UP, 1986). In piecing together the story of Sister Benedetta Carlini, Brown demonstrates the possibilities of imaginative historical research. She shows how exciting an unlikely archival find can prove to be, and provides a model of taking a limited quantity of archival evidence, and spinning a rich historical monograph from it through contextual material and vivid writing. In quite a short book, Brown explicates early modern convent life and acts of resistance; patriarchal control and officious administration within the Catholic church; and the fabric of social life in regional Italy, including fears and superstitions people of the valleys held for those of the mountains. Sister Benedetta’s sexual relationship with another nun is the dramatic core of the narrative. Yet part of the book’s richness is that the sex cannot be understood without grappling with the role of supernatural visions in religious belief and practice. 

 

Why did you choose history as your career?

 

Looking back, it’s almost as though history found me. I was always an avid reader, a habit nurtured within my family. But as I was in the first generation in the family to have the privilege of a university education, my parents were not surprisingly pleased when I studied law, and less enthusiastic when I dropped that to pursue research in history. Nor was I any more certain than they that it could lead to remunerative work. I just kept following one opportunity after another, starting with an Honours degree in History (a fulltime, disciplinary-specific fourth year that is a peculiarity of Australian universities). Next came a research position at a museum of political history, then post-graduate study in History at the University of California, Santa Barbara, followed by a fortuitous appointment as an Assistant Professor in History at Case Western Reserve University, in Cleveland, Ohio, immediately after I completed my PhD. 

 

When I chose to specialize in History as a discipline, it followed on from an interest in Political Science. Political Science’s preoccupation with paradigms had never sat very well with me. History offered the full explanations, the fascinating stories, and the fabulous breadth of topics and questions. And it is as gratifying and rich a discipline to me now as it was when I started out—albeit a discipline that has had many twists and turns along the way.

 

What qualities do you need to be a historian?

 

Curiosity and a vivid imagination help. And a willingness to spend many hours in the archives, persistently going through box after box. Perhaps most of all, historians need to care about literature and writing. There is always the debate about whether history is a social science or a humanities field; in fact, it is both. But because we are in the humanities too, more than some of the other social sciences, we need to pay attention to the grace and flair with which we write. 

 

I’ve heard it said that historians were the shy ones at school: just wanting to hide in the library reading a good book. There may be a grain of truth in that, but we also need to be engaged with the current world, because the best work in history illuminates life now, even if not in superficially obvious ways.

 

Who was your favorite history teacher?

 

I benefitted from inspiring teachers both at school and university, and again I hate to pick just one. But I will mention one seminar in my postgraduate program that was especially stimulating. During my time as a PhD student at the University of California, Santa Barbara in the 1980s, the History Department was fortunate to have as a visitor Professor Robert Darnton of Princeton University. He came for one term and offered a seminar in Cultural History and Anthropology that was capped (I think at 12) and evenly split between History and Anthropology graduate students. Word got out quickly and enrolment filled up within days; fortunately I signed up early. Each week we read one work of history and one of anthropology, connected by theme. It was a fascinating intellectual experience, and I learnt so much. It was fun to interact with the anthropology graduate students, and wonderful to get to know Robert Darnton – including at the casual dinners most of us went to following the seminar. Cultural analysis was the new buzz in historical methodology in the 1980s (we all became aficionados of Clifford Geertz’s ‘thick description’), so it was a very timely educational experience which enriched my work. But it also opened me up to other interdisciplinary approaches, so that I became interested in the ‘linguistic turn’ when that erupted in the 1990s. I felt fortunate to have had that seminar. 

 

What is your most memorable or rewarding teaching experience?

 

Like many academics, I truly enjoy lecturing, and tutorials (Australian for discussion sections) can be wonderful when they go well. Marking (Australian for grading) is not my favourite part! When I think about memories of teaching across my career, some students spring to mind. Naturally, a few very bright and talented students stay with you, especially when they pursue academic careers and one can take pleasure in tracking their progress. But others stay in one’s mind too. There are a few whom I recall particularly because of the life experiences they shared with me—survivors of family trauma. Also I remember a student who chose to study despite enduring a terribly debilitating terminal condition, which made every aspect of study challenging; his interest in history seemed to help him keep going, which was very moving.

 

What are your hopes for history as a discipline?

 

We seem to be at a moment in the historical discipline when scholars are seeking to reconcile national historical frames with global, international, transnational and world history. I’m hoping that we can move forward fruitfully, recognizing that national frames are inevitable, and global and transnational ones are indispensable for understanding the past and the dynamics of change. 

 

On another note, as a long-term stalwart of women’s, feminist and gender history, I hope that we can maintain the vital insights that feminist methodology has given to the humanities and social sciences. Certainly conferences and journals in the field are flourishing, but I worry that women’s and gender history courses have been dropped from university curricula. We need to keep presenting these approaches and insights as a core part of undergraduate education.

 

More broadly, we need to reclaim history’s importance in the public intellectual domain. Biography and war histories do well in bookstores, but they are often most of what you find in the section labelled “History.” Historians must actively participate in the arenas of public discourse, to promote the vital role of our discipline in civic society.

 

Do you own any rare history or collectible books? Do you collect artifacts related to history?

 

I’m not a collector per se. I do own some rare books, but obtained them when I was researching particular topics. For example, years ago, I was interested in the history of the early 20th century birth control movement. I bought some books by Marie Stopes and Margaret Sanger from second-hand bookshops, and still have them – it’s a mini-library of early birth control advice! Right now, though, I’m not sure what will stay in my library and what won’t. The School of History here at the Australian National University will move into a new building in about six months. It’s shaping up to be a beautiful, striking building. But the offices are all half the size of my current office, and I’m going to have to dispense with most of my filing cabinets and around half of my books!

 

What have you found most rewarding and most frustrating about your career? 

 

Being a historian is a very rewarding and privileged life. I’m often aware of how fortunate I am to have a well-paid career pursuing my intellectual interests, and spending a lot of time reading things I find interesting. When I look at people in the corporate world, I thank my lucky stars. Apart from following my own interests, spending one’s life at a university (albeit various ones) means that there are always stimulating events to attend. 

 

Of course, teaching has its pains (grading) as well as its pleasures. The best part of teaching, for me, is supervising good PhD students. It’s so rewarding to work with mostly younger historians, passionately pursuing their own intellectual creations, and to watch their success.

 

The most frustrating thing about being an academic is the workload, and the near-impossibility of having any reasonable work-life balance. It helps to have, as I very fortunately do, a partner who is also an academic and is understanding and supportive. But the demands on us are extreme, and it is very difficult to juggle them. Teaching is enormously time-consuming, and we get virtually no professional rewards for it. Promotion is always based on research and publishing, but managing to publish when one is also teaching, supervising, sitting on committees, reviewing books and manuscripts for others, writing letters of reference etcetera, means a major overload. And email is the rock of Sisyphus!

 

How has the study of history changed in the course of your career?

 

The discipline has changed in so many ways. Subjects that were radically transformative when first emerging (such as Black and Indigenous history, women’s/feminist history, history of sexuality, postcolonial history etc.) in the 1970s-90s have become more or less mainstream. As a feminist historian, I’m a bit sad that thematic women’s and gender history courses have lost their popularity—though just today in a discussion class one student commented that she has enjoyed gender being a recurrent theme in our course this semester, rather than the one week at the end she thinks is now typical. 

 

Looking back, there were moments when the field was riven by heated and personal debate—such as over the ‘linguistic turn’ and post-structuralist theory in the 1990s, and here in Australia what we call the History Wars of the same decade over the extent of frontier warfare in the 18th–19th centuries. Presently we don’t seem to have such exchanges, and little theoretical discussion, other than parsing the terms global, transnational, world and imperial and their implications. 

 

What is your favorite history-related saying? Have you come up with your own?

 

I quite like the oft-quoted notion that the past is a foreign country. It suggests that, no matter who you are, you need to do the research to explore the past, and to be open to surprises and discoveries.

 

What are you doing next?

 

My latest book just came out two months ago. It’s a biography of an Australian politician who was a leading progressive reformer in the 1960s – 1970s, and it’s published by Allen and Unwin, Australia’s leading trade press. I’ve never done a trade book before, and it’s been quite an exciting ride! The book went into a second printing less than a month after it came out. There have been two launches, each by a nationally-prominent political figure. I’ve been on the program of one writers’ festival and will do another. And it’s been very widely and positively reviewed, with considerable newspaper coverage and radio interviews. So I think now I will just sit back and read some books by other scholars for a while!

Going blue in the Bluegrass State? History echoes in Kentucky’s gubernatorial results

 

Conservatives dismiss it as an aberration, or maybe the natural consequence of an incompetent and unpopular incumbent. Liberals spin it as the GOP’s vulnerability going into 2020.

 

But Republican Gov. Matt Bevin’s narrow defeat this month in Kentucky – which Donald Trump carried by 30 points in 2016 – could mean something else, too:

 

Perhaps deep-red regions like Eastern Kentucky aren’t as reliable for Republicans and unwinnable for Democrats as conventional wisdom suggests.

 

The mainstream perspective – that these areas are politically intransigent – means they don’t draw serious campaign attention and that voters don’t receive due respect from candidates. That absence perpetuates a cycle, propping up political assumptions about rural America while proving Lee Atwater’s riff on Marshall McLuhan: perception is reality.

 

For those living in hidden-seeming pockets of Eastern Kentucky, where I was born and raised, the perception-turned-reality is that they don’t matter, that the country’s leadership doesn’t care about them or their needs. Over the long term, these perceptions help enable damaging, all-too-real conditions of rural poverty, unemployment and poor health care.

 

I know this to be true. I grew up with people who display fierce independence, quiet generosity and a suspicion of authority, while working hard and holding strong pro-union sympathies. Such was natural given the long history of struggle against the exploitation and destruction wrought by mining companies.  Yet politicians and public officials often benefit from popular images that paint a different picture, seeking to inflame social, cultural and political divisions as a means of maintaining power. While we point fingers at convenient hillbilly stereotypes, these divisions, and the injustices they produce, become only more widespread and destructive.

 

In reality, places like this aren’t as intransigent as the conventional wisdom would have us believe. Not all that long ago, Eastern Kentucky was a Democratic stronghold.  This historical preference is punctuated by wide margins of victory for Democratic presidential candidates in 1980, 1992, and 2000. In Floyd County, in the heart of Eastern Kentucky and an area that Democratic gubernatorial candidate Andy Beshear just won, Jimmy Carter's margin of victory over Ronald Reagan was 44 points, while Clinton claimed a 53-point advantage over George H.W. Bush. Likewise, Al Gore's margin over George W. Bush was 32 points.

 

Even as far back as 1972, despite what a writer like J.D. Vance would have the public believe, voters in Breathitt County, part of Eastern Kentucky’s coal-mining country, preferred George McGovern over Richard Nixon by a whopping 18 points. McGovern, who captured a paltry 17 electoral votes, lost Kentucky overall by 29 points.

 

Data like these urge a rethinking of the binary logic of red- and blue-state politics, an order that ultimately works to silence (and disempower) local populations and communities.

For me, such results were made clear in a more personal way by a tattered portrait of John F. Kennedy that my grandmother displayed on the wall of her front porch. This image seemed to convey a faith that there were politicians out there who really cared about the working poor, the plight of the coal miner and the farmer who worked the hillside fields in the hollows.

But faith can't sustain action forever, especially in the absence of any tangible results.

It all makes me wonder: Is the current slant of rural voters toward the GOP – and Eastern Kentucky’s slant in particular – less about political ideology and more a natural consequence of feeling neglected? Or, worse yet, a result of having been betrayed and lied to?

 

As the presidential election cycle continues to heat up, especially in light of events such as Bevin's defeat, perhaps there’s a reminder here: that politicians should not turn their backs on rural voters. Assuming some monolithic ideology in rural America ignores the diversity and agency of entire communities. Instead, politicians should work to reject political isolation. Speak to the issues, commit to change and, yes, campaign there along those narrow country roads, within those hollows. Seek to understand the history that has led us to the present, and give rural Americans the respect and commitment that come from fighting for them – and for their support.

Coexistence and Sectarianism in the Modern Arab World

 

Every history of sectarianism is also a history of coexistence.  Every sectarian act or mobilization, after all, paradoxically calls attention to a pluralism it aims to counter or negate. In the case of the modern Arab world and the Middle East more broadly, the sectarian story has been highlighted in media, policy-circles, and academic scholarship.  Myriad antisectarian stories and narratives that also define modern Arab history have been largely ignored.

 

Part of this bias toward the study of sectarianism is understandable: the modern Arab world is afflicted by political and military upheavals that have often taken on (or have been represented as having essential) sectarian or religious overtones. Scholars and journalists who have tried to explain and demystify current events have typically ended up imprisoned analytically. As the best critical historical and anthropological work on the politicization of religion in the Middle East has shown, it is not simply sectarian networks and mobilizations that need to be exposed, but the ubiquitous language of sects and sectarian relations that undergirds them. Sects are not natural. They are produced by ideological and material effort.  

 

Some of the emphasis on sectarianism, however, is clearly politicized. For example, U.S. presidents as different as Barack Obama and Donald Trump have regurgitated the same demeaning imperial clichés about the Middle East. Both have insisted that the Middle East is haunted by allegedly endemic sectarian antagonisms and age-old tribal wars. Both have played up the idea of primordial oriental sectarianism to downplay US responsibility for creating the conditions that have encouraged a sectarian meltdown in the Middle East. 

 

In the face of such insidious notions of an age-old sectarianism, laypersons and scholars from the Arab world have often invoked a romanticized history of coexistence that too often has glossed over structures of inequality and violence. This romantic view of coexistence assumes it to be a static, idealized form of liberal equality rather than a variable state of affairs.  

 

More problematically, the term coexistence has often privileged the idea of monolithic ethnic or religious communities rather than understanding them as dynamic arenas of struggle. Those who adhere to deeply conservative, patriarchal notions of who and what represents any given community contend with those who insist upon more progressive understandings of what constitutes being Muslim or Christian. To generalize about “the Christians” in the Middle East, for example, denies not only the obvious difference between a Phalangist Christian militiaman and a Christian liberation theologian; it flattens the different forms of Christian belonging, and the belonging of Christians, in and to the Middle East. 

 

In the case of the Middle East, in particular, the need to demythologize communities and their ideological underpinnings must go hand in hand with evoking a dynamic history of coexistence that transcends communalism. Sectarian ideology is inherently divisive. It requires the conflation of historic religious communities with modern political ones, as if modern Maronites, Jews, or Shiis necessarily share a stronger bond with their medieval coreligionists than they do with their contemporaries of different faiths. Sectarian ideology does not ask how religious or ethnic communities have been historically formed and transformed, how they have policed and suppressed internal dissent, and who may represent or be represented within them. 

 

To pose such questions is not to deny the historical salience of communal affiliations, but it does challenge their supposed uniformity. Such questions constitute a crucial first step in seeing the inhabitants of the Arab world as politically, socially, and religiously diverse men and women rather than merely as “Sunnis,” “Shiis,” “Jews,” “Maronites,” “Alawis,” or, indeed, as monolithic “Arabs” and “Kurds.” Iraqis of different faiths, for example, have often made common cause in ways that defy the alleged hold of normative sectarian identity, whether as communists or Iraqi Arab nationalists in the 1940s and 1950s, or, today, as outraged citizens fighting rampant corruption and sectarianism in their country.  

 

In fact, the contemporary fixation with the problem of sectarianism obscures one of the most extraordinary stories of the modern Arab world: the way the idea of political equality between Muslims and non-Muslims, unimaginable in the Ottoman empire at the turn of the nineteenth century, became unremarkable across much of the Arab Mashriq by the middle of the post-Ottoman twentieth century.

 

This new age of coexistence depended at first on major reforms within the Ottoman empire that promulgated edicts of nondiscrimination and equality between 1839 and 1876.  It also depended on the antisectarian work of Arab subjects who began to think of themselves as belonging to a shared modern multireligious nation in competing secular and pietistic ways.

 

The transition from a world defined by overt religious discrimination and unequal subjecthood to the possibility of building a shared political community of multireligious citizens was—and remains—fraught with gendered, sectarian, and class limitations. The infamous anti-Christian riot in Damascus in July 1860, when Christians were massacred and their homes and churches pillaged, was clear evidence of the breakdown of the certainties of the long-established, highly stratified Ottoman Muslim imperial world. In the new Ottoman national world that emerged in the second half of the nineteenth century, urban men took for granted their right to represent their communities and nations and to “civilize” their “ignorant” compatriots. 

 

The messiness of the story of modern ecumenical Arabness that encompassed Muslims, Christians, and Jews is undeniable. But privileging a story of static sectarianism over one of dynamic coexistence downplays the significance of the fact that many Muslim Arabs protected their fellow Damascenes in 1860. After 1860, Arab Muslims, Christians, and Jews built, enrolled, and taught in ecumenical “national” schools across the Arab Mashriq that embraced rather than negated religious difference. The rich heritage of the Islamic past from Andalusia to Baghdad, and a common Arabic language, offered a treasure-trove of metaphors around which to build an antisectarian imagination for the future.  

 

Rather than divide the Arabs into binary and mutually exclusive “religious” and “secular” categories, as if these are the only conceptual categories that matter, much can be gained by focusing on the ways Arabs struggled to build new ecumenical societies that fundamentally accepted the reality of religious difference and the possibility of its political transcendence. The ethno-religious nationalisms of the Balkans, the terrible fate of the Armenians at the hands of the Young Turks, the arrival of militant Zionists in a multireligious Palestine, and the puritanical Wahabis of Arabia offer fascinating counterpoints to the ecumenism of the late Ottoman and post-Ottoman Arab East.

 

The inhabitants of the modern Arab world, however, have hardly ever been masters of their own political fate. The Ottoman empire was colonized and partitioned along sectarian lines by Britain and France after the First World War. Nevertheless, in the post-Ottoman states of Syria, Iraq, Egypt, Lebanon, and Palestine, the anticolonial imperative to forge political communities that transcended religious difference gathered force. So too did countervailing and often chauvinistic minoritarian, nationalist, and religious politics begin to crystallize. While the latter politics has been well documented, it is the active will to coexist that needs to be studied today more urgently than ever before. The persistence of a complex culture of coexistence provides a powerful antidote to the misleading and pernicious idea of the sectarian Middle East.

Roundup Top 10!  

The impeachment hearings are a battle between oligarchy and democracy

by Heather Cox Richardson

Ukraine’s leaders were accustomed to wielding power by prosecuting their political opponents for corruption, and Yovanovitch’s push to end that practice earned their ire.

 

Why family separation is so central to Trump’s immigration vision

by Maddalena Marinari

Strengthening family ties has been key to overcoming nativism — and in 2020, it can do so again.

 

 

American Slavery and ‘the Relentless Unforeseen’

by Sean Wilentz

The neglect of historical understanding of the antislavery impulse, especially in its early decades, alters how we view not just our nation’s history but the nation itself.

 

 

Between the Lines of the Xinjiang Papers

by James A. Millward

The Chinese Communist Party is devouring its own and cutting itself off from reality.

 

 

Today’s problems demand Eleanor Roosevelt’s solutions

by Mary Jo Binker

It’s time to banish fear and take up constructive action.

 

 

Ten rules for succeeding in academia through upward toxicity

by Irina Dumitrescu

Universities preach meritocracy but, in reality, bend over backwards to protect toxic personalities.

 

 

The Last Time America Turned Away From the World

by John Milton Cooper

The unknown story behind Henry Cabot Lodge’s campaign against the League of Nations.

 

 

The GOP Appointees Who Defied the President

by Michael Koncewicz

Before Watergate became a story that dominated the national media in the spring of 1973, there were individuals within the Office of Management and Budget (OMB) and the IRS who took dramatic steps to block Nixon’s attempts to politicize their work.

 

 

The War on Words in Donald Trump’s White House

by Karen J. Greenberg

How to Fudge, Obfuscate, and Lie Our Way into a New Universe

 

 

Why abruptly abandoning the drug war is a bad idea for Mexico

by Aileen Teague

Long-term economic initiatives are good, but a power vacuum will make things more violent in the short term.

 

 

 

What Recognizing the Armenian Genocide Means for U.S. Global Power

by Charlie Laderman

It could spark a recognition that America First is the wrong course.

Sondland Sings: Here's How Historians Are Responding

Too Important or Too Irrelevant? Why Beijing Hesitates on Hong Kong

 

Two competing narratives possibly explain why Beijing’s authoritarian communist rulers have not so far interfered in the increasingly violent protests in Hong Kong, now six months old and heading into a deadly new phase. Whichever explanation is correct will determine how long Beijing will stay patient if the impasse drags on and the violence continues to grow.

 

The answer may also likely decide whether the ‘one country, two systems’ formula can survive intact.

 

One account, in London’s Financial Times, says that the Chinese have largely remained on the sidelines, leaving it to local police and authorities to find or force a resolution, because Hong Kong is no longer of great significance to China. This theory argues that Hong Kong’s days as the mainland’s key financial base are long over. Thus, the territory can be left alone to clean up its own mess.

 

The other says just the opposite. Beijing is hesitant to intervene directly—militarily or by ending the territory’s autonomy—because Hong Kong remains too important, both as a financial center and as an international symbol. Any direct intervention—for example, by sending in the People’s Armed Police—would be devastating to China’s international image while it is already burdened with a slow economy and a trade war with the United States.

 

Which is right? Is Hong Kong too irrelevant or too important for China to directly intervene? There is evidence to buttress both sides.

 

At the time the territory was handed back to China in 1997, after one hundred and fifty years as a British colony, Hong Kong accounted for nearly 20% of China’s GDP. Today, that figure is less than 3%. The neighboring Chinese city of Shenzhen, just across the border, has surpassed Hong Kong in the size of its economy and its still soaring annual growth rate. Shenzhen and Guangzhou are already surpassing Hong Kong when it comes to start-ups and new technologies.

 

Hong Kong saw its role as the entrepot for trade with China shrink once the mainland joined the World Trade Organization after 2001. Chinese citizens now have their own stock markets, and more than 150 companies have bypassed Hong Kong to list on major American stock exchanges.

 

According to this view, China’s communist leaders under hardline ruler Xi Jinping might be satisfied to let Hong Kong burn, so long as the contagion doesn’t spread north. With its liberal values and British colonial holdovers, Hong Kong was always a troublesome source of suspicion and mistrust.

 

If Hong Kong’s internal conflagration hastens its replacement by Shenzhen or Shanghai as China’s most important city, Xi might actually see that as a long-term benefit. If the protests eventually lead to an exodus of local elites with overseas passports, as well as foreigners, multinationals, and foreign media outfits that make Hong Kong their headquarters, so much the better.

 

But the competing view holds that while Hong Kong’s importance to China’s overall GDP has lessened, the territory remains crucial to the mainland’s economic well-being. 

 

Hong Kong still accounts for more than 60% of all foreign direct investment flowing into the mainland, and that number has grown despite the months of protest. Hong Kong’s stock exchange is still the third largest behind New York and Tokyo, and ahead of London, and China’s markets remain closed to foreign investors. Hong Kong’s credit rating is higher, its legal system is internationally respected, and money can be freely exchanged. In China, strict capital controls prevent this.

 

President Donald Trump’s trade war has heightened the importance of Hong Kong being treated globally as a legal economic entity distinct from China. Chinese officials have warned Washington that the Hong Kong Human Rights and Democracy Act of 2019, which unanimously passed the House of Representatives last month and enjoyed bi-partisan support in the Senate, is “an attempt to use Hong Kong to contain China’s development”, in the words of Yang Guang, a spokesman for the Hong Kong and Macao Affairs Office which handles the territory’s affairs in Beijing.

 

There are even opposed views on whether the continuing unrest in Hong Kong poses the threat of contagion to the mainland.

 

One theory maintains that Beijing’s rulers fear that the pro-democracy protests in Hong Kong might spark similar demonstrations in Guangdong and other Chinese cities. They are wary of making even common-sense concessions—for example, allowing an independent commission to examine police brutality or restarting the stalled political reform process—for fear of sparking similar demands at home.

 

Yet the opposite position also has plausibility. That is, with strict control of the mainland media and the internet, Chinese propagandists have succeeded in painting the Hong Kong protests as an anti-China secessionist movement launched and financed by notorious ‘black hand’ foreign foes, namely the American CIA. Instead of sympathy, many mainland Chinese have only disdain for the residents of Hong Kong, whom they see as wealthy, spoiled, and lacking in love for the motherland.

 

Chinese officials lately seem to be signaling more repression and no reform as the road to resolving Hong Kong’s crisis. China’s Vice-Premier Han Zheng told Hong Kong Chief Executive Carrie Lam that “extreme, violent, and destructive” activities would not be tolerated. Mainland officials have called for strengthening China’s supervision over the territory, imposing ‘more patriotic education’ in Hong Kong schools, introducing stricter vetting of civil servants to ensure loyalty to Beijing, and implementing a long-delayed national security law.

 

Those calls are only likely to grow louder after the violence on 11 November, which included widespread vandalism, the forced shutdown of university campuses, and the police shooting of a protester in the stomach.

 

It is becoming more apparent that Beijing’s leadership is caught somewhere in between: fearful of allowing the unrest to continue, yet paralyzed from intervening by the concern of making a tragic, perhaps fatal, mistake. 

 

Chinese communist rulers are now facing the most serious political unrest on their territory since the June 1989 pro-democracy protests in Tiananmen Square. The massacre by People’s Liberation Army troops on that occasion cost China nearly a decade of sanctions, international isolation, and restrictions on technology transfers.

England’s Richard III as Murderous, Royal Thug

 

William Shakespeare’s bone-chilling play Richard III portrays England’s deformed monarch as a murderous thug, one of the great villains of world history. That portrayal is underlined yet again in a new production of the play that opened last week at the Gerald Lynch Theater of John Jay College in New York. The play, staged by Ireland’s DruidShakespeare company, is part of the White Light Festival sponsored annually by the city’s Lincoln Center. The play stars Aaron Monaghan in a scalding, searing performance as the duplicitous, arrogant, and diabolical Richard, who marched to the throne amid a long series of bloody murders and executions conducted on his orders.

 

The blood-soaked drama, which has been staged thousands of times, is one of Shakespeare’s great plays, and here, at Lincoln Center, it receives a splendid production, an absorbing staging that builds the drama and tension of the tale as it unfolds in all of its treachery and gore.

 

The play starts just after the two major houses in England have ended a long war. Richard, a young royal hobbling about on his canes, miserably dragging himself around the royal court, plots to become King. He hires some ambitious friends and has them do his dirty work for him. Slowly but surely, Richard climbs up the power ladder and takes the crown (he puts it on his own head, a la Napoleon several hundred years later). He battles the men in his kingdom, but he battles the women, too, even killing a few. He has horrific confrontations with his mother and the mother of some men he had slain. He denies culpability most of the time, putting the slayings off on someone else. When he does admit his guilt, he tries to convince the women that he was right and that, oh well, he’ll marry one of their daughters to make up for it. 

 

As the play unfolds, we meet numerous characters, good and bad, all caught up in the gory hurricane Richard has unleashed. Most of them don’t survive it.

 

The wonder of it all is how on earth Richard did not expect the friends and relatives of those he had murdered, at some point, to come after him.

 

The play has a streamlined cast (sometimes 50 people appear in other productions). Director Garry Hynes does a laudable job of running the play and mixing solo appearances by Richard at some points and Richard in groups in others. She gives Richard a lot of humor and at times has Richard paint himself as a victim of some sort and a royal whose only goal is to unite the troubled kingdom and move on peacefully with all. 

 

Richard III is not for everyone. It lasts about three hours and the plot is extremely complicated. You need a scorecard to keep track of who is backing who, who is conspiring against who and who is being butchered in the King’s name. The most famous murders in the play, of course, are those of two children, Richard’s nephews, Edward V and Richard of Shrewsbury, whom Richard III fears. He has them carted off to the Tower of London and then slain. There are so many murders going on in the play, one right after the other, that the boys’ executions, sad as they are, get a bit lost in the story, mere historical road kill. You’ve got to have a stomach for blood and violence, too. Richard III makes Tony Soprano look like the President of the local Chamber of Commerce.

 

Is Richard III historically accurate? Not really. Richard suffered from scoliosis, a condition in which one shoulder sits lower than the other, giving him an odd walk. He was not a hunchback with a withered arm, as Shakespeare portrayed him, and did not drag himself around, leaning on two canes, as most of the actors who have played him, including Monaghan, do. He was involved in some royal chicanery but did not commit all of the murders that Shakespeare attributed to him (the Bard was heavily influenced by the Tudor age in which he lived), although the jury is still out among British historians on the murders of the two young boys. When Richard first appears in the story, he seems utterly broken and, referring to himself, scowls to the audience, “thou lump of foul deformity.” He blurts out, too, in the first few minutes of the play, that he has set himself on a path of destruction to claim the crown and keep it. He does just that, too, with a nasty group of henchmen.

 

There was royal turbulence in Richard’s era, and the real Richard was caught up in it. The crown changed hands four times in just 22 years, and in some of those years Richard had to live outside the country for his own protection. 

 

Hynes also does a fine job of showcasing the sweep of Shakespeare’s drama, giving the audience a deep and rich look at the royal court in that era. There are also a number of fine battle scenes at the end of the play, where Richard, in mortal danger, shouts that famous line “a horse, a horse, my kingdom for a horse.”

 

Director Hynes gets a memorable, truly memorable performance from Monaghan as Richard, but she also gets good work from a talented ensemble cast.  Some of the fine performances are by Garrett Lombard as Hastings,  Marie Mullen as Queen Margaret, Rory Nolan as Buckingham, Marty Res as Clarence,  Frank Blake as Dorset, Bosco Hogan as King Edward IV, Jane Brennan as the Duchess of York and John Olohan as Stanley. 

 

They all work to give the audience a memorable play and a fine look at British history. 

 

At the end of the play I walked out of the theater on to the chilly, chilly streets of New York and started thinking about this chilly, chilly play.

 

PRODUCTION: The play is produced by the DruidShakespeare Company in conjunction with the White Light Festival. Sets and Costumes: Francis O’Connor. Lighting: James F. Ingalls. Sound: Gregory Clarke. Fight Choreographer: David Bolger. The play runs through November 23.

What Have the Latest Impeachment Hearings Revealed?

 

When I wrote for the Journal-Courier, I had to send in my columns by Sunday evening for Tuesday publishing. I was not able to broadcast breaking news. I’m not a reporter, that’s their job. My job was to put together some writing that synthesized as much breaking news as I could. The heat was dissipating, it was time for light.

 

I believe that’s the opinion writer’s job. But often I could not say anything about what had changed since Sunday. If my synthesis was good, it would blend well with and partly explain the latest bombshell. Or perhaps my synthesis was already outdated.

 

Because I’m now self-published, but no longer printed, I enjoy many new degrees of freedom. It’s Tuesday, this is going out later today, and I can be almost up-to-the-minute about impeachment. The evidence is easy to find. Wikipedia has a lengthy narrative about “Impeachment inquiry against Donald Trump”. The Washington Post put together a timeline through last week.

 

During last week’s hearing, I thought the most effective defense mounted by the forever-Trumpers was that nothing happened. Whatever may have been said or done, in the end President Zelensky did not announce an investigation of the Bidens and the military aid was released. No harm, no foul. I didn’t buy it for a second, because the facts we have learned already are so overwhelming that I’ve made up my mind. But if someone retains some positive view of Trump, for whatever reason, such an overview of events is greatly reassuring.

 

That defense was bogus, and the last few days of news are killing it, because in fact a lot happened. Anyone, especially someone both politically astute and internationally vulnerable like Zelensky, would understand from the July 25 phone call that Trump offered a meeting only if he got something specific in return. If anyone could doubt that Trump was demanding specifically a Biden investigation, yesterday’s evidence about Gordon Sondland’s mobile phone call from a Kiev restaurant on July 26 demonstrates Trump’s overriding focus on investigations.

 

But the key sequence of events began much earlier. Zelensky was elected on April 21. On April 23, Rudy Giuliani tweeted: “Now Ukraine is investigating Hillary campaign and DNC conspiracy with foreign operatives including Ukrainian and others to affect 2016 election. And there's no Comey to fix the result.” That wasn’t truth, it was pressure.

 

Less than 3 weeks later, Zelensky and his advisers met on May 7 to talk about the Trump-Giuliani pressure to open investigations and avoiding entanglement in the American elections. He hadn’t yet been inaugurated, which happened on May 20.

 

Fiona Hill, a top deputy at the National Security Council inside the White House, explained to Congress about discussions in the White House in May, showing they already knew that Zelensky was feeling pressure to investigate the leading Democratic candidate.

 

Zelensky and his top advisors continued talking among themselves about the pressure that was being exerted on them and what to do about it. They realized that the life-saving military aid was included in the deal Trump was offering even before August 28, when Politico published an article about it. Top Ukrainian officials knew already in early August. William Taylor, the new acting ambassador to Ukraine after Marie Yovanovitch was fired, characterized Ukraine’s defense minister afterwards as “desperate”.

 

Trump and Mulvaney and Pompeo and who knows who else decided to release the aid on September 11, only after Democrats in Congress threatened to investigate. The whistleblower spilled the beans to Congress on September 25 and to the public the next day about why it had been withheld.

 

We know now that Zelensky was preparing to go on TV, in particular on Fareed Zakaria’s show on CNN, with a statement about Trump’s investigations. As soon as military aid was resumed, he cancelled the interview, because he had never wanted to do that.

 

So we already know what happened, and it wasn’t nothing. President Zelensky was desperate for a meeting with Trump, and for good reasons. Trump said, “only if you do this favor”. In Ukraine, that message was being pounded home by people who said they were direct representatives of Trump – Rudy Giuliani, Rick Perry, Gordon Sondland. The American face of our efforts to help those Ukrainians trying to reduce corruption, Ambassador Yovanovitch, had been sent home, a removal accomplished by the Giuliani-Trump team.

 

Zelensky’s official communications with Americans displayed the heavy weight he put on a meeting with Trump. After the aid was released, the pressure continued. On October 3, Trump said on the White House lawn: “I would say, President Zelensky, if it was me, I would start an investigation into the Bidens,” and added the Chinese for good measure.

 

But Zelensky wasn’t going to do what Trump demanded, an announcement to the world that the Ukrainian government was investigating the Bidens. The Congressional Republican overview is false, because like Trump, they don’t care about Ukraine. President-elect and then President Zelensky refused for 4 months to do anything in response to Trump’s insistence on investigations, even though he desperately desired a meeting with Trump. When he found out that military aid was being withheld, he still refused for a month to become entangled in our election.

 

The Trump-Giuliani team caused great anxiety in the Ukrainian government. But with the highest stakes involved, the political neophyte Volodymyr Zelensky said “no” to corruption.

 

Later today there will be more news, and for many days to come. No facts have come to light that cast any part of this story into doubt. The timeline gets longer and more intense with each revelation. 

 

I don’t know how many people the Republican sleight-of-hand can fool. I don’t know if the elections last week point to any turn against Trump among Southern voters. I don’t know what tomorrow’s headlines will be.

 

But I know corruption when I see it.

The Runway for Global Warming: The Suez Canal's 150th Anniversary

Smoke billows from oil tanks next to the Suez Canal, 1956

 

The grandest party of the nineteenth century took place in November 1869, and we are all still hungover. The artificial connection of the Mediterranean to the Indian Ocean was inaugurated exactly 150 years ago with great pomp, including a Verdi opera commissioned especially for the event. It stimulated international commerce and communication, but also helped unleash the process of global carbonization – the worldwide spread of fossil fuels, first coal and then oil, as well as devices and lifestyles predicated on burning them. Grasping the history of this nineteenth-century infrastructure in the Middle East is therefore crucial for charting a course for global decarbonization.

 

This is partly because physical environments are inseparable from the technologies that traverse them and the fuels they consume. An “inauguration” suggests a single event. Similarly, we tend to consider the Suez Canal as a distinct, stable object. Yet both were and still are moving targets. For more than a decade before the inaugural ceremony, French entrepreneurs built up in Egypt the largest concentration of mechanical energy in recorded history anywhere in the world, for digging and dredging the canal. Deepening and broadening this waterway continued long after the ceremonial opening and actually never stopped. The canal might not have opened the floodgates to the age of the steamship and later the oil tanker, but it certainly kept expanding the crack, with every enlargement allowing bigger and heavier ships to be designed. Egyptian president Abd al-Fatah al-Sisi’s 2015 launching of a second, parallel canal project represents the latest instance in a century-and-a-half-long process.

 

Historians have usually thought of the canal in contexts other than climate change. Before our present and future were colored by impending catastrophe, our view of the canal’s past was marked by the fanfare of its inauguration, an event so extravagant – the guest of honor, French empress Eugénie, was hosted in a replica of her private apartments in the Tuileries – it contributed to bankrupting Egypt. Others evaluate the canal through its relationship to a key commodity: long-staple cotton. The American Civil War and the resulting disruption of cotton cultivation in the U.S. South caused the global cotton market to shift to Egyptian growers, sparking an economic boom that financed the canal. The social sciences wove out of such global cotton threads a notion of the capitalist world system tailored along the lines of raw and processed: a world divided between agricultural peripheries like Egypt, sending raw materials for processing in industrial centers like England. 

 

But there was another substance, coal – the mother of all raw materials – that disrupts this scheme. Not long after the coal-fired industrial revolution in the British Isles, another revolution in agricultural “peripheries” such as Egypt was predicated on British coal. The industrialization of Egyptian agriculture relied on imported coal for cash cropping cotton, which was irrigated with steam-powered pumps, loaded onto railways and then onto steamers that shipped it to Lancashire’s mills. The Suez Canal was crucial in these global coal-fired entanglements. Offering a hyphen connecting the British Empire to its Indian Crown Jewel, it significantly shortened travel times and facilitated the flow of British coal to the east, injecting this substance into economies predicated on more sustainable driving forces. The new artificial passage to India that replaced circumnavigating the Cape of Good Hope literally created the “middleness” of what would thereafter be thinkable as the “Middle East”. The canal also annexed this region into the fossil fuels complex. The connection would prove all the more significant in the age of oil in the early twentieth century. Liquid fuel now flowed in tankers in the other direction, from the Middle East westwards, but again along the canal.

 

The constricted waterway of the canal gave shape to the steamship by promoting rear propellers and demoting side-wheelers that bumped against its narrow banks. Other features of the canal also changed the shape of ships; for example, there were significant price differences for coal east and west of the canal. As long as the canal’s depth forced heavy ships, when fully loaded, to replenish their supplies east of the canal, light steamers had the advantage in terms of money and coaling time. As the canal was gradually deepened, it propelled an increase in the size of steamships, just as the ships’ size encouraged its deepening. Finally, the canal fastened together an artificial archipelago of coaling stations that sprang up around mid-century in a reverse domino effect at equal distances between Gibraltar, Malta, Port Said, Aden, and Bombay. It is undoubtedly one of the key carbon fibers that underpin our current planetary crisis.

 

The hubris of altering the world to sustain the flows of maritime capitalism, imperialism, and fossil fuels was part of a new imperialist mindset focused on making previously unlivable habitats ready for human colonization. Port Said, a vast coaling station at the Mediterranean mouth of the canal (alongside Aden, another such station to the southeast) was one of the first places in the world dependent on coal-fired compressors for desalinated water. As early as the 1850s, this technological fix supported an army of canal diggers and later colonial garrisons. Thus nourishing expanding populations in arid environments came to characterize European imperialism in the region. In the twentieth century oil slipped comfortably into coal’s boots on the ground (and the U.S. into British ones) so much so that life in the Arabian Peninsula and elsewhere in the region is now unthinkable without water desalination dependent on fossil fuels.

 

Alongside proliferating coal and oil the megaproject jeopardized biodiversity and accelerated species extinction. The titanic steamers it promoted had a buoyancy problem inherent to burning their fuel supply. This caused them to rise in the water to the verge of capsizing. To compensate for this upsurge, ballast water started being used and dumped in different ports of call. That is, until the procedure was recognized as a key vector for marine pollution. Moreover, the artificial saltwater passage between the Red Sea and Mediterranean not only created the first physical connection between these bodies of water and marine ecosystems, but also standardized their salinity and temperature. Ecological barriers that kept the Mediterranean insulated were now breached and a migration to the Eastern Mediterranean of invading species from the Red Sea and Indian Ocean began en masse. Marine biologists call it Lessepsian Migration after Ferdinand de Lesseps, founder of the Suez Canal Company. Biologists recently discovered an invasive crab that traveled via the canal and now infests Israel’s desalination plants. Such underwater trails on Mediterranean sand are also among the carbon footprints of the canal, blending together several historical trends that often happen under our radar. To promote a global vision for decarbonization today requires mapping such historical pathways and the ongoing implications of the globalization of carbon energy.

 

The Whistleblowers of the My Lai Massacre

 

On March 16, 1968, about 200 American soldiers from Bravo and Charlie companies—part of the Americal Division’s 11th Infantry Brigade—entered the complex of South Vietnamese villages now known as My Lai, and killed 504 unarmed villagers, including elderly men, women, children, and babies. The “My Lai Massacre,” as Time magazine eventually called it, remained hidden from the public until November 1969—an 18-month cover-up that began almost immediately.

 

That it came out at all is because there was a whistleblower.

 

In fact there were three.

 

Three young soldiers were on a small OH-23 helicopter whose mission was to draw enemy fire and expose the Viet Cong’s location: twenty-five-year-old Warrant Officer Hugh Thompson, twenty-year-old crew chief Glenn Andreotta, and eighteen-year-old gunner Larry Colburn. They were acting as “bait,” Colburn later declared.

 

They witnessed a captain (Ernest Medina, commander of Charlie Company) shoot an unarmed and severely wounded Vietnamese woman with his M-16. Moments later they saw scores of Vietnamese bodies in a drainage ditch on the eastern side of the village of My Lai 4. American soldiers were standing on its edge, shooting those trying to crawl out. Thompson landed his helicopter and jumped out, demanding to know who was in charge. He was confronted by a second lieutenant (later determined to be William L. Calley).

 

A heated exchange followed. Outranked by Calley, Thompson left, believing the killing had stopped. As his helicopter ascended, however, Andreotta yelled into the intercom, “My God, he’s firing into the ditch again!” Calley had ordered his men to pull out but left a sergeant behind to shoot the wounded.

 

Thompson resolved to report the killings to the command base.

 

Turning back, he and his crewmates saw a squad of American soldiers in pursuit of a small number of Vietnamese civilians running toward an earthen bunker. Thompson set down his helicopter between the soldiers and the Vietnamese and left the engine running as he tried to find the officer in charge. Like Calley, 2nd Lieutenant Stephen Brooks refused to stop the killing. A shouting match ensued, after which Thompson stormed back to the helicopter and ordered Andreotta and Colburn to grab their machine guns while he coaxed the Vietnamese out of the bunker. If any of the American soldiers opened fire at him or the civilians, he declared, “shoot ’em!”

 

For several moments, the two sides glared at each other from across a fifty-yard divide as the propellers whirred loudly above. Brooks finally ordered his men to stand down. Thompson had meanwhile cajoled the Vietnamese out of the bunker—four adults (including two women) and five children—and herded them to the helicopter. Seeing there were too many to fit into his small craft, he radioed a Huey Gunship nearby, whose pilot agreed to transfer the Vietnamese to safety a mile or so away.

 

Thompson made one more pass over the ditch to see if anyone was alive. Andreotta spotted movement by a child and urged Thompson to land. While Thompson remained in the helicopter watching for Viet Cong, Andreotta slid down the bloody wall of the ditch and, while Colburn waited at its side, made his way through piles of bodies and dying Vietnamese begging for help to reach the child, a boy in shock and pinned beneath others but clinging to his dead mother. Andreotta picked him up and waded back to the edge of the ditch, where he lifted the boy to Colburn.

 

The three soldiers laid the youth across their laps as Thompson made a quick six-mile run to the hospital in Quang Ngai City.

 

At base headquarters, Thompson filed charges of mass killings, causing the command to order a cease-fire a little before noon that doubtless saved hundreds more lives.

 

Thompson’s allegations greatly unsettled the commander of Americal, Major General Samuel Koster, who was on the fast track to promotion and was already worried about a report that American forces had killed 128 Viet Cong but discovered only three weapons. The immediate question was whether the victims were unarmed.

 

Koster’s second in command, Brigadier General George Young, expressed doubt that Thompson witnessed mass killings and was more disturbed about his “so-called confrontation” with fellow American soldiers.

 

Rather than file an atrocity report, Koster decided to keep the matter within the division by ordering an informal investigation aimed at collecting facts and then reporting back to him. He saw no need for an official inquiry, which would involve appointing an investigating officer, taking sworn testimony of witnesses, and submitting a final report to the Staff Judge Advocate of MACV (Military Assistance Command, Vietnam). As Young later emphasized to the Army’s Criminal Investigation Division, “One must be aware that a war crime has been committed before it can be reported.”

 

That statement was not correct. MACV Directive 20-4 required reports on “all alleged or apparent war crimes” inflicted by “US military personnel upon hostile military or civilian personnel.” Mere suspicion of a war crime required a formal investigation.

 

On March 18, five officers gathered in the command post van of the architect of the assault, Lieutenant Colonel Frank Barker, to discuss Thompson’s charges. “Nobody knows about this except the five people in this room,” Young declared to Barker, Colonel Oran Henderson (Americal’s brigade commander), Lieutenant Colonel John Holladay (battalion officer at Chu Lai), and Major Frederic Watke (Thompson’s aviation commander).

 

In accordance with Koster’s order, Young instructed Henderson to interview personnel who had been on the ground or in the air at My Lai and present an oral report to Koster within seventy-two hours.

 

Later that same morning, Henderson met with Thompson for about a half hour in Barker’s van. He also met with Colburn and a Huey gunship pilot, Warrant Officer Jerry Culverhouse, who both later testified that they confirmed Thompson’s charge of indiscriminate killing. Yet Henderson later claimed that he talked only with Thompson, who, he said, mentioned nothing about mass killings or of rescuing a small group of civilians from American soldiers. On March 20 Henderson presented his oral report: no evidence of war crimes. All soldiers interviewed denied knowledge of any atrocities.

 

Shortly afterward, the Americal Division awarded medals for heroism to Thompson, Andreotta, and Colburn for risking their lives in a crossfire between U.S. and Viet Cong forces to save sixteen Vietnamese children.

 

The Americal Division had fabricated an account of bravery aimed at silencing them. Two years later, a congressional subcommittee used the awards in an attempt to undermine the credibility of Thompson and Colburn by showing that they (along with Andreotta, who had died in battle a month after the massacre) had received medals for bravery in combat when no enemy was present.

 

Furthermore, all three Eyewitness Statements, on which the awards were based, were phony, written by someone in the Americal Division. When asked in congressional committee hearings about his Eyewitness Statements on behalf of his men, Thompson denied writing them and claimed that his signature on them was a forgery. He then signed a piece of paper showing the differences in the handwriting. 

 

When I asked Colburn about his Eyewitness Statement on behalf of Thompson, he replied that he had never heard about such a document. Shown a copy of the Statement with his signature on it found in the Library of Congress, Colburn denied writing the Statement and declared that someone had forged his signature.

 

In the meantime, President Richard Nixon also sought to undermine the credibility of Thompson and others who had called My Lai a massacre. Over the Thanksgiving holidays of 1969, he met in his Key Biscayne retreat in Florida with advisers and expressed concern that the My Lai charges would accelerate the popular demand for an immediate withdrawal from Vietnam and interfere with his Vietnamization plan of phased withdrawal.

 

Nixon wanted members of Congress to discredit Thompson and other witnesses. Calley, Nixon declared, was “probably a good soldier” who might be “getting a bum rap.” The president ordered the establishment of a secret “Task Force—My Lai” to undermine press stories of a massacre as part of a “dirty tricks” campaign aimed at deflecting national attention to Viet Cong atrocities committed at Hué.

 

Evidence—and history—ultimately showed that Thompson’s original charges were correct and that an Army cover-up took place at every level of the Americal Division, all concealed from the Pentagon in Washington.

 

The truth gradually emerged through Army and congressional hearings, the Pulitzer Prize-winning investigative work of Seymour Hersh, stories on television and in other news media, and a series of courts-martial in which both Calley and Medina confessed to wrongdoing. Calley was convicted of premeditated murder but paroled after less than four years of house arrest; Medina was acquitted, in the spring of 1970, because of the two-year statute of limitations on charges less than war crimes.

 

All this started with a single whistleblower and his two crewmates.

 

On March 16, 1998, the 30th anniversary of the My Lai massacre, in a ceremony before the Vietnam Veterans Memorial in Washington, the Pentagon awarded the Soldier’s Medal to Thompson, Colburn, and Andreotta (posthumously)—the highest honor for bravery bestowed on a soldier in a non-combat situation. To emphasize the importance of moral and ethical leadership in combat, the Army incorporated a copy of Thompson’s award into its Field Manual of August 1999, highlighted in a boxed quote under the heading “WO1 Thompson at My Lai.”

 

They paid the price for it, as whistleblowers usually do. Thompson died in 2006—“morally wounded and despondent,” according to Colburn, who sat at his bedside. Two years earlier, his long-time friend had been inducted into the Army Aviation Hall of Fame. Even that honor could not make up for his long ordeal: intimidation, name-calling, and blackballing by his peers; accusations of treason from Americans both inside and outside the Army; flight assignments in the most dangerous areas without what Colburn thought was adequate protection; death threats in the mail and over the phone; mutilated animals dumped on his doorstep; and a strained home life.

 

Not long after Thompson’s passing, Colburn received a rising number of death threats, and, until his own death in 2016, customers increasingly refused to patronize his business in Atlanta.

 


 

The Princess and the Press

 

In the late 1940s and early 1950s, long before Princess Diana and Kate Middleton, Princess Margaret dazzled the world with her style, beauty and royal mystique. She was young, glamorous and everything people imagined a princess to be. Official photographs and press releases from the Buckingham Palace Press Office couldn’t satisfy the demand for royal news. To feed a fascinated public, newspapers reported on the good, the bad and the salacious details of Princess Margaret’s life. Sometimes the press chased her, and sometimes she courted them in an effort to garner the attention that she, as the Queen’s neglected younger sister, craved. Princess Margaret might have detested the press’s intrusion at times, but without their fervent coverage she would have faded into the obscurity she feared. With season three of the hit Netflix series The Crown about to premiere, viewers will get another glimpse of Princess Margaret’s lifelong struggle to make herself memorable.

 

After the dark years of World War II and the austerity that followed, photographs of Princess Margaret dressed in fabulous Dior gowns and fine jewels offered people a brief escape from everyday life. Duty compelled her to take on the role of fashion icon, but a desire for attention, instilled in her by her father King George VI’s favoritism, and the need to be more than the spare heir made the spotlight alluring. She’d never be queen, but she could be an inspiration to young women who had to make do and mend. Once fabric rationing ended, those same young women ran wild copying her style in an effort to gain a touch of royal glamour. The more details people craved about the Princess, the more the press found ways to provide them, revealing both the good and the bad about the Princess’s life and fame.

 

During Princess Margaret’s trip to Italy in 1949, a reporter broke into her hotel room to report on her perfume (Tweed by Lenthéric), her nail polish (an unnamed shade by Peggy Sage) and the book she was reading (Busman’s Honeymoon). During that same trip, she was photographed from a distance, as Princess Diana, Kate Middleton and Sarah Ferguson would be decades later, swimming off Capri in a pale bikini. From the angle of the shot, it appeared as if she was swimming in the nude. Although the British newspapers were prevented from printing the pictures, the foreign press ran with them. Both incidents were reported in the May 16, 1949 edition of Life Magazine, complete with photos.

 

The press weren’t simply unwanted intruders in the Princess’ life; they were often invited guests. In the late 1940s, the Princess’ good friend Sharman Douglas, daughter of the American Ambassador to the Court of St. James from 1947 to 1950, was well liked by the press because she fed them tips about herself and the Princess. She did this with the Princess’ knowledge and permission. If she hadn’t, she wouldn’t have remained an integral part of the Princess Margaret Set, a term coined by the media to describe the young friends in Princess Margaret’s circle. Many of the members came from aristocratic families, were devoted to the Princess, and understood the need for discretion except when the royal gave them permission to let details slip. In those early days, the Princess’ courting of the press was an innocent flirtation, but it would eventually become a more sordid relationship.

 

Every gentleman Princess Margaret was photographed with was reported to be a potential husband until the press noticed her interest in the much older Captain Peter Townsend. It wasn’t the age difference that made the relationship scandalous, but Townsend’s divorce and his status as an employee. Marrying Townsend would be considered a sin, since the Church of England prohibited divorced persons from remarrying in the church while their former spouse was still alive. This created a conundrum for Queen Elizabeth, who wanted her sister to be happy but was, as the head of the Church of England, bound by its dictates. While the British press was muzzled during the early years of the courtship and censors cut articles out of imported newspapers and magazines, Americans and Europeans were treated to pages of speculation. It was these articles that eventually allowed the British press to break their silence by writing a rebuttal to a foreign piece on the Princess’s relationship with the equerry. Once news of Princess Margaret and Captain Townsend broke, the public rallied around her, closely following her and Townsend’s will-they-or-won’t-they story. The Daily Mirror’s poll found that the majority of readers favored the match. Princess Margaret hoped that the public’s support would sway the powers that be into approving the marriage, but it didn’t. Instead, the episode was a stark reminder of Princess Margaret’s lesser place in the royal family, and of how the public that adored her enjoyed more control over their own lives than she did.

 

As dramatically depicted in The Crown and my novel The Other Windsor Girl, Princess Margaret’s life was one of difficulty and heartbreak as much as teas and gowns. She might have possessed the privileges of royalty, but with them came rigid restrictions on everything from her behavior to whom she could marry. Buckingham Palace gave her no real role or career and denied her the chance to marry Peter and have her own family. The frustration and lack of purpose drew out the tarter side of her personality, and the press began to report on that too. While Queen Elizabeth was held up as the figurehead of a new Elizabethan Age, Princess Margaret was cast as the foil to her perfect sister. The Princess initially hated the comparison, but as time went on she used her black-sheep image to garner the only real attention and notoriety she could.

 

In later life, in an attempt to remain relevant, she fed the newspapers gossip about herself. Her behavior also became more eccentric, leading to as many stories about her haughty comments or her notorious acid-drop look as about her charity work. However, the Princess’ need to draw attention to herself came at the cost of her privacy. Just as her failed relationship with Captain Townsend had played out before the public, so did the collapse of her marriage to Antony Armstrong-Jones, and the affair that eventually made her the first royal divorcee since Henry VIII. Yet if it hadn’t been for Princess Margaret’s relationship with the press, she might have faded into obscurity in some small corner of Kensington Palace while younger family members waited for her to die and free up a royal apartment. Instead, Princess Margaret shone in the only way she could: in the public eye. She left her mark on the royal family and carved out an image and a place for herself, if not as a dutiful sister to the Queen, then as a notable woman with a sharp wit who made great headlines and remains a fascinating, sometimes tragic character in The Other Windsor Girl and The Crown.


Benjamin Franklin, Religious Revolutionary

 

In 21st century America, evangelical Christianity is a bulwark of the political right. Not only are evangelicals some of Donald Trump’s firmest supporters, but the born-again movement has translated its power into an array of churches, radio and TV networks, print media and political action committees designed to advance the fortunes of the Republican Party. Indeed, the association of evangelicalism with reactionary politics is so well known as to be axiomatic (despite notable exceptions such as African American evangelicals who are also unshakable Democrats). Surprisingly, however, during America’s first major evangelical revival — the Great Awakening of the 1730s and ’40s — one of its most important figures had little use for religious conservatism. In fact, he wasn’t a preacher at all, but the reform-minded, freethinking Philadelphia printer Benjamin Franklin.

 

Franklin’s role in the Awakening is unexpected, considering his well-documented hostility to organized religion, especially in its institutional and hidebound forms. From his youth, he found the Calvinism widely preached in New England to be “very dry, uninteresting and unedifying” and sought out an alternative system of belief. With a questing mind and relentless talent for invention, he crafted a detailed moral inventory that highlighted 13 virtues that he found essential, among them temperance, justice, cleanliness and humility. His own idea of God favored these qualities, even if that God was an austere figure who stood at some distance from the daily concerns of those who professed to believe in him. If Franklin was measured in his conception of the deity, he could be downright hostile to clerics and those who enforced church authority. In one case, he compared a group of Presbyterian ministers to “grave and dull Animals…or if you will, Rev. Asses.” Nonetheless, despite these feelings, when the evangelical revival came to Pennsylvania, he did all he could to advance the cause.

 

The key was his association with the Anglican priest and firebrand George Whitefield. Although legendary ministers such as Jonathan Edwards had already led multiple revivals in New England, Whitefield inflamed passions well beyond the region. By the time his second tour of British North America (1739–41) was complete, the English preacher had ignited a Christian revival throughout the colonies. As Franklin expressed it in his Pennsylvania Gazette newspaper, “the alteration in the face of religion here is altogether surprising,” as outbreaks of public piety, a surge in church attendance, and a new hunger for devotional books and pamphlets all marked the sudden rebound in Protestant zeal. Franklin himself profited by the revival, due in no small measure to his personal and business alliance with Whitefield.

 

Franklin’s skills as an entrepreneur were unmatched in the colonies, and he found countless avenues to benefit himself and his business. He published Whitefield’s private journals along with his sermons; he publicized the preacher’s travels and tour stops around the region; and he hawked memorabilia such as portraits and engravings of Whitefield to round out his growing inventory of devotional goods that capitalized on the Awakening. In so doing, Franklin not only enhanced Whitefield’s celebrity; he expanded the movement the preacher led, as Whitefield drew crowds of 20,000 or more to his enormously popular sermons. It’s hard to imagine the movement achieving the power it did without Franklin’s efforts, just as it’s hard to understand why Franklin made them, beyond the simple urge to make money.

 

The preacher and the printer differed markedly in belief: Franklin promoted a system of virtue that he called a “bold and arduous project of arriving at moral Perfection” through upholding the various qualities he esteemed. Whitefield, by contrast, argued for a stringent Calvinism that demanded sinners confess their sins and be redeemed by the “New Birth” (to be born again, in modern parlance) to have even a small chance that God would save them from eternal damnation. Franklin understood the necessity of human free will and argued for proper morals and good conduct; Whitefield saw divine predestination and human depravity as essential to his theology. All of this should have made the Anglican priest anathema to Franklin. Instead, Whitefield became one of Franklin’s most reliable friends, and remained so for decades until his death.

 

One reason is that Franklin stood in awe of the preacher’s abilities in the pulpit. Since Whitefield had some experience in the theatre as a youth (before rejecting it as ungodly), he was able to channel those skills into electrifying public performances in which he brought audiences, and often himself, to fits of tears. Franklin marveled over “the extraordinary influence of his oratory on his hearers” and praised the way the revival had altered the corrupt habits of his fellow Philadelphians: “it was wonderful to see the change soon made in the manner of our inhabitants. From being thoughtless or indifferent about religion, it seemed as if all the world were growing religious.” If this seems surprising coming from Franklin, it shouldn’t be: the printer appreciated the upturn in public morality, regardless of the means necessary to achieve it. Part of this stemmed from the ennobling effect he saw in religious belief; the other part stemmed from his fear of what would happen in its absence. As he later reflected, “If men are so wicked as we now see them with religion, what would they be if without it?”

 

Franklin also seemed to have a genuine affection for Whitefield. Each clearly benefited from the financial arrangement they had, but more than this, the two men exchanged letters and expressed bonhomie in ways that went well beyond mere profit-seeking. Later in life, Franklin even proposed they settle a colony on the Ohio River: “What a glorious thing it would be, to settle in that fine country a large strong body of religious and industrious people!” The plan did not come to fruition, but as the years passed the bonds between them only seemed to grow stronger, even though neither wavered in his core beliefs. Whitefield regularly encouraged Franklin to convert to Christianity; the printer just as regularly rebuffed him. On one occasion, after Whitefield stayed at Franklin’s house, the preacher suggested Jesus would reward Franklin if he had done so out of a sense of Christian charity. Franklin replied, “Don’t let me be mistaken; it was not for Christ’s sake, but for your sake.”

 

In the end, Franklin’s association with Whitefield and his role in promoting the Great Awakening benefited him greatly. Not only did he profit handsomely by printing the tracts of revival preachers; the proceeds from this and his many other business ventures allowed him to retire early and devote his attention to public affairs and, decades later, the revolution. The irony was dual: a man who rejected the tenets of the first great evangelical revival helped spread its message and ensure its success, and the profits he earned from that revival freed him to devote his time to politics and eventually emerge as a champion of the separation of church and state.

Robert E. Lee Wasn't a Hero, He Was a Traitor

 

There’s a fabled moment from the Battle of Fredericksburg, a gruesome Civil War battle that extinguished several thousand lives, when the commander of a rebel army looked down upon the carnage and said, “It is well that war is so terrible, or we should grow too fond of it.” That commander, of course, was Robert Lee.

 

The moment is the stuff of legend. It captures Lee’s humility (he won the battle), compassion, and thoughtfulness. It casts Lee as a reluctant leader who had no choice but to serve his people, and who might have had second thoughts about doing so given the conflict’s tremendous amount of violence and bloodshed. The quote, however, is misleading. Lee was no hero. He was neither noble nor wise. Lee was a traitor who killed United States soldiers, fought for human enslavement, vastly increased the bloodshed of the Civil War, and made embarrassing tactical mistakes. 

 

1) Lee was a traitor

 

Robert Lee was the nation’s most notable traitor since Benedict Arnold. Like Arnold, Robert Lee had an exceptional record of military service before his downfall. Lee was a hero of the Mexican-American War and played a crucial role in its final, decisive campaign to take Mexico City. But when he was called on to serve again—this time against violent rebels who were occupying and attacking federal forts—Lee failed to honor his oath to defend the Constitution. He resigned from the United States Army and quickly accepted a commission in a rebel army based in Virginia. Lee could have chosen to abstain from the conflict—it was reasonable to have qualms about leading United States soldiers against American citizens—but he did not abstain. He turned against his nation and took up arms against it. How could Lee, a lifelong soldier of the United States, so quickly betray it? 

 

2) Lee fought for slavery

 

Robert Lee understood as well as any other contemporary the issue that ignited the secession crisis. Wealthy white plantation owners in the South had spent the better part of a century slowly taking over the United States government. With each new political victory, they expanded human enslavement further and further until the oligarchs of the Cotton South were the wealthiest single group of people on the planet. It was a kind of power and wealth they were willing to kill and die to protect. 

 

According to the Northwest Ordinance of 1787, new lands and territories in the West were supposed to be free, while large-scale human enslavement remained in the South. In 1820, however, Southerners amended that rule by dividing new lands between a free North and a slave South. In the 1830s, Southerners used their inflated representation in Congress to pass the Indian Removal Act, an obvious and ultimately successful effort to take fertile Indian land and transform it into productive slave plantations. The Compromise of 1850 forced Northern states to enforce fugitive slave laws, a blatant assault on the rights of Northern states to legislate against human enslavement. In 1854, Southerners moved the goalposts again and decided that residents in new states and territories could decide the slave question for themselves. Violent clashes between pro- and anti-slavery forces soon followed in Kansas.

 

The South’s plans to expand slavery reached a crescendo in 1857 with the Dred Scott Decision. In the decision, the Supreme Court ruled that since the Constitution protected property and enslaved humans were considered property, territories could not make laws against slavery. 

 

The details are less important than the overall trend: in the seventy years after the Constitution was written, a small group of Southern oligarchs took over the government and transformed the United States into a pro-slavery nation. As one young politician put it, “We shall lie down pleasantly dreaming that the people of Missouri are on the verge of making their State free; and we shall awake to the reality, instead, that the Supreme Court has made Illinois a slave State.” 

 

The ensuing fury over the expansion of slave power in the federal government prompted a historic backlash. Previously divided Americans rallied behind a new political party and the young, brilliant politician quoted above. Abraham Lincoln presented a clear message: should he be elected, the federal government would no longer legislate in favor of enslavement, and would work to stop its expansion into the West.    

 

Lincoln’s election in 1860 was not simply a single political loss for slaveholding Southerners. It represented a collapse of their minority political dominance of the federal government, without which they could not maintain and expand slavery to full extent of their desires. Foiled by democracy, Southern oligarchs disavowed it and declared independence from the United States. 

 

Their rebel organization—the “Confederate States of America,” a cheap imitation of the United States government stripped of its language of equality, freedom, and justice—did not care much for states’ rights. States in the Confederacy forfeited both the right to secede from it and the right to limit or eliminate slavery. What really motivated the new CSA was not only obvious, but repeatedly declared. In their articles of secession, which explained their motivations for violent insurrection, rebel leaders in the South cited slavery. Georgia cited slavery. Mississippi cited slavery. South Carolina cited the “increasing hostility… to the institution of slavery.” Texas cited slavery. Virginia cited the “oppression of… Southern slaveholding.” Alexander Stephens, the second in command of the rebel cabal, declared in his Cornerstone Speech that they had launched the entire enterprise because the Founding Fathers had made a mistake in declaring that all people are made equal. “Our new government is founded upon exactly the opposite idea,” he said. People of African descent were supposed to be enslaved. 

 

Despite making a few cryptic comments about how he refused to fight his fellow Virginians, Lee would have understood exactly what the war was about and how it served wealthy white men like him. Lee was a slave-holding aristocrat with ties to George Washington. He was the face of Southern gentry, a kind of pseudo royalty in a land that had theoretically extinguished it. The triumph of the South would have meant the triumph not only of Lee, but everything he represented: that tiny, self-defined perfect portion at the top of a violently unequal pyramid. 

 

Yet even if Lee disavowed slavery and fought only for some vague notion of states’ rights, would that have made a difference? War is a political tool that serves a political purpose. If the purpose of the rebellion was to create a powerful, endless slave empire (it was), then do the opinions of its soldiers and commanders really matter? Each victory of Lee’s, each rebel bullet that felled a United States soldier, advanced the political cause of the CSA. Had Lee somehow defeated the United States Army, marched to the capital, killed the President, and won independence for the South, the result would have been the preservation of slavery in North America. There would have been no Thirteenth Amendment. Lincoln would not have overseen the emancipation of four million people, the largest single emancipation event in human history. Lee’s successes were the successes of the Slave South, personal feelings be damned. 

 

If you need more evidence of Lee’s personal feelings on enslavement, however, note that when his rebel forces marched into Pennsylvania, they kidnapped black people and sold them into bondage. Contemporaries referred to these kidnappings as “slave hunts.” 

 

3) Lee was not a military genius

 

Despite a mythology that casts Lee as the Napoleon of America, he blundered his way to surrender. To be fair to Lee, his early victories were impressive. Lee earned command of the largest rebel army in 1862 and quickly put his experience to work. His interventions at the end of the Peninsula Campaign and his aggressive flanking movements at the Battle of Second Manassas ensured that the United States Army could not achieve a quick victory over rebel forces. At Fredericksburg, Lee also demonstrated a keen understanding of how to establish a strong defensive position and foiled another US offensive. Lee’s shining moment came later at Chancellorsville, when he again maneuvered his smaller but more mobile force to flank and rout the US Army. Yet Lee’s broader strategy was deeply flawed, and it ended with his most infamous blunder. 

 

Lee should have recognized that the objective of his army was not to defeat the larger United States forces that he faced. Rather, he needed to simply prevent those armies from taking Richmond, the city that housed the rebel government, until the United States government lost support for the war and sued for peace. New military technology that greatly favored defenders would have bolstered this strategy. But Lee opted for a different strategy, taking his army and striking northward into areas that the United States government still controlled. 

 

It’s tempting to think that Lee’s strategy was sound and could have delivered a decisive blow, but it’s far more likely that he was starting to believe that his men truly were superior and that his army was essentially unstoppable, as many supporters in the South were openly speculating. Even the Battle of Antietam, an aggressive invasion that ended in a terrible rebel loss, did not dissuade Lee from this thinking. After Chancellorsville, Lee marched his army into Pennsylvania, where he ran into the United States Army at the town of Gettysburg. After a few days of fighting to a stalemate, Lee decided against withdrawing as he had done at Antietam. Instead, he doubled down on his aggressive strategy and ordered a direct assault over open terrain straight into the heart of the US Army’s lines. The result—several thousand casualties—was devastating. It was a crushing blow and a terrible military decision from which Lee and his men never fully recovered. The loss also bolstered support for the war effort and Lincoln in the North, almost guaranteeing that the United States would not stop short of total victory. 

 

4) Lee, not Grant, was responsible for the staggering losses of the Civil War

 

The Civil War dragged on even after Lee’s horrific loss at Gettysburg. Even after it was clear that the rebels were in trouble, with white women in the South rioting for bread, conscripted men deserting, and thousands of enslaved people self-emancipating, Lee and his men dug in and continued to fight. Only after going back on the defensive—that is, digging in on hills and building massive networks of trenches and fortifications—did Lee start to achieve lopsided results again. Civil War enthusiasts often point to the resulting carnage as evidence that Ulysses S. Grant, the new General of the entire United States Army, did not care about the terrible losses and should be criticized for how he threw wave after wave of men at entrenched rebel positions. In reality, however, the situation was completely of Lee’s making. 

 

As Grant doggedly pursued Lee’s forces, he did his best to flush Lee into an open field for a decisive battle, as at Antietam or Gettysburg. Lee refused to oblige, however, knowing that a crushing loss likely awaited him. Lee also could have abandoned the area around the rebel capital and allowed the United States to achieve a moral and political victory. Either option would have drastically reduced the loss of life on both sides and ended the war earlier. Lee chose neither. Rather, he maneuvered his forces so that they always held a secure, defensive position, daring Grant to sacrifice more men. When Grant did so and overran the rebel positions, Lee pulled back and repeated the process. The result was the most gruesome period of the war. It was not uncommon for dead bodies to be stacked upon each other after waves of attacks and counterattacks clashed at the same position. At the Wilderness, the forest caught fire, trapping wounded men from both sides in the inferno. Their comrades listened helplessly to the screams as the men in the forest burned alive.  

 

To his credit, when the war was truly lost—the rebel capital sacked (burned by retreating rebel soldiers), the infrastructure of the South in ruins, and Lee’s army chased a hundred miles to the west—Lee chose not to engage in guerrilla warfare and surrendered, though the decision was likely based on image more than on a concern for human life. He showed up to Grant’s camp, after all, dressed in a new uniform and riding a white horse. So ended the military career of Robert Lee, a man responsible for the deaths of more United States soldiers than any other single commander in history.  

 

***

 

So why, after all of this, do some Americans still celebrate Lee? Well, many white Southerners refused to accept the outcome of the Civil War. After years of terrorism, local political coups, wholesale massacres, and lynchings, white Southerners were able to retake power in the South. While they erected monuments to war criminals like Nathan Bedford Forrest to send a clear message to would-be civil rights activists, white Southerners also needed someone who represented the “greatness” of the Old South, someone of whom they could be proud. They turned to Robert Lee. 

 

But Lee was not great. In fact, he represented the very worst of the Old South, a man willing to betray his republic and slaughter his countrymen to preserve a violent, unfree society that elevated him and just a handful of others like him. He was the gentle face of a brutal system. And for all his acclaim, Lee was not a military genius. He was a flawed aristocrat who fell in love with the mythology of his own invincibility. 

 

After the war, Robert Lee lived out the remainder of his days. He was neither arrested nor hanged. But it is up to us how we remember him. Memory is often the trial that evil men never received. Perhaps we should take a page from the United States Army of the Civil War, which needed to decide what to do with the slave plantation it had seized from the Lee family. Ultimately, the Army decided to use Lee’s land as a cemetery, transforming it from a site of human enslavement into a final resting place for United States soldiers who died to make men free. You can visit that cemetery today. After all, who hasn’t heard of Arlington Cemetery?

The History of Personal Hygiene: An Interview with Peter Ward

 

You explain in the book that what first attracted you to the subject matter was an interview you had with your grandfather in the 1970s. Why did you feel now was the right time to write and publish this particular book? 

 

The Clean Body is a synthesis. It draws together a literature in several languages from the past 3 centuries on many aspects of the history of personal hygiene across the western world.  The synthesis is a form of writing that sums up the major findings of research on a subject and explores its primary structures or patterns, commenting on its controversies and suggesting new pathways for inquiry.  By creating benchmarks and signposts, syntheses fulfill an important function in historical writing, particularly helpful because the field has become increasingly specialized since the mid 20th century.  Syntheses thus provide an opportunity to sum up and distil the understandings we have arrived at over time.  

 

They also encourage academic historians to address wider, non-specialist audiences.  Though we tend to think of our scholarly colleagues as our most attentive readers, we also have a larger potential following among the book reading public, and it’s more easily reached through syntheses than through specialized works.  Historians are members of a shrinking minority within the university community that still has the capacity to communicate with the general reader, and I think we have a responsibility to do so.  A broadly shared sense of the past is an important feature of civil society.  In addition, through publishing for a more general audience we can help to link the reading public with our universities and colleges, where much historical research occurs.  

 

But writing a synthesis isn’t for everyone.  It’s an activity best suited to those who’ve laboured long in the vineyards of the past. Many years ago one of my colleagues, a fine economist whose late career interests drew him to economic history, remarked to me: “the trouble with history, Peter, is you have to know so much!”  And it’s true.  Historical knowledge and understanding are cumulative; it takes a long time to master a subject to the extent needed to write a mature synthesis. 

 

For these several reasons, this was the right time to write The Clean Body.  Publishing a synthesis was an attractive possibility because no one had previously attempted to do so on such a broad scale, because the subject emerged from some of my long-standing academic interests and because, approaching retirement as I was, turning a longstanding curiosity into a book seemed more important than ever.  

  

You used a variety of sources in your research, including ones in English, French, German and Italian. What were a few of the “Crown Jewels” or most valuable sources that you found? 

 

As a synthesis The Clean Body is necessarily based on the work of others, in this case many others.  In the historian’s jargon, it relies principally on secondary rather than primary sources. Of them I found some of the major work of the French historians Alain Corbin, Georges Vigarello and Jean-Pierre Goubert foundational.  First published during the mid 1980s, and each in its own way, Corbin’s The Foul and the Fragrant: Odor and the French Social Imagination, Vigarello’s Concepts of Cleanliness: Changing Attitudes in France since the Middle Ages and Goubert’s The Conquest of Water: The Advent of Health in the Industrial Age broke conceptual ground by exploring the deeper cultural meanings of popular ideas and everyday practices.

 

I also drew heavily on a rich literature written in Italian that’s not well known outside Italy.  In the English-speaking world, historical understandings of modern western Europe have long been shaped primarily by the British and French experiences, and in the French instance, often through works in translation, including those by Corbin, Vigarello and Goubert just noted.  For English speakers and readers in particular, most other European national histories remain to some extent in the shadows, in part because of limited second-language skills, in part because translated studies aren’t all that common.  Though I came late to Italian and I’d like to be more fluent than I am, I now have access to a large body of scholarship that hasn’t been as effectively integrated into western European historical narratives as I believe it should be, and I’ve attempted this task in The Clean Body.  I leave it to others to decide whether or not I’ve succeeded.

 

Why did you find it important to focus on both Europe and North America rather than one or the other? 

 

To put it simply, I think wholes are more important than their parts.  One of the leading features of the making of modern societies has been a gradual convergence of technologies, systems, beliefs and understandings.  Despite the many national differences in the rich world today, most nations also hold many fundamentals in common; in many respects, as well, what they share is more important than what distinguishes them from each other.  To choose an obvious example, urban transportation systems across the western world have enough similarities that it’s possible for us to find our way around cities that we’ve never visited before.  Thus, considering western Europe and North America together was a way for me to focus on the most important features of the personal hygiene transition.  

 

At the same time, the panoramic view also highlights important distinctions.  The pathways to contemporary body care practices varied from one country to the next, and within individual countries as well, just as general conditions of material life among them differed.  In turn, these differences influenced the direction, the shape and the timing of the new hygiene’s progress.  Though the ultimate destination was common to all, the journey varied in important ways from one community to another, and the contrasts help to highlight some of the major cultural changes experienced en route.

 

The Clean Body spans a vast time range. Why did you choose the time frame of four centuries? What would you say was the biggest change from then to now?

 

When planning this project I faced a choice between a survey that spanned the 2 millennia from the classical era to the present and one that emphasized the modern era, broadly defined. A lively and highly enjoyable popular history of bathing from Roman times to the recent past was published in 2007, and in my view it fulfills the first need very well.  At the same time, it is anecdotal, descriptive more than explanatory, and has much more to say about clean bodies than clean clothes, limitations that I’ve tried to overcome.  

 

I chose the tighter time frame – the past 4 centuries – because it was long enough to explore the slow diffusion of change in personal hygiene habits, and also because it let me emphasize contrasts between 19th and 20th century practices and those of earlier times. It also allowed me to address what I saw as a major omission in much of what has been published on the subject.  To me, most previous works seemed to lose energy with the arrival of the 20th century, as if the cleanliness revolution had run its course by the eve of World War I.  But this view misses an important part of the story because the transformation of body care habits has continued to our own time.  In particular, hygiene practices were democratized during the second half of the century, becoming a mass phenomenon. Meanwhile, beauty replaced hygiene as the main social imperative behind bathing and wearing clean clothes.

 

In the book you discuss how the idea of cleanliness as a social ideal rose with consumerism and the influence of advertising. Do you think Western cultures would have such a high standard for cleanliness today if we did not live in societies centred around consumerism? 

 

Probably not – though I know that historians should always be wary when answering questions about what hasn’t happened.  After all, our habits have been exposed to the language of persuasion for well over a century and it’s been highly influential. But we should also be careful not to confuse current practices with high standards of cleanliness.  Many of today’s body care practices have little to do with being clean, even though they’re conducted in the name of cleanliness. In addition, the meaning of ‘clean’ has changed substantially over the past 4 centuries, especially after World War II.  Since then the concept of cleanliness has become deeply influenced, even absorbed, by the beauty care industry and the resulting conflation of beauty with cleanliness clouds the issue.

 

Were there any differences between your approach to The Clean Body and that of your other works, such as White Canada Forever: Popular Attitudes and Public Policy toward Orientals in British Columbia; Courtship, Love and Marriage in Nineteenth-Century English Canada; and Birth Weight and Economic Growth: Women’s Living Standards in the Industrializing West? 

 

My major books have differed substantially from one another, both conceptually and methodologically.  The first, White Canada Forever, was a revision and extension of my doctoral thesis. History dissertations require the author to demonstrate a mastery of archival and secondary research on a novel topic and skill in writing a sustained work of historical analysis. In this respect my thesis, and the book that it led to, were no different from most that follow this trajectory.  I was interested in exploring the Canadian response to Asian immigration as an expression of popular racial attitudes and as a set of public policies that formalized racial discrimination.  The book was also an example of historical writing from a national perspective even though it focused primarily on the westernmost province in the country.  I relied primarily on national and provincial government records and newspaper sources for this project.

 

In contrast, Courtship, Love and Marriage was an attempt to examine the history of family formation in 19th century Canada.  What interested me most was the interplay between the various structural features of the marriage market (religious beliefs, legal requirements, demographic and economic factors, courting customs) and the growth of romantic intimacy within couples as they moved toward marriage.  In this case most of my research was done in family papers, especially diaries and letters.

 

In my third book, Birth Weight and Economic Growth, I went for a walk with the econometric historians, combining quantitative techniques with the tools and sources of the social historian.  The book rests on large samples of clinical data drawn from 19th and 20th century maternity hospital records in 5 European and North American cities.  The approach employs the basic statistical methods of the social sciences and is rigorously comparative.  My goal was to use a common biological marker of maternal and infant wellbeing to assess the impact of industrialization on women, whose welfare in the past had long been largely ignored.

 

What surprised you the most in your research and writing process? 

 

Looking back over the course of the project, I’m struck by the length of time over which the personal hygiene revolution unfolded.  It took the better part of 2 centuries for the transformation to reach its conclusion. There were no great discoveries or sudden turning points in this process, just a slow evolution of understandings, practices, technologies and living standards.  Initially adopted by small groups of privileged people, these customs gradually touched the lives of greater and greater numbers as beliefs about body care evolved and changing circumstances made it easier to be clean. The central theme of this story has much less to do with innovation than with diffusion, the spread of habits across social as much as national boundaries and, in the end, their central place in fashioning the modern man and woman.

 

What do you most want readers to take away from The Clean Body? 

 

Two things, one historical, one personal.  First the historical.  The Clean Body is a history of habits. It’s a history of the mundane, the everyday, a history without great events, great ideas and great actors.  Its chief importance lies in the fact that it deals with some of life’s most commonplace activities.  But commonplace doesn’t mean trivial.  Time-use studies from the late 20th century suggest that, everywhere in the western world, most adults now devote about an hour every day to grooming themselves.  In these same countries households spend 5% or more of their annual incomes on personal hygiene.  And, at a national level, keeping clean accounts for roughly the same proportion of GDP.  So the history of the unremarkable and the ordinary has an importance of its own, one that can easily surpass the history of greatness in any of its many forms.

 

As to the personal, though I suspect the idea is rather passé, I’ve always considered historical writing a form of literature and good writing an essential part of the historian’s task.  By good I don’t mean florid or elaborate or preciously obscure.  I value clear, direct and lively writing because it communicates effectively.  And at its best it can also reveal a play of language that brings the reader an aesthetic pleasure quite apart from the formal meaning of the text.  I’d be delighted if The Clean Body pleased some readers in this way as well.

The Used Bookstore Find that Inspired A History of Dress and Fashion

 

What books are you reading now?

 

I have such a mix going on right now. I have a book to review: Ken Kersch’s Conservatives and the Constitution: Imagining Constitutional Restoration in the Heyday of American Liberalism. He’s arguing that conservatives of various stripes created a coherent and popular tale of the meaning and history of the constitution in the years after World War II, when liberalism was dominant. He points out that the intellectuals among these conservatives made a point of writing for a popular audience and still do, and since most academic scholars make no such effort to this day, it’s a lesson in public influence for me as well. It’s a history book by a political scientist, and it’s always interesting to me to see how people reach across fields (or don’t). I was impressed by the introduction and by how complicated he can make a sentence and still keep it perfectly clear. 

 

I also am reading April is the Cruelest Month by Louise Penny, one of a mystery series set in a small town in Quebec. Since I speak French and am married to a hockey fan, I am enjoying those elements, as well as the author’s love of good food. Although I am a bit worried about all the people who must be murdered in one small town for the series to continue. You’d think they’d start to get nervous and move out just to be on the safe side. Then I’m working through a stack of vintage sewing pattern books. I’m a new knitter and I’m intrigued by the techniques, and their history. While writing The Lost Art of Dress I realized we had lost so many sewing and drafting skills, and now I see the same thing happened with knitting. 

 

One pattern book that I date to the 1940s has entire dresses and suits made from sock yarn, which is tiny, and you realize the immense skill and patience they had to make such things. Today, we tend to use much larger yarns for sweaters--3 or 4 times as thick--and it’s very rare that anyone tries to knit a dress, much less one out of tiny yarn. As you can imagine, the results are often far less sophisticated. Also, the vintage patterns clearly assumed a certain amount of knitting know-how from the start, which is also true of sewing patterns from the period. Women simply knew more about creating with their hands, and their hands knew more. 

 

Why did you choose history as your career?

 

I was in an interdisciplinary program as an undergrad. Part history, part literature, part philosophy. I really enjoy literature too, but I couldn’t see studying it as a scholar. I remember reading the title of some dissertation in English Literature that studied punctuation in a play of Shakespeare. I thought, hmmmm… not really my speed, probably not the field for me. I wrote a senior thesis on Pullman town, which I got to visit. George Pullman meant so well, but he ended up sparking the biggest railroad strike anyone had ever seen. I was already intrigued by what values and beliefs people act on. I’m still working on that: how people invoke the power of the state in the name of their values. 

 

What qualities do you need to be a historian?

 

Curiosity. Most of my research projects started with reading some source, scratching my head and thinking, what the heck is going on here? That, and the ability to be perfectly happy spending some time alone following your nose. Although that can become a bit of a problem, because you need to learn when to stop following your nose and tell yourself that you know enough. Otherwise, you can’t finish writing a book or an article. Since The Lost Art of Dress was written for a trade press and for non-academics, I realized that we ought to write more as journalists do: to a large audience, without the 3 qualifying adverbs for every verb that so many scholars insist upon. Of course, teaching takes a different set of skills than research, but sometimes it captures the same spirit as writing for a large audience—you have to figure out how to explain something like “due process” to people who have never heard the phrase. And you have to do it quickly and clearly. 

 

What is your most memorable or rewarding teaching experience?

 

One of the most memorable experiences was taking a first-year class to the Abraham Lincoln Presidential Library and Museum in Springfield. It’s a fascinating place that makes the most of technology. Since we only hear praise for Lincoln these days, some of the students were shocked to walk into one passageway that had actors reading some of the nasty things people wrote and said about Lincoln and his wife, some of it posted up on the walls too. I remember a young woman turned to me in surprise and said, “They HATED him.” The look on her face. Since the class focused on the difference between popular history and academic history, it was a great teaching moment. The legend of Lincoln makes him beloved, but it is so different from the reality that he lived. 

 

What are your hopes for history as a discipline?

 

That we keep making the effort to write and speak on our relevance to today’s issues. It’s not just our research, although one of the reasons I am writing a book on the Cincinnati Bible War of 1869 is to remind people that the religious-minded as much as the secular have a lot to lose if they allow the state to prefer one religion over another. It’s the teaching too. I remember walking into a classroom to teach the Era of the US Civil War and Reconstruction when the whole Confederate statue controversy was on the front page of the papers. 

 

I can bring in any local or national paper these days and one of the articles will touch on issues we wrestle with in my class on Crime, Heredity and Insanity in American History. I ended that class last term by reminding everyone that they were all citizens of some country, called on to make decisions about the policies of their criminal justice system, that none of those decisions would be easy, but that at least from this course they had an idea of what the costs and ramifications would be. I don’t live in an ivory tower and neither do they.  

 

Do you own any rare history or collectible books? Do you collect artifacts related to history?

 

The Lost Art of Dress started with a find in a used book store: an old dress and sewing textbook from the 1950s which taught both the art of dress and the craft of sewing. I started looking for the “further readings” listed at the back of each chapter. And once I started looking for them, I found old sewing magazines too, and they repeated the same lessons as the textbooks. It turned out there are so few large library collections of sewing publications that I ended up collecting them myself. Since I am also a dressmaker, I find them endlessly inspiring as well. 

 

The earliest ones date from the 1880s, with the floor-length skirts and the tight bodices, but my favorites are sewing magazines from the mid- to late-1930s. Yes, the world was in the midst of a horrible depression and careening towards global war, and all I can think is, “This dress looks fabulous!” Partly, it’s the relief of getting away from the 1920s look, which put belts at the waistline and repressed all womanly curves; partly, it’s the sheer intelligence of much of the design. I think the designers knew women were going to have to have very small wardrobes for a while, so they made each dress into something worth wearing over and over again. 

 

I also have collected hundreds of vintage patterns in order to study both design and construction, and I have made a lot of them. I do steer away from collecting actual garments, although some of my readers have sent me some; a real costume collection would need proper humidity and temperature controls, and there is no way that I am up to that.  

 

What have you found most rewarding and most frustrating about your career? 

 

So many sources, too little time! 

 

How has the study of history changed in the course of your career?

 

Speaking of sources. The available databases of 19th-century sources are so amazing. Work that took forever on microfilm in the basement of some library can now be done quickly anywhere you have an internet connection. Books that were rare are now right there online. It will make work both harder and easier; searching is so much easier, and there are lots of new quantitative research possibilities, but we may forget the limitations too. I have to keep reminding my students that these things are amazing, but that only some of the evidence from the past is online. 

 

Of course, then there is the problem of students Googling everything because they think that is THE Answer. I tell them: Think of Googling as walking into a hoarder’s garage: you may stumble across the whole run of Encyclopedia Britannica, or you may find a dead cat. 

 

What is your favorite history-related saying? Have you come up with your own?

 

“The 11th Commandment: Know Thy Place” which I made up in order to get across to students how different it was to be born into an early modern society where who you could presume to be and what you would presume to do were usually limited by who your daddy was or what color or sex you were. Historians call it an ascriptive society, but I think students grasp the commandment better. It’s always difficult to get across the assumptions people had in the past; why certain ideas would rarely occur to anyone or appear absurd when they seem perfectly natural to us. 

 

What are you doing next?

 

I am continuing my work on two tracks: one law, one fashion. I need to finish my book on the Cincinnati Bible War which happened in 1869 when the city’s school board ended daily Bible reading in the public schools. All hell broke loose in the press, the Catholics got blamed, and conservative Protestants sued to keep the Bible in the schools. Scholars say that when the Ohio Supreme Court ruled for the school board, secularism triumphed. But that’s wrong. The ruling invoked Christ and “Christian republicanism” as well as James Madison. The idea was that religious belief cannot be compelled. It reflected the arguments of the anti-Bible attorney, Stanley Matthews, who had converted to Calvinism after four of his children died in a terrible scarlet fever epidemic. 

 

I’m also working up a plan to write a history of American fashion from the colonial era onward, fashion as a reflection of its times.  

https://historynewsnetwork.org/article/173627
Which Would You Prefer―Nuclear War or Climate Catastrophe?

 

To:      The people of the world

From:  The Joint Public Relations Department of the Great Powers

 

The world owes an enormous debt of gratitude to Donald Trump, Vladimir Putin, Xi Jinping, Narendra Modi, Boris Johnson, and other heroic rulers of our glorious nations. Not only are they hard at work making their respective countries great again, but they are providing you, the people of the world, with a choice between two opportunities for mass death and destruction.

 

Throughout the broad sweep of history, leaders of competing territories and eventually nations labored at fostering human annihilation, but, given the rudimentary state of their technology, were only partially successful. Yes, they did manage to slaughter vast numbers of people through repeated massacres and constant wars. The Thirty Years War of 1618-1648, for example, resulted in more than 8 million casualties, a substantial portion of Europe’s population.  And, of course, World Wars I and II, supplemented by a hearty dose of genocide along the way, did a remarkably good job of ravaging populations, crippling tens of millions of survivors, and blasting much of world civilization to rubble.  Even so, despite the best efforts of national rulers and the never-ending glory they derived from these events, large numbers of people somehow survived.

 

Therefore, in August 1945, the rulers of the great powers took a great leap forward with their development―and immediate use―of a new, advanced implement for mass destruction: nuclear weapons. Harry Truman, Winston Churchill, and Joseph Stalin were all eager to employ atomic bombs against the people of Japan. Upon receiving the news that the U.S. atomic bombing of Hiroshima had successfully obliterated the population of that city, Truman rejoiced and called the action “the greatest thing in history.”

 

Efforts to enhance national grandeur followed during subsequent decades, as the rulers of the great powers (and some pathetic imitators) engaged in an enormous nuclear arms race.  Determined to achieve military supremacy, they spared no expense, employed Nazi scientists and slave labor, and set off vast nuclear explosions on the lands of colonized people and in their own countries.  By the 1980s, about 70,000 nuclear weapons were under their command―more than enough to destroy the world many times over.  Heartened by their national strength, our rulers threw down the gauntlet to their enemies and predicted that their nations would emerge victorious in a nuclear war.

 

But, alas, the public, failing to appreciate these valiant efforts, grew restive―indeed, disturbingly unpatriotic.  Accordingly, they began to sabotage these advances by demanding that their governments step back from the brink of nuclear war, forgo nuclear buildups, and adopt nuclear arms control and disarmament treaties. The popular clamor became so great that even Ronald Reagan―a longtime supporter of nuclear supremacy and “winnable” nuclear wars―crumpled.  Championing nuclear disarmament, he began declaring that “a nuclear war cannot be won and must never be fought.”  National glory had been sacrificed on the altar of a cowardly quest for human survival.

 

Fortunately, those days are long past.  In the United States, President Trump is determined to restore America’s greatness by scrapping nuclear arms control and disarmament agreements, spending $1.7 trillion on refurbishing the entire U.S. nuclear weapons complex, and threatening to eradicate other nations through nuclear war. Meanwhile, the president’s good friends in Moscow, Beijing, London, Paris, New Delhi, and elsewhere are busy spurring on their own national nuclear weapons buildups. As they rightly insist:  The only way to stop a bad nation with the Bomb is with a good nation with the Bomb.

 

Nor is that all! Recently, our rulers have opened up a second opportunity for planetary destruction: climate catastrophe. Some scientists, never satisfied with leaving the running of public affairs to their wise rulers, have claimed that, thanks to the burning of fossil fuels, rising temperatures are melting the polar icecaps, heightening sea levels, and causing massive hurricanes and floods, desertification, agricultural collapse, and enormous wildfires. As a result, they say, human and other life forms are on their way to extinction.  

 

These scientists―and the deluded people who give them any credence―are much like the critics of nuclear weapons:  skeptics, nay-sayers, and traitorously indifferent to national grandeur.  By contrast, our rulers understand that any curbing of the use of fossil fuels—or, for that matter, any cutbacks in the sale of the products that make our countries great―would interfere with corporate profits, undermine business growth and expansion, and represent a retreat from the national glory that is their due.  Consequently, even if by some remote chance we are entering a period of climate disruption, our rulers will refuse to give way before these unpatriotic attacks.  As courageous leaders, they will never retreat before the prospect of your mass death and destruction.

 

We are sure that you, as loyal citizens, are as enthusiastic as we are about this staunch defense of national glory.  So, if you notice anyone challenging this approach, please notify your local Homeland Security office.  Meanwhile, rest assured, our governments will also be closely monitoring these malcontents and subversives!

 

Naturally, your rulers would love to have your feedback.  Therefore, we are submitting to you this question:  Which would you prefer―destruction by nuclear war or destruction by climate catastrophe?  Nuclear war will end your existence fairly quickly through blast or fire, although your death would be slower and more agonizing if you survived long enough to die of radiation sickness or starvation.  On the other hand, climate catastrophe has appealing variety, for you could die by fire, water, or hunger.  Or you might simply roast slowly thanks to unbearable temperatures.

 

We’d appreciate receiving your opinion on this matter.  After all, providing you with this kind of choice is a vital part of making our nations great again!

https://historynewsnetwork.org/article/173623
The 7 Wonders of the World – Digitally Reconstructed  

The Wonders of the Ancient World remain cornerstones of human culture, and the concept is regularly referred to in casual conversation as well as in the academic sphere. But how many people can actually name all seven wonders – and then, how many can imagine how they might actually have looked before the turn of the common era?

 

For a start, six of the ancient wonders have vanished – and that’s if they existed in the first place. The original list of wonders was compiled as a kind of travel guide by Antipater of Sidon, a 2nd-century-BCE Greek poet, and it seems unlikely that he himself would have had the means to visit all these landmarks (and, presumably, others that didn’t make the shortlist) in his lifetime. Some of his choices – such as the Hanging Gardens of Babylon – were probably based on the writings of others, which may not have been reliable or correctly interpreted.

 

A team of researchers and 3D artists over at Budget Direct have worked with the existing sources we have for the missing wonders and created digital reconstructions of how they would have looked (they’ve also touched up the sole surviving wonder, the Great Pyramid of Giza, to restore it to its former glory). They sourced written details about the materials, measurements, and architectural features of the landmarks – as well as their cultural backgrounds. Drawings and secondary visual sources were also passed along to the artists, who “looked closely at locations, dimensions, plans, statues, ornamentations, patterns, textures, colors, materials and finish” while they worked – sometimes making judgement calls between conflicting sources. 

 

The results are a leap through time to a different age of bucket-list tourism. 

 

1. Colossus of Rhodes

The Colossus of Rhodes, Reconstructed, courtesy of Budget Direct Travel Insurance

 

 

First up is the Colossus of Rhodes, an immense 108ft statue raised to mark the city’s victory over the Cypriot army – and also to warn other would-be invaders of the home army’s might. They even melted down the iron and bronze weaponry left behind by the vanquished Cypriots to use for the shell, although the inside was weighed down with stones.

 

The marble pedestal beneath the Colossus (which was the likeness of the people’s patron god, Helios) pushed the total height up to 157ft, possibly positioned so that ships could pass between his legs. Well, nobody said it was tasteful. Anyway, the monument stood for just 54 years before being toppled not by an invading navy, but by a massive earthquake. It remained a powerful tourist attraction even as it lay in ruins, until Muslim caliph Muawiyah I took the island in 693 CE and sold the metal Colossus for scrap.

 

 

2. Great Pyramid of Giza

The Great Pyramid of Giza, Reconstructed, courtesy of Budget Direct Travel Insurance

The Great Pyramid of Giza was constructed around 2589-2566 BCE, and remained the tallest human-made structure on Earth until the 14th century CE. Nobody knows exactly how the Egyptians built Pharaoh Khufu’s 481ft pyramid, but it seems likely that skilled workers travelled to Giza from across the land to live in well-kept camps onsite. Strong arms and minds using now-forgotten knowledge were well fueled with good food and highly motivated to create structures of national pride which established Ancient Egypt as the powerful culture we now celebrate.

 

Today, the unrenovated pyramid is recognizable by its stone step structure. But the creators of this digital reconstruction have replaced the flat, polished white limestone blocks that originally formed the façade of the pyramid – many of these stones were loosened by an earthquake in 1303, giving it the familiar jagged look.

 

 

3. Hanging Gardens of Babylon

Hanging Gardens of Babylon, Reconstructed, courtesy of Budget Direct Travel Insurance

The Hanging Gardens of Babylon may never have existed and, if they did, they may have existed 300 miles north of Babylon (and thus 250 miles north of modern Baghdad) in Nineveh. Dr Stephanie Dalley of Oxford University argues that the claims of Nebuchadnezzar, king of Babylonia, to have created the gardens could have been an attempt to beef up his legacy. "After his death legends inflated Nebuchadnezzar's achievements, giving him an undeserved reputation as a world conqueror," says Dalley.

 

Indeed, Babylonian texts make no mention of the gardens. But other writers describe them as a paradise of plants, sculptures, and water features, apparently served by impressive machinery drawing water to a height of 65ft from the river below.

 

 

4. Lighthouse of Alexandria

Lighthouse of Alexandria, Reconstructed, courtesy of Budget Direct Travel Insurance

The prototypical lighthouse was built by Sostratus of Cnidus around 280 BCE. It was probably the second-highest structure in the world at the time – after the Giza pyramids, in collective first place. It consisted of a square base, on top of which was an octagonal section, with a cylindrical tower at the top (some sources suggest it was crowned by a statue of Alexander the Great). 

 

The lighthouse fell into ruin between the 12th and 15th centuries CE, at which point the Ottoman prince Sultan Cem built a fort on the foundations. An archeological dive in 1994 discovered masonry blocks as well as statues of Ptolemy II and his wife Arsinoe off the coast of Pharos Island. Experts believe they tumbled from the lighthouse complex during a 14th-century earthquake.

 

 

5. Mausoleum at Halicarnassus

Mausoleum at Halicarnassus, Reconstructed, courtesy of Budget Direct Travel Insurance

Mausolus, the king who would give the concept of the mausoleum his name, was the ruler of Caria in ancient Asia Minor. He had many great buildings erected during his reign, and his final resting place was an exemplar of numerous architectural trends. 

 

Borrowing from Greek, Near Eastern, and Egyptian design principles, the structure was composed of Anatolian and Pentelic marble and featured an Ionic colonnade with a stepped pyramidal roof, topped with a statue of Mausolus as Hercules riding a chariot.

 

 

6. Statue of Zeus at Olympia

Statue of Zeus at Olympia, Reconstructed, courtesy of Budget Direct Travel Insurance

The chryselephantine (ivory and gold) statue of Zeus was created by the sculptor Phidias around 430 BCE. You can see the scepter in his left hand has an eagle perched on top, while the statue to his right represents Nike (the goddess of victory). 

 

The throne was made of cedarwood and ebony, and ornamented with jewels, ivory, and gold. It took eight years to make, but was probably consumed by fire within just a few decades.

 

 

7. Temple of Artemis at Ephesus

Temple of Artemis at Ephesus, Reconstructed, courtesy of Budget Direct Travel Insurance

Given everything it survived, it’s a wonder that the Temple of Artemis is not still standing today. This tribute to the Greek goddess of chastity recovered from multiple disasters: first burned down by Herostratus – who torched it to make himself famous – it was resurrected only to be wrecked by passing Goths. Rebuilt again, the temple was then destroyed once and for all by a Christian mob. The remains can still be seen today.

https://historynewsnetwork.org/article/173590
The History Briefing on the 30th Anniversary of the Fall of the Berlin Wall

Saturday, November 9th was the 30th anniversary of one of the most symbolic moments in modern history. The collapse of the Berlin Wall was supposed to signal the end of the Cold War and usher in an era of lasting peace and the spread of democracy. On this historic date, many of us look back at photos to celebrate the event as a victory for the US and the ideals it stands for. However, the fall of the wall had lasting effects that still inform German politics today. The wall had both historical and cultural significance. Here’s what historians have had to say on this significant date.

 

James Carroll, an author and columnist for TomDispatch, wrote a compelling piece that traces the nationalist sentiments and fear of nuclear war that led to the construction of the wall in 1961. He contended that the anniversary of the fall of the Berlin Wall should serve as a reminder of how the physical symbol of the Iron Curtain came to be. The wall was built to divide the East and the West as they neared an ultimatum; war was seen as inevitable. Carroll points out that the narrative of American wars defined as campaigns of “good versus evil” was applied to the wall and continues to be applied to American conflicts today. His argument adds historical context and an important lesson to the news of the anniversary: we are entering a new Cold War and we would do well to remember the lessons of the Berlin Wall. War can’t always be the answer, and the actions of individuals who want peace can eventually force leaders to get out of the way and let them have it.

 

Elena Souris, a research associate for the political reform program at New America, wrote an intriguing narrative in the Made by History section of the Washington Post connecting the fall of the wall to the increasingly polarized state of German politics today. She argued that the reunification process was largely western-led, which has resulted in disproportionate representation in government. When a quick reunification process was decided on, East Germans were left without the infrastructure and business education to keep up with the development of West Germany. Souris highlights some surprising statistics that show this historical underrepresentation: in the elections of 1990, only 19 percent of the new parliament came from the eastern half of the country, even though easterners made up 25 percent of the population. She also connects the rise of far-right political parties in the last few years to the effects of reunification. Germans from the east typically make less than those from the west, and in 2015, eastern states had poverty levels higher than the national average. As a result, Souris contends that the recent success of the far-right, anti-immigrant Alternative for Germany party in the east can partly be attributed to easterners’ dissatisfaction with their lack of representation over the past three decades. Her article adds to the news story of the anniversary by showing that its lingering results are not all feel-good stories for the people who have actually lived through it.

 

Dr. Bryn Upton, a professor of history at McDaniel College, presented an interesting case for how the wall should be a lesson for governments today – a lesson on how border walls don’t work. He attests to the continued desire to migrate from East Germany to West Germany despite the wall, pointing out how over 5,000 people still attempted to escape past the wall despite the obvious danger. Upton details the toll of the wall, both in monetary cost and in the debasement of human life and the separation of families. The collapse of the wall was met with messages of support from US presidents. Upton argues, then, that the physical barriers we’ve seen built up along the southern border of the US starkly contrast with the campaign to bring down the Berlin Wall in the first place. It adds to the news of the anniversary by showing how anniversaries of historical events, like this one, should directly inform and influence the decisions that we make today. 

 

Albinko Hasic, a PhD student at Syracuse University, wrote an article detailing the tension-filled process of opening the border and destroying the wall between East and West Germany 30 years ago. He goes through the events as they unfolded, from the press conference held by East German official Gunter Schabowski that announced that permanent relocations from East Berlin to West Berlin were possible, to the surge of people that forced guards at the border to make a decision: fire on the crowd or let them through. Hasic explains what these moments meant to people at the time versus the lasting effects they have had on the world today. He brings context to the fact that while an anniversary such as this one is typically remembered as the symbolic end of the Cold War, it realistically means so much more. 

https://historynewsnetwork.org/article/173618
The History Briefing on Whistleblowers: Historical Perspective on the Ukraine Scandal

Impeachment has dominated the news cycle in recent weeks. As Republicans continue to defend President Trump and other administration officials, many conservatives have gone on the offensive and attacked the whistleblower who started it all. 

Recently, Senator Rand Paul stoked controversy when he demanded that the whistleblower come forward under threat of public exposure of his or her identity.

But this isn’t the first time whistleblowers have made waves in American politics. Since the whistleblower’s revelation came to light, historians have discussed the parallels between the Ukraine whistleblower and informants from past scandals.

In September, President Trump publicly condemned the DOJ whistleblower as “close to a spy” and insinuated that he or she should be executed. The article “Whistleblower or spy? What the history of Cold War espionage can teach the US” contrasts this modern controversy with the history of espionage in Cold War America.

According to Cold War historian Marc Favreau, Trump’s accusation of treason is of particular note due to its inherent irony. Instead of “serving as a conduit of information in conjunction with a foreign regime,” the whistleblower merely followed established executive branch protocols in order to alert government officials that the president himself was doing just that. The article argues that this methodology precludes the use of the term “spy,” as an operative engaging in espionage would never risk exposure in this way. 

Favreau continues with a critique of Trump’s allusion to execution as a potential punishment, citing the controversy surrounding America’s only instance of spy execution—that of Soviet operatives Ethel and Julius Rosenberg in the 1950s. The death sentence served to the Rosenbergs was unprecedented in American history, and the decision remains controversial today. Although the Soviets were quick to execute citizens accused of espionage after holding them captive in the KGB dungeon in Moscow, execution was rarely an option for spies discovered in America.  

Favreau concludes that although Trump and his supporters clearly have an agenda in condemning the whistleblower in such a dramatic way, the president and his administration “will find no support for their cause in the history of espionage.”

Another piece, published in The American Prospect in October, presents a broad history of whistleblowing in America. Brittany Gibson writes in her piece “All the President’s Whistleblowers” that the history of government whistleblowing is “fraught with charges of espionage, inadequate protections, and real hardships for those who speak out.” She cites scholar Allison Stanger to highlight the Obama administration’s abuse of the Espionage Act of 1917 as a means of harshly punishing government whistleblowers. 

Gibson uses individuals such as John Kiriakou, who blew the whistle on waterboarding practices perpetrated by the Central Intelligence Agency, and Edward Snowden, who highlighted abuses of power in the National Security Agency, as examples of this. Although both were charged under the Espionage Act, she explains that they “also changed the national conversation…and in some cases changed laws on these respective government actions.” 

To Gibson, these high-profile cases are a far cry from the Ukraine scandal. Unlike Snowden and Kiriakou, who braved government retaliation in order to make their disclosures through non-protected channels, Trump’s whistleblower stuck to protocol established by the Whistleblower Protection Act of 1989 (WPA) and the Whistleblower Protection Enhancement Act of 2012 (WPEA). The individual in question was also careful not to release any classified information in his statement, which was specifically meant for public consumption. Gibson concludes that while the whistleblower should not be subject to retaliation through the Espionage Act, history suggests there’s no guarantee that they won’t be. 

A third article recently published in the Washington Examiner points out a likely parallel between Trump’s whistleblower and FBI Associate Director Mark Felt (otherwise known as “Deep Throat” of Watergate fame). In the piece, whistleblower attorney and Watergate historian Mark Zaid argues that the DOJ whistleblower will likely follow in Felt’s footsteps and remain anonymous for decades to come. 

“Our ideal ending,” Zaid told the Examiner, “is that the identity of the whistleblower is never known and the individual continues on with their personal and professional life.” Later on in the article, Zaid expounds upon Deep Throat’s commitment to privacy: “Basically his identity remained ‘secret’ until he decided otherwise. That was his right.”

It is unclear whether, like Deep Throat, the Trump whistleblower will be able to maintain his privacy. If the past is any indication, however, his disclosures have the potential to alter the course of American history.  

https://historynewsnetwork.org/article/173543
The History Briefing on California Wildfires and Climate Change: How Historians Contextualized the News

Last month, as parts of California were engulfed in wildfires, the Ronald Reagan Presidential Library narrowly escaped damage. Many historians tweeted their gratitude that the thousands of historical documents housed in the Library were safe. The events that so closely tied climate change and history provide a valuable opportunity to revisit how environmental history can help us understand the ongoing climate crisis.

Historians help us understand wildfires through several different lenses. First, they point out previous instances where wildfire policy has been insufficient. Work done by the National History Center explains that American fire policy is rooted in a series of fires in 1910, dubbed the “Big Blowup” or the Great Fire of 1910. Over the course of two days, the fire raged across three million acres of virgin timberland in northern Idaho and western Montana. The fire solidified support, funding, and a mission for the young Forest Service, but it also set fire management on a misdirected course.

A policy of strict fire suppression was established and lasted well into the 1960s. At first, it worked. But, over time, the absence of fire in these woodland areas led to drier forests, which eventually gave way to more catastrophic fires. Add in a changing climate—with longer, hotter, drier summers—and it’s clear that this strategy cannot be sustained. 

Stephen J. Pyne is a former firefighter, self-described “pyromantic” and expert historian on wildland fires. In his recent article, Winter Isn’t Coming. Prepare for the Pyrocene, Pyne writes that the causes of the fires are not only about the changing climate, or acidifying oceans, but “it’s about how we live on the land. Land use is the other half of the modern dialectic of fire on Earth, and when a people shift to fossil-fuels, they alter the way they inhabit landscapes.” We aren’t suffering from a surplus of bad fires, but actually a famine of good fires. According to Pyne, our ignorance of fire is leading us into a Fire Age, akin to the Ice Ages of the Pleistocene. To stop this, we must re-recognize our symbiosis with fire and find a way to adapt the way we live on land. 

Pyne’s plan for managing wildfires is in stark contrast to President Donald Trump’s. In 2018, President Trump raised many eyebrows and angered many with his naive tweets about the wildfires that devastated California that year. “There is no reason for these massive, deadly and costly forest fires in California except that forest management is so poor. Billions of dollars are given each year, with so many lives lost, all because of gross mismanagement of the forests. Remedy now, or no more Fed payments!” 

Pyne criticized Trump’s remarks, writing that Trump’s claim that the fires result because California hasn’t removed enough trees is wrong and misguided. Further, Pyne wrote that logging is more often a cause of fires than a cure. Ironically, the current California fires are not even located in forests. 

It’s easy to think that if we simply cut down all the trees, a fire would have nothing to burn, but it’s not that simple. Pyne writes that “while all fuel is biomass, not all biomass is available as fuel.” When we cut down trees, we leave the little things: branches, leaves, etc. These burn quickly. Think of it like you are having a campfire: if you wish a fire to flash and roar, put in pine needles, dry grass, and kindling. Add a freshly cut green log and the fire will go out.

History can tell us a lot about how previous societies have dealt with environmental upheaval. Princeton historians John Haldon and Lee Mordechai, featured in the article “Historians to climate researchers: Let’s talk,” tell us that in examining climate change-related problems, like the California wildfires, we have to look at the convergence of science and history. Some societies have been wrecked by their environment (like Pompeii), yet others have learned how to adapt and even flourish (like societies in the flood plains of the Nile River delta). 

“If I would have to summarize what history has to contribute: it adds nuance to our interpretation of past events,” said Mordechai. In researching the collapse of Mayan society, scientific research told them that the cause was severe drought, but with the added nuance of history, Haldon and Mordechai realized that that is not the case. Historical research told them that warfare and socioeconomic policies that sought to further divide the rich from the poor were more to blame for the civilization’s demise. “Disasters serve, in a way, to emphasize differences in our human society. [After a hazardous event], rich people suffer less.” Natural disasters exacerbate tensions in society but don’t necessarily lead to its demise.

Historical documents give us a backdrop to look at how societies were afflicted by disaster. They unlock the society’s cultural logic. It’s important that we have this background so that we can learn and find new ways for us to withstand environmental upheaval. 

In an effort to bring historians, archaeologists, and paleoclimatologists together, Haldon launched the Climate Change and History Research Institute at Princeton, which funds field research, public lectures, and workshops, all aimed at examining climate change from every angle. The work of the CCHRI goes toward interdisciplinary projects that investigate the impact of climatic changes on societies across the last two millennia, and what we can learn from societal change. 

History cannot be left out of any conversation on fighting climate change. The stories of different societies and their battles give us important details that inform our decisions. If we ignore the ways previous societies adjusted to climate change, we will never be able to solve the crisis. 

https://historynewsnetwork.org/article/173435
Cracked Foundations: The Case for Reparations

On a hot summer day in 1619, a Dutch slave ship carrying “twenty and odd negroes” made landfall on the shores of Virginia. That fateful day marked the origin of what would later evolve into America’s infamous “peculiar institution”—plantation slavery. Four centuries later, the social, cultural, and economic implications of America’s slave regime remain both widespread and profoundly harmful.

Congress and the Supreme Court have previously attempted to counteract the disadvantages faced by African Americans since the abolition of slavery in 1865. Regardless, a multitude of socioeconomic obstacles from income inequality to de facto segregation still persist. Although there is currently no consensus on how best to address this issue, voters and lawmakers alike have proposed the creation of a formal reparations program as a potential solution.

As the 2020 Democratic primary approaches, the prospect of reparations for descendants of African slaves has become a hot button issue. Back in April, New Jersey Senator and presidential hopeful Cory Booker introduced a bill to the Senate that would establish a commission to explore slavery reparations. Several other Democratic candidates, including Kamala Harris (D-CA) and Elizabeth Warren (D-MA), co-sponsored the bill in a show of unity on the issue. 

Republican lawmakers, meanwhile, generally disapprove of reparations. Majority Leader Mitch McConnell stated he doesn’t support reparations for “something that happened 150 years ago.” Others have raised logistical concerns, arguing that it would be “too difficult” to identify who specifically would qualify for federal compensation.

While 63% of Americans agree that the implications of slavery still affect black people in America today, only 29% of those polled in a September survey support formal reparations legislation. Support for the idea is even lower among Republicans, with a staggering 92% opposed as of July.

As the issue will likely grow in prominence as the Democratic primary continues, it’s important to understand that the concept of compensation for historically disadvantaged minorities is nothing new; in fact, history suggests that reparations are an American tradition.

On two separate occasions forty years apart, Congress compensated Japanese Americans who were detained in World War II-era internment camps. The first piece of legislation—the Japanese American Evacuation Claims Act of 1948—offered compensation for property the government confiscated from Japanese residents during World War II. Over 26,000 claimants shared about $37 million in payments.

The second, passed by Congress in 1988, extended a formal apology and awarded $20,000 to each surviving victim of Japanese American internment. A total of $1.6 billion was divided among over 82,000 eligible individuals. These repayment programs prove—at least to some degree—that reparations legislation can be successful.

Another twentieth-century program for government-sponsored reparations was geared towards Native Americans. Congress formed the Indian Claims Commission after World War II in an effort to provide monetary compensation for land confiscated from native tribes following colonial settlement.  

Unfortunately, this program was not as effective as its beneficiaries had hoped. Less than $1,000 was granted to each individual Native American; $1.3 billion was distributed in total. But despite this insufficient outcome, the Commission set a precedent for the allocation of federal funds to counteract systemic inequality.

Individual states have also attempted reparations programs. In 1994, Florida became the first state to enact a reparations law as an apology for an outbreak of racial violence in 1923. Each individual affected by the massacre, in which a white mob stormed the mostly-black town of Rosewood and burned it to the ground, was awarded a decidedly small sum of $3,333.33.

“They didn’t get a whole lot of money, but at least Florida acknowledged they had made a mistake,” said recipient Lizzie Jenkins, whose mother had been driven out of her home by the mob.

Two decades later in 2015, the city of Chicago agreed to compensate 57 victims of police brutality, the majority of whom were African American men from the South Side. This initiative was part of a $5.5 million reparations measure geared towards soothing racial tensions in the city. The city also provided counseling services to the victims on top of monetary compensation.

These comparatively modest programs, while not far-reaching in scope, could provide a blueprint for more ambitious reparations programs in the future. Smaller programs designed by state governments could make it easier to identify who specifically is eligible for reparations and to what extent. Addressing these issues within the states could also set important precedents for subsequent federal programs, the creation of which is currently stymied by a lack of adequate examples to follow. When it comes to righting historical wrongs, any action is better than no action.   

It’s time to make amends. We owe it to those who, hundreds of years ago, had no choice but to lay the groundwork for our nation with their blood, sweat and tears. After all, a cracked foundation requires more than just a quick fix. 

https://historynewsnetwork.org/article/173542
The Impeachment Primer: 40+ Articles About Impeachment by Historians

 

This week, public hearings began in the House of Representatives' impeachment inquiry. In the nearly two months since House Speaker Nancy Pelosi formally announced the inquiry, historians have been busy writing articles that contextualize previous impeachment inquiries in American history, the constitutional history of impeachment, the history of whistleblowers, the history behind Trump's criticisms of impeachment, and the global history that informs impeachment. Finally, some historians have advocated specific approaches to impeachment and predicted where it might be headed. Below is a detailed list of articles written by historians about impeachment for the History News Network and other publications. 

 

Impeachment in American History

Donald Trump joins presidents John Tyler, Andrew Johnson, Richard Nixon, and Bill Clinton as yet another president facing impeachment. In the wake of Nancy Pelosi’s announcement, historians were quick to draw comparisons between them, and ponder how past impeachments could inform our current moment.

 

What To Know About the History of Impeachment by Peter Charles Hoffer

This article discusses when and how impeachment first made its way into American History and how it has shaped the American Presidency through the years.

 

Video of the Week: The Historical Context of Impeachment

In this video from MSNBC, Chris Jansing and presidential historian Jon Meacham discuss the broad historical context of impeachment.

 

What To Know About the History of Emoluments and Impeachment by Harlow Giles Unger

This article discusses how the British initially shaped Impeachment in the United States, but how it has changed over the course of history. 

 

‘He lies like a dog’: The first effort to impeach a president was led by his own party by Ronald G. Shafer

Unbeknownst to most Americans, John Tyler was actually the first president to face potential impeachment.

 

Why Donald Trump is much more dangerous than Andrew Johnson by Sidney M. Milkis and Daniel J. Tichenor

This article draws similarities between Johnson’s and Trump’s rhetoric and administrations, showing how these similarities could have played a role in both of their impeachments. 

 

Historians Jon Meacham, Mark Summers, Keri Leigh Merritt, Michael Ross, Brenda Wineapple, and Benjamin Railton Featured in Article on Andrew Johnson and Impeachment By David Crary

In this article from the Associated Press, historians dive deep into the impeachment of Andrew Johnson, drawing many similarities between Trump and Johnson.

 

That time Nixon released doctored transcripts during Watergate by Gillian Brockwell

This article provides a brief history of Watergate and how it led to Nixon’s impeachment. 

 

TRUMP INSISTS YOU CAN'T TRUST THE PRESS. HE'S BORROWING FROM NIXON'S PLAYBOOK. by Steve Hochstadt

President Trump’s lack of trust in the media is not a new phenomenon, as seen in similar portrayals of the press during the Nixon era. 

 

“The System Worked!” Watergate’s Lesson in an Age of Trump by Michael A. Genovese

Are our political institutions strong enough to protect us from yet another dangerous president? Michael A. Genovese investigates.

 

I Voted to Impeach Nixon. I’d Do the Same for Trump. By Elizabeth Holtzman

In this op-ed, Holtzman explains why Nixon deserved to be impeached and why Trump’s similar actions as president warrant his impeachment. 

 

Impeachment, Constitutional History, and the Founders

To the founders, the purpose of impeachment was to prevent officials from abusing their political power. They feared a president could use executive power to escape punishment. Their words and actions provide valuable context to consider when reading about the impeachment proceedings relating to President Trump. Here’s how historians and journalists have contextualized the impeachment proceedings.

 

This is why the impeachment clause exists by Adam Serwer

In this article from The Atlantic, Adam Serwer tells us that the Framers underestimated the extent to which a demagogue might convince his supporters that the president and the people are one and the same.

 

What To Know About the History of Emoluments and Impeachment by Harlow Giles Unger

In his HNN article, Harlow Unger breaks down the concept of emoluments into digestible terms and examines how emoluments impacted prominent early Americans.

 

The Founders’ furious impeachment debate – and Benjamin Franklin’s modest proposal by Harlow Giles Unger

Historian Harlow Giles Unger gives us a glimpse into the debate about impeachment at the Constitutional Convention. 

 

What we get wrong about Ben Franklin’s ‘a republic, if you can keep it’ by Zara Anishanslin

Zara Anishanslin analyzes Benjamin Franklin’s famous quote after Nancy Pelosi and Neil Gorsuch quoted the line publicly in a context different from the one it was intended for.

 

Impeachment’s role in history: part legal creature, but mostly political by Brent Kendall

Brent Kendall argues that impeachment has been defined by legal procedures in the Constitution balanced with “political calculation.”

 

Why Originalism should apply to impeachment by Donald J. Fraser

Donald J. Fraser argues that originalism should apply to the impeachment inquiry. Originalism is the idea that “the Constitution should be interpreted in accordance with its original meaning.” 

 

Nancy Pelosi Retweets Op Ed By Historian Jordan E. Taylor by Jordan E. Taylor 

Jordan E. Taylor discusses how the first generation of American political leaders understood the danger of foreign involvement in their elections because they lived through it.

 

The History of Whistleblowers

Much of the news regarding the pending impeachment inquiry into President Donald Trump has focused on a whistleblower's report about the President’s dealings with Ukraine. Whistleblowing has a rich history in American politics, especially in the context of impeachment. The following articles explore examples of such history, their potential implications for the current inquiry, and the prevailing rhetoric on today’s whistleblower:

 

The History Briefing on Whistleblowers: How Historians Covered Breaking News by Sam Mastriani 

HNN intern Sam Mastriani compiles an overview of historians’ responses to the whistleblower in the Trump impeachment inquiry.

 

Whistleblower or Spy? What the History of Cold War Espionage Can Teach Us by Marc Favreau

Marc Favreau cuts through the White House’s rhetoric comparing today’s whistleblower with a spy, drawing a line between whistleblowing and espionage through an examination of Cold War history.

 

Before the Trump Impeachment Inquiry, These Were American History's Most Famous Whistle-Blowers by Olivia Waxman

Olivia Waxman explores the history of the term “whistle-blower”, highlights some of the United States’ most famous whistle-blowers, and outlines the reality of how non-glamorous being a whistle-blower actually is.

 

Intelligence Whistleblowers Often Pay a Severe Price by Jennifer M. Pacella

Jennifer Pacella explores the history of retaliation against whistleblowers, specifically referencing the likes of Edward Snowden and Chelsea Manning.

 

U.S. Whistleblowers First Got Government Protection in 1777 by Christopher Klein

Christopher Klein demonstrates the longstanding importance of whistleblowers in American history by discussing early American laws protecting them.

 

Breaking Down Republican Lawmakers’ Defenses of Trump by Steve Hochstadt

Steve Hochstadt describes the defenses Republican Lawmakers have made for President Trump, and the inability of these Lawmakers to take an active stand against what he sees as corruption. 

 

Why America Needs Whistle-Blowers -by Allison Stanger

Allison Stanger explains how whistleblowers perform an American duty in protecting and serving their country, especially the whistleblower reporting on President Trump.

 

 

The History Behind Trump’s Defenses

President Trump has pushed back against the impeachment inquiry, calling it a “witch hunt” and attacking the whistleblower whose report triggered the inquiry. Trump himself has denied any wrongdoing altogether, tweeting “there is no quid pro quo!” Congressional Republicans have backed him up, despite his comparison of his treatment to a lynching. How should we understand what exactly quid pro quo is, and how do we place Trump’s responses in the context of the history of impeachment? The articles below attempt to do so. 

 

The Return of the ‘Witch Hunt’ Analogy by Tony Fels

The ‘Witch Hunt’ analogy has a history dating back to the Salem witch trials of 1692 and even made an appearance in the impeachment of Bill Clinton in the 1990s; historians help us understand whether the term is being used effectively.

 

So you want to talk about lynching? Understand this first. by Michele Norris

After Trump referred to his impeachment proceedings as a “lynching,” invoking some of the darkest annals of U.S. history, historians pointed to the brutal history behind the term and what it means in light of the president’s comment.

 

Whistleblower or Spy? What the History of Cold War Espionage Can Teach US by Marc Favreau

President Donald Trump has called the source of the whistleblower complaint “close to a spy,” and the modern history of espionage serves as a guide to evaluating Trump’s accusation. 

 

A History Of 'Quid Pro Quo' 

The central question of President Donald J. Trump’s impeachment proceeding is whether or not he engaged in a quid pro quo with the leader of Ukraine, so one must understand the true meaning of the phrase “quid pro quo,” especially as Trump insists that no quid pro quo ever happened.

 

The Impeachment Inquiry Reveals Trump’s Narcissism and Republican’s Compliance by Steve Hochstadt

This impeachment inquiry is history, and what each Republican politician does or doesn’t do, says or doesn’t say, will define their legacies.

 

Why Donald Trump is much more dangerous than Andrew Johnson by Sidney M. Milkis and Daniel J. Tichenor

Trump’s combative response to impeachment proceedings inevitably brings to mind President Andrew Johnson, another famous impeachment target whose contemporaries viewed him as emotionally volatile, publicly coarse and recklessly hostile to presidential and constitutional norms.

 

Down the Rabbit Hole With Donald Trump by Tom Engelhardt

One way to explain the Trumpian world we’ve been plunged into is through Lewis Carroll’s Alice’s Adventures in Wonderland. 

 

 

What Historians Advocate and Predict Might Happen Next

Beyond adding context to current events, many historians have explicitly advocated specific strategies and actions. Others have predicted what Americans can expect in the weeks and months to come. These perspectives show the variety of ways historians understand politics and anticipate future developments. 

 

Why We Must Impeach by Sean Wilentz

In this powerful article for Rolling Stone, Sean Wilentz argues that the Founding Fathers created impeachment for this exact purpose. He gives compelling reasons why utilizing this tool is essential for American democracy. 

 

What Does History Tell Us About Impeachment and Presidential Scandal? Here's 7 Historians' Analysis by Kyla Sommers

Analysis by historians Andrew Hartman, Jennifer Mercieca, Eladio B. Bobadilla, David Walsh, Varsha Venkat, Derek Litvak, and Rick Shenkman.

 

Breaking Down Republican Lawmakers' Defenses of Trump by Steve Hochstadt

Steve Hochstadt analyzes the different ways Republicans are defending Trump and argues this reveals the party's corruption. 

 

Could Trump Lose the Republican Nomination? Here's the History of Primary Challenges to Incumbent Presidents by Olivia Waxman

Olivia Waxman discusses previous presidents who faced a primary challenger amid declining approval ratings and controversy. 

 

What If Mike Pence is the 2020 Republican Nominee? by Ronald L. Feinman

If Donald Trump is removed from office and Mike Pence is the 2020 presidential nominee, what can we expect? What can Lyndon B. Johnson, Gerald Ford, and Calvin Coolidge teach us? 

 

What If Donald Trump Resigned? by Vaughn Davis Bornet

Dr. Bornet argues the easiest path forward for the nation would be if Donald Trump just resigned. 

 

As The Trump Administration Demonstrates, We Have Undervalued Virtues And Values by Walter G. Moss

Dr. Moss argues that to defeat Trump in the 2020 election, Democrats must agree upon and advocate central values to contrast with Trump's corruption. 

 

 

World History and Impeachment

 

The Founding Fathers didn’t invent impeachment: political leaders have always abused their power and tried to avoid accountability. Here’s how Donald Trump’s impeachment fits within broader historical patterns. 

 

Trump and the Divine Rights of Kings by Ed Simon

Donald Trump and King Charles I were political leaders in vastly different capacities. But how do their abuses of power relate to removal from office?

 

Parallel Lives of Donald Trump by Lance Morrow

Plutarch - a Greek historian and essayist - was known for his comprehensive comparisons. Using his model, Morrow compares Trump to other world leaders dating back to Julius Caesar.   

 

Who Will Be America's Brutus? By Michael Genovese

Julius Caesar was assassinated because Brutus - a respected leader with behind-the-scenes knowledge - decided he was becoming too powerful. Will an American leader take the same responsibility and provide the evidence to impeach Trump?

 

The Surprising History of Impeachment by Frank Bowman

Frank Bowman III, author of “High Crimes and Misdemeanors: A History of Impeachment for the Age of Trump,” sat down with CNN to discuss how the history of impeachment, even as far back as the Magna Carta, informs Trump’s impeachment proceedings today.

https://historynewsnetwork.org/article/173582
Sondland Sings: Here's How Historians Are Responding

https://historynewsnetwork.org/article/173146
Roundup Top 10!  

How America Ends

by Yoni Appelbaum

A tectonic demographic shift is under way. Can the country hold together?

 

Don’t Expect Polls to Change Republican Minds

by Nicole Hemmer

When it comes to impeachment (and pretty much everything else), the G.O.P. is no longer driven by public opinion.

 

 

The gravest danger to American democracy isn’t an excess of vitriol—it’s the false promise of civility.

by Adam Serwer

"The idea that we’re currently experiencing something like the nadir of American civility ignores the turmoil that has traditionally characterized the nation’s politics, and the comparatively low level of political violence today despite the animosity of the moment." Serwer cites historian Manisha Sinha.

 

 

The Greatest Scam in History

by Naomi Oreskes

Scientists working on the issue have often told me that, once upon a time, they assumed, if they did their jobs, politicians would act upon the information. That, of course, hasn’t happened. Why?

 

 

The five ways Republicans will crack down on voting rights in 2020

by Carol Anderson

Given what’s at stake next year, the effort to prevent people from voting will be fierce. We’ve been here before – and we can stop it.

 

 

The problem with ‘OK, boomer’

by Holly Scott

Generational divides distract from deeper questions of power. Boomers should remember that from the 1960s.

 

 

From Nixon to Trump, the historical arc of presidential misconduct is deeply troubling

by James M. Banner Jr.

Since the early 1970s, the behavior of American presidents has worsened in alarming ways.

 

 

History Has a Race Problem, and It’s Existential

by Allison Miller

White people dominate the study of history, as students and as those who earn PhDs.

 

 

How racial segregation exacerbates flooding in Baton Rouge

by William Horne

Strategies of segregation and secession to hoard resources are leaving the whole metropolitan area unprepared for rising waters.

 

 

 

How we fail our Chinese students

by Jonathan Zimmerman

If Chinese students spend several years in the United States and decide they don’t like democracy, we must not be making a strong enough case for it.

Why Televised Hearings Mattered During Watergate But May Not Today

John Dean

 

I started a continuing legal education program with John Dean in 2011. We have done more than 150 programs across the nation since then. 

 

Our first program was about obstruction of justice and how Dean, as Nixon’s White House Counsel, navigated the stormy waters when he turned on the president and became history’s most important whistleblower. Unlike the current whistleblower, Dean had been involved in the cover-up, but ultimately decided he had to end the criminal activity in the White House, with no assurance of anonymity and with the almost certain expectation that he was blowing himself up in the process.

 

Dean was placed in the witness protection program but became one of the most recognized figures of his time. When he testified before the Senate Select Committee for the entire third week of June 1973, all three networks carried his testimony gavel-to-gavel. John Lennon and Yoko Ono showed up to watch Dean. Almost all of America tuned in, and some reports estimated there were 80 million viewers. In our history we have had only a handful of such mega-TV events: the Kennedy assassination weekend, the Apollo moon landing, and the attacks of 9/11 come to mind.

 

Yet it wasn’t until a year after Dean testified that Nixon resigned. The process of Nixon’s takedown was slow but steady, a phenomenon that many historians refer to as a “drip, drip, drip.” Nixon’s credibility kept absorbing one body shock after another: first the Senate hearings, then the discovery of the taping system, the firing of Special Prosecutor Archibald Cox, the discovery of a gap in the tapes, the indictment of top administration officials, and finally, the Supreme Court ruling in late July 1974 that the tapes had to be turned over.

 

One tape dealt the knockout blow. It became known as the “smoking gun tape,” because it showed the president a week after the break-in, on June 23, 1972, ordering his chief of staff to call in the CIA and instruct them to tell the FBI to end its investigation into Watergate, as it might uncover CIA operations. This both killed Nixon’s lie that he knew nothing of the cover-up and displayed the kind of abuse of presidential power that the founders worried about when they agreed to insert an impeachment outlet in Article II of the Constitution. The president was seeking to use his office for his own personal political protection.

 

In the current situation, the “smoking gun” has already been produced in the form of the partial transcript of President Trump’s July conversation with President Zelensky. Despite what Trump says, it shows him bargaining for political dirt on his adversary through the misuse of his presidential powers.

 

The whistleblower today has been backed up by others who had direct knowledge, making his or her account now superfluous. John Dean had no such back-up from others; he had to wait a year for his testimony to be fully corroborated by the tapes themselves.

 

Because the nation also had to wait almost a year to determine who to believe, there was time for Dean’s testimony in June 1973 to take hold and sink in. Nixon had won reelection by a landslide in November 1972 and his approval rating was nearing 70% in January 1973, when he kept his promise to end America’s involvement in the Vietnam War. But with the burglars’ trial before Judge John Sirica in January 1973 and the unanimous Senate vote to investigate campaign activities from the 1972 election on February 7, 1973, things began to spiral.

 

In April 1973 when Nixon fired his top advisors and attorney general, Dean was also let go. Nixon’s approval rating fell to 48%. Then John Dean testified in June and Nixon’s popularity fell as low as 31% by August. Nonetheless, despite the widespread opinion that Nixon was somehow culpable in the break-in, only 26% thought he should be impeached and forced to resign; 61% did not. The calls for impeachment and removal didn’t reach a clear majority until the “smoking gun tape” was produced in late July 1974. Then the number rose to 57%. By that time Nixon’s approval rating had sunk to 24%.

 

What this tells us is that the televised hearings started the path downward in June 1973, though it took a year of continuing scandal to wipe out Nixon’s support. Importantly, Dean’s testimony that he thought he might have been taped in one instance led Senate investigators to Alexander Butterfield and the revelation that the taping system existed. It was the fight for the tapes that confounded Nixon and his defenders and ultimately led to his resignation.

 

The current impeachment inquiry will be Watergate in reverse. The whistleblower has already surfaced the “smoking gun” transcript. Others have already verified the whistleblower’s allegations. The televised hearings, while they will add public understanding of what happened, are coming after years of scandal and revelations about Mr. Trump. Between the expected impeachment by the House at year-end and the trial in the Senate early next winter, there will be little time to let the pot boil as it did in Watergate. And it is doubtful, given the depth of the partisan divide and media bias and agendas, that the polls on impeachment and removal will shift a great deal with the televised hearings.

 

The public knows what Trump did. They generally know it was wrong. Trump supporters simply don’t care. Even with the attempt to solicit foreign interference in our elections all but confirmed, the President’s followers are unmoved.

 

Perhaps I will be proved wrong, but I think the televised hearings will not significantly move the needle of public opinion as happened during Watergate. The House will impeach. The Senate will not convict. And those Republicans who stand behind the President will have to answer to voters when their time comes. So will President Trump.

Muslims Should Reject Indian Supreme Court's Land Offer

India has just set an example of how a nation can retreat into darkness, with its Supreme Court ruling on Saturday that a Hindu temple be built to honor the fictitious God Ram on the disputed site of a historic 400-year-old Muslim mosque, named after the first Mughal emperor.

 

The top court agreed with Hindus that a structure existed under the mosque, i.e. the Babri Masjid was not built on vacant land. It decided to give Muslims land elsewhere to construct a mosque, a consolation prize that seems akin to giving the United States back to the Indians and resettling the Americans in Hawaii. 

 

Indian Muslims should reject the land offer and not build any mosque at all in protest. To defy Hindu extremists, they should start holding weekly Friday congregations under open skies whenever possible.

 

They also should forcefully assert their equal rights under India's constitution, but must shun violence. Mahatma Gandhi's non-violence creed is the most powerful weapon civilians ever invented. 

 

The verdict capped years of legal wrangling, which reached India's highest court in 2011 when Hindus appealed a lower court order that the site be shared by the two religions. In 2018, a three-judge panel declined to send the case to a constitution bench, prompting the chief justice to form a body to decide who owns the property.

 

The five-member bench, headed by Chief Justice Ranjan Gogoi, ordered the government on Saturday to set up within three months a trust to manage the site on behalf of Ram Lalla Virajman, the child deity worshiped by Hindus at the site in Ayodhya, Uttar Pradesh.

 

Now the Hindu ultra-nationalist government will form a board to erect the Ram temple where the Babri Masjid once stood. This action ends India's secular tradition by favoring one religion over another. India, a nation of 1.3 billion people with 200 million Muslims, is bound by its constitution to treat all religions equally.

 

The verdict is faulty because it ordered the government to construct a temple, even though the court admitted there is no evidence that one ever existed at the site. The court overstepped its jurisdiction, too; its task is to decide ownership. What is done with the property is no business of the court; that is for the owner to decide.

 

Hindus, including Prime Minister Narendra Modi's supporters, claim that Emperor Babur, whose dynasty ruled India from 1526 to 1857, built the mosque on top of a temple at the birthplace of Ram, a character in the Ramayana, an epic written by the sage Valmiki around 200 BCE. An archaeological survey found no evidence of this, only that an unspecified structure existed before the mosque. 

 

Court Errs on Evidence

In deciding the case in favor of Hindus, the justices accepted infant Ram as a perpetual minor under law, an ill-conceived legal doctrine without precedent outside India. It is beyond a modern man's imagination how learned jurists of an ancient civilization can accord legal status to a fictitious deity. Only the dark mind of India seems to be at work.

 

This judgment has irreparably damaged the Supreme Court and undermined minority confidence in the judiciary. The court made a mistake by deciding the case based on whether Hindus worshiped at the site, which has been used as a mosque since 1528. It ignored the main instrument of ownership — the title to the land. 

 

The court said “there is clear evidence to indicate that the worship by the Hindus in the outer courtyard continued unimpeded in spite of the setting up of a grill-brick wall in 1857,” which means the Hindus were in possession of a part of the land and hence have a valid ownership claim. 

 

By this logic, Muslims in the United States could rightfully claim ownership of many churches where they have held weekly Friday prayers. No one in the United States can claim ownership of a house just by virtue of using it for some time.

 

The court contradicted itself: it said that titles cannot be decided on faith, yet it ruled in favor of Hindus because of evidence that Hindus had continuously worshiped at the site for a long period. 

 

The pendulum of justice cannot swing according to political wisdom or pragmatism. Justice upholds legal principles. The court should have given the land to its valid title holder. Even better would have been for the apex court to stay above the fray and send the case back to the lower court to settle the ownership dispute.

 

Like the U.S. Supreme Court, the Supreme Court of India should be minimally interventionist, hearing only the cases that involve serious constitutional issues. Indian justices decide as many as 700 cases a year, against 70 by their U.S. peers.

 

The unanimous landmark verdict is pure politics, and it has voided the respect for history that India's founding leaders treasured as mythical cobras protect their pots of gold. This ruling consigns India's Muslims to perpetual discrimination based on religion, an idea the Hindu ultra-nationalist guru V. D. Savarkar espoused a century ago.

 

Secularism Fails in India

Muslims prayed at the mosque for generations until 1949, when Hindu activists placed idols of Ram inside the complex. The mosque was demolished in 1992 by Hindu mobs, triggering nationwide religious violence that killed about 2,000 people, most of them Muslims.

 

This destruction of the mosque highlighted the failure of secularism in India and divided the country along religious lines, giving politicians an opportunity to appeal to the base instincts of the Hindu masses to win elections.

 

Modi's political organization, the Bharatiya Janata Party, which wishes to create a Muslim-free India, vowed decades ago to build the temple at Ayodhya. Behind this movement has been the Vishwa Hindu Parishad, a militant umbrella group with a tax-exempt affiliate in the United States, which works to reconvert Muslims and Christians to Hinduism.

 

The verdict gave a major victory to the 69-year-old Modi, who was blamed for the Muslim massacre in 2002 and has been under fire since August for scrapping special autonomy for Muslim-majority Kashmir, a picturesque Himalayan region.

 

Since Modi came to power in 2014, India has passed laws against Muslims, and several states are planning to deny government jobs to people with more than two children. The Muslim birth rate exceeds the Hindu rate, and India has a phobia that Muslims will outnumber Hindus and reestablish the Muslim empire. Muslim history has been removed from school textbooks. Modi is considering a law to grant refugee status to everybody but Muslims.

 

With the verdict, India is now a Hindu nation, which means that Muslims are free to leave if they so choose, or to stay, subservient to Hindus. This is how Hindus want to avenge their humiliation under a thousand years of Muslim rule.

 

The court decision will have a chilling effect on Muslims and intensify simmering Hindu-Muslim tensions inside India and out. Moderate Muslims in Bangladesh and Pakistan, for example, will struggle to cite India as a model for fighting extremism. Muslims will instead point to the extremism of Buddhists and Hindus, as well as Christians, as an existential threat to their religion and identity.

The Whistleblower Should Remain Anonymous

Frank Wills

 

On the night of June 17, 1972, security guard Frank Wills noticed a piece of duct tape on a door in the Watergate complex in Washington, D.C. The tape was probably placed there by tenants moving in or out earlier in the day. Wills removed the tape and resumed his rounds. Half an hour later he returned and saw a new piece of tape holding open the latch. Wills walked to a telephone in the lobby and called the police. He accompanied the police as they searched room by room for the intruders until they found five burglars in the offices of the Democratic National Committee. 

 

Without Wills, the Watergate break-in might never have been discovered and the scandal that brought down President Richard Nixon two years later might never have happened. His simple act of noticing the tape changed the course of American history. The 24-year-old native of Savannah, Georgia, became a celebrity, and to many a hero, almost overnight. Wills played himself in the film version of “All the President’s Men.” He was interviewed on talk shows and the national news and featured in newspapers and magazines. 

 

For a time Frank Wills was a household name, yet the spotlight eventually went out, leaving Wills to wrestle with the aftereffects of what for him was a toxic celebrity. When the cameras went away, Wills drifted from job to job, moved often, and failed at selling Dick Gregory’s fad diet products. He finally settled in South Carolina to take care of his mother after she suffered a stroke. The former security guard was twice arrested for shoplifting small items. In his occasional reappearances in the media, he expressed some bitterness that he did not receive more credit for uncovering Watergate. Frank Wills died of a brain tumor in 2000 at age 52. 

 

Forty-seven years after Wills noticed the tape, a whistleblower has exposed a far bigger scandal. He or she reported that White House officials attempted to extort Ukraine by withholding congressionally approved military aid in exchange for a public investigation into the imaginary corruption of a political rival. The whistleblower reported this to the proper authorities under the law created to encourage and protect such reporting. Lawmakers enacted these protections to ensure that employees do not have to sacrifice their careers to do the right thing. 

  

Frank Wills and the whistleblower exposed the highest level of crimes committed against the American people in very different ways, Wills inadvertently and the whistleblower deliberately. Without them, neither scandal would likely ever have come to light. Wills’ involvement ended with the phone call to the police, and the whistleblower’s involvement ended with the report to his or her superiors. There was never a need to call Wills as a witness in the Watergate or impeachment proceedings. There is now no need to call the whistleblower before Congress after the allegations of the report have been repeatedly verified by witnesses, the White House’s own documents, and public statements from the president and the chief of staff. 

 

The whistleblower can and should take quiet, private pride in their actions and continue in their career of public service unmolested. But, of course, it is never that simple under the current administration. Rather than defend itself against the obvious crimes it committed, the administration has attacked the whistleblower as a liar, a political opponent, a traitor, and whatever else pops into its head or tweets. The administration is trying to expose the identity of the whistleblower, an act as unnecessary as it is illegal. It would destroy the whistleblower’s life as intentionally as Wills’ life was destroyed carelessly. What happened to Frank Wills could happen to the whistleblower. It is imperative that his or her identity be forever protected. We do not need to destroy his or her life with the double-edged sword of celebrity.

 

The Battle of Midway Movie is Mostly Terrific

The Battle of Midway, in June 1942, was the key battle of the Pacific war during World War II. Four of Japan's legendary aircraft carriers were destroyed by U.S. Navy planes flying from a group of American carriers, some of them, such as the Yorktown, pretty battered from previous fighting. The victory near Midway island prevented any further Japanese assaults on Pearl Harbor or the American west coast. It was a horrific battle, full of dive bombers, carriers, explosions and a sky full of American and Japanese fighter planes. It was one of the great battles of U.S. history, even world history.

 

The battle in the Pacific is the subject of a new film, Midway, that opened on Friday. Military buffs will adore it, but the average person will sit through it liking certain parts very much and frowning at others. As far as computerization and special effects go, the film is just sensational. The air battles are so vivid, full of bombs, bullets, the fearsome roar of plane engines and fires, that you think you are in the copilot’s seat of a plane. I saw the film in one of the new Dolby sound theaters, so the seats vibrated when the fighting on screen began. It is a highly enjoyable experience.

 

Director Roland Emmerich’s movie Midway has a number of problems, though, and it stands in the shadow of the gripping 1976 Midway that starred Charlton Heston and Henry Fonda. That film had better human stories in it, particularly with Heston and his son, a fighter pilot. It was a taut drama about a battle everybody knew about, and yet it was fresh and invigorating. This new Midway drones along for about three quarters of its length, until it catches fire and ends in a patriotic glow. 

 

All of the characters in the new Midway need some depth, and the actors need to get a better handle on them. Woody Harrelson, as Admiral Chester Nimitz, goes through the film half asleep, and the other stars do not do much better. The heroic Dick Best (Ed Skrein) is just plain irritating. Other major characters in the film are Dennis Quaid as William “Bull” Halsey, Patrick Wilson as intel chief Ed Layton, Aaron Eckhart as Jimmy Doolittle, and Etsushi Toyokawa as Japanese Admiral Yamamoto.

 

The movie suffers from a confused structure. This is a movie about the Battle of Midway, right? So why does it start with the attack on Pearl Harbor and then move to Jimmy Doolittle’s bombing raid on Tokyo? They are a prelude to Midway, but not much else. There is a lot of strategic planning by the Americans in the film, but not by the Japanese. It is never clear why the Japanese want to take Midway at all. It is also not clear how the U.S. broke the Japanese code, which was the key to victory. 

 

There is a lot about the leadership and last-minute hospitalization of Bull Halsey, but little on Admiral Raymond Spruance, who replaced him and did a spectacular job (the 1976 movie properly had Spruance as one of its stars). There is a lot of bombing and air battles in the movie, but at times it is difficult to tell who is bombing whom. That needs to be clearer.

 

Until the end of the film, written by Wes Tooke, there is little emotion shown by the main characters on the American side (the film moves back and forth between the Americans and the Japanese, much like Tora! Tora! Tora!). You wonder why these guys are fighting the war anyway. One real loss is the relationships between the men in the Navy and their women, which are one-dimensional and stereotypical. Director Emmerich should have done more with them.

 

Midway is far better than the staid and slow 2001 Pearl Harbor that starred Ben Affleck. I just wish that Midway were a sturdier historical movie and explained the battle, and that part of World War II, better.

 

Despite its drawbacks, Midway is a rip-roaring military saga and a testament to the men who won the battle. The Americans are seen as brave and heroic in the film, but so are the Japanese, who are portrayed as men who fought for a cause they believed in just as the Americans did. At the end of the movie the filmmakers pay tribute to ALL the men who fought at Midway.

 

The very end of the movie is wonderful. You see photos of the real heroes of the battle, along with biographies of them. The awards and medals they later wore are incredibly impressive (Doolittle won the Congressional Medal of Honor, as an example). It stirs the patriotic blood in you, not just for America’s victory at Midway, but for the fighting our troops, men and women, have done throughout our history.

The Brave Jewish D’Artagnan Who Fenced for Germany in the 1936 Olympics

 

If you mention the 1936 Olympic Games in Berlin to most people, their eyes will light up and they will tell you two things: 1) they were the Games that celebrated Adolf Hitler’s Nazi rule, and 2) they were the Games where American track superstar Jesse Owens won four gold medals. Lost in all the historical hoopla over those Games, with their enormous red sea of German flags, was the harrowing story of young fencer Helene Mayer, the only Jew on the German squad, whom Hitler’s Nazis practically begged to join the team. She became a footnote to history, a footnote with a sword.

 

Helene, born in 1910 outside of Frankfurt, was one of the greatest fencers to ever live, a woman named to the top 100 Women Athletes of the 20th century by Sports Illustrated magazine. She was defeating boys at 10 and at just 13 won the first of her six German national championships.

 

She stumbled into the spotlight in 1936. Not only did the International Olympic Committee threaten to pull its Games out of Berlin if there were no Jews on the German squad, but the United States threatened to boycott the Games if no German Jews competed.

 

Hitler, who had promised Germany a “Jew-free” Olympics, was forced under enormous pressure to relent, and Helene was invited to try out for the fencing squad. She made the team, the only Jew on the entire German squad.

 

The story of the fencer is told in a fascinating play, Games, by Henry Naylor, that is running at the Soho Playhouse, 15 Van Dam Street, in New York. It is a must-see not just for sports fans, but for anybody. It is the story of Helene’s determination to represent her country and win a gold medal while at the same time shouting out her Jewishness to the world in the face of Der Fuehrer.

 

Naylor’s impressive play stars two women, Lindsay Ryan as Mayer and Renita Lewis as Jewish high jumper Gretel Bergmann, who was also invited to try out but did not make the German team. Both give devastating portrayals of the two women athletes who found themselves standing in the vortex of history that summer of 1936.

 

The play opens with Helene waving her (invisible) sword at the audience and yelling “En Garde!” Then it goes back to her childhood and explains how her father, a wealthy Jewish doctor, married her Lutheran mother. The doctor got her into a fencing academy as a child and she became world renowned. In addition to her six German fencing championships, she won nine U.S. championships, the world championship in 1937, an Olympic gold medal in 1928, and two silver medals later.

 

She fled the Nazis in 1935 and worked at Mills College in California. While she was in the U.S., her father died and the Olympics approached again. She knew that she was in for a political whirlwind if she accepted the Nazi invitation to try out for the team, but did so anyway. She had insisted throughout her life that fencing, like all sports, was above politics and that she had to go to the Berlin Olympics for that reason. She rarely talked about the discrimination against Jews in Germany, even though it affected her directly and, later, resulted in numerous members of her family being put in Nazi labor camps and a concentration camp. She was criticized by many Jews for not taking a stand against the Nazis, but never wavered from her non-political stance. She was also criticized later for giving the stiff-armed Nazi salute and shouting “Heil Hitler!,” as all the German winners did, when she accepted her silver medal.

 

The playwright tells her story neatly but, at the same time, tells the equally intriguing story of high jumper Bergmann, who, like Mayer, lost her German sports affiliations and was forced to flee the country and live in London, where she set records in her sport. Telling both stories shows that Mayer was not singled out. To Hitler, she, like Bergmann, was just another miserable Jew whom he needed on his team to keep his much sought-after Olympics in Berlin.

 

The play is so strong because playwright Naylor shows that all of the terrible things that happened to Mayer because she was a Jew happened to all the other Jews in Germany, too. Her story is representative of all their stories, sad as they were.

 

The play’s director, Darren Lee Cole, does a fine job of making the two-actress play seem like a story that is unfolding all over the world as you watch the drama in the small theater. He also does good work in keeping the two actresses focused on their roles.

 

Did Helene’s participation in the 1936 Olympic Games ease the hatred of Jews by Hitler and millions of Germans? Of course not. He murdered six million of Jews. One of the reasons Games resonates today is that ever since those long-ago Olympics in 1936 Jews have faced constant discrimination and persecution, unfairly so, and face it today, too. When is there a night on television when there is not coverage of some idiot painting a swastika on some building?

 

One note: as patrons are being seated, the theater plays old German songs for a good twenty minutes. They make you shiver, just shiver.

 

PRODUCTION: The play is produced by the Soho Playhouse. Sets: Carter Ford. The play is directed by Darren Lee Cole. It runs through November 24.
