Does Our Climate Crisis Make the 2020 Election Our Most Crucial One Ever?

 

Friday’s Global Youth Strike for Climate, Saturday’s UN Youth Climate Summit, this Monday’s UN Climate Action Summit. Clamoring calls for climate action surround us this September weekend. In next year’s presidential election, no issue will be more dire. Who can doubt that four more years of President Trump and Mitch McConnell will all but ensure future climate catastrophes? Thus, the election is likely our most crucial one ever.

 

But, before addressing that conclusion, let’s briefly consider our most important past elections, what was at stake in them, and what might have happened if they had turned out differently. Then we’ll examine the planet-threatening danger of our present climate condition, which has been greatly worsened by Trump. Finally, we’ll look at possible outcomes resulting from our past and present situations, including this weekend’s outpouring of youth climate concern.

 

Although historians differ on our most crucial presidential elections, four of them seem especially significant: those of 1800, 1860, 1932, and 1940. In 2007, Harvard historian Jill Lepore wrote that the 1800 election pitting John Adams against Thomas Jefferson “is the most important election in American history.” In 2012, another scholar, David Mayhew of Yale, considering various criteria, selected the election of 1860 as the most important one, followed by that of 1932. Finally, FDR scholar Paul Sparrow considers the 1940 election our most important: “If you step back and look at its impact on the world, no election was more important.”   

 

But why these four? Regarding 1800, Lepore notes that the election “marked the first transition of power from one party to another. It led to the passage, in 1804, of the Twelfth Amendment, separating the election of Presidents and Vice-Presidents. . . . It might have—and should have—spelled the end of the Electoral College.” Mayhew concurs that 1800 “settled the point that the incumbent party would accept a loss and hand power over to the opposition--thus ensuring that we would continue to have elections.” He also notes that Jefferson’s victory “ended Federalist domination.”

 

Mayhew does not include the 1800 contest as one of the two most important, perhaps because his criteria for deciding our most important elections include “What if the other guy had won?” One could argue that the election of Adams to a second term would not have been a catastrophe. After all, he had been Washington’s vice-president for two terms and then president for four years, and presidential scholars rank him fairly highly--the Siena College Research Institute’s 2018 Presidential Expert Poll, for example, ranks him 14th best among 44 presidents.

 

Lincoln’s 1860 election was followed by the Civil War and the end of slavery. If any of Lincoln’s three main opponents had won, secession, the Civil War, and the abolition of slavery would likely not have immediately followed. But who can say for how long? The continuation of American slavery into the twentieth century, for example, would have been highly unlikely. If Lincoln had not been elected in 1860, it is impossible to know what post-1860 U.S. history might have looked like (see here for some informed speculation on the question).

 

In 1932, Franklin Roosevelt (FDR) was elected in the midst of the Great Depression and thereafter created New Deal policies to counter its effects. We cannot know precisely what Herbert Hoover would have done if reelected in 1932. Kenneth Whyte’s recent biography Hoover indicates that he was a more complex figure than often portrayed and not just a rigid defender of laissez-faire economic policies.

 

But the 1932 election was important not only for providing the gateway to the New Deal policies that had such a lasting effect on American life, but also because it made possible the beginning of a Roosevelt administration that lasted until 1945 and has affected U.S. politics ever since. Without his success in 1932, FDR might not have been the Democratic candidate in 1940, the year of our fourth important election.

 

Again, as in earlier elections, we cannot know what would have happened if FDR had been defeated in 1940. His chief opponent, Republican Wendell Willkie, was no isolationist, and the likelihood of the U.S. becoming involved at some point in WWII against Germany and Japan would have remained. (Another interesting possibility, as Philip Roth explored in The Plot Against America, is what would have happened if Charles Lindbergh had been the Republican candidate and then president.)

 

Thus, it is largely the uncertainty of what would have happened had Jefferson, Lincoln, and FDR not been elected that makes it difficult to compare the elections of 1800, 1860, 1932, and 1940 with the upcoming one in 2020. In addition, we do not yet know who the Democratic candidate will be. And although it seems highly likely that Trump will be the 2020 Republican candidate, even that is not a certainty. Hence my contention that the 2020 election is only “likely to be” our most crucial one ever.

 

And “likely” primarily because of our present climate condition. The environmental harm Trump has heretofore done is incalculable (valuable efforts to keep track of this appalling record can be found here and here), and a continuation of such policies in a second term would be highly likely. Conversely, all major Democratic candidates have pledged to pursue policies to help us prevent climate disasters.

 

The vital importance of our climate crisis, then, is the essence of the matter. To describe it adequately in a brief essay is almost impossible. But the opening words of David Wallace-Wells’ The Uninhabitable Earth (2019) are a start: “It is worse, much worse, than you think. The slowness of climate change is a fairy tale.” Moreover, the recent words of writer Jonathan Franzen are alarmist, but not improbable: “If you’re younger than sixty, you have a good chance of witnessing the radical destabilization of life on earth—massive crop failures, apocalyptic fires, imploding economies, epic flooding, hundreds of millions of refugees fleeing regions made uninhabitable by extreme heat or permanent drought.”

 

Consider the observations made by the 13 federal agencies in the U.S. Global Change Research Program (USGCRP)’s 2018 report to the U. S. Congress and President Trump. “High temperature extremes and heavy precipitation events are increasing. Glaciers and snow cover are shrinking, and sea ice is retreating. Seas are warming, rising, and becoming more acidic, and marine species are moving to new locations toward cooler waters. Flooding is becoming more frequent along the U.S. coastline. Growing seasons are lengthening, and wildfires are increasing.” Moreover, unless there are “substantial and sustained reductions in global greenhouse gas emissions . . . substantial net damage to the U.S. economy [will occur] throughout this century, especially in the absence of increased adaptation efforts. With continued growth in emissions at historic rates, annual losses in some economic sectors are projected to reach hundreds of billions of dollars by the end of the century.”

 

Since the above report was issued in November 2018, the unrelenting effects of climate change have continued to pummel us--and not just in the USA, but globally. July 2019 was the hottest month ever recorded, with record-breaking heat in such countries as Germany, France, Belgium, and the Netherlands. Ice melted in record-breaking amounts in Greenland, contributing to rising sea levels. On September 1, Hurricane Dorian began destroying parts of the Bahamas, leaving some 70,000 people homeless. In September, Time devoted a whole issue to climate, describing it as “the biggest crisis facing our planet.” It treated the crisis not just as a U.S. issue but, as it should be, as a global one. Various articles addressed the crisis as it affected such areas as Africa, Latin America, Antarctica, the Pacific Islands, and Jacobabad, Pakistan, “the hottest city on earth.”

 

Of course, President Trump does not bear the main responsibility for all the negative climate news, but it is difficult to think of any other human who has more impeded global efforts to address the crisis. Even the business-friendly Forbes Magazine opines that Trump’s climate-change policies have been nightmarish—see its essay “Trump Ignores The Impacts of Climate Change at His Peril—And Ours.”

 

In April 2017, Bill McKibben--founder of 350.org, which has become a leading voice in the environmental movement--predicted that the effects of Trump’s policies “will be felt . . . over decades and centuries and millenniums. More ice will melt, and that will cut the planet’s reflectivity, amplifying the warming; more permafrost will thaw, and that will push more methane into the atmosphere, trapping yet more heat. The species that go extinct as a result of the warming won’t mostly die in the next four years, but they will die. The nations that will be submerged won’t sink beneath the waves on his watch, but they will sink.”

 

In his recent lead essay in the Time issue devoted to climate, McKibben imagines what positive developments could happen before 2050. They begin with the defeat of Trump in 2020 and the election of a new president: “In her Inaugural Address, she pledged to immediately put America back in the [2015] Paris Agreement [which Trump had withdrawn from]—but then she added, ‘We know by now that Paris is nowhere near enough. . . . So we’re going to make the changes we need to make, and we’re going to make them fast.’”

 

Wishful thinking? Yes, but we need hope. In the same Time issue, former Vice President Al Gore also expresses hope and writes that it “stems largely from the recent, unprecedented groundswell of youth activism that has raised public consciousness to new levels and is pushing political leaders to develop bold and ambitious ideas to confront this challenge.” Youth activism like that currently on display around the globe also encourages McKibben, but he realizes how tough the struggle ahead will be.

 

The next major battle, and perhaps the most important one of our lifetime, will be the 2020 presidential and congressional elections. Their results could mark a major turning point in our climate struggle.

 

The Two Internationalisms

The leaders of the three countries in NAFTA: Mexico, the United States, and Canada. 

 

In recent years, internationalism―cooperation among nations for promotion of the common good―has acquired a bad reputation.

 

Of course, internationalism has long been anathema to the political Right, where a primitive tribalism and its successor, nationalism, have flourished for many years. Focusing on their nation’s supposed superiority to others, a long line of rightwing demagogues, including Adolf Hitler (“Deutschland Über Alles”) and Donald Trump (“America First”), have stirred up xenophobia, racism, and militarism, often with some success in public opinion and at the polls.  Numerous nationalist imitators have either secured public office or are hungering for it in many parts of the world.

 

But what is new in recent years is the critique of internationalism on the political Left. For centuries, internationalism was a staple of the progressive, avant garde outlook.  Enlightenment thinkers promoted ideas of cosmopolitanism and the unity of humanity, critics of war and imperialism championed the development of international law, and socialists campaigned for replacing chauvinism with international working class solidarity.  In the aftermath of two devastating world wars, liberal reformers roundly condemned the narrow nationalist policies of the past and placed their hopes for a peaceful and humane future in two world organizations: the League of Nations and the United Nations.

 

A key reason for the decline of support for this internationalist vision on the political Left is the belief that internationalism has served as a cloak for great power militarism and imperialism.  In fact, there is some justification for this belief, as the U.S. government, while professing support for “democracy” and other noble aims, has all too often used its immense military, economic, and political power in world affairs with less laudatory motives, especially economic gain and control of foreign lands.

 

And much the same can be said about other powerful nations.  In their global operations during much of the twentieth century, were the British and French really concerned about advancing human rights and “civilization,” the Germans about spreading “kultur,” and the Russians about liberating the working class?  Or were they merely continuing the pattern―though not the rhetoric―of their nationalist predecessors?

 

To continue this subterfuge, starting in 1945 they all publicly pledged to follow the guidelines of a different kind of global approach, cooperative internationalism, as championed by the United Nations.  But, when it came to the crunch, they proved more interested in advancing their economies and political holdings than in developing international law and a cooperative world order.  As a result, while pretending to honor the lofty aims of the United Nations, they provided it with very limited power and resources.  In this fashion, they not only used the United Nations as a fig leaf behind which their overseas military intervention and imperialism continued, but ended up convincing many people, all across the political spectrum, that the United Nations was ineffectual and, more broadly, that cooperative internationalism didn’t work.

 

But, of course, cooperative internationalism could work, if the governments of the major powers―and, at the grassroots level, their populations―demanded it.  A fully empowered United Nations could prevent international aggression, as well as enforce disarmament agreements and sharp cutbacks in the outrageous level of world military spending.  It could also address the climate catastrophe, the refugee crisis, the destructive policies of multinational corporations, and worldwide violations of human rights.  Does anyone, aside from the most zealous nationalist, really believe that these problems can be solved by any individual nation or even by a small group of nations?

 

Fortunately, there are organizations that recognize that, in dealing with these and other global problems, the world need not be limited to a choice between overheated nationalism and hypocritical internationalism.  In the United States, these include the United Nations Association (which works to strengthen that global organization so that it can do the job for which it was created) and Citizens for Global Solutions (which champions the transformation of the United Nations into a democratic federation of nations). Numerous small countries, religions, and humanitarian organizations also promote the development of a more cooperative international order.

 

If the people of the world are to stave off the global catastrophes that now loom before them, they are going to have to break loose from the limitations of their nations’ traditional policies in world affairs.  Above all, they need to cast off their lingering tribalism, recognize their common humanity, and begin working for the good of all.

Emoluments: An American Political Tradition

 

Since early September, members of Congress have been heatedly debating whether to begin impeachment proceedings against President Donald J. Trump for violating a constitutional ban on “emoluments”—or, in simpler English, profiting from his office. If impeached, he would become only the third of forty-five presidents to suffer such a humiliation. Although thirteen presidents faced threats of impeachment, Congress only impeached Andrew Johnson (1868) and Bill Clinton (1998). 

 

If, as charged by some in Congress, President Trump did in fact receive emoluments, he would only have adhered to an all-but-hallowed American tradition that dates back to the nation’s origins—and even earlier.

 

As early as 1778, Patrick Henry exposed profiteering by Quartermaster General Thomas Mifflin, the Philadelphia merchant who created winter-long food shortages at Gen. George Washington’s Valley Forge encampment by hoarding foodstuffs in his company’s Philadelphia warehouses. Although 11,000 Continental Army soldiers suffered near-starvation, Pennsylvanians elected Mifflin their state’s first governor in 1790.

 

At the same time that Patrick Henry was exposing Mifflin’s profiteering, polemicist Thomas Paine, then secretary to the Committee for Foreign Affairs, was exposing rampant profiteering by members of the Continental Congress. “To what degree of corruption must we sink,” the author of Common Sense demanded to know, “if our delegates and ambassadors are to be admitted on a private partnership in trade? Why not as well go halves with every quartermaster and commissary in the Army?” 

 

Treasury Superintendent Robert Morris—arguably America’s richest man—reacted with fury. “In becoming a delegate,” he shouted at Paine, “I did not relinquish my right of forming mercantile connections.” Gouverneur Morris—an unrelated delegate in Congress and a business partner of Robert Morris—moved to dismiss Paine, and Congress fired him, in effect embedding profiteering by public officials in American politics.

 

Far from expressing outrage at such official profiteering, Americans often cheer and, in some cases, re-elect public officials convicted of such crimes. Governors of 13 different states have been convicted of illegally profiting from their offices. In Illinois alone, one U.S. senator, six representatives, and five governors have been convicted of illegal profiteering, along with fifteen judges and twenty city and state officials of varying ranks. In Connecticut, Bridgeport voters re-elected their mayor for a seventh term after he had served six years in federal prison.

 

In what may be unique in American history, though, a sitting President now faces three suits charging him with violating the emoluments clauses in Articles I and II of the Constitution. But like many constitutional prohibitions, the clauses are vague. For one thing, both clauses fail to define emoluments clearly. Nor do they specify who can sue a President for alleged violations or which courts should hear the charges: civil, criminal, state, or federal. Private individuals filed one of the suits against Mr. Trump, members of Congress a second, and the District of Columbia joined the state of Maryland in filing a third.

 

As for defining emoluments in the past, there was no objection, for example, to Lafayette’s giving his friend President Washington a key to the Bastille prison in Paris. Lafayette was a French commanding general, and the key was—and remains—a treasured artifact. It now hangs in a gilded glass case in Washington’s mansion at Mount Vernon, Virginia. Strictly speaking, Washington clearly violated the constitutional ban on accepting “any present” from any “king, prince, or foreign state.”

 

He may, however, have committed far more serious violations as President by personally managing his 7,400-acre Mount Vernon plantation and collecting revenues from the sale of its grains, fish, whisky, and other products. Thomas Jefferson, our third President, committed similar violations in office, selling thousands of barrels of corn and wheat, along with hogs and cattle—and buying and selling slaves—at his 5,000-acre plantation at Monticello, outside Charlottesville, Virginia.

 

Presidential enjoyment of emoluments did not stop with Jefferson, however. In 1869, scandal enveloped the Ulysses S. Grant administration, when cabinet officers appointed 40 relatives to government posts and engaged in widespread gold speculation. And in the next century, after President Warren Harding had died in 1923, his Secretary of Interior Albert Fall went to jail for accepting bribes for leasing two government oil reserves to private oil companies. It was unclear whether the President had shared in the “emoluments.”

 

“The President,” says one of the Constitution’s two emoluments clauses, “shall not receive…any other emolument [besides his salary] from the United States, or any of them.” By today’s standards, our first and third presidents—and the others who managed and earned money from their farms and businesses while in office—should have placed their holdings in blind trusts under independent trustee supervision. In the modern era, Presidents Ronald Reagan, George H.W. Bush, Bill Clinton, and George W. Bush all put their assets into blind trusts and relinquished control in favor of unrelated third parties.

 

President Donald Trump’s refusal to follow suit provoked three lawsuits. Two of them charge the President with violating both the domestic and foreign emoluments clauses by accepting payments from the U.S. and foreign governments at his various Trump hotels and resorts. A third suit—filed by members of Congress—charges the President with violating the foreign emoluments clause by receiving foreign-government payments for lodging and meals at Trump hotels and resorts, along with licensing fees and other moneys. 

 

President Trump has boasted of foreign leaders flocking to his hotels—especially the Trump International Hotel, seven minutes by taxi from the White House in Washington, D.C. The Kuwait embassy, for example, held lavish independence-day parties there in 2017 and 2018, while Bahrain’s government and the Azerbaijan embassy each hosted elaborate events there. All would appear to violate the foreign emoluments clause, which is meant to prevent foreign influence on federal officials.

 

The President himself has spent untold amounts of government money on weekend and vacation visits to luxury Trump resorts in Florida and New Jersey with his family and his huge staff. In addition, most members of his cabinet and, according to The New York Times, about half the nearly 200 Republican members of Congress “have been seen at or spent money at Trump-branded properties,”spending “close to $20 million since 2015.”

 

Mr. Trump has also admitted that the American military has repeatedly paid for uniformed military flight crews to stay and play at the five-star luxury Trump Turnberry hotel in Scotland, where the Saudi royal family has invested millions partnering with Mr. Trump’s company. The Saudi partnership, along with the visits by American military flight crews, would appear to violate both emoluments clauses. The Pentagon disbursed $184,000 to Turnberry between August 2017 and July 2019 and $17 million to nearby Prestwick airport since January 2017.

 

Nor is Mr. Trump shy about his apparent violations, even promoting his properties in official pronouncements. In anticipation of the next summit meeting of the Group of 7 world leaders in 2020, he announced that his luxury golf resort in Doral, Florida, is a “great place” with “tremendous acreage” to host the meeting. “We haven’t found anything that could even come close to competing with it,” the President crowed.

 

The President, of course, is not only convinced that he is innocent of constitutional violations; he is confident he will never face trial. Given the army of lawyers he commands and the all-but-endless delays in the discovery process in federal courts, he may be right.

 

The Emoluments Clauses of the U.S. Constitution

The Foreign Emoluments Clause: Article I, Sect. 9, Clause 8: No title of nobility shall be granted by the United States: And no person holding any office of profit or trust under them shall, without the consent of the Congress, accept of any present, emolument, office, or title, of any kind whatever, from any king, prince, or foreign state.

 

The Domestic Emoluments Clause: Article II, Sect. 1, Clause 7: The President shall at stated times, receive for his services a compensation, which shall neither be increased nor diminished during the period for which he shall have been elected, and he shall not receive within that period any other emolument from the United States, or any of them.

 

Dan Rather Claims the Press and Faith Will Get Us Past Trump's Transgressions. Is He Right?

 

“I wonder if you can tell us what goes through your mind when people who love this country and believe in you say, regretfully, that you should resign or be impeached?”  CBS News’ Chief White House Correspondent, Dan Rather, was dignified and direct in his press conference questioning of President Richard Nixon in October 1973 during Watergate.

 

His poise impressed me as an observant twenty-year-old at Washington University in St. Louis forging a career path toward politics. Now sixty-six, a former U.S. Senate aide, I asked Rather last evening whether Trump-era reporters would learn from watching video of the fraught and frequent Nixon-Rather encounters.

 

MSNBC’s Katy Tur interviewed Rather at Brooklyn’s St. Joseph’s College in a Greenlight Bookstore-sponsored showcase for the paperback edition of his current New York Times bestseller -- collected essays on “What Unites Us” as Americans. The Fort Greene-based Greenlight just marked its first decade as a cultural hub for discussions of classical, mainstream, and experimental literature.

 

“Thank you for the compliment inherent in your question, but the answer is no,” Rather told me.  “There is no shortage of courage among journalists now.  There was nothing special about what I did at that time.  I just tried to ask direct questions.”

 

I stood face to face with a legend -- who paused, perhaps to move on.  Six crowd members lined up in the aisle behind me were eager to ask their own questions, but, heart-pounding, I held on to the microphone, hoping for more, since Tur and Rather had not yet dismissed me.  I sensed that humility served Rather well in public.  Then he looked me in the eye and continued.

 

“There is a need to ask the President a direct question and to ask that same question again and again if necessary, until he either answers that question or makes clear that he won’t answer it.  The necessity of a follow-up question is something most people don’t understand,” he added, “but today’s journalists are better educated, better trained and better prepared than I was, or were those of my generation.”

 

He revealed that today’s journalists face uphill battles when fact-checking the 45th President.

 

“In a large news organization, the corporate superstructure will respond to the power of the President, not to the individual reporter” when the White House Press Office complains.  It’s often done quietly behind the scenes, perhaps over a wine-filled lunch with network executives, who will then take the reporter aside in the newsroom to explain that “there’s a controversy about what you said to the President; we’d better calm this down, so don’t ask the tough questions.”

 

Rather faced this pressure when Nixon resigned his office in August 1974. Ordered, as network colleague Daniel Schorr disclosed, to “go soft on Nixon” during on-air post-speech summations, Rather lauded the “finest hour” of a President facing impeachment and implicated in a criminal conspiracy, whose televised nationwide remarks had “a touch of class and even majesty, showing Nixon’s respect and appreciation for the constitutional system that had functioned so magnificently” throughout the Watergate crisis.

 

Chief Congressional Correspondent Roger Mudd, Rather’s unsuccessful rival for the coveted CBS Evening News anchor role when icon Walter Cronkite retired, by contrast verbally tore into Nixon, who in “what I would have to think was not a very satisfactory speech took no responsibility whatsoever for the realities that brought him down.”

 

Because “there was no accounting of how we got there and why he has to leave that oval room,” Mudd added, “the American public is left to conclude that it was craven politicians in the Congress who, after a year’s consideration of this matter, collapsed in their defense of the President and thereby forced him to resign.” 

 

That primetime showing may explain why CBS anointed Rather as the sainted Cronkite’s successor while Mudd, well-respected in Washington, fled to NBC.

 

Rather’s current book refers to the 2003 Iraq War buildup to illustrate the pervasive nature of that “go soft” imperative. 

 

“In times of strong patriotic fervor,” he writes, “asking a question can be spun as unpatriotic.  And the Bush administration, with its allies in the conservative press, were not hesitant to hang a ‘bias’ sign on those seen as confrontational, or even skeptical of the story line the administration was putting out” in the aftermath of the 9/11 terrorist attacks. “It wasn’t overt but there was a feeling that we shouldn’t be making too many waves.”  

 

President Trump’s Twitter assaults against his political opponents, our diversity, and patriotic dissent likewise threaten democratic ideals that Rather holds dear.

 

“I wrote the book at a perilous time,” the author declared. “I’m mad about people in powerful positions exploiting our divisions.  What I can do is remind people that more holds us together than separates us.  I can try to spark a conversation in which we start listening to one another.”  He urged that individuals should “reach out to ‘the other’” near home, at work, in school.

 

The book’s introduction states that such earnest expressions emerged from frequent night-flight reflections throughout Rather’s career. The crossing of woven threads became a metaphor for Rather’s air travels to and from our nation’s corners and through its heartland as he stared through the windows.

 

While the House Judiciary Committee’s impeachment inquiry and the ongoing criminal investigations by federal, state and local authorities seem well-suited to President Trump’s transgressions, those accountability measures rest on Rather’s faith in democracy. The nation is resilient, he reminds us.

 

Dignified and direct as he was in the Watergate glare forty-six years before, Dan Rather onstage at St. Joseph’s drove home that uplifting belief – a welcome tonic, as one hundred concerned citizens seeking answers showed with their standing ovation.

The Long History of Activism Preceding The Hong Kong Democracy Protests

 

All this summer, Hong Kong has been consumed by protests, and there’s no end in sight. The protests started in full force back in June, mainly as a backlash against an extradition bill that would have allowed the Hong Kong government to send anyone to China for any so-called wrongdoing. Along the way, the protestors have become enraged at the police for repeatedly firing tear gas, rubber bullets, and bean-bag rounds, as well as beating people with batons. This unrest seems so unlike Hong Kong, but in fact it has been brewing for decades.

 

The unrest is a continuation of the 2014 Umbrella Movement, in which people stood up against changes to Hong Kong’s Basic Law, or mini-constitution, which was supposed to guarantee the territory 50 years of political autonomy after Britain handed Hong Kong back to China in 1997. The 2014 protests spanned two and a half months and paralyzed the main financial district as students set up tent cities in the middle of a major thoroughfare. Those protestors dispersed after the government cleared the streets and threatened to arrest anyone who stayed behind. The government also arrested and imprisoned the main leaders of the Umbrella Movement, including two professors at Hong Kong’s most prestigious universities, as well as the teenage student leader Joshua Wong.

 

But the problems in Hong Kong stem from way before 2014. Back in 1922, Hong Kong Chinese seamen demanded 40% higher wages to bring them closer to pay equity with their non-Chinese counterparts. The shipping companies—run mainly by the British—refused to pay them more, so the Chinese seamen went on strike. The Hong Kong government declared the strike illegal, but the strike still disrupted life in Hong Kong because goods to the colony were all but cut off. So 52 days into the strike, the Hong Kong government negotiated with the Seamen’s Union, and the two parties agreed to a raise of between 15% and 30%.

 

Another sea-related demonstration took place in 1966 when the Hong Kong government proposed an increase in fares for the Star Ferry, a means of transportation used by workers and professionals alike. What started out as a demonstration against the increased fares turned into a riot against the colonial government. The protestors felt that the government was out of touch with the people and the police force was too corrupt. Police used tear gas and batons on the protestors, leaving one dead and four injured. Over 200 protestors were arrested and imprisoned.  The following year, leftist riots broke out against the colonial government, resulting in 51 deaths, over 800 injuries, and almost 5,000 arrests. 

 

These previous cases of social unrest, as well as the 2014 Umbrella Movement and mass demonstrations to protest violations against the Basic Law and “one country, two systems” in 2003 and 2012, all have something in common: a government out of touch with the people and the people without a means to elect their government leader. 

 

In the years since the Handover, the disparity between rich and poor has grown to become the widest in the world. What’s more, the political system in Hong Kong is framed to maintain this balance of power. Hong Kong’s last governor, Chris Patten, tried to push through election reforms before 1997, but the system was already rigged in favor of the wealthy. Still, Hong Kong residents were promised universal suffrage to elect their chief executive and their legislators, but in the last decade Beijing has changed those terms so it can oversee which candidates are chosen to run for chief executive. Once those candidates are chosen, only 1200 Hong Kong people can cast votes. These 1200 voters are not common people; many are members of the business community. 

 

The people who have been coming out every week this summer—and now almost every day—encompass a wide cross-section of the territory, including students, the elderly, mothers, lawyers and judges, and medical professionals who feel the current government structure ignores them. Late last month, three protestors went missing after a crackdown at a subway station and many fear they were killed. News of their disappearance coincided with Carrie Lam, the Chief Executive of Hong Kong, finally announcing in early September that she would withdraw the extradition bill. But Hong Kong residents feel it’s too little, too late. 

 

Before the extradition bill was withdrawn, the protestors had five demands for the government. Now they’re asking for these four things: an independent inquiry into the rampant use of police violence this summer; the right of Hong Kong voters to directly elect their officials, including their chief executive; amnesty for all the arrested protestors; and the government’s removal of the word “riot” from all language describing the protests this summer.

 

For the last 100 years, Hong Kong has seen mass protests against the government. The difference between now and then is that Hong Kong is now a developed, first-world city and an international financial hub with a fair judicial system. But the people still aren’t able to vote for their government leaders and for that reason have little to no say in government policies. So as it stands, it won’t make much difference if Carrie Lam stays or resigns. If she resigns, Beijing will just appoint another chief executive who will uphold the current system.

             

Hong Kong has gotten out of binds before, but no one wants to make predictions as to how this will end or whether Hong Kong will ever be free from a system in which the people’s only say in political matters is through mass demonstrations. No one knows the answers. But some things seem pretty obvious. Only when young people feel confident that they have a choice in their chief executive can Hong Kong start to repair these very old wounds. The question is whether the government and its supporters will be able to part with their power.

 

 

A Family History of the Red Scare

 

Who gets to decide what it means to be an American? It's a question of some urgency these days, and one that Pulitzer Prize-winning journalist David Maraniss addresses in A Good American Family: My Father and the Red Scare as he recounts the historical experiences of his father, Elliott. The elder Maraniss was a devoted husband, kind father and dedicated newspaperman, whose patriotic bona fides came under fire during the Red Scare of the early 1950s. The book is an interesting addition to Cold War historiography, combining analysis of major features of the Red Scare and a heartfelt family story. 

 

The book opens on March 12, 1952, in room 740 of the Federal Building in Detroit, Michigan, as Elliott appears before the House Committee on Un-American Activities (HUAC), which had come to the Motor City ostensibly to root out alleged Communist subversives in the automobile industry. Elliott refused to “name names” and answer several questions posed by members of the committee. Afterwards, he was fired from his position at the Detroit Times and blacklisted from any meaningful work in journalism for several years. However, this book is not simply an account of a man wrongly accused and punished for his radicalism. Rather, Maraniss attempts to understand and reconcile his memory of his father as a good man and patriotic American with the fact that he was also a member of the American Communist Party (CPUSA) and in many ways an apologist for Joseph Stalin’s tyrannical regime.

 

The son of Jewish immigrants who fled Odessa, Ukraine, in 1890 to escape the Russian Empire’s rising anti-Semitism, Elliott grew up in Brooklyn where he attended Abraham Lincoln High School.  The school’s principal—a fascinating New York University alum who wrote his Ph.D. thesis on Baruch Spinoza—helped stoke Elliott’s idealism by encouraging him to find inspiration in Ralph Waldo Emerson’s “heart-stirring, untraditional, iconoclastic words about initiative, conformity, consistency, truth-telling, prayer, and independence.”  After graduating in 1936, Elliott followed a fellow Lincoln High alum, Arthur Miller, to the University of Michigan where he wrote for the school’s paper, the Michigan Daily, and became active in radical politics along with his future wife, Mary Cummins.  Maraniss describes his parents as young idealists: “They loved the promise of America, were disoriented by the economic collapse of the U.S. economy during the Depression, were seeking answers to the chaos of the world, and at the same time wanted to believe in a virtuous, peace-seeking, equality-minded Soviet Union.”   While he is fulsome in his praise of his parents’ idealism, Maraniss simply cannot understand how they could have also supported the Soviet system: “They thought they were working toward a true and open American democracy even as they were rationalizing the actions of what was in fact a ruthlessly totalitarian foreign power.”  

 

The CPUSA’s craven servility to Stalin’s agenda in the 1930s is well known. American communists ignored, denied, or attempted to justify Stalin’s horrific crimes, such as the forced famine in Ukraine and the Great Purges. In foreign policy, the CPUSA adhered to the Comintern-led Popular Front strategy, by which Stalin ordered all communist parties around the world to work within any government to oppose Mussolini and Hitler’s aggression, an approach Elliott enthusiastically supported. When the party abandoned its stated anti-fascist policy and defended the Nazi-Soviet Pact of 1939, many American communists quit the party. Elliott did not. He embraced the new Stalinist line and advocated American neutrality in World War II. He even defended the 1939 Soviet invasion of Finland that crushed the Scandinavian democracy. To his credit, Maraniss does not hide his father’s staggering willingness to justify Stalin’s machinations, and quotes an editorial Elliott co-authored describing the war as a “clash of rival imperialisms.” In a later editorial, Elliott added: “This is not a war against fascism, it is not a people’s war and does not attack the vital causes of war.” Trying to explain the inexplicable, Maraniss simply says: “… he was stubborn with ignorance.”

 

The Japanese attack on Pearl Harbor apparently changed his mind. Two weeks after the attack, Elliott enthusiastically volunteered to serve in the U.S. Army, which Maraniss contends demonstrates his father’s patriotism. Though he concedes Elliott’s commitment to the war effort came only after Hitler launched Operation Barbarossa against the Soviet Union in June 1941, he defends his father’s motivations:  “It was not that my father felt more strongly about the Soviet Union than his own country.  In all his previous writings, he showed a deep belief in America and the American promise.  He was a patriot in his own way.”  Unfortunately, he provides no insights into Elliott’s reaction to Barbarossa or the way he rationalized yet another shift in his views on the fascist challenge. 

 

Elliott’s military service got off to a rocky start.  The Military Intelligence Division of the War Department investigated him and concluded that he was “communistic” and potentially “disloyal.” (This was hardly a rash assessment. Subsequent revelations from the Venona decryptions and Soviet archives demonstrated the CPUSA’s many ties to Soviet intelligence, a fact Maraniss fails to note.)  Elliott did, however, serve with distinction, commanding an African American salvage unit in the segregated army, which suited his commitment to racial equality. Throughout the chapter on World War II, we hear Elliott’s views on the war and on the United States through the many letters he sent home to Mary and their newborn son:  “Jimmie [the author’s older brother], you know that I love you and your Mother very much.  I wouldn’t be a very good father, nor much of a man, if I didn’t stand up and fight against those Japs and Nazis.  That is why I am in the army and that is why I am going far away for a long time. Your mother, who is not only very beautiful but also very brave and very intelligent, understands all this.  And I know that you will understand too, Jimmie.” 

 

Returning to the U.S. after the war, Elliott resumed his work for the CPUSA, secretly writing and editing two of the party’s periodicals while working for the Detroit Times, a breach of journalistic ethics. His commitment to the CPUSA was confirmed by Beatrice Baldwin, an FBI informant who provided Elliott’s name to HUAC and landed him in front of the committee in 1952.  Maraniss devotes considerable attention to the backgrounds of HUAC’s members, especially Chairman John Stephens Wood (D—GA), a committed segregationist who had flirted with membership in the KKK.  Maraniss explores Wood’s shady past not to justify his father’s actions, but to raise the intriguing question about who gets to decide who is American and who is un-American. Although he never provides a clear answer, Maraniss clearly suggests that Wood and the other members of HUAC were hardly representative of American virtues.  

 

Elliott’s testimony (which Maraniss includes in full) was not especially dramatic.  The committee peppered him with questions that he often refused to answer, citing his Fifth Amendment rights, a tacit admission of guilt in the eyes of many Americans during the Cold War.  He was immediately fired from the Detroit Times and was unable to find work as a journalist for five years, supporting his family by taking whatever odd jobs he could find.  

 

In 1957, as fears of the Red Scare dissipated, Elliott moved his family to Madison, Wisconsin, where he worked as a reporter and editor for the Capital Times for many years, earning the respect and admiration of his colleagues, family, and friends for his work. Maraniss proudly shares the anecdote of Ben Bradlee, the legendary editor of the Washington Post, saying:  “There’s Elliott Maraniss, a great editor.” As for his father’s ideology, Maraniss explains, “His politics changed, from radical to classic liberal, but not his values or belief in America—a generous spirit that he had carried with him since his days at Abraham Lincoln High School and that he expressed so powerfully in his letters to my mother during the war.”  

 

Throughout the book, Maraniss struggles to explain just how his father could have been so deeply committed to admirable causes like racial and economic justice, free and open inquiry, and peaceful international relations while simultaneously following the dictates of Stalin.  He speculates that loyalty to friends and family, naiveté, and unbridled idealism, could have been factors, but ultimately he finds no satisfactory answer, in large part because his father never explained why he believed and acted as he did.  In the end, Maraniss and his readers are left wondering how a man of such vision could have been so blind for so long. 

 

Thomas Paine’s Common Sense for Our Time

 

Thomas Paine’s Common Sense, the pamphlet of the American Revolution, was so influential in rallying people to support independence because it so clearly stated what most Americans believed. In that same tradition, here is a reflection on American politics that may be useful now for our own summer soldiers and sunshine patriots. It is particularly fitting because some of the complaints that the patriots of that era registered against the British Parliament can be made against our own politicians. Since in some ways we suffer from similar fears, we may also need to be reminded that, according to Thomas Paine, “though the flame of liberty may sometimes cease to shine, the coal can never expire.”

 

The patriots who rebelled against British rule were not simply nationalists with a dislike of foreigners. Some European revolutionaries in the twentieth century were like that, but that was not the original American tradition. Our early patriots wanted justice, not an ethnic ghetto. The colonists thought the British Parliament, and the King who headed the patronage machine that ran Parliament, were not interested in their affairs--a common colonial complaint, then and now. The American colonists, no longer in need of British protection against intrusions from the French in Canada, now wanted more from British politics than increased taxation without any input from them on how much it would be and how it would be spent. They thought British politics was corrupt because the political parties institutionalized the selfish interests of particular groups in their own society. In eighteenth-century British politics, the Whigs represented the interests of the big business class, somewhat like our own Republicans, and the Tories represented the interests of those with hereditary wealth who formed, to a large extent, the leisure class and the cultural elite, somewhat like our own Democrats. In both cases, however, these were leaders without a political base, since the mass of people could not vote. No one actually represented the great mass of working-class people. Nevertheless, both parties claimed to represent everyone’s interests.

 

The ideal in America after the Revolution, which was also the ideal of the British public then though not well practiced, was that the notables of the community would represent their communities in politics. Thus, there would be no political parties, no professional politicians, no patronage machines, and no passing of laws that served only special interests.  Though there would be state militias, there would be no large standing army ready for use in unnecessary wars.  It was truly a utopian vision.

 

While the right to vote has been greatly expanded since the eighteenth century in both Britain and America, politicians in both countries still treat politics as a profession--the profession of running for office. For at least some of them, the major talent is not developing policy but running election campaigns. As a result, many of them have become very dependent on others, including lobbyists, who supply them with policy ideas.

 

Much of modern American political strategy, whether Democratic or Republican, depends on a disinterested public.  Many politicians rely on this apathy by communicating in depth with the public only as a last resort, while developing their policy ideas by continually communicating with lobbyists.  Poor civic education for the mass of the population, and mass media that emphasize entertainment even when it comes to covering political campaigns, contribute to the apathy of the public.  As part of the policy process followed by many politicians, triangulation to achieve a compromise between interest groups often results in merely finding a mid-point between the desires of the one percent and everyone else, not the golden mean of Aristotelian ethics. 

 

Let’s look at an extreme example from our political history. After the American Civil War, the moralists of the Republican Party in Lincoln’s tradition (the mugwumps) broke ranks with the party hacks, but the party hacks won. In fact, a major reason early Republicans were not very good at creating laws to foster common interests between the poor whites and the poor blacks of the South during Reconstruction was that so many of the political hacks in Congress were more interested in representing the business lobbies primarily concerned with money-making investments in the South. The Democrats, who had abandoned support of slavery, were no better, because they too were considered a party of professional politicians with self-serving interests. There were politicians with more noble interests in both parties, but their influence tended to be outweighed by that of the party hacks.

 

What modern political leaders, both Democratic and Republican, lack is the courage of their convictions, unlike the Founding Fathers and more contemporary leaders like Andrew Jackson, Abraham Lincoln, and Franklin D. Roosevelt. Like all people they had their faults, but at least they had some political courage. This sets them apart from those presidents who entered politics for the perks of office and for the money-making opportunities it provided.

 

In contemporary politics, neither party has offered any significant plan for reforming our economic system, even to the degree achieved by the Progressives at the end of the nineteenth century, who briefly fostered reform in both the Democratic and the Republican parties. Things may change with the 2020 presidential election, but only time will tell. Until now, neither party has offered any comprehensive plan for an economic system in which elites are expected to prove themselves worthy of their positions of wealth and power while everyone else receives the leftovers of the economy. Up to now, those with wealth and power have continued to use their positions to remain in power.

 

Democracy should include citizen input in setting political agendas and choosing candidates whose platforms they legitimize through their vote. Citizens should also have some influence on the oversight of these elected officials, making sure they keep their promises and don’t engage in even more serious ethical violations. From the beginning, at least informally--since it took time for the right to vote to become broadly based--this has been the American ideal. Now many people are disheartened and don’t bother to vote, which many professional politicians find quite satisfactory for their purposes. The reasons include gerrymandering, which all but guarantees the success of one party or another in an oddly shaped district; a sense of hopelessness born of citizen apathy, which tends to guarantee the success of political machines; and the failure of the mass media and communal organizations to inform the public about what policy options are available and how they are relevant to people’s lives. What is needed is in-depth discussion, not polling with pre-selected questions that fails to communicate with the public in depth.

 

These American political ideals take for granted that the basis for political legitimacy in a democracy should be concern for the res publica, the common good from which the word republic derives.  Lawmakers should always be concerned with the consequences of the laws they pass, including their side effects. 

 

We have forgotten the true nature of democracy: a state where government reflects the will of the people before, during, and after elections. A democracy where politics is the prerogative of elites comes easily, which is why effective democracy has been so rare and monarchy so common. The original ideal of American politics--both during the Federalist era, when leaders were expected to have some aristocratic qualities patterned after British gentlemen, and in later, more democratic times--was that politicians should be the notables of the community, representing their poorer neighbors, not professional politicians out to make money for themselves.

 

I still wonder why union leaders, late in their careers, almost never run for political office. If they are not representatives of the community, then few fit that category. One reason such potential leaders know they will find little support from their fellow citizens is the mass of apathetic citizens who treat politics as a source of entertainment rather than an opportunity to investigate policy. Additionally, the sheer anonymity of American society compared to that of the Founding Fathers’ generation makes in-depth communal discussion that much more difficult, the simplifications offered by the mass media notwithstanding. The American political system is out of sync with the hopes and ideals of the generation that gave us the American Constitution. Thomas Paine would not be pleased, though he might have said, if by some quirk he could foretell the future, “I told you so.”

The Separation of Church and State Is About Freedom for Religion, Not Freedom from Religion

 

The United States has always been a largely Protestant country so it is no surprise that many Americans know little about the history of the Catholic Church. For example, did you know that the term “Roman Catholic” is rarely used in most Catholic countries, since the term was originally a Protestant slur? Or did you know that “Catholic” actually means universal, all-inclusive? In fact, the term implies a level of internationalism, even globalism that Protestants have long been uncomfortable with.

 

The universality of the Church is an important part of Catholic identity, and that universality has long scared the secular state, including Catholic states like France. The separation of church and state is often reinterpreted by secularists as freedom from religion, when it has really always been primarily about religion’s freedom from state control, i.e., the Catholic Church’s fight for independence from secular control.

 

The state, whether secular or simply seeking to control religion, has long tried to blur the line between the spiritual and the temporal. In Catholic history, this goes back to the Council of Nicaea in 325 AD, when Emperor Constantine used his fame from legalizing Christianity to preside over the first formulation of the core doctrines of what would become Catholic and Eastern Orthodox Christianity. Many clerics died because soldiers of the Byzantine Empire attempted to force them to give religious power to the State.

 

In fact, the breach between Catholicism and Eastern Orthodoxy was more about the separation of Church and state than about cultural differences, contrary to what is commonly taught. After the Byzantine (Eastern Roman) Empire recaptured Italy in the sixth century from the Germanic invaders who had seized it after the fall of the Western Roman Empire, its emperors tried to force the Pope, who in the interim had become the chief spiritual official in the western part of the old empire, to be as obsequious to secular power as the Patriarch of Constantinople.

 

Pope Martin I was actually abducted by high-ranking Byzantine officials in 653 because he refused to kowtow to secular power. Clerics were killed and the public order disrupted because any separation of Church and state was anathema to the secular power. Local resistance to the Empire’s demands awakened a primordial form of identity that became the core of Catholic culture, particularly the identity of the Italian Catholics, my ancestors.

 

In the end, the Empire forced Pope Stephen II in the 750s to choose between being a stooge of what was increasingly viewed as a foreign power and being the vassal of a new power, the Franks. The Pope chose the latter, since the Franks, unlike the Byzantines, recognized his autonomy and spiritual independence to some extent. Of course, the Byzantines removed all the land under their effective control in Southern Italy and Sicily from his spiritual jurisdiction because he had defied their absolutist secular control. And people say that the State has so much to fear from the Church!

 

The Franks proved to be a little less controlling than the Byzantines, but not by much. The Franks had the Pope declare their king to be Emperor, equal to the Byzantine one, which was not unwelcome in the circumstances; but after this, it became clear that the Emperor considered himself the overlord of the Pope. When the Frankish Empire became fragmented, the various claimants to the imperial title put even more pressure on the Popes. One Pope, Formosus, was even exhumed after his death and put on trial on trumped-up charges by the supporters of one of these later Frankish rivals, in what became known as the infamous Cadaver Synod of 897.

 

The Church’s most distinctive rules, like clerical celibacy, were created in an attempt to untangle the Church from secular power and temporal concerns. To stop bishops from passing their positions on to their sons and to prevent a Pope from founding a dynasty, clerical celibacy was firmly established and the Church created a Conclave to elect the Pope. Further, the Church worried that local counts would install family members as Pope and important bishops and pass those offices on generationally. 

 

The Church acquired territory not as a planned contrivance, but because it was forced upon the Church by necessity. After the Western Roman Empire collapsed and the Byzantines reconquered Italy, the Byzantine Empire was effectively ignoring the day-to-day governance of central Italy by the 7th century. This forced the Pope to take up the effective governance of the area around Rome. Under pressure from groups like the Lombards and Franks, it became necessary in the 8th century to acquire territory in order to raise and manage the resources that would create some autonomy from the direct authority of powerful secular rulers. Today, the Pope controls only a small piece of land called Vatican City, but having territorial sovereignty even at this small scale is essential due to the constant threat of secular control.

 

In the United States, there are groups like Americans United for Separation of Church and State, led in large part by Protestant (Unitarian) clerics, that claim the church is a threat to the secular order. Yet history shows that the State is more of a threat to the Church than vice versa. In fact, James Madison wrote the separation of church and state into our founding arrangements in order to protect Catholics from discrimination, because his slave-owning friend from Maryland could not vote or hold office in his own state because of his religion.

 

So, the next time someone tells you that the State is powerless without legalized discrimination against religion, such as we have in New Jersey (where, for example, the State can give money to rebuild any building damaged in a natural disaster except a religious building), tell them about the actual history of the separation of Church and State. There are secularists who want to purge the public square of religious values in order to pave the way for the exclusive purview of their own set of secular values, values directly contradicted by religious ones. It is these secularists who seek to monopolize the marketplace of ideas, not the church, or at least not the Church, which is actually still a minority religion in terms of numbers and power. The separation of Church and state is really about a pluralism of values and is in fact quite liberal.

Why Stephen Colbert's Late Night Monologue Effectively Recapped the Latest Democratic Debates

Steve Hochstadt is a professor of history emeritus at Illinois College, who blogs for HNN and LAProgressive, and writes about Jewish refugees in Shanghai.

 

 

After greeting the crowd at Texas Southern University, Julián Castro opened the Democratic debate last week with this important insight: “There will be life after Donald Trump. But the truth is that our problems didn’t start just with Donald Trump, and we won’t solve them by embracing old ideas.” All the Democratic candidates agree with Castro’s analysis of the past: the American problems that need solving have been developing for a long time. They agree that we will go on, perhaps to a bright future, after Trump is gone. The fundamental disagreements among the candidates center on Castro’s rejection of “old ideas”: how much progressive change is the right amount in this election?

 

Joe Biden represents the most moderate positions, although his ideas are hardly old. In fact, he has had to repudiate many of his old ideas during this campaign: working with segregationists in Congress was a good thing; Obamacare as it was enacted was good enough; harsh sentencing did less to control crime than to put a generation of mostly African Americans behind bars. Politicians from the 1960s have had to change many fundamental ideas, but are very bad at admitting that positions they took long ago are not right for today.

 

Castro, and many of the other candidates who appeared at the 3rd debate, as well as others who still believe they have a chance, criticize Biden, hoping to peel off the moderate Democratic voters who support him. On health care, which has taken center stage as the crucial issue of 2020, Castro magnified a minor difference with Biden, but took what has become the moderate position, arguing for the retention of private health insurance plans: “If they choose to hold on to strong, solid private health insurance, I believe they should be able to do.” He claimed to be fulfilling the legacy of Barack Obama, a key clue that he stands with the more moderate candidates.

 

At the other end of the field, Elizabeth Warren and Bernie Sanders want to eliminate private health plans entirely in favor of Medicare for All. Sanders and Warren hold the private insurance industry up for ridicule as siphoning off billions of dollars in profit. The differences between them lie less in policy than in approach: Warren has plans for structural reform in favor of the neglected little guy, while Sanders envisions a revolution against the oligarchy.

 

Many of the more moderate Democratic candidates have already fallen by the wayside: John Hickenlooper, Steve Bullock, Seth Moulton, Kirsten Gillibrand, Bill de Blasio, John Delaney. The latest poll, like all of last week’s polls, shows Biden in the lead, but the very progressive Sanders and Warren combined have significantly more support. Among the rest, only Kamala Harris, Pete Buttigieg, and Beto O’Rourke consistently get more than 2%. The field is thankfully shrinking and will gradually become more manageable. The election is still nearly 14 months away.

 

As a prelude to the actual debate, ABC chose a sentence from each candidate’s earlier speeches to play in the order that the candidates were ranked. I found it notable that all these excerpts except Biden’s (“I will be a president for every American.”) talked about “we”. Who knows how that came about? Did someone pick these clips to demonstrate the fundamental unity among all Democratic candidates? I don’t know if we’ll ever find out.

 

That message of unity is my “takeaway” from the campaign so far. The cohesion and shared values are hard to see, though. The nature of a campaign is that everyone is competing with everyone, and against everyone. The media compulsion to broadcast conflict shapes the whole process, for candidates and for us all. That was apparent in the moderators’ questions: instead of asking “what do you believe?” or “what would you do?”, they demanded discussion of disagreements.

 

To see how the media shapes our impressions of the campaign, it is instructive to compare two attempts to summarize the debate in a few clips: one by ABC News, as fact, and one by Stephen Colbert, for laughs.

 

Right after the debate, ABC produced 4 minutes of “Moments That Mattered”. The selection was a serious exercise in media repackaging. Every heated exchange was included: Biden and Sanders arguing about health care; Castro castigating Biden about the small differences in their health care plans and about his memory, along with all of the other conflicts involving Castro; Klobuchar versus Sanders about health care. Harris was shown criticizing Trump; Booker got to talk only about his early electoral failures; Buttigieg, only to complain about the emphasis on conflict. The more extreme proposals were highlighted: Yang’s philanthropic offer of $1,000 a month to some needy families; Beto O’Rourke saying he would take away assault rifles. Elizabeth Warren apparently did not matter to ABC and was not shown at all, because she spent her time explaining rather than attacking.

 

Stephen Colbert’s monologue later that night tells a different story, not only because he is much funnier. For 12 minutes, he used excerpts of what America had just seen to get laugh after laugh. Colbert’s principles were clear: portray every candidate truthfully, and then make fun. He made fun of Sanders’ voice, Biden’s age, Harris’s vagueness about what to call the unmasked little Wizard of Oz, and Klobuchar’s movie reference.

 

Colbert began by talking about “fireworks” and gleefully displayed a few moments of real one-on-one conflict. But by the time Colbert wound up, most of the candidates had had their say about something important, even when he fantasized something funny in response. Klobuchar expressed the “existential threat” to our environment. Beto told the world he would take away assault rifles. Bernie said that Medicare for All would cost our society much less than we’re spending now. Yang made his remarkable philanthropic offer. Harris showed off her plan for how to deal with Trump – laugh at him. Biden emphasized his link to Obama. Warren got a brief moment of real American family à la Norman Rockwell, which is a staple of her campaign. Buttigieg summarized a universal but ever-ignored wisdom about our never-ending wars – don’t start them. Castro said everybody would be covered under his health plan. Only Booker was left out.

 

Age is playing a surprising role in this campaign. It certainly matters, but it’s hard to say how. Laughing at old men ran through lots of Colbert’s jokes about Biden and Sanders. The clip of Castro and Biden interrupting each other was about age. But Buttigieg, the youngest candidate in the race, said nothing disparaging about the older candidates.

 

Warren is 70, but she gets left out of the public laughter about the elderly, maybe because her age is not apparent in what she does. It’s notable, too, that everybody finds her hard to criticize. That may be a hidden advantage for her campaign.

 

Maybe a difference in purpose led to these differences in reportage. Although Trump incessantly whines about the mainstream networks as “fake news” trying to defeat him, ABC was much more interested in promoting conflict as significant: who’s ahead, who’s desperate, who is nasty about whom. All the networks and all the print media try hard not to put themselves on one side or the other, even as they pick and choose what to tell us.

 

Colbert was clear about his purpose in his monologue. Toward the beginning, he called Trump a non-violent criminal. At the end, he said: “What did we get? . . . hopefully, one person who can beat Donald Trump.”

 

The news isn’t fake, but it is spun; not false, but often misleading about important things. Colbert tells obviously fake stories, but gives us a better picture of reality. Unfortunately, this election is not a laughing matter.

Roundup Top 10!  

The State Department is weak and getting weaker. That puts us all at risk.

by Mark Edwards

We need a robust diplomatic engine at the heart of our foreign policy.

 

When Adding New States Helped the Republicans

by Heather Cox Richardson

Putting new stars on the U.S. flag has always been political. But D.C. statehood is a modest partisan ploy compared with the mass admission of underpopulated western territories—which boosts the GOP even 130 years later.

 

 

The historical profession's greatest modern scandal, two decades later

by Bill Black

Historians are criticized for not engaging with the public--and then criticized for how they engage when they do. Looming in the background is the Michael Bellesiles controversy.

 

 

The populist rewriting of Polish history is a warning to us all

by Estera Flieger

Thirty years after communism ended, Poland’s past is again being manipulated for political motives, this time at a museum in Gdańsk.

 

 

Why Democrats can’t speak for the ‘silent majority’

by Seth Blumenthal

President Trump is exactly the kind of champion the voting bloc wants.

 

 

Joe Biden isn’t the only Democrat who has blamed black America for its problems

by Marcia Chatelain

Well-meaning liberals have long failed to recognize their own role in systems of oppression.

 

 

The History of Citizenship Day Is a Reminder That Being an American Has Always Been Complicated

by S. Deborah Kang

“We welcome you,” Truman declared, “not to a narrow nationalism but to a great community based on a set of universal ideals.”

 

 

Ending the Afghan War Won’t End the Killing

by Stephanie Savell

Since 2015, casualties from explosive remnants of war and abandoned IEDs have been rising rapidly.

 

 

When Texas was the national leader in gun control

by Brennan Gardner Rivas

How the land of gunslinger mythology regulated weapons to reduce violence

 

 

There Are No Nostalgic Nazi Memorials

by Susan Neiman

Americans could learn from how drastically German society has moved away from the nadir of its history.

 

 

 

Two re-namings, two defaults. How and how not to use history and public memory at Yale

by Jim Sleeper

“The real work for a place at Yale is not about the name on the building. It’s about a deep and substantive commitment to being honest about power, structural systems of privilege and their perpetuation.”

Gloria Steinem, the Women’s Movement and a Big Question

 

Gloria, A Life, the new play by Emily Mann about women’s rights activist Gloria Steinem, opened last weekend at the McCarter Theater, in Princeton, leaving a big, big question.

 

The play is not a biography of Ms. Steinem. It is not a drama, either. It is, well, an “experience.” Playwright Mann has put together a story in which Ms. Steinem, she of the famous aviator glasses, serves as the narrator for a tale about dramatic improvements in the lives of women since the late 1960s. The story of feminism is powerful. The “experience” is not only wonderful but admirable.

 

The big question, though, is where is Gloria Steinem?

 

You find out a little bit about her tentative relationship with her mentally unstable mother, her poor and struggling father, a bit about the founding of New York Magazine and Ms. Magazine, and her roles in both, and a whole lot about women’s leaders, such as Bella Abzug. You discover very little about Gloria, though. She is one of the most famous women in America, in the history of America, and yet the play does not tell you how she became so well-known and influential.

 

The story, told mostly by Ms. Steinem, played nobly by Mary McDonnell, puts her here and there in plot turns, and in the middle of hundreds of enormous photos projected onto the stage walls, but it never delves into what makes her tick. She is a great writer. She is a good speaker. She is flamboyant. She knows lots of important people. But how did all of that jell together to make her so famous? You do not find out and that is a shame.

 

There is much information missing, too. She worked for a research company connected to the CIA, but that is not in the story.  There were a lot of people in the feminist movement who did not like her and accused her of using the movement to enhance her own glamorous image.  She was very involved in politics and was a delegate to one Democratic convention, but little is made of that. She had many critics of her liberal views, and that is missing, too. There are large gaps of time in her life, such as her post-college days, that are simply unaccounted for.

 

The play begins with Gloria graduating from Smith College in 1956 and going to work in New York City as a freelance journalist, in a field run, at the time, nearly completely by men. She has a hard time landing assignments until, by chance, she works for a while as a Playboy bunny in 1964 and writes a story about it. That gives her some notoriety and propels her into the freelance writing field.

 

She winds up covering women’s rights protests and abortion marches. Then she blends into them, becoming a leader and speaker.

 

You find out little about her personal life. She had a sister, but the sister is not in the play. She had relationships with lots of men, she says, but did not marry until she was 66, to David Bale, the father of actor Christian Bale, and then her husband died just three years later. Why no other marriages or deep relationships? Did she have hobbies? Pets? Favorite books? Whom did she admire?

 

While all of that is a bit disappointing, the “experience” is terrific. Throughout the play, Gloria meets and works with numerous feminist leaders, such as Ms. Abzug (remember her and those fabulous hats?). Their story is, of course, a taut drama about American history (a women’s history going back to the 1840s). That story, of the marches and rallies, magazines, women’s colleges, court decisions and enormous national publicity, makes for a triumphant American tale, and Ms. Mann writes it well. The “experience” makes the play worth seeing.

 

Mann, who re-staged the play (Diane Paulus directed it last year in New York), gets other fine performances (in addition to Ms. McDonnell’s) from Patrene Murray, Brenda Withers, Gabrielle Beckford, Mierka Girten, Erie Stone and Eunice Wong.

 

Young people, especially, should see the play. Women did not get where they are today by writing letters to the editor, baby – they marched in the streets and shouted from the mountaintops for it.

 

PRODUCTION: The play is produced by the McCarter Theater in association with the American Repertory Theater at Harvard University and by special arrangement with Daryl Roth. Scenic Design: Amy C. Rubin, Costumes: Jessica Jahn, Sound: Robert Kaplowitz and Andrea Allmond, Lighting: Jason Lyons. The play is re-staged by Ms. Mann. It runs through October 6.

     

Autocrats do not need a majority to destroy democracy. A divided opposition helps them.

As we witnessed in the third Democratic primary debate last week, Democratic presidential candidates are struggling to distinguish themselves from their party rivals and competing for endorsements. This horizontal vision, fixed on intraparty disagreements, diverts their gaze from the peril we face as Donald Trump dismantles the norms that have guided our political life since 1776.  

 

Whatever their differences, Democratic candidates must agree on broad principles related to key issues such as immigration, health care, and the growing wealth gap.  A general consensus would leave plenty of room for healthy debates about implementation, but failure to emphasize shared ideals on two or three major questions will blunt Democrats’ offensive against a candidate whose campaign is based on slander and fear. 

 

Although we Americans like to think our nation is exceptional, the choices made by defenders of democracy in 1922 Italy and 1933 Germany are worth revisiting. The parallels are not perfect.  Our two-party tradition sets us apart from Germany and Italy, each of which had five major parties. But legislative gridlock and voter cynicism today are reminiscent of conditions that marked the last months of democracy in Italy and Germany. The threat to our democracy does not command militias, but hate groups incite violence. Our economy is stable, but many Americans feel left behind.  Most worrying, Republicans march in lockstep behind Donald Trump, while Democrats fragment – like opponents of authoritarianism in interwar Europe. Of course, we can’t know whether different strategies in Italy and Germany would have preserved democracy, but since hindsight is 20/20, let’s use it.  

 

In economically devastated post-World War One Italy, labor unions and peasant leagues clashed violently with Mussolini’s Black Shirt militias. Voters in 1921 gave Socialist and Christian Democratic candidates almost half of the vote – compared to seven percent for the Fascist Party. Fearing a revolution from the left, the King used his constitutional power to appoint Mussolini as prime minister in 1922. Mussolini manipulated the election of 1924 to create a Fascist majority. An exposé of Fascist electoral interference by the Socialist deputy Giacomo Matteotti touched off massive demonstrations. After Fascist thugs murdered Matteotti, 150 deputies protested by walking out of the Chamber of Deputies. After Mussolini expelled them and won the King’s approval, erstwhile critics in the Chamber calculated that opposition to “il Duce” would be futile.  Superficially, the trappings of democracy remained.  

 

Fast forward to the German elections of 1932, when Marxist parties won 38 percent of the vote, compared to the Nazis’ 33 percent. Instead of forming an anti-Nazi phalanx, Communists and Social Democrats fought about tactics and theory. In January 1933, the President appointed Adolf Hitler as chancellor. On February 27, after arsonists set the Reichstag on fire, Hitler called it the beginning of a communist revolution. Despite massive repression of leftist rivals, Nazi Party candidates won only 43 percent in the election of March 5. Two weeks later, Catholic Center Party legislators joined conservatives and moderates in granting Hitler four years of dictatorial power.  As in Italy, the handover was technically legal. 

 

Mussolini and Hitler promised to restore national glory and depicted themselves as the last defense against radical socialism. Neither appealed to racism at first, not even Hitler, who muted his virulent anti-Semitism in public to attract middle-class voters during the late 1920s. New followers told themselves he had mellowed, but his base and his Jewish targets never doubted his true intentions.  

 

President Trump has violated many of the norms and laws on which our democracy depends.  He circumvents Congress by declaring the “crisis” at the border a national emergency. He orders his staff to ignore subpoenas. He uses his presidential status to enhance his family’s wealth.  He demands absolute loyalty from his appointees. He treats the truth with a despot’s contempt and jokes with Vladimir Putin about his “fake news” problem. He boasts about his misogyny and spews racist insults.

 

Trump is not a despot.  But neither were Mussolini and Hitler early on.  No black or brown shirts march in our streets.  President Trump’s enablers wear white shirts and black robes.  They are unified.  Democrats are not.  

How the Kikotan Massacre Prepared the Ground for the Arrival of the First Africans in 1619

A painting depicting the construction of a fort at Jamestown, near Point Comfort, from the National Park Service

 

 

Reckoning with the past is never easy. We’ve seen this in the United States and the United Kingdom this summer, as British universities grapple with their connections to the wealth and human suffering resulting from transatlantic enslavement, and Americans debate the historical meaning of the 400th anniversary of the arrival of the first enslaved Africans in English North America. 

 

Commemorating the 400th anniversary of the arrival of what the English colonizer John Rolfe described as the “20 and odd Negroes” (a number that was actually closer to 30) has dominated social media and the summer’s news cycle. But there’s an aspect of this commemorative activity that hasn’t received much attention. I refer specifically to the violence that occurred at Point Comfort less than a decade before the slave ship White Lion made anchor in August 1619. On that spot, a bloody event worthy of historical introspection took place: the massacre of the Kikotan Indians. That massacre is important because it made it possible for the English to take Native lands and build Fort Henry and Fort Charles. The Kikotan massacre prepared the ground for the arrival of the first Africans in Virginia.

  

The history of English North America and what became the United States is a complex and often-violent story involving the enslavement of African peoples and the territorial dispossession and genocide of Native American communities. This is an uncomfortable history, and neither the British nor Americans have fully reconciled it with the contemporary economic, political, and social dimensions of their respective societies.

 

Most Americans don’t like to think about genocide as a foundational part of US history, while the English certainly don’t view their forebears as capable of perpetrating the mass killing of indigenous people. However, historian Jeffrey Ostler makes a compelling case for how genocide is woven into the fabric of North American history in his most recent book, Surviving Genocide. In Virginia, English colonization sparked dramatic population declines among Native American communities. While Virginia Indians numbered about 50,000 in 1607, by the early twentieth century, only a little over 2,000 remained.

 

But did the English initiate a genocide against Virginia’s Indian people? To answer this question it’s important to define genocide. The United Nations defined genocide in 1948 as “acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group.” Genocide can involve killing members of a group, causing “serious bodily or mental harm,” deliberately creating conditions designed to physically destroy a group “in whole or in part,” imposing measures that prevent births, and forcibly transferring children out of one group and to another. 

 

This definition describes not only the “founding” of Virginia but the course of US history and its relationship to Native America. Importantly, the genocide of Virginia Indians didn’t occur within a discrete time period and under well-established bureaucratic conditions; genocide in Virginia unfolded slowly over a period of decades.

 

The opening act in the tragedy of Native land loss, attacks on indigenous culture and language, the separation of children from families, and the physical destruction of entire communities began in 1607, when English ships passed through the mouth of the Chesapeake Bay. The English aboard those vessels passed lands belonging to the Accomac, Nansemond, Warraskoyaak, and Kikotan (or Kecoughtan) people. These weren’t the first European ships the region’s Native people had seen, but the English were different: they were determined to stay. This wasn’t good news for the Kikotan. They’d once numbered as many as 1,000, but by 1608 the English estimated that the Kikotan had as few as 20 fighting men and perhaps a total population of no more than 60. The Kikotan had been reduced to a small community vulnerable to external attacks. Joining the Powhatan Chiefdom, albeit by force, under the leadership of Wahunsenacawh (Chief Powhatan) offered a degree of protection from both European and Native American violence and captive raids.

 

In the spring of 1608, though, the English probably didn’t seem like much of a threat to the Kikotans because the colonists were starving. Although the Kikotans and other Native communities provided them with small parcels of food, the English were on the verge of abandoning Jamestown. The colonizers were saved, however, by the arrival of supply vessels from England.

 

The English recognized they couldn’t sustain a colony that relied on supply ships from England. They needed to make changes. One of those changes was establishing trade relationships with Virginia Indians. An Englishman by the name of John Smith helped to initiate trade talks. Smith was an ambitious man determined to make a name for himself in Virginia. Unfortunately for Smith, the Kikotan “scorned” his advances to engage in trade talks, allegedly mocking him for his inability to feed himself. Smith wasn’t amused. He immediately let “fly his muskets,” whereupon the Kecoughtan “fled into the woods.”

 

Such incidents seem small and petty when viewed in isolation. However, these types of encounters grew in regularity and fueled mutual mistrust along Virginia’s Anglo-Indian frontier. 

 

That mistrust grew between 1609 and 1611, when the English made plans to build forts and establish homesteads on indigenous lands at the mouth of the Chesapeake Bay. The Kikotan needed only to look across the bay to see how English homesteads had started to displace Nansemond families. English intentions were clear. Slowly, methodically, a genocide was unfolding.

 

Two factors overlapped to result in the genocide of the Kikotan people. First, English colonizers began establishing homesteads on Kikotan lands. Just as they did among the Nansemond, English land use practices were designed to sever indigenous people from their crops, sacred spaces, and homes. 

 

Second, violence played an important role in eliminating the remaining Kikotan people from their homelands at the mouth of Chesapeake Bay. In 1610, the English moved aggressively against the Kikotans. This sudden English assertiveness was in response to Kikotans aligning with neighboring indigenous tribes in opposition to the construction of English forts – including the fort that witnessed the arrival of the first Africans in Virginia. By early July, 1610, Sir Thomas Gates, the governor of Virginia, was "desyreous for to be Revendged upon the Indyans att Kekowhatan" for their opposition to English colonial expansion.

 

Colonial officials initiated a plan to “drive” the remaining “savages” from the land. The violence directed against the Kikotan people in July 1610 became known as the Kikotan massacre. The exact number of Kikotan deaths is unknown. Those who did survive the massacre fled their homelands and took refuge among neighboring indigenous communities. The Kikotan’s connection to their homeland was lost.  

 

For the Kikotans, the physical and psychological toll of the 1610 massacre was compounded by English actions in the succeeding years.  To reinforce the sense of loss that Kikotan people undoubtedly felt, the Virginia General Assembly agreed in 1611 to “change the savage name of Kicowtan” to Elizabeth City. The Kecoughtan name remained to demarcate the foreshore, but in 1619 English families pushed to have the Kikotan erased from memory and the Corporation of Elizabeth City established. As the “20 and odd negroes” stepped onto Virginia soil, the colonizers were writing their names over a Native landscape.

 

The English were changing the landscape that Virginia’s Indians had nurtured for as long as anyone could remember. When Wahunsenacawh died in 1618, less than a year before the White Lion set anchor at Point Comfort, Opechancanough, Chief Powhatan’s brother, took up the fight against English incursions into Powhatan homelands. 

 

Over the next two decades, violence between English colonizers and Powhatan warriors broke out in fits and starts throughout Virginia. The English, however, weren’t leaving. In 1624 Virginia was declared a royal colony and Native people continued to use violence to prevent the growing number of colonizers from squeezing them off their homelands. 

 

Virginia’s Indians were up against a determined foe. Governor Wyatt’s response to Indian resistance in the 1620s captured both the intent and determination of the English: “to root out [the Indians] from being any longer a people.”

 

Wyatt’s words are chilling. They reveal that, prior to a treaty between the Powhatan and the English in 1646, guerrilla-style warfare punctuated life in Virginia. So long as the fighting continued, the English would give no quarter to their enemies. Native people, reduced in numbers and confined to reservations by the 1650s, suffered traumas that live on today in the stories Virginia Indians tell about seventeenth-century English colonizers. 

 

In remembering 1619 it’s right to reflect on the lives of the African people who disembarked from the White Lion on the traditional homelands of the Kikotans. We should also remember the loss of indigenous life in Virginia, losses that grew as the decades unfolded. We need not look too far beyond the events of 1610 and 1619 to see how the English treated Native resistance to their expansive plans for a settler colonial society supported by plantations and the exploitation of unfree labor. 

 

At the end of September, Norfolk State University in Virginia will host academics, journalists, and community members at a summit called “1619: The Making of America.” Sponsored by American Evolution, the summit will undoubtedly provide a forum for reflecting on Virginia’s past. I also hope that in trying to understand the “Making of America” we remember that English (and ultimately, United States) colonialism was (and is) built not only with the labor of stolen bodies from Africa, but also on the stolen lands of Native Americans. 

 

The Stasi's Cold War Espionage Campaign Inside the Church

 

We remember them: the East German church peace prayer meetings that, 30 years ago, grew and grew, spilling out into the streets and setting in motion protests too large for the government to contain. That November, the protests led to the fall of the Berlin Wall. It felt like the victory of peace prayer participants over a ruthless regime. But for four decades, the Stasi had managed to prevent that very outcome.

 

Ever since East Germany’s birth on 7 October 1949, the country’s churches had been the regime’s particular foe. Not only did they represent a worldview at odds with the atheism the regime stood for; they also had countless connections to fellow Christians abroad, including in Western countries. On top of that, one of the country’s denominations -- the Lutheran Church -- was a powerful institution, comprising not just a large part of the population but also pastors prepared to speak out against the government. When the Stasi, the Ministry for State Security, was created in 1950, keeping an eye on the country’s churches and their members became one of its central tasks.

 

How do you keep an eye on Christians to prevent them from voicing demands such as free and fair elections? The Soviet Union took a brutal approach, sending countless Christians to penal camps on trumped-up charges. The Stasi’s church department, by contrast, opted for a more cunning approach (after a relatively brief Soviet-inspired experiment). “Let them pray, let them sing, but they shouldn’t do politics,” Joachim Wiegand told me. 

 

You have probably never heard of Wiegand. That’s because he was a Stasi officer, a man operating in the shadows. During the final decade of the German Democratic Republic’s existence, he led the Stasi’s church department, called Department XX/4. He has never before been interviewed for a book; like most Stasi officers, he – probably correctly – surmises that any interviewer will misconstrue his words. To my great surprise, Wiegand agreed to be interviewed for my new book about Department XX/4’s activities, God’s Spies (published by Eerdmans on September 17). Wiegand and I spent countless hours together, discussing every detail of how Department XX/4 worked to prevent East Germany’s Christians from doing politics. After that initial period of focusing on punishment, Department XX/4 mostly relied on seduction. It got clerics – from bishops to pastors-in-training -- and other Christians to become agents.

 

Imagine the setting: a secret police agency, staffed by men with impeccable proletarian credentials but no church connections, trying to convince pastors to join the Stasi as agents. (The Stasi’s word for such undercover agents was Inoffizieller Mitarbeiter, IM.) Like other Department XX/4 officers, Wiegand -- a former farmhand who had been one of the first graduates of the Stasi’s training school -- learnt churches’ terminology and structures. He identified which pastors could be suitable recruitment targets: perhaps they were frustrated with the slow advancement of their careers; perhaps they wanted advantages such as foreign trips. Before even making the initial contact with a pastor, Wiegand and his colleagues had conducted thorough research on the potential recruit, aided by input from existing IMs.

 

When a pastor had signed on the dotted line, he reported on whichever setting he found himself in. Some pastors needed more guidance from their case officers than others as to what sort of material might be useful to the Stasi, but the result was a vast collection of reports. Pastors reported on members of their congregations, on their fellow clerics, on international faith organizations. They told Department XX/4 which decisions church leaders were planning to make; sometimes they even influenced those decisions in a Stasi-friendly direction. And all along, they had to worry about other pastor agents in their midst. Because nobody knew who else was working for the Firm, everyone might be doing so. It was a hall of mirrors. And all along, Department XX/4 collected the agents’ reports. Nothing was too small to be documented, not even the style of beards certain theology students wore: it indicated a willingness to rebel against the regime.

 

But despite their frequently gossipy reports, the pastor agents were instrumental to the survival of the German Democratic Republic. Because churches formed the country’s only semi-independent space, opposition-minded citizens of all stripes took refuge in churches’ seminaries, their environmental groups, their peace prayer meetings. If the Stasi was to prevent discontent from festering in churches around the country, its pastor agents had to keep a very close eye indeed on their fellow Christians. 

 

In God’s Spies, I follow the ecclesiastical-and-intelligence careers of four of those pastor agents. One, an academic who felt overlooked, spied for career purposes, badmouthing his peers while tooting his own horn. Another became a rare pastor agent on permanent foreign assignment. A third masterfully combined a career as a pastor and church journalist with undercover duties, all in the name of helping his country survive. As East Germany was collapsing, an American Christian magazine published an article by this pastor, its editors clearly in the dark about his dual affiliation. A fourth deviously infiltrated Western Bible-smuggling groups, preventing the books from reaching their destination and endangering the lives of the intended recipients.

 

Department XX/4 achieved great success: without its infiltration of every corner of East German Christianity, the church-led protests would likely have gained steam much earlier. Would the Berlin Wall have fallen earlier too? It is, of course, impossible to tell. But through their diligent work, the Stasi’s pastor agents – who have never before been the subject of an English-language non-fiction book – played a vital role in helping the German Democratic Republic limp along until its 40th birthday. On that day, Mikhail Gorbachev dutifully attended his fellow Socialists’ proud celebrations. Two days later, on October 9, record numbers of Leipzigers attended the peace prayer meetings in their city, then marched through its streets. A month after that, the Berlin Wall fell. No snooping in the world could have saved a country as unwanted by its citizens as East Germany.

Trump—A Wannabe Dictator in Training

Last week, Trump tweeted a "Trump 2024" sign. 

 

 

It is easy to believe, in a view shared by many, that what Donald Trump really wants is not to be president of the country, but its dictator. 

 

Indeed, he has suggested how good it might be for him to enjoy a third term, perhaps more, even though the Constitution forcefully forbids it. 

 

In a Father’s Day tweet he fantasized over the possibility, suggesting the public might “demand” that he serve a third term. The [good news], he wrote, “is that at the end of six years, after America has been made GREAT again and I leave the beautiful White House (do you think the people would demand that I stay longer? KEEP AMERICA GREAT)….”  

 

After Chinese president Xi Jinping abolished term limits in his own country, Trump said he liked the sound of that. “President for life. No, he’s great. And look, he was able to do that. I think it’s great. Maybe we’ll have to give that a shot some day.” Just joking? It is not all that laughable. [The quotes in these two paragraphs are from an article by Gabrielle Bruney in Esquire, July 16, 2019. They have also appeared in the New York Times and Washington Post.]

 

So what does a wannabe dictator who wants to give that a shot some day have to do day-by-day to reach his miserable goal? And how is Trump measuring up?

 

It is not an easy jump from democracy to dictatorship in our country. There is that dratted Constitution in the way. There are laws to be violated. There are critics, opponents, Democrats, immigrants, and the unfaithful to be purged. There is the critical mainstream media to be done away with. There are many lies to be told. It is hard, nasty work, and Trump is fast running out of time. 

 

A dictator must, as soon as possible and by any means, shut down the media or bring it under control. No dictatorship can exist in the presence of a free press. So far Trump has only been able to call the media names—“the fake news media,” “the enemy of the people,” “the opposition party.” From the time he announced his candidacy to the end of his second year in office, he tweeted 5,400 times. Some 1,339 of those tweets were attacks of some kind on the media or individual reporters. This doesn’t count the times he has harangued the media in his speeches. He has turned many of his supporters into media haters like himself. 

 

He is particularly incensed by the Washington Post and the New York Times, both highly critical of him. He has said that in the event the public did demand that he stay, “both of these horrible papers will quickly go out of business & be forever gone!”

 

A dictator must be an avid hater of some minority group and want to purge it. Trump hates Muslims. He is not too crazy about blacks and Hispanics either. But there are many of them, a very tall purging job. On the other side of this coin, Trump seems in no mind to purge white supremacists, who love him as one of their own.

  

A dictator must tell lies, lots of lies. Trump is far and away the champion liar in presidential history. News outlets that keep track of such things say Trump averages about six lies per day.

 

A dictator must be willing to exterminate people, lots of people. In a successful dictatorship you just shoot anybody you want any time you want. Since this is forbidden in a democracy, Trump can only slander his targets in a tweet or a speech, or fire them. But it is not a far jump to think of Trump’s firings and insults as symbolic exterminations. Nor is it a far leap to think of the camps on our southern border, where would-be immigrants are penned up and families separated, as concentration camps.

 

A dictator often has an affinity with other established dictators. Trump admires and is on friendly footing with two of the world’s elite dictators—Vladimir Putin of Russia and Kim Jong-Un of North Korea. He has met and spoken kindly of both. And there is that admiration of China’s Xi Jinping for making himself president for life.

 

A dictator is narcissistic, in love with himself, glory-seeking, demanding and getting total obedience and acclamation from his followers. In his first cabinet meeting Trump invited each member to praise and celebrate his greatness. He loves his many rallies out in the country, where he basks in the acclamation of his avid followers, said to be a strong third of the country’s population. 

 

A dictator is lawless, often pushing the limits of his power. Breaking a law seems meaningless to Trump. Whether intentionally or ignorantly, he has violated a host of laws, and many of his actions have later been challenged or overturned by the courts. That is why it is so important for Trump and the Republican Party to appoint sympathetic judges to as many courts as possible. As for the Constitution, it can be questioned whether he has ever read it, much less whether he worries about violating it.

 

A dictator questions the legitimacy of his opponents, demonizing them and curtailing or abolishing their rights. In the 2016 presidential campaign, Trump suggested he might not accept the legitimacy of his opponent, Hillary Clinton, if she won the election, and suggested several times that she ought to be thrown in prison. He has tweeted hatred of many of his detractors and has encouraged brutality against anti-Trump demonstrators at his rallies. 

 

As a wannabe dictator in training he has been doing reasonably well.

The Native Americans Who Assisted the Underground Railroad

 

In an interview conducted in 2002, the late Helen Hornbeck Tanner, an influential historian of the Native American experience in the Midwest best known for her magisterial Atlas of Great Lakes Indian History (1987), reflected on the considerable record of “coexistence and cooperation” between African Americans and Indians in the region.  According to Tanner, “[an] important example of African and Indian cooperation was the Indian-operated Underground Railroad.  Nothing about this activity appears in the historical literature.”

 

Tanner’s assertion is largely true.  Native American assistance to freedom seekers crossing through the Midwest, then often called the Old Northwest, or seeking sanctuary in Indian villages in the region, has largely been erased from Underground Railroad studies. Two key examples from the historiography of the Underground Railroad demonstrate the extent of that deficiency.  The first volume, The Underground Railroad from Slavery to Freedom (1898) by pioneering Underground Railroad historian Wilbur H. Siebert, is still a beginning point for many who investigate efforts to assist freedom seekers in the pre-Civil War Midwest.  Siebert collected testimony from hundreds of participants and witnesses in the struggle and converted this documentary record into a broad and influential work that is still in print.  Exactly two sentences in a work of 358 pages discuss the aid given to freedom seekers by Native Americans, in this case the hospitality afforded at Chief Kinjeino’s village on the Maumee River in northwestern Ohio.  

 

Fast forward nearly eleven decades to the second work, perhaps the most extensive and authoritative Underground Railroad interpretation since Siebert.  Bound for Canaan: The Underground Railroad and the War for the Soul of America (2005) by journalist and popular historian Fergus Bordewich does only slightly better.  It includes four sentences out of 439 pages on the assistance given to freedom seekers by Native Americans passing through the region, in this case the aid provided to Jermain Loguen and John Farney in northern Indiana and Josiah Henson and his family in northwestern Ohio.  Readers of these two volumes could be excused for thinking that there was little interaction between freedom seekers and Native Americans in the Midwest.

 

There are at least two primary reasons for the absence of Native Americans in the historiography of the Underground Railroad. 

 

First, both freedom seekers fleeing slavery in the South and the Native Americans who assisted them in the Midwest came from oral cultures.  Scholars of slave literacy estimate that only five to ten per cent of those in bondage could read and write.  Although the percentage might have been slightly higher among those who made their way to freedom, a small minority of freedom seekers had achieved literacy.  Indians across the pre-Civil War Midwest also lived in primarily oral cultures.  Scholars have noted that “oral histories were central to indigenous society,” making use of mnemonic devices and reflected in storytelling.  As a result, both freedom seekers and Native Americans left a limited written record of their interaction.

 

Second, local histories, including the large volume of county histories produced across the Midwest in the late nineteenth and early twentieth centuries, start the clock with white settlement, ignoring Native American contributions generally and particularly those after the War of 1812.  In fact, most of these county histories make it seem as if Native Americans disappeared from the lower Midwest by the end of the War of 1812.  My own investigation of county histories for nearly two dozen counties in northwestern Ohio, an area where Native Americans were the primary population group until the 1830s, shows that Native Americans are largely excluded from this later history.  When the Underground Railroad is mentioned, it consists of white settlers aiding anonymous freedom seekers and is completely a post-settlement phenomenon of the 1840s and 1850s.  This is reflected as well in Siebert’s massive project in the 1890s.  As a result, the interaction of freedom seekers and Native Americans in communities across the Midwest has been obscured.

 

In spite of the absence of Native Americans in the historiography of the Underground Railroad, a scattered documentary record exists to demonstrate that freedom seekers received significant assistance from Indians in the pre-Civil War Midwest. There are at least five major evidences of this interaction.

 

The first of these evidences is simple geography.  Tiya Miles, who has written extensively about African American-Native American interaction, notes that “the routes that escaping slaves took went by these (Native) communities.”  Examples abound, especially in the lower Midwest.  The Michigan Road, a major thoroughfare for freedom seekers making their way through central Indiana, ran through or past dozens of Potawatomi villages north of the Wabash.  Hull’s Trace and the Scioto Trail ran through or past Ottawa and Wyandot reserves, respectively, in northwestern Ohio.  Another major trail ran through Shawnee villages in western Ohio, before reaching Ottawa villages at Lower Tawa Town and Chief Kinjeino’s Village on the Maumee River.  From about 1800 to 1843, a maroon community of sorts existed at Negro Town in the heart of the Wyandot Grand Reserve on the Sandusky River.  It was peopled by runaway slaves from Kentucky or western Virginia who had followed the Scioto Trail northward.

 

A second of these evidences can be found in the slave narratives, autobiographies written by freedom seekers after their escape from bondage.  Several of these tell of assistance received from Native Americans. Two provide particularly instructive content about the Midwest.  Josiah Henson’s narrative traces his and his family’s escape from Kentucky to Upper Canada (contemporary Ontario) in 1830, eventually taking them up Hull’s Trace through the heart of Indian Country in northwestern Ohio.  There they were assisted by Native Americans (probably Wyandot) who fed them “bountifully” and gave them “a comfortable wigwam” to sleep in for the night.  The next day, their Indian companions accompanied them along the route for a considerable distance, before finally pointing them toward the port of Sandusky on Lake Erie, where they could take a vessel across to Upper Canada. Jermain Loguen’s narrative traces his escape with John Farney from Tennessee to Upper Canada in 1835 by way of central Indiana.  North of the Wabash, they were aided at a number of Potawatomi villages, receiving food, shelter, and direction from their Indian hosts.  Upon reaching Michigan Territory, they turned eastward and crossed into Upper Canada.  Both Henson and Loguen later achieved literacy and became well-known black abolitionists.

 

A third of these evidences survives in Native American oral tradition.  One of the best examples comes from Ottawa oral tradition in western Michigan.  A story of helping twenty-one freedom seekers to reach Upper Canada was passed down through three generations of the Micksawbay family, before it was finally recorded in print by Ottawa storyteller Bill Dunlop in the book The Indians of Hungry Hollow (2004).  The oral tradition recounts an episode in the 1830s that involved the group of freedom seekers, who had gathered at Blackskin’s Village on the Grand River.  Ottawa elders, fearful that these runaways would be overtaken and captured by slave catchers, and sensing that sending them to Detroit was unsafe at the time, arranged for ten Ottawa men to accompany them overland to the Straits of Mackinac, where they were handed off to friendly Ojibwa.  The latter took them across by canoe to Michigan’s Upper Peninsula, and then accompanied them overland, crossing into Upper Canada by way of Neebish Island.  Oral history interviews with Native American descendants in the Midwest have also proven useful in establishing elements of this African American-Native American interaction.

 

A fourth of these evidences comes from the memoirs, letters, and journals of white traders, trappers, missionaries, and soldiers who lived in or passed through Indian Country in the Midwest and recorded their experiences in textual form.  My own research in northwestern Ohio has located discussions of Native American assistance to freedom seekers in the memoir of trader Edward Gunn and the letters to Siebert by trader Dresden Howard, both of the Maumee River valley, and the letters and journals of Moravian missionaries and U.S. soldiers in the War of 1812 who recounted life in Negro Town. A particularly instructive example appears in the memoir of Eliza Porter of Wisconsin.  She and her husband Jeremiah, missionaries in Green Bay, cooperated with Native Americans on the Stockbridge reservation east of Lake Winnebago in aiding fugitive slaves making their way through eastern Wisconsin to Great Lakes ports in the 1850s.  On one occasion, detailed in Porter’s memoir, they assisted a family of four runaways from Missouri in avoiding slave catchers and bounty hunters said to be “sneaking around” the reservation.  The Stockbridge helped their guests make their way to Green Bay and gain passage on the steamer Michigan, which carried them to freedom in Canada West (formerly Upper Canada).

 

A final line of evidence appears in the bodies of freedom seekers and Native Americans and their descendants. This takes us into the realms of genealogy and the DNA record, and it particularly applies to those freedom seekers who sought permanent sanctuary in Native American villages in the Midwest. Native American genealogist Don Greene has found extensive evidence of African American ancestry among the Shawnee in the region. A case in point is Caesar, a Virginia fugitive who escaped across the Appalachian Mountains to the Ohio Country in 1774 and was adopted by the Shawnee. He married a mixed-race Shawnee woman named Sally and fathered children known to history as “Sally’s white son” and “Sally’s black son” due to their difference in hue. The latter is still listed as “Sally’s black son” on the roll of Shawnee migrants removed from the reservation at Wapakoneta to the Kansas frontier in 1832. Similarly, researchers have suggested that the R-M173 Y-chromosome lineage among Native Americans, especially the Ojibwa of the Great Lakes region, derives from the large number of runaway slaves who settled among them. These are examples from Indian Country in the Midwest of what historian William Loren Katz labels “Black Indians.”

 

Some subjects of historical research can be substantiated by investigating a single archive or a few collections in related archives. The role of Native Americans in assisting freedom seekers in the pre-Civil War Midwest is not one of them; it requires the historian to assemble an archive from a range of disparate sources. The evidence exists, however, to show that it can be done. Simple geography, a few slave narratives, Native American oral tradition, dozens of scattered documents by particularly involved and insightful whites in Indian Country, and genealogy and the DNA record together substantiate Tanner’s 2002 observation about Native Americans and the Underground Railroad in the Midwest.

S.F. History Museum Highlights America’s First Immigration Restriction: The Chinese Exclusion Act of 1882

 

It may come as a surprise to many Americans to learn that the first country to have its citizens specifically excluded by the U.S. Congress was not Mexico or a Middle Eastern nation, but China. In 1882, Congress passed the first of several Chinese Exclusion Acts, which prevented all immigration of Chinese laborers. These laws were not repealed until 1943, when China was an important ally in the war with Japan.

 

The Chinese Historical Society of America Museum (CHSA) is currently displaying an exhibit, Chinese American: Exclusion/Inclusion, which vividly documents the changing views of Americans towards Chinese immigrants in 19th and 20th century America. The museum, in the heart of San Francisco’s Chinatown, is the oldest organization in the country dedicated to the interpretation of the cultural and political history and contributions of the Chinese in America. 

 

Tamiko Wong, the executive director of the CHSA, spoke with the History News Network about the exhibit. Before joining the CHSA, Wong was the Executive Director of the Oakland Asian Cultural Center. A graduate of U.C. Berkeley, she is a former Asian Pacific American Women’s Leadership Institute fellow. She has served on the California Commission on Asian and Pacific Islander American Affairs and is a recent graduate of Coro’s Women in Leadership program. 

 

 

Q. The current exhibit is very timely. How was it organized? 

A. Chinese American: Exclusion/Inclusion was originally curated by the New York Historical Society in 2014. Our museum provided content and paintings for the initial 2014-2015 show in New York. The NYHS originally hoped that the exhibit would travel to different parts of the country, but after one run at the Oregon Historical Society, the New York museum gave it to us in 2016. We were delighted because it is a high-quality, interactive exhibition highly relevant to our story.

 

When it arrived in San Francisco, we added more objects from our own collection and refocused it to include more of a West Coast story. Although there are more than 5 million Chinese in America, we are still a small minority, only about 1.5% of the total population. Our hope for the exhibition is to enlighten visitors about the struggles of immigrants and the contributions of Chinese in the U.S. even when citizenship was not an option. We want to contribute to the discussion of what it means to be an American.

 

The exhibit is timely because it clearly shows a pattern we have seen throughout history, not just in the U.S. but elsewhere too. Immigrants are brought in as a source of cheap labor and have often faced discrimination and harsh conditions. Especially during downturns in the economy, immigrants become scapegoats who are blamed for society’s ills. Racist rhetoric becomes normalized in the media, discriminatory feelings are codified into law, and immigrants face violence and unfair treatment. Chinese Americans in particular have been seen as perpetual foreigners even when many of us are citizens, have fought and died for this country, and have contributed to its building, from railroads to some of the civil liberties we currently have, such as birthright citizenship.

 

Q. What has been the reaction to the exhibit by the local Chinese community and the broader, regional public? 

A. Overall, Chinese American history is not general knowledge. Here in California, even though there are curriculum standards for teaching about the Chinese Exclusion Act, I would say 90% of our new visitors know very little about Chinese American history prior to coming to the museum. So often we hear, “I had no idea about what the Chinese went through.” Those who took Asian American studies courses in college may remember some aspects of our history, but the content we cover in Chinese American: Exclusion/Inclusion is a very powerful history lesson on immigration policy, discrimination, and resilience. Visitors who leave messages in our comment books express how important it is to have this history shared and how timely it is given how immigrant and racial issues are discussed today.

 

Q. Most of the exhibit materials were of American origin, including photographs, posters, newspaper clippings. Apparently very few written materials from Chinese immigrants (e.g. diaries, letters, books) have survived. Why is that? 

A. I am not sure the lack of first-hand accounts in letters or diaries is true of all periods in Chinese American history. It is true in the case of Chinese railroad workers during the late 1800s. As reported by professor Gordon Chang, head of Stanford’s Chinese Railroad Workers in North America Project, despite an extensive search, researchers have been unable to recover first-hand accounts or letters from these workers, although many could read and write Chinese. There are theories about why this is so: many family and historical documents were destroyed during the Cultural Revolution in China; in the U.S., many Chinese communities were damaged or destroyed by hostile forces in the late 1800s and early 1900s; and the 1906 San Francisco earthquake and fire destroyed most of this city’s Chinatown.

 

Q. A large section of the exhibition is given over to the role of Chinese American women. One display noted that in 1850 only seven of San Francisco’s 4,000 Chinese residents were women. However, today in San Francisco, the city’s assessor and one of its seven supervisors are Chinese American women. What are the reasons for this dramatic evolution?

A. One current section of our museum, Towards Equality: California’s Chinese American Women, is devoted to that topic. Unfortunately, this particular display will close in October.

 

To summarize, few Chinese women immigrated to the U.S. in the mid-1800s due to patriarchal norms in China that relegated them to their homes and reinforced their inequality. In the U.S., Chinese women were viewed as immoral, and Congress passed the 1875 Page Act, which effectively stopped Chinese women from immigrating. The 1882 Chinese Exclusion Act curtailed the overall number of Chinese immigrants but ironically created a broader opportunity for Chinese women to come as family members of merchants or American citizens. Over time, the population of Chinese women grew, especially among those born in the U.S. We find that these American-born, English-speaking 2nd- and 3rd-generation women broke with traditional Chinese values, sought independence and mainstream acceptance, and became community activists seeking inclusion in American society.

 

Q. This year, 2019, is also the 150th anniversary of the transcontinental railroad, in which Chinese workers played an important part. One part of the exhibit notes that a delegation to the 1969 Centennial celebration was snubbed by Nixon’s Transportation Secretary. 

A. I was proud to have been present at the transcontinental railroad’s sesquicentennial celebration held on May 10, 2019 in Utah (also known as Spike 150).

 

CHSA Board Emeritus, historian, and railroad worker descendant Connie Young Yu also represented CHSA at the Spike 150 celebration, and gave the opening speech. Connie’s speech paid homage to Chinese railroad workers and called for the reclaiming of the immigrants’ rightful place in history. Her presence at the podium fulfilled a mission begun in 1969 by then CHSA President Phil Choy. He was initially asked to speak at the centennial celebration, but at the last minute he was removed and replaced with actor John Wayne.  To add insult to injury, Transportation Secretary John Volpe said in his keynote address, “Who else but Americans could chisel through miles of solid granite? Who else but Americans could have laid 10 miles of track?” 

 

This was an insult to the memory of the Chinese immigrants who actually performed these feats yet, at the time, could not become citizens due to racist laws.

 

Q. What are the next steps for the exhibit and the CHSA? Will this exhibit or others travel to other museums?

A. Chinese American: Exclusion/Inclusion has content that remains relevant, and we plan to continue exhibiting it. We hope to add more content that helps tell the important and timely story of how immigrants have been treated here in the U.S. through different points of view. We also hope to add features focusing on how the lives of well-known Chinese Americans have intersected with history, and on immigration to the U.S. that has been mediated by experiences in other places such as the Philippines, Latin America, and Taiwan.

 

The overall exhibition is large, which makes it difficult to travel. However, we have built a number of traveling exhibitions that touch upon its themes. The most traveled to date is Remembering 1882, which focuses on the Chinese Exclusion Act; this year, because of the 150th anniversary of the completion of the Transcontinental Railroad, our exhibit The Chinese and the Iron Road: Building the Transcontinental has been very popular. Additionally, starting in October, our exhibit Towards Equality: California’s Chinese American Women will be available to travel, and we welcome inquiries from other institutions that may wish to show it. We also have available another display, Detained at Liberty’s Door, which traces the formation of the Angel Island Immigration Station in San Francisco Bay and highlights the inspiring story of Mrs. Lee Yoke Suey, the wife of a native-born citizen, who was detained for more than 15 months.

 

Note: To see excerpts from the exhibit and learn more about the Chinese Historical Society of America, go to www.chsa.org. You may contact the museum staff at info@chsa.org.

Expansion and Motivation: Frontiers and Borders in the Past and Present of the United States and Russia

 

Three new books push us to consider and compare the role of the frontier, or sometimes of borders, in the past and present of Russia and the United States. Greg Grandin, The End of the Myth: From the Frontier to the Border Wall in the Mind of America (Henry Holt, 2019), and David McCullough, The Pioneers: The Heroic Story of the Settlers Who Brought the American Ideal West (Simon and Schuster, 2019), take diametrically opposed stances on what the frontier and its settlement have meant to America. Grandin argues that expansion provided a “safety valve,” albeit one that worked poorly, for releasing the tension produced by internal difficulties. In his book, expansion across North America and in foreign wars, at a great cost in blood and abandonment of ideals, is at the heart of the American experience. McCullough offers fulsome praise for expansion in the case of Ohio, where he finds that true American ideals were put into practice. Angela Stent, Putin’s World: Russia Against the West and with the Rest (Hachette, 2019), discusses Russia’s past expansion and its relationship with “near abroad” neighbors; she finds, no surprise, that the issue bears gravely on the question of what Russia wants today.

 

The history of both Russia and America can be written around the issue of expansion and its ramifications.  Both states started small and had several factors in common in their growth, for instance the quest for furs, not mentioned by Stent and underemphasized by Grandin.  McCullough describes Ohio as a land of great resources, but his main interest is in the high purpose of white expansion into the territory.  In short, the proffered motives and results of expansion in these three books lead in profoundly different directions.

 

Grandin’s chief villain is Andrew Jackson (Donald Trump’s favorite president), who massacred Creek Indians in 1814, oversaw Indian removals from the East in 1830, and owned slaves.  For Jackson and many others, the West–wherever it happened to be–provided relief for the whole country.  Expanding westward focused people’s attention on the frontier and provided distraction from social and economic problems back east.  Americans could at least dream about going west to start a new life.  The West was always touted as a site of freedom.  But along the way, U.S. troops committed many crimes, among them rape, murder, and destruction of churches in the Mexican War of the 1840s.  Such acts occurred again in the following decades, especially in our war in the Philippines, 1899-1902, but also in Nicaragua in the 1920s and in other countries.

 

Although many of these stories are well known, Grandin weaves them together in moving and depressing fashion. He also ties the wars on the frontier and abroad to wars at home, above all the Civil War but also “race war” and violent repression of socialists and labor unions. Together, these fights, which used up money and energy, help explain the absence here of “the social-democratization of European politics . . . including the rights to welfare, education, health care, and pensions” (95). Attention to people’s needs at home lost out to the settlement of the frontier, and even more to the idea of it. By the 1840s, the U.S. was “becoming inured to its [own] brutality and accustomed to a unique prerogative: its ability to organize politics around the promise of constant, endless expansion” (94).

 

Grandin portrays white movement west as a pattern of broken treaties and ethnic murder or cleansing.  Andrew Jackson is little more than a crude butcher.  Close behind in villainy is Thomas Jefferson, who also gushed about “the West” but never traveled farther toward it than the Shenandoah Valley. Jefferson justified American westward expansion as the spread of goodness and light.  However, he wrote in 1813 that “all our labors for the salvation of these unfortunate [Indian] people have failed” because of England’s support for them.  It would be “preferable” to “cultivate their love,” he said of the indigenous folk, but “fear” would also work.  “We have only to shut our hand to crush them” (43).  Thus rapacious frontiersmen strode forth to make a beautiful new world for themselves in the wilderness, relying all the way on mass violence.  Grandin’s view of American expansion is relentlessly grim.

 

Happier are the believers in McCullough’s “heroes” of the settlement of Ohio. The dust jacket calls them “dauntless pioneers who overcame incredible hardships to build a community based on ideals that would come to define our country.” McCullough lauds the selfless careers of Manasseh Cutler, a minister and master of all sciences, and his son Ephraim. They played key roles in leading white settlers into Ohio in the late eighteenth and early nineteenth centuries and in the state’s early law-making. They believed in and practiced democracy and insisted successfully that slavery would not be introduced beyond the Ohio River.

 

Manasseh Cutler helped draft key provisions of the Northwest Ordinance (1787). Filled with stirring words, it insisted on freedom of religion and “morality and knowledge” spread by “schools and the means of education,” with all overseen by “good government.”  The Cutlers and friends also promoted the “New England system,” in which “the establishment of settlements [would be] done by legal process, and lands of the natives to remain theirs until purchased from them” (7). Suppose they didn’t want to sell? That possibility was not explored in the Northwest Ordinance.  Indian rights proved not to be a problem because beyond the Ohio River lay “howling wilderness” (7; McCullough is quoting a contemporary source).

 

The Treaty of Greenville (1795) opened the way “to clear and cultivate lands that had never known the axe and the plow” (118, quoting George W. Knepper).  Here is the old “empty land” (terra nullius) idea, although Indians had cultivated the earth in various parts of Ohio.  “Empty” meant that civilized people had the right to take it.  In McCullough’s narrative, Indians were obstacles to the spread of American greatness.  He rhapsodizes that, “West was opportunity.  West was the future.”  Achievements in Ohio, McCullough writes, “would one day be known as the American way of life” (13).

 

McCullough abandons the pretense of a dispassionate history in his subtitle. His book is a paean to American goodness; it ends with the idea that the Cutlers et al. overcame the “adversities they faced, propelled as they were by high, worthy purpose.  They accomplished what they had set out to do not for money” or fortune or fame, but to “advance the quality and opportunities of life–to propel as best they could the American ideals” (258). 

 

Angela Stent tries to achieve some detachment in outlining Russia’s past concern with borders and the country’s goals and fears at present.  She occasionally grants that Russians might have a point of view worth mentioning about the near abroad.  She notes that George W. Bush’s “Freedom Agenda” advocated regime change in Georgia and Ukraine.  Mistakes and insults to Russia from the American side are introduced; for example, U.S. officials misled Yeltsin about the EU’s extension to Eastern European countries. Russian Prime Minister Yevgeny Primakov learned about NATO’s bombing campaign against Serbia in 1999 only when he was in mid-air en route to Washington to discuss a solution to genocide in the former Yugoslavia.

 

NATO “made a major mistake” in 2008 when it “mishandled” enlargement to ultimately take in Poland and the Baltic States (129). But for Stent, the major problem is that “NATO” did not think through the implications of a military pact with those countries. Are the Baltic states “defensible,” she asks (127)? (A far better question: what would Russia gain by conquering those countries, even if no shots were fired? More sand and gravel?) Supposedly “the Russians” are deeply chagrined by the loss of their empire, and they want it back. Looking at events in Georgia in 2008 and in Ukraine in 2013-14, Stent asks, “What is it that propels this Russian drive for expansion?” (17).

 

The “drive” is connected to the Russian people. Stent mentions that many foreigners who went to Russia for the World Cup matches in 2018 brought with them “stereotypes about unfriendly Russians living in a backward country.”  However, at least some visitors discovered “normal” people there, ones who could smile and party.  Then we learn that they can no longer celebrate in the streets (2).  

 

Stent announces that, “To understand Putin’s world, one has to start with the history and geography—and, yes, culture, that shaped it.”  American and British readers apparently must be told that Russia possesses a culture and exhorted to think of the country’s people as “normal.” 

 

Of course Stent discerns an “iron hand” ruling the country under both tsars and Soviets (22), although in 2005 “the government was forced to back down” after protests by pensioners about “reforms” in their payments (41).  Whatever the “hand” might be, Stent insists that the U.S. can engage Russia where that country has a national interest.

 

Alone among the three authors, McCullough suggests that expansion was based on high ideals. But Americans apparently have the right to bring civilization not only to those Indians or Filipinos we have failed to kill, but also to the Vietnamese, Iraqis, Afghans, and so on. If National Security Adviser John Bolton seems like a rabid dog in his eagerness for more war, it is well to remember that he is part of a long tradition that sees American greatness as a justification for imposing our will (there should be no talk of ideals) on other peoples.

 

Grandin’s book, more solidly argued, will sadden some people–though surely far fewer will read it than will pick up McCullough’s. However, Grandin conjures up a social democratic paradise in Europe that does not exist. Or, if it does, it is limited to a few northern European countries that today evince a certain distrust of democracy and ugly ethnic prejudices. In his eagerness to criticize domestic life in the U.S., Grandin sometimes goes too far. He mentions lynching repeatedly, for example as part of the “relentless race terror African Americans faced since the end of Reconstruction” (130). Yet much recent work on lynching shows that it was erratic and fell, with some short upward movements, steadily after 1892. See studies by, for example, Michael Pfeifer, Fitzhugh Brundage, Stuart Tolnay and E. M. Beck, and myself. Lynching was always horrible, but in my view it cannot be described as “relentless” or as a system. Moreover, Grandin is not interested in the rise of land ownership or of a middle class among African Americans, despite the great odds against them. (Yes, they lost great amounts of that land later.) Nevertheless, his dark vision explains much about American policy at home and abroad.

 

In Stent’s Russia, expansion and the “iron hand” go together. But what does that phrase mean? Did people not live and love, at least a little, on their own terms? I’ve been to a lot of Russian parties, starting in 1978, where talk and vodka flowed together, and I would say yes. The concept of “lived socialism” (e.g. Mary Fulbrook, Wendy Goldman) should be considered by the Washington circle. And, if there has been an “iron hand,” how can anyone explain the Soviet victory in World War II (no, the NKVD did not drive troops into battle; see Roger Reese), mass mourning at Stalin’s death, his popularity in many recent polls, and Putin’s own popularity?

 

Russians are always subjects, never actors, in accounts like Stent’s. Denigrating Russians is an old tactic. Stent cites George Kennan in Russia and the West under Lenin and Stalin (1961), where he mentioned the then-accepted estimate of twenty million Soviet dead in World War II. He added, “But what are people, in the philosophy of those who do not recognize the existence of the soul?” (Russia and the West, 275). Kennan liked the work of the Marquis de Custine, published in 1843, also cited by Stent—but without a key passage. Custine wrote that “real barbarism” characterized Russia; the inhabitants were “only bodies without souls.” In an “empire of fear,” foreigners are “astonished by the love of these people for slavery.” The trail then leads back to another travel account that Kennan, and surely Stent, knew: Sigismund zu Herberstein’s best seller on Muscovy. His book was first published in Latin in 1549, then translated into multiple European languages. Herberstein found that, “It is debatable whether such a people must have such oppressive rulers or whether the oppressive rulers have made the people so stupid.”

 

During the Cold War, high American officials loved Custine. Zbigniew Brzezinski, for instance, wrote in 1981 that, “No Sovietologist has yet improved on de Custine's insights into the Russian character and the Byzantine nature of the Russian political system.” None of Custine’s leading American devotees disavowed his or Herberstein's final judgments on Russians.

There is one nearly useless map in Stent’s book, a cluttered view of Eurasia crammed onto a single page.  She does not provide a map showing the expansion of NATO over time up to Russia’s borders.  Would that expansion not have made normal people in Russia quite nervous?

 

Stent writes that, “There is no precedent in Russian history for accepting the loss of territory, only for the expansion of it” (17). Then at the bottom of the same page, she insists that since the fifteenth century, Russia “has constantly alternated between territorial expansion and retreat.” She might have considered that from the 1760s on, Russia/USSR has “retreated” from France, Manchuria, Austria, Hungary (twice), part of Finland, Poland, Czechoslovakia, Romania, Bulgaria, northern Iran, the Baltic states, and Germany (twice or more, depending on how you count).

 

Yet even a cursory look at Russian expansion shows that much of it was defensive: the Tatars (Mongols) attacked deep into Russian settlements in the south and east every summer in the fifteenth and sixteenth centuries. These raids were not the work of scattered tribes that often despised one another; they were military expeditions organized by the heirs of the original Mongol Horde that had conquered Russia in the thirteenth century. In response to the continuing attacks, which reached Moscow as late as 1571, the Muscovite government extended a string of forts (the zaseka) further and further south. The culmination of this drive was the conquest of Crimea from a Tatar remnant state by a Russian imperial army in 1783.

 

Catherine II then reportedly said, “That which stops growing begins to rot.  I have to expand my borders in order to keep my country secure” (17, no source given). Shades of Grandin!  But to suggest that internal security was the motivation for Russian expansion is to miss essential parts of the country’s history.

 

In the nineteenth century, Russian expansion was typical of the lust for more territory among all major European powers, the U.S., and the Japanese. In 1944-45, the Red Army marched into Eastern Europe to push the Germans out, with full Western approval.

 

Nothing in America’s past resembles the recurring invasions of Russia by foreign powers, from the Mongols in the thirteenth century to the Germans in 1941.  Sometimes, as in the early seventeenth century, assaults came simultaneously from several directions.  If Russians are sketchy on the details of these campaigns or exalt their own victories, they still base their outlook today on the knowledge that these incursions happened.  Stent is not much interested in this past.

 

 More than five years out from the Ukrainian crisis of spring 2014, little indicates that Russia covets more territory anywhere.  Annexation of the Crimea involved a region that was never Ukrainian in any profound sense.  The war with Georgia in 2008 also resulted in Russia’s absorbing new land, but areas not populated by Georgians.  Seeing some “drive” for endless conquests in these affairs, however much they broke international law, is gross speculation and is not based on a serious examination of Russian history.

 

Stent has a lot of valuable detail and some useful insights into Russian concerns. But what rational interests would a nation of dead souls have?  Her book becomes at once more suspect and more valuable when read together with Grandin’s examination of American frontiers.  In turn, Grandin will infuriate or dishearten fans of David McCullough’s glorious American past, which in its argument could serve as a foil to Russia’s supposedly limitless, ugly ambitions.  Could we at some point adduce Britain in the nineteenth century or Germany in the twentieth?  Could we just watch Game of Thrones?

The Apolitical Antidote to Unjust Politics

 

The 2020 election cycle has already produced numerous solutions for the woes plaguing America and the world. Candidates in the recent Democratic debate explained how problems related to education, healthcare, racism, income inequality, and immigration can be solved. Most of their plans required more laws, bigger bureaucracies, and fully integrating more people into the American political system.

 

On display recently in conservative circles is a different kind of obsession with politics which focuses on prioritizing national interests and cultivating an intense patriotism for the American nation-state. Sometimes conservatives advocate a restricted role of government or a restricted citizenry, but, like their political opponents, they usually still see Americans more engaged with a more vigorous political system as the right way forward.

 

The Politics of Utopianism

 

American policy in the last seventy years has been dominated by this kind of thinking. Americans started a “War on Poverty” in the 1960s, which was followed by the “War on Drugs” in the 70s. George W. Bush’s “No Child Left Behind” was a battle to give “every single child” a “first class education.” RAND recently released a study about how the right kinds of laws and bureaucracies can totally eliminate fatalities on American roads. Should it be called a “War on Traffic”?

 

Foreign policy has shared the same broad, vague aspirations with regard to real wars. The Truman Doctrine committed America to support all “the free peoples of the world.” President Kennedy clarified this in 1961, saying that America would effectively respond to “any potential aggressor contemplating an attack on any part of the free world with any kind of weapons.” Johnson applied this logic in Vietnam because, in his words, “A threat to any nation in that region is a threat to all, and a threat to us.” The same logic was applied when the Cold War transitioned to a “War on Terror” after the 11 September attacks. Similar to his predecessors, Bush pledged to hunt down any and all terrorists and eliminate their safe havens. America would end world terror. 

 

The idea behind all of these is that if we just have more politics, or more of the right politics, we’ll fix everything. We’ll find perfection. Rarely does anyone actually say that, but the logic behind the objectives implies the possibility of perfection. Francis Fukuyama probably summed it up best in his 1989 article crowing over the victory of democratic capitalism in the Cold War. The liberal order, led by its chief missionary America, had saved the world and if only it could permeate to the ends of the earth, mankind would see an “end of history.”

 

But history didn’t end. Terror, poverty, and drugs remain with us or have increased. So what is wrong with politics? And why is it getting nastier despite such lofty aspirations? 

 

Like many average citizens after 11 September, I wanted to do my part for our republic. As an aspiring military historian, joining the army was a suitable detour after my PhD. All of my life I had followed politics, I had degrees in politics, and I intended to teach the history of war and politics. In Afghanistan, however, I reached the limit of politics when I saw how imperfectly the liberal democratic nation-state was being applied to a people with a vastly different culture and history than our own. Bitter and confused about my own republic and its aspirations, I returned to Plato and Augustine with fresh eyes. They argued that unjust politics needs redemption through citizens’ apolitics. 

 

A Brief History of Apolitics

In ancient Greece, the face-to-face societies of city-states were bound together by intense loyalties. A city-state saw itself as the perfection of the political ideal. Tribes, gods, rituals, and public ceremonies were all centered on the cohesion of this unit, called a polis. 

 

The obvious question is what happens when fundamentally different conceptions of the polis clash. This occurred when democratic Athens and militaristic Sparta collided during the Peloponnesian War. The two poleis dragged most of the Greek world into a devastating, three-decade-long war that ruined Greece. Despite their defeat, bitter Athenian citizens remained blindly loyal to the idea of the polis. When a brilliant teacher, Socrates, challenged this order and pointed out the flaws in Athens’ democracy, the Athenians tried and executed him.

 

The war and then the execution of Socrates revealed the injustices of the polis, but the unexpected occurred in the aftermath. Plato, a disciple of Socrates, transferred the concept of the polis to the soul. How do you live in a world where the political order has collapsed and injustice reigns? Plato argued that you seek wisdom and courageously apply goodness and justice in your own life, regardless of what the politicians of your day are doing. This apolitical citizen pursues a well-ordered soul based on the perfection found in the realm of ideas, while knowing that such perfection cannot be found on earth.

 

Plato’s apolitism created a tension in political thought and action. Henceforth the virtuous citizen should pursue goodness and order his soul, and only then should he or she try to apply it in this temporal world. Perfection could be imagined but never realized. Throughout history apolitics guided the greatest minds living in the most unjust times. 

 

Plato was followed by the Cynics, the most famous among them being Diogenes of Sinope. When the polis collapsed, the Macedonians conquered Greece and created a new world order. Greece was no longer free and the average citizen was powerless. Diogenes was the first to apply Plato to a broader political reality. He rejected the old polis, insisting that he was a citizen of the cosmopolis. He also eschewed the tyrannical politics of great conquerors like Alexander the Great. One day Alexander met Diogenes and, standing over him, offered him whatever he desired. The old philosopher rejected Alexander’s power by mockingly requesting, “that you step out of my sunlight.”

 

By the 1st century BC, Rome picked up Alexander’s imperial idea and gained dominion over the Mediterranean. Roman power seemed ultimate and eternal, but Jesus of Nazareth challenged the emperor obliquely by implying that he instead was the anointed son of God and the true king; however, his kingdom was “not of this world.” Jesus’ followers, like Diogenes, would claim that citizenship was broader and deeper than temporal politics. Christians were “foreigners” and “exiles” in the world. Pontius Pilate, the representative of Roman power, could not understand this, so he approved Jesus’ execution as the Athenians had done with Socrates.

 

Jesus’ apolitical legacy remained, finding particular expression in monastic communities, where men and women withdrew into isolation or small communities in order to contemplate God and pursue holiness. The clearest articulation of Christian apolitism came from Augustine of Hippo, who described two realms, the City of God and the City of Man. The earthly city was fleeting, beset by failures and injustices. Christians should not abandon this city, but can only better it by keeping their eyes on the perfect, heavenly city. Augustine’s understanding of the tension between political and apolitical has remained the clearest synthesis of classical and Christian apolitics.

 

Apolitism Today

 

The classical and Christian orders have long since passed, but the dangers of utopianism remain. America needs a new apolitics. As this brief history illustrates, apolitics is not the hedonistic withdrawal into self-interested behavior. Neither is it apathy or laziness. Historically it has been the conscious rejection of perfection in this world. 

 

The moment a political society believes it can achieve perfection, it hardwires itself for the worst sorts of injustices. The most perfectly conceived states that pledged to purify politics were the fascist and communist experiments of the 20th century. Philosopher Eric Voegelin, who fled the Gestapo and escaped to the United States, noted the elimination of apolitics in these societies. In his later years he warned that places like the United States were taking a different path toward the same utopian dead end. The rhetoric in contemporary politics confirms his concerns.

So should we care about American politics? Yes, but not to the extent that it distracts us from loving our neighbors and treating people—all people—decently. It should also never be severed from a contemplation of the moral cosmos. Only the citizen with the well-ordered soul can begin to understand what a well-ordered society or state should look like and what its limitations should be.

 

There is no simple, five-point action plan for apolitics. The good citizen is left with a tension between the perfect order of ideas and imperfect temporal politics. This tension balances what we seek from human political order and keeps us from descending into tyranny. Without the tension—without the apolitical—no realm of perfect ideas or conception of love can exist. All that remains is the power to be as unjust, imperfect, and unloving as we can imagine.

Eastern European Historian Emanuela Grama on Romania’s Heritage

 

Emanuela Grama is an Associate Professor in the Department of History at Carnegie Mellon University. She received her PhD in Anthropology and History from the University of Michigan, Ann Arbor, in 2010. Her first book, Socialist Heritage: The Politics of Past and Place in Romania, is currently in production with Indiana University Press. Visit her website or follow her on Twitter @emanuela_grama.

 

What books are you reading now?

 

I’m currently re-reading the Transylvanian Trilogy by Miklos Banffy. This is a 1,400-page novel about the world of the Transylvanian Hungarian aristocracy at the end of the 19th century and the beginning of the 20th. Banffy wrote this novel after the end of the Austro-Hungarian empire, musing both elegiacally and ironically about his Hungarian compatriots who could not see “the writing on the wall”—in this particular case, the disintegration of the empire and of the social and political order it represented.

 

I am also reading Holly Case’s The Age of Questions, a brilliant intellectual history of the many “questions” that emerged in the 19th century and the ways in which political actors at that time tried to make sense of the radical changes brought about by the industrial revolution, the rise of capitalism, and the modern age.

 

In general, I am the type of person who reads multiple books at the same time, according to how I feel on a particular evening. Right now, I’m moving in between several volumes, including Toni Morrison’s Song of Solomon, Valeria Luiselli’s Lost Children Archives, an edited volume entitled How We Read (@punctumbooks), and Jill Lepore’s These Truths. 

 

What is your favorite history book?

 

Obviously, my answer to this question would probably keep changing from one month to another, depending on what I am reading at the time. One of the books I’ve read recently and loved—as in, I-could-not-put-it-down type of love—is East West Street by Philippe Sands. It crosses genres, being at the same time a family memoir, a love story, and a historical analysis of the intellectual trajectories and biographies of the legal scholars who coined the concepts of “genocide” and “crimes against humanity,” each echoing a particular understanding of the relationship between individual, state, and society.

 

Why did you choose history as your career?

 

Actually, I could say that history chose me. I am an anthro-historian working in a history department, and I teach courses in European history as well as in cultural anthropology. I was privileged to be a graduate student in, and receive my PhD from, the Doctoral Program in Anthropology and History at the University of Michigan. As part of this program, I took a wide range of courses, from socio-cultural and linguistic anthropology to historical methods and theory and the historiography of modern European and Eastern European history. During grad school, I learned how to think as an anthropologist when doing archival research, and as a historian while in the field. Specifically, I constantly tried to consider the historical and political conditions under which an archival fonds was constituted, organized, and made available to researchers—and even sometimes, as I learned during my recent research in the National Archives in Bucharest, Romania, re-classified for political reasons. (In Romanian, there is even a special word for this process, re-secretizare, meant to signal that particular files and archival fonds are being re-classified, often at the request of specific political actors in the government.)

 

In my work, I continue to draw on insights from both anthropology and history. For instance, I recently published an article about some of the art historians and collectors who worked with the communist government in post-1945 Romania to reorganize the nationalized art collections and to form a socialist network of art museums. I drew on a wide range of primary sources, such as memos of the meetings, donation deeds, inventories of the collections, communist party meetings, etc., and I relied on anthropological theories about property and value to look at these sources in a new light. Specifically, I used Weiner’s brilliant concept of “keeping-while-giving” to argue that these collectors and art experts became particular “arbiters of value”, straddling two distinct political and social orders: the interwar and the early communist periods. 

 

I employ a similar strategy in my forthcoming book, Socialist Heritage: The Politics of Past and Place in Romania (Indiana UP, 2019). The book is a social and political history of one place: the historic district of the Old Town in Bucharest, Romania’s capital. I approach the Old Town as a window onto broader political and cultural changes during the communist and postcommunist periods. This district had historically been a place of transactions and transgressions, a place that defied easy categorization. When the communist officials wanted to transform Bucharest into a modern socialist capital, they initially wanted to demolish the old houses in the district. The architects hated its aesthetic heterogeneity, its narrow streets, its old houses. They wanted it gone. But in the 1950s, some archaeologists found the ruins of a medieval palace, and used them to fight off the demolition plans. In the end, paradoxically, the Old Town shifted from being an urban eyesore to being portrayed as a key heritage site of the socialist state and of the Romanian nation. After the end of communism, however, this heritage turned into a burden, a symbol of a time that everyone wanted to forget.

 

I also explore how the district became once again a political resource for the postcommunist elites, who used it to naturalize a more exclusionary concept of citizenship, one that depended on property ownership. They did so by promoting the Old Town as a symbol of Bucharest’s European history and by attempting to alter its social and architectural fabric—evicting the homeless, changing the utilities infrastructure, adding new pavement to the narrow streets. In parallel, however, they refused to assume responsibility for the state-owned old buildings, many of them in decrepit condition, and implicitly for the state tenants’ precarious situation. The case of the Old Town reveals how these new elites managed to deny their own role in the increasing economic and political volatility of postcommunist Romania, and instead place this responsibility exclusively with the poor. 

 

What qualities do you need to be a historian?

 

This summer, I spent two months conducting research in two archives in Bucharest, Romania. Over coffee with an old friend, I was very excited to tell her about some of the documents that I found. But my mood suddenly changed when she looked at me unfazed and asked: do you really like doing this? Her question took me by surprise. Of course, I should have realized that some people might view the act of reading dusty old documents as a waste of time. My answer was yes, I do like it, but I did not sound too enthusiastic. I thought that a specific example would be more persuasive, so I told her how once I found a draft of a love letter hidden among some boring bureaucratic forms. 

 

The letter was written on the back of some typed documents, a long inventory of art objects in a museum collection. Who knows how that letter ended up in the archives? Maybe the writer did not have access to blank paper, and he wrote the letter on pages that he did not think were of much value—but then forgot to remove those pages from the file. I will probably not be able to use that love letter as a source for an article or book chapter, but I am thrilled that something like this could happen in an archive; to stumble upon a trace of an anonymous bureaucrat’s intimate anxieties about a seemingly unrequited love, and to feel empathy for someone living in a different time and place (in this case, Romania in the aftermath of the Second World War, as the letter was dated January 1, 1945). My friend did not seem too persuaded, but she nodded diplomatically when I told her the story.

 

What I love most about doing archival work is the potential of creating a story out of disparate pieces. But the road from posing a research question and finding primary sources to the end product, often a book, is a long and twisted one. To succeed as a historian, to walk to the end of that road, you need to be patient, to be hard-working, to embrace failure, chance, and serendipity, to be open-minded, to be willing to revise your thinking and arguments along the way, and especially to be persistent. 

 

It is a long road, but it need not be a solitary one. We need to find ways to make our tentative arguments heard by sharing them with friends and colleagues; to go back to reading good fiction when we feel that our writing is becoming stale; and especially to find a community of kindred spirits and mutual support—whether in our own department, academic circles, or on #AcademicTwitter. We need to actively search for empathetic peers and generous friends who want to engage in conversation and allow us to talk about our work.

 

Obviously, to find those peers, and especially to keep them in your intellectual life, means that we also need to be equally generous with our time and ideas. And here I would draw on my experience conducting interviews. Listening, truly listening, to someone takes tremendous effort and energy. (During my fieldwork, after an hour and a half of conversation with someone, trying to follow every word and to think about each possible significance of every utterance, I would become so tired that I would often need to take a nap.) I have tried to apply this active listening during conversations with my peers and my students. I don’t always succeed, but when it happens I feel I can truly engage with someone’s ideas in a fresh and generous way.

 

Ultimately, in my view, the best historian is a kind of magician: one who can transform a puzzle of disparate dusty documents into a persuasive analysis and an electric narrative. And whenever a friend asks whether they like what they are doing, this historian answers: “Yes, I love it! Let me tell you about this time when…” And then a true, genuine, and generous conversation follows.

 

Who was your favorite history teacher?

 

I was privileged to learn a lot from Katherine Verdery and Gillian Feeley-Harnik, my PhD co-advisers, both brilliant anthropologists who seriously engaged with historical analysis in their work and conducted extensive archival research in addition to fieldwork. Their scholarship represents a model of astute analysis and intellectual rigor. 

 

What is your most memorable or rewarding teaching experience?

 

I once had a student who was completely silent in class and even looked a bit aloof. It was the beginning of the spring semester, and this was a seminar of around 15 students. I wanted to better understand why: did she have a schedule so busy that it left no time to prepare for this class, or did she find the material plainly boring? I invited her to schedule an appointment during office hours. During that meeting, she admitted that she could not follow the class conversation and always felt behind. I asked her to describe how she studied for the class. She took out one of the books assigned for the course and I took a look. Every page was highlighted because, as she put it, she felt that everything was important. I realized how much work she had been putting into this course, and also how lost she must have felt dealing with all that information. I told her that she needed to learn how to skim-read, a skill that is so important in college.

 

Afterwards, we met almost every week that semester and discussed different ways she could go through a book without paying attention to every word. I taught her tricks that I myself had learned in graduate school: read the introduction and conclusion first, then read the intro and conclusion of various chapters, skim through pages until you get to a part that catches your attention, and then read closely only that section. We alternated between different forms of reading, ranging from quickly skimming some pages and getting one idea onto paper to closely reading a particularly beautiful paragraph or a persuasive analysis—and stopping there.

 

The student became more and more confident, and she began speaking in class. She was soft-spoken, but her comments were poignant and persuasive, revealing her originality of thought and attention to detail. Her writing improved significantly, and she passed the course with a good grade. At the beginning of the next academic year, I ran into her on campus. She was with her mom and her grandmother. I then learned that she was a first-generation college student—something she had not told me during our meetings. She was elated to be done with all of the required courses and to soon be the first in her family to become a college graduate.

 

From that story I learned that I should not assume anything about students’ silences; there may be many issues hidden behind their unwillingness to speak in class. Since then, I’ve tried to encourage every student in my courses to come talk to me at the beginning of the semester. Such individual meetings have helped me learn more about each student and forge a more nuanced interpersonal connection, one that would otherwise be more difficult to establish in the classroom.

 

What are your hopes for history as a discipline?

 

That more archival fonds will be made available to researchers and that more positions will become available for junior scholars currently on the job market. And that more and more students will choose to enroll in history courses and pursue history majors because they will realize how much those courses can contribute to their becoming informed citizens, confident in their beliefs and less susceptible to political manipulation.

 

Do you own any rare history or collectible books? Do you collect artifacts related to history?

 

I’m not a collector but I love to visit second-hand bookshops and museums.

 

How has the study of history changed in the course of your career?

 

The field of Eastern European history has changed both thematically and quantitatively in the last twenty years. This is a byproduct of two major changes: 1) the archives of the former communist governments have mostly been made available for research, and 2) a new generation of scholars in the region has begun a systematic study of this treasure trove of newly available primary sources. There is a much more intense conversation and collaboration between scholars living and working in the region and historians living abroad, as I see at the conferences I attend regularly (especially ASEEES, the Association for Slavic, East European, and Eurasian Studies).

 

Also, up to the mid-1990s, the field of post-1945 Eastern European history continued to be heavily influenced by topics and assumptions that were themselves byproducts of the Cold War: a penchant for political history, and the assumption of a clear division between East and West that paid no attention to systematic exchanges among various European countries within and outside the communist bloc. Things have changed dramatically. Historians have recently shown that such transfers between West and East were part and parcel of the politics of the Cold War, and not just simple “deviations” from the ideological norm. The politics of urban planning and the relationship between place-making and state-making during communism is another subfield that is rapidly expanding.

 

What is your favorite history-related saying? Have you come up with your own?

 

I don’t have a history-related saying, but one of my all-time favorites, one that I keep repeating to myself when I get stuck, is “feather by feather.” It is inspired by “Bird by Bird,” the famous piece by Anne Lamott, in which she talks about the trials and tribulations of writing. She starts with the story of her own father telling her brother how to begin and stick with a school project—by drawing one bird at a time and not becoming panicked by the magnitude of the whole project. When I was writing my book, by the end I was so tired that I thought I could not accomplish even one “bird” at a time—the equivalent of a few pages. So I deconstructed that “bird” into a multitude of “feathers,” that is, individual words. It felt easier to think in terms of word counts instead of pages. However, if I think about it, “feather by feather” is also fundamentally historical. It speaks about gradual change, and thus implicitly about history as a process. Almost anything, from political institutions to ideas, concepts, and attitudes, needs time to emerge, to mature, and to flourish.

 

What are you doing next?

 

I am currently writing an article based on my recent archival research, focusing on the confiscation of the property of the German ethnic minority in post-1945 Romania and on the subsequent negotiations that the Lutheran Church initiated with the communist state to regain some of these assets.

 

I have also started working on my next book, which draws on archival and ethnographic research to explore how political regimes (communist and postcommunist) in Romania used property confiscation or restitution to negotiate their relationship with Transylvania’s ethnic Germans and Hungarians.  

 

I am also preparing for the start of the fall semester, when I will be teaching a graduate seminar, Methods and Theory in Historical Research, and an undergraduate course about immigration.

Celebrating the 200th Birthday of Prince Albert

 

200 years ago, a German petty princeling was born. Little Albrecht came into the world in what is now the Bavarian hinterland, in a summer residence that had recently been revamped in the Gothic Revival style. A second son, he missed out on being heir to a diminutive duchy that was about the size of the Isle of Wight. 

 

In 1819, “Germany” was a confederation of sovereign states, most of them run by royal families. Thus, the birth of yet another royal can’t have seemed all that historically significant. But this year historians, tourists, and a fair few locals will be celebrating at Schloss Rosenau. In Newcastle, meanwhile, visitors to the Laing Art Gallery can view watercolours of this beautiful, and slightly embellished, “castle” — on loan from the Royal Collection. 

 

Why the celebration? Because aristocratic Albrecht grew up to become Albert, husband of Queen Victoria of England.

 

In the age of Victoria and Albert, the Prince’s German-ness was a PR problem. Today, it is welcome soft power in politically hard times. In May, on what had been planned as a post-Brexit visit, Prince Charles was sure to mention the legacy of Albert. Over a century before, Queen Victoria had tried to tone down Albert’s Teutonic traits: she worried about British prejudice. Germans were seen as serious, pedantic, and in search of power abroad. 

 

A biography of the Prince commissioned by the Queen portrays him as an English wit to avoid such stereotypes. In his student days he was apparently good at comic impressions, which “a University, especially a German University, with the oddly accentuated ways of its professors, can never fail to supply.” Behind the managed public image, though, Victoria and Albert wrote to each other in earnest — and in German. 

 

Despite the Germanophobia or Germanophilia that has surrounded Albert, it’s debatable how much he embodied the Germany of two centuries ago. Born into a culture then famous for its poets and philosophers, Albert, once in Britain, became Chancellor of Cambridge. He’s credited with bringing the university into competition with the continent’s most reputable establishments — in Germany. But Albert wasn’t a bastion of German Bildung, or education, himself. In Bonn he followed a standard syllabus for his noble sort, rather than embracing any republic of letters. He did, however, attend lectures by a luminary of German Romanticism: August Schlegel. Unlike pompous princely peers, he didn’t insist on either a reserved seat or being addressed formally at the start of the hour — though perhaps acting as one of the people allowed Albert to skive off without much notice. 

 

If Albert wasn’t a token German intellectual, neither Dichter nor Denker, in domestic life he was more typically Germanic. His and Victoria’s palaces were decorated with Christmas trees — although these were first introduced by earlier German British royals, those Georgian Hanoverians — and guarded by dachshunds called Waldmann and Waldina. And Albert loved German Lieder, or classical songs: he would play on a Buckingham Palace piano and sing as Victoria and even Mendelssohn joined in. 

 

Princes like Albert were the period’s major German export. Germans had long set up royal houses abroad, and the provincial German aristocracy already married internationally. Victoria and Albert were cousins, after all. But in the early nineteenth century the diminutive duchy of Saxe-Coburg-Saalfeld produced two foreign monarchies that still exist today: in Belgium, Albert’s uncle took up the throne; in Britain, Albert gave his name to Victoria’s line of succession. It was in this way that the tiny German territories expanded, in what was otherwise an era of global empires. 

 

Albert’s father, Ernest, had been given an additional petite principality at the Congress of Vienna in 1815, yet he found “Lichtenberg” to be a bit of a burden. So he sold it to the Prussians without consulting either the people of the place or his own nominal parliament in Coburg — in a move Donald Trump would surely admire. While Ernest focused on ruling his lands closer to home (in reactionary fashion), his brother, son, and nephews were shipped away in the name of the family, just like their becrowned German colleagues. The strategy that worked in Britain and Belgium was less successfully rolled out in other countries, however. Prince Otto, second in line to the Bavarian throne, was made King of Greece — until he was deposed. Maximilian arrived in Mexico as the younger brother of the Habsburg Emperor, and was soon executed.

 

A motley collection of mini-monarchies is hardly our stock image of Germany. But it defined most of German history. Britain is part of this legacy. What’s more, the German royalty of these times — such as Albert and his relatives — has survived longer abroad than in Germany, where royals abdicated en masse after the First World War. Gloria von Thurn und Taxis, of a House that was made princely because it ran an efficient postal system, has reigned only over the parties of 1980s high society — and has now switched her allegiance to high priests, and reportedly Steve Bannon.

 

There’s an exception to this German royal rule of thumb. Last month, Liechtenstein celebrated its national day as a German-speaking monarchy, a micro-state even smaller than the Isle of Wight. Though exceptional in the present, in the past Liechtenstein was interchangeable with the likes of Saxe-Coburg-Saalfeld, or for that matter Lichtenberg. The story goes that Prince Philip once painted watercolours on Liechtenstein’s mountainside. Whether true or not, imagine the scene for a moment: a small, pretty, and provincial Schloss. It would symbolise both the British royal and the German cultural history of Albert’s age, which in 2019 is — somehow — still alive. 

 

The History of Impeachment and Why Democrats Need to Act Now

Ronald L. Feinman is the author of “Assassinations, Threats, and the American Presidency: From Andrew Jackson to Barack Obama” (Rowman & Littlefield Publishers, 2015). A paperback edition is now available.

 

 

Two American presidents have been impeached: Andrew Johnson in 1868 and Bill Clinton in 1998-1999. Richard Nixon resigned in order to avoid formal impeachment. All three instances produced extreme political division and controversy.  All three occurred with a divided government—the President was from a different party than the Congressional majority.

 

Andrew Johnson became president after Abraham Lincoln’s assassination in 1865. Lincoln, a Republican, asked the Democratic Johnson to be his running mate in 1864 due to concerns that Lincoln might face a tough reelection campaign against former General George McClellan. Lincoln hoped Johnson would help him gain the support of loyal Democrats who appreciated Johnson’s strong support of the Union.

 

However, Johnson did not agree with much of Lincoln’s agenda, and Republicans in Congress turned strongly against him. Johnson’s inability to work with and get along with the party that had elected him Vice President was made worse by his horrible temper, refusal to compromise, and tendency to use foul language. In retrospect, no one would defend his prickly personality or his racist tendencies.

 

Johnson was impeached and brought to trial for breaking the Tenure of Office Act of 1867, which was designed to prevent the President from dismissing cabinet officers without the approval of the Senate. Johnson fired Secretary of War Edwin Stanton, a major critic who collaborated with the Radical Republicans who wished to see Johnson removed. The law was eventually declared unconstitutional by the Supreme Court in Myers v. United States in 1926, 59 years after its enactment. 

 

Ultimately, Johnson avoided removal from office by just one vote. Ten Republicans joined with nine Democrats and voted to keep Johnson in office. The final vote was 35-19, one vote short of the two-thirds majority needed to approve removal. Johnson had not abused power or obstructed justice, and the impeachment case was flimsy. While his personality and viewpoints were obnoxious to many, there was no real justification for his impeachment.

 

Richard Nixon was facing impeachment in 1974 from the opposition Democratic Party in Congress due to strong evidence of abuse of power, obstruction of justice,  contempt of Congress, refusal to cooperate with the impeachment investigation relating to the Watergate Scandal, and other illegal acts discovered in the process of the investigation by the House Judiciary Committee.  

 

Ultimately, the Nixon impeachment had bipartisan support in that committee, with seven Republicans joining the Democrats in backing three articles of impeachment against Richard Nixon. The Supreme Court also stepped in via the case of United States v. Nixon, ordering that Nixon hand over the Watergate tapes demanded by the House Judiciary Committee.  

 

Additionally, bipartisan support for Richard Nixon’s removal from office was made clear when Republican leaders of Congress, including Senate Minority Leader Hugh Scott of Pennsylvania, Senator Barry Goldwater of Arizona, and House Minority Leader John Rhodes of Arizona, visited the White House and informed Nixon that he had lost the support of Republicans in the US Senate and would be unlikely to hold more than fifteen of the 34 votes needed to block his removal.  

 

With the case against him strong, and the bipartisan move against his staying in office growing rapidly, Nixon realized it was time to leave the Oval Office and avoid a further constitutional crisis.

 

Bill Clinton faced impeachment in 1998-1999 from the opposition Republican Party in Congress. Republicans were determined to remove him for lying under oath about his extramarital sexual relationships, perjury committed after the Supreme Court’s 1997 ruling in Clinton v. Jones established that a sitting President could be required to testify.

 

Clinton was impeached on the last day of the 105th Congress in December 1998, and the trial was held by the new 106th Congress in January and February 1999, violating the rule that an impeachment and trial be conducted in the same Congress. The trial was part of the strategy of Newt Gingrich and other Republicans to do what they could to undermine the Clinton Presidency and plan for the upcoming presidential and congressional elections of 2000.  

 

Ultimately, the Senate voted on two counts of whether Clinton would be removed from office. On the first count, lying under oath, only 45 senators voted to convict, far short of the two-thirds majority necessary to remove the president. On the second count, obstruction of justice, the Senate split 50-50, again short of the two-thirds required. Ten Republicans joined the Democrats in voting to acquit on the first charge, and five Republicans did so on the second. Some House Republicans had pushed additional impeachment articles, but those failed and were never considered by the Senate.  

 

The case against Bill Clinton resembled the Republican Party’s political vendetta against Andrew Johnson 130 years earlier far more than it did the case against Richard Nixon. No one then or since would defend Clinton’s private behavior in the Oval Office or his lying under oath, but impeaching Clinton was clearly an unpopular move by Republicans, and the President remained popular in public opinion polls at the time.

 

So, what do these past examples tell us about a potential impeachment of Donald Trump? It is extremely unlikely that Trump would be removed from office, because the Senate is Republican-controlled. It is still essential, however, that Democrats push impeachment to make a political point. Just as the Republicans in 1999 pursued impeachment without regard for how it might play in public opinion, the Democrats should not worry about public opinion or political ramifications, because Trump’s actions require accountability. If Democrats don’t take action, history will record that the Democratic Party refused to see the long-term danger of Trump, and it will set a bad precedent for the future. 

 

As I’ve written before, the case against Donald Trump is overwhelming. Donald Trump obstructed justice to prevent a thorough investigation into Russian involvement in the 2016 Presidential campaign. His son and others in the Trump campaign engaged in collusion with a foreign nation determined to undermine the candidacy of the opposition nominee, Hillary Clinton. Trump has also violated the Emoluments Clause of the Constitution by making profits daily on his various hotel properties and other business ventures, as recent reporting has made even more clear.   

He has abused the pardon power by promising or hinting at pardons for those who break the law to carry out his illegal and unethical directives. He has engaged in conduct that grossly endangers the peace and security of the United States in foreign policy. He has advocated violence and undermined equal protection under the law. He has undermined freedom of the press, a threat to American democracy, and has pressured the Department of Justice to investigate and prosecute political adversaries. 

 

Finally, Trump has shown contempt of Congress by refusing to cooperate with their investigation of his administration, a charge that was one of the three brought against Richard Nixon before he decided to resign ahead of a certain impeachment by the House of Representatives and conviction by the US Senate. 

 

Democrats need to act before the upcoming presidential election consumes even more political energy. It is time for the Democrats to move ahead on what needs to be done: the impeachment of the 45th President for high crimes and misdemeanors.

 

For more on impeachment, click on the links below: 

What To Know About the History of Impeachment

George Orwell and Why the Time to Stop Trump is Now

What Should Historians Do About the Mueller Report?

What Historians Are Saying: 2020 Election Democratic Primary Debates

The Divine Right Presidency

Steve Hochstadt is a professor of history emeritus at Illinois College, who blogs for HNN and LAProgressive, and writes about Jewish refugees in Shanghai.

 

 

Trump’s latest use of our government to cover up his mistakes, this time about weather forecasting, reveals much about the nature of his Presidency.

 

No government weather maps showed Hurricane Dorian threatening Alabama. On Thursday, August 29, Trump was briefed in the Oval Office on the hurricane by the head of FEMA, which released a photo of him looking at a map of where Dorian had been and where it was headed. A white curved line showed the areas that Dorian might possibly hit. Not Alabama.

 

Early Saturday morning, August 31, the National Hurricane Center realized that Dorian was not going to hit Florida directly, and threat projections were shifted further east. The next morning, Sunday, at 7:51 AM Trump tweeted the following: “In addition to Florida - South Carolina, North Carolina, Georgia, and Alabama, will most likely be hit (much) harder than anticipated.”

 

The National Weather Service’s Birmingham office reacted in 20 minutes, tweeting at 8:11: “Alabama will NOT see any impacts from #Dorian. We repeat, no impacts from Hurricane #Dorian will be felt across Alabama. The system will remain too far east.”

 

For Alabamans, whew. For Trump, though, emergency – he had made a mistake. Nobody had died, and at most his tweet scared some people, but he had been wrong, and that was impossible. At noon on Sunday at FEMA headquarters, he repeated, based on “new information,” that Alabama remained in the path of the storm.

 

As the hurricane moved north, doing tremendous damage but having nothing to do with Alabama, the storm in Washington about Alabama intensified. On Monday Trump repeated his claim that Alabama was in danger. By then, it was clear to everyone that Alabama would remain untouched, and the controversy shifted to whether Trump was correct that Alabama had been part of earlier forecasts. On Wednesday, Trump brought out the map from his briefing six days earlier. Somewhere in the White House, a new black Sharpie line had been added, extending Dorian’s “threat” another 100 miles west into a corner of Alabama.

 

On Thursday, Rear Admiral Peter J. Brown, Trump’s homeland security and counterterrorism adviser, released a statement that Alabama had been in the path of the storm. Wilbur Ross, the Secretary of Commerce who oversees NOAA and the National Weather Service, threatened to fire any employee who contradicted Trump.

On Friday afternoon, NOAA disavowed the Birmingham NWS office’s statement that Alabama would not be hit.

 

We all might soon forget this saga of Dorian and Alabama when the next outrage emerges, but its details display the character of our current government. Right-wing populist politicians and parties in democratic systems across the globe are being examined for their similarities to 20th-century fascists. Trump, however, is no strongman; he commands no armed militia of followers who brutalize opponents. He acts more like the unelected monarchs who ruled for hundreds of years by divine right. Trump is the state: “L’état, c’est moi,” as Louis XIV is supposed to have said.

 

Trump’s equation of himself with the state emerges in many of his statements. When the prime minister of Denmark curtly rejected Trump’s notion of buying Greenland, he said, “She’s not talking to me, she’s talking to the United States of America. You don’t talk to the United States that way.”

 

Let’s add up some individual instances where Trump has identified the USA with himself, made the government into his personal servants, and claimed unprecedented powers to do whatever he wants. As soon as he was inaugurated, he enlisted the National Park Service to crop photos of the inauguration to pretend that his crowd was larger than Obama’s. He ordered by tweet all US companies to stop doing business with China. He claimed he had the right to end the Constitutional provision of birthright citizenship by executive order. He threatened to close our southern border with military force to stop migrants. He deployed the National Guard and active-duty troops to the southern border to deal with the “emergency” that he had created.

 

In response to Robert Mueller’s investigation, Trump’s lawyers created an argument that the President cannot commit obstruction because he can do anything he wants: “the President has exclusive authority over the ultimate conduct and disposition of all criminal investigations and over those executive branch officials responsible for conducting those investigations. Thus as a matter of law and common sense, the President cannot obstruct himself or subordinates acting on his behalf by simply exercising these inherent Constitutional powers.” This led Trump to claim that he has the “absolute right to PARDON myself.”

 

King George III said during the American Revolution that “A traitor is everyone who does not agree with me.” Trump has often characterized his critics as traitors: the Democrats who did not applaud his State of the Union speech in 2018; any Jews who vote for Democrats; congressional Democrats who opposed his anti-immigration policies. The website Axios counted, by this past June, 24 occasions on which Trump had accused other Americans of treason.

 

Things didn’t turn out so well for George III when the American colonists decided that he did not represent them. To prevent Trump from crowning himself King Don I, Americans will again have to reject divine right pretensions.

Roundup Top 10!  

On 9/11, Luck Meant Everything

by Garrett M. Graff

When the terrorist attacks happened, trivial decisions spared people’s lives—or sealed their fate.

 

Busing Ended 20 Years Ago. Today Our Schools Are Segregated Once Again

by Gloria J. Browne-Marshall

Any desegregation plan must be a shared burden. But are we willing to take it on?

 

 

On or off, peace talks with the Taliban spell disaster for Afghanistan

by Ali A. Olomi

If history is any indication, the consequences of the Trump administration’s reckless attempt at an agreement and even hastier reversal will be borne out by Afghans themselves.

 

 

The Electoral College was Terrible from the Start

by Garrett Epps

Epps doubts that Alexander Hamilton could have foreseen the consequences of the Electoral College.

 

 

The Lost Promise of Reconstruction

by Eric Foner

Can we reanimate the dream of freedom that Congress tried to enact in the wake of the Civil War?

 

 

Should We Give Up on Democracy?

by Rick Shenkman

Some social scientists say we might not have a choice.

 

 

How Africa is transforming the Catholic Church

by Elizabeth A. Foster

Pope Francis’s visit to Africa highlights the growing trend toward decolonizing Catholicism

 

 

Why Southern white women vote against feminism

by Angie Maxwell

The often overlooked question that explains why discussion of a gender gap leads us astray.

 

 

The Necessary Radicalism of Bernie Sanders

by Jamelle Bouie

Conflict was the engine of labor reform in the 1930s. And mass strikes and picketing, in particular, pushed the federal government to act.

 

 

 

The Power of Serena Williams

by Tera W. Hunter

"What she and Venus have accomplished is far more important than future titles and broken records."

Trump’s Wall and the Aggrandizement of Despots

 

During the last week of August, The Washington Post reported that President Trump told aides to “fast-track billions of dollars’ worth of construction contracts, aggressively seize private land and disregard environmental rules.” He reportedly added that he would pardon any “potential wrongdoing.” Although the report acknowledged that an administration official insisted the president was only joking about pardons, it reveals the extent of the president’s desperation to secure a victory before the 2020 presidential election. A week after the Post story, the U.S. Department of Defense authorized diverting $3.6 billion to fund 11 wall projects along the Mexican border.

 

Egotistical rulers like Trump often have grandiose architectural plans. Hitler had his “Germania,” his name for a redesigned Berlin that would dazzle the world. Mao Zedong had his “10 Great Buildings” built in Beijing in 1959. Stalin had his never-built Palace of Soviets, which was to be higher than the Empire State Building, and later, Moscow’s seven skyscrapers known as the “seven sisters.” As a candidate and now as president, Donald Trump has been consumed by his dream of building "a great, great wall" on the United States–Mexico border. After announcing this when declaring his presidential candidacy in mid-2015, he added that “nobody builds walls better than me,” and that he would “have Mexico pay for that wall.”

 

For comparison with Trump’s wall obsession, however, let’s concentrate on one despot’s architectural plans—those of Stalin. Although the idea of a Palace of Soviets had been floating around for a while, it was not until August of 1932 that Stalin began to personally supervise its design by indicating specific alterations he wished in one architect’s plan. In 1933, he gave further instructions. They indicated various details regarding its shape, the need for the building to reflect the international solidarity of the proletariat, and, most importantly, that it be a monument to Lenin. Thus, Stalin wanted it to be topped by a gigantic statue of Lenin that would be much higher than the Statue of Liberty. To make room for the new structure, which was to be the tallest building in the world, the massive Cathedral of Christ the Savior was destroyed (it was later rebuilt in post-Soviet Russia). 

 

But the German invasion of the USSR in 1941 prevented Stalin’s architectural dream from being realized. Building materials for it simply could not be diverted from the war effort. Stalin’s penchant for gigantic buildings, however, resurfaced after Germany’s surrender in 1945. From 1947 to 1953 (the year of Stalin’s death), he had seven skyscrapers built. One of them, a new structure for Moscow State University (MSU), became the tallest building in Europe.

 

According to Khrushchev’s memoirs, Stalin said, “We’ve won the war. . . . We must be ready for an influx of foreign visitors. What will happen if they walk around Moscow and find no skyscrapers? They will make unfavorable comparisons with capitalist cities.” Although Stalin wanted “to impress people” with the grandeur of buildings such as the new MSU structure, Khrushchev thought “the whole thing was pretty stupid.” 

 

Not only did Stalin impose the “seven sisters” on Moscow, but he also dictated a similar architectural style for some buildings in other cities dominated by Soviet power. Warsaw’s tallest building, originally the “Joseph Stalin Palace of Culture and Science” but later renamed simply the “Palace of Culture and Science,” is one example.  

 

Yet, neither Stalin, nor Hitler, nor Mao, ever obsessed about building any structure as much as Trump has about “the wall.” From the beginning of his presidential campaign to the present, no other topic has concerned him more—even his Mueller investigation worries did not begin until 2017. A May Washington Post article stated that he “has demanded Department of Homeland Security officials come to the White House on short notice to discuss wall construction and on several occasions woke former secretary Kirstjen Nielsen to discuss the project in the early morning.” He also repeatedly urged the U.S. Army Corps of Engineers and Department of Homeland Security to award the border-wall contract to a construction firm whose head frequently appears on Fox News and is a GOP donor. 

 

Two primary reasons seem to propel Trump’s wall fixation. First, “the wall” is a clear symbol of his immigration policy, which demonizes immigrants crossing our southern border—“When Mexico sends its people, they're not sending their best. . . . They're sending people that have lots of problems, and they're bringing those problems with us. They're bringing drugs. They're bringing crime. They're rapists” (June 2015). More recently he warned of “invaders” coming across in “caravans.” The “wall” is the major symbol of his politics of fear and division.

 

Secondly, Trump not only considers himself a master builder, but a “master” at most things (“My I.Q. is one of the highest.” “I’m a smart person. I know how to run things. I know how to make America great again.” “I have a very good brain and . . . . I know what I’m doing.”). Thus, as The Washington Post reported, Trump is defying Congress and diverting military and other funds to build his “wall,” and he “is micromanaging the project down to the smallest design details. But Trump’s frequently shifting instructions and suggestions have left engineers and aides confused, according to current and former administration officials.”

 

In the face of expert opinions, he has agreed to build steel bollard fencing as opposed to a concrete wall, but he insists the bollards (or slats, as he likes to call them) should be painted black to make them hot and less climbable. He has also expressed a desire to arm the slats with sharp spikes that would cut the hands of any attempted climbers. And, like Stalin, “the higher the better.” As one official said in regard to Trump’s wall preferences, “He always wanted to go higher.”

 

In an earlier article, I mentioned “6 disturbing parallels between Stalin and Trump,” as well as some differences. Two of the former, their egoism and “politics of fear,” have already been suggested, and both men attempted to foster “a cult of personality.” 

 

But Trump’s use of the catchwords “Make America Great Again” and “Build the Wall” indicates still another commonality between Stalin and Trump: a willingness and ability to successfully employ simplistic slogans. Like Stalin, for example, Trump has labeled political opponents “enemies of the people.” He has also encouraged “build the wall” rallies and apparel—one of his supporters (singer Joy Villa) wore a “Build the Wall” dress with a “Make America Great Again” purse to the 2019 Grammy awards. And in January 2019 Trump tweeted, “BUILD A WALL & CRIME WILL FALL.” 

 

Such slogans cater to basic emotions like fear. Liberals and progressives sometimes find it difficult to understand the appeal of a Stalin or Trump—in 2019, six and a half decades after his death, Stalin remains tremendously popular in Russia. This failure, as Lionel Trilling indicated in 1950 and Rick Shenkman more recently, stems partly from underestimating the importance of emotions, myths, and non-rational political behavior.

 

Many of Trump’s slogans also reflect a populist and anti-intellectual mindset, an “us versus them” dichotomy that both Stalin and Trump often employed. Stalin frequently attacked “bourgeois specialists,” bureaucrats, and intellectuals as enemies of the people. As a January 2017 Washington Post column noted: “Trump's campaign was pitched entirely at the idea that egg-headed wonks and liberal elitists—including the entire literary and entertainment culture centered on the two coasts—were not only deeply out of touch with the concerns of average Americans but also dismissive of them.” It went on to state that Trump “views himself as channeling the will of the people, a group that has been ignored or laughed at by coastal elites over the past decade.” 

 

Although the future of the “wall” remains uncertain, it would not be surprising if, left unblocked, Trump followed the example of the man who changed city names to the likes of Stalingrad, Stalinabad, Stalino, and Stalinogorsk. Plastering his name on everything from hotels and casinos to planes and golf courses is already characteristic of Trump. And on a trip to George Washington’s Mount Vernon estate he commented that if our first president “was smart, he would've put his name on it. You've got to put your name on stuff or no one remembers you.” 

 

Trump has already invaded our brains so that we will never again be able to hear “wall” without thinking of him. Maybe that will be enough for the “great wall-builder.” Or maybe he has heard of the “Great Wall of China,” and dreams that someday a “Great Wall of Trump” will help memorialize him. 

The American Left Needs a Contemporary Thad Stevens

 

Donald Trump’s presidency has accelerated what was already the biggest upsurge for the American Left in several generations. The past decade’s crises, beginning with the Great Recession of 2008 and then the Republican Party’s lurch to the right under President Obama, have radicalized many Democrats and young people, with thirteen million people voting for an avowed socialist in 2016. This realignment leftwards has increased since Trump’s election: hundreds of thousands who had never participated in grassroots politics have joined local groups like Indivisible;  socialists are running for and winning office in many parts of the country; mainstream Democratic presidential candidates are vying to propose the most comprehensive programs for economic and social transformation.

 

The present momentum is a great opportunity for practical radicals, but they need to get serious about politics if they expect to seize this day.  Protest, “resistance,” and speaking truth to power are no longer enough. Leftists need to think about how to wield power in our complex political system.

 

For many, the sudden proliferation of radical movements and ideas evokes “the Sixties” or even “the Thirties,” when powerful movements drove massive social change. But today’s party and electoral politics differ profoundly from those two eras.  For most of the last century, the key ideological divisions in U.S. politics were not partisan, but regional and cultural. As Joe Biden’s recent gaffes have reminded us, only a generation ago the Democratic Party’s congressional leadership still included Southern white supremacists controlling key committees. When Jimmy Carter took office in 1977, the most powerful Democratic Senators were two Mississippians, John Stennis (Chairman of the Armed Services Committee) and James Eastland (Chairman of the Judiciary Committee). These men entered politics in the late 1920s and the Senate in the 1940s, and both remained obdurate foes of racial equality and any use of federal authority to guarantee black civil and voting rights. Conversely, in 1977 and after, some of the strongest defenders of black rights were northern Republicans like New York’s Jacob Javits, New Jersey’s Clifford Case, and Rhode Island’s John Chafee. The twentieth century’s only black Senator until 1993 was Massachusetts Republican Edward Brooke. Nor were civil rights a residual exception to an otherwise clear distinction between the parties. On other key issues, whether environmental, peace, or social welfare, Southern conservatives (mostly still Democrats) voted with conservative Republicans, and Northern liberals voted as a bloc across party lines.

 

Since then, we have lived through a fundamental realignment.  Democrats like James Eastland and Republicans like Jacob Javits are long gone.  Today the most powerful Southern Democrat is Representative James Clyburn of South Carolina. Like Clyburn, the majority of the Democratic party in South Carolina is African American (in the state where Radical Reconstruction crested). The Republican progressives are extinct and, while caucuses of center-right Democrats remain, the Solid South’s “yellow dog Democrats” committed to racial domination have disappeared—or turned Republican, with Strom Thurmond and Jesse Helms leading the way in the 1970s.  

 

Now that our party system has finally attained ideological clarity, there is a historically unprecedented opportunity to make the Democratic Party what this country has always lacked: a party of working people and all those historically excluded by race, gender, sexuality, religion, or nativity—the party of human rights, if you will. Since Thomas Jefferson, Democrats have claimed to be the “party of the people,” but that boast was always qualified by white skin and manifold exclusions. Myths aside, the party always included plenty of rich planters like Jefferson, Andrew Jackson, and later James Eastland, plus the oilmen and agribusiness interests whom Lyndon Johnson and others faithfully represented for decades, and more recently, the financial and tech sectors avidly pursued by Clintonian neoliberals.

 

Today’s left-wing Democrats need to examine which legacies from U.S. political history they should draw upon in remaking their party. Mainstream pundits are waking the ghost of Eugene V. Debs, five-time Socialist presidential candidate in 1900-1920, as a forerunner to Bernie Sanders. Debs was a remarkably effective agitator who repeatedly went to jail for his principles and put socialism on the map in American politics, but he is not a model for the intra-party struggle American radicals need to wage now. Debs never held major elective office, and his party never managed even a small caucus in Congress or any state legislature outside of Oklahoma (a “red” outlier during the ‘Teens). The party’s greatest accomplishment was periodic control of city hall in industrial towns like Reading, Pennsylvania, Schenectady, New York, and Bridgeport, Connecticut, and one major city, Milwaukee, a far cry from national power. 

 

If contemporary leftists want to learn from the past, a better example would be the most revolutionary parliamentary leader in our history, Congressman Thaddeus Stevens of Lancaster, Pennsylvania. Stevens was an extremely effective legislative infighter in Pennsylvania and then in Washington, renowned for his deadly acuity in debates, admired and feared by both allies and enemies. He was Lincoln’s bane during the Civil War, relentlessly pushing the President to do what needed to be done--free the slaves and crush the slaveowners. 

 

Stevens understood that at key moments politics really is a zero-sum game, in which you either win or lose. Moral victories are bittersweet consolations; prevailing over one’s opponent is what matters. In 1865-1868, he helped unite the Republican Party in pushing through the House all the key measures of Radical Reconstruction, including the Thirteenth Amendment (uncompensated freedom for all slaves) and the Fourteenth (making the “freedmen” into citizens with equal rights which no state could abrogate). He passed the crucial Reconstruction Act of March 1867, which imposed military governments on the former Confederate states to block their legislatures’ efforts to recreate slavery via Black Codes controlling the freedpeople. That legislation authorized the Army to hold elections for state constitutional conventions in which all men, regardless of race, could vote. 

 

Stevens and the other Radicals grasped the essence of “movement politics”: to push from the outside and mold public opinion via ceaseless agitation while carefully maneuvering on the inside to get the votes needed for decisive policy changes. These are the lessons we need to learn now, post-haste. It is also worth noting that Stevens was fearless in the face of significant disabilities. He was born with a club foot and ceaselessly mocked as a “cripple,” and in his youth suffered a disease which left him hairless, requiring ill-fitting wigs for the rest of his life. For decades, he lived openly with his black housekeeper, Lydia Hamilton Smith, ignoring salacious rumors. In 1851, while a Congressman, he defended 33 black men in the largest treason trial in U.S. history after some of those men killed a Maryland slave-owner who crossed into Lancaster County to claim his escaped chattels.  

 

Thaddeus Stevens gave no quarter to the enemies of liberty. He focused relentlessly on how to defeat them, by any and all means necessary, to bring about a true republic. When he died in August 1868, he lay in state in the Capitol with an honor guard of black soldiers. He asked to be buried in Lancaster’s one integrated cemetery with the following epitaph: “I have chosen this that I might illustrate in my death the principles which I advocated through a long life, equality of man before the Creator.” We need women and men like him now, in Congress and in the statehouses, and in power.

The Democratic Presidential Candidates’ Ivy League Problem – and the Party Divide It Signals

 

The latest round of Democratic presidential debates invited the candidates to weigh in on the question of whether the oldest candidates in the race have the “vision” to appeal to a new generation.  But since all three of the frontrunners are septuagenarians – while no candidate under age 60 has reached even 15 percent in the polls – a more relevant question is whether the “vision” of the younger candidates will catch fire among Democratic voters of any age.  Most of the younger Democratic presidential contenders have an educational and ideological background that is at odds with the views and experiences of many in the party. Nowhere is that more evident than in the near-monolithic dominance of elite colleges in the younger Democratic candidates’ educational histories.

 

Eleven out of the fourteen candidates born after 1960 were educated at Ivy League universities, while only two attended state institutions.  (As a comparison, among the 20 million American undergraduates who begin college each year, only 0.4 percent go to the Ivy League, while nearly 75 percent go to public colleges and universities). By contrast, among the Democratic presidential candidates born before 1960, not a single one attended an Ivy, and several began their educational career at local public colleges.  Bernie Sanders went to Brooklyn College.  Joe Biden attended the University of Delaware.  Even Elizabeth Warren, who eventually became a Harvard Law School professor, earned her bachelor’s degree in speech pathology as a transfer student at the University of Houston after initially dropping out of college to get married. Like an earlier generation of Democratic politicians who often attended state colleges (as Lyndon Johnson did) – or even, like Harry Truman, skipped college altogether – the oldest Democrats in the race for the 2020 presidential nomination did not enter adulthood as part of a meritocratic educational elite who had the credentials or resources to attend the nation’s most selective schools.

 

Of course, Ivy League-educated presidential candidates are nothing new; the United States has had them since the eighteenth century. But for much of the twentieth century (until at least the late 1960s), the Ivy League schools functioned more often as finishing colleges for the privileged than as creators of a new meritocratic class defined by intelligence. Money and family connections often mattered more than SAT scores in securing admission. As a result, hardly any Democratic politicians from working-class backgrounds were Ivy League alumni; the only Democrats who did attend were, like their Republican counterparts, children of wealthy families or political dynasties. The rest – including Hubert Humphrey, Eugene McCarthy, George McGovern, Walter Mondale, and a host of others – went to local state schools or even religious colleges, and their political priorities reflected the education that they received there. McCarthy, a Catholic graduate of Saint John’s University, was a philosopher-of-sorts on the campaign trail, while McGovern, a graduate of Dakota Wesleyan, could quote the Sermon on the Mount with the fervor of an evangelical advocate of the Social Gospel.    

 

All of this began to change in the last few decades of the twentieth century, when the Ivy League became a gateway to national politics for many first-generation members of a rising meritocratic class. Though scions of wealthier families are still vastly overrepresented in the Ivy League, a focus on academic merit and a concerted effort to make these schools more racially and economically diverse have enabled many brilliant, hardworking people from lower-income homes to make it into the Ivy League. As a result, the Ivy League has become more important than ever as an imprimatur of academic merit – and an increasingly important gatekeeper for entry into the upper echelons of any profession, including politics. Since 1988, when Harvard-educated Michael Dukakis ran for president, the Democratic Party has never nominated a graduate of a state university or non-elite college; every Democratic presidential nominee for the past thirty years has had a degree from either Harvard or Yale. And every current member of the Supreme Court has likewise attended law school at one of these two universities.  

 

Even as elite college students have become more racially and economically diverse, they have become more ideologically monolithic. Only a generation ago, income, rather than education, was the better predictor of people’s political leanings, but now people with a graduate degree are far more likely to be liberal Democrats than conservative Republicans, regardless of their race or income. To be sure, there is a conservative contingent at all of the nation’s colleges, including those that are most elite, but conservatives from the Ivy League are usually conscious that they are defying the intellectual currents at their school and that they are rebelling against the prevailing academic ethos of rights-conscious liberalism. Liberal students, on the other hand, commonly confuse the secular, rights-conscious liberalism of their academic milieu with the views of many lower-income Democrats of color – even though there are significant differences between the two, especially on issues of religion, sex, and gender.

 

Fifty-four percent of Democrats with graduate degrees identify as “consistently liberal” on all issues (social and economic), but the same is true of only 24 percent of Democrats with “some college” and only 11 percent of Democrats with a high school education or less. Surveys show that less-educated Democrats are overwhelmingly liberal on economic questions, such as jobs and healthcare; it is only on the cultural issues, such as abortion or LGBT+ rights, that significant differences by education and race show up.  And on these issues, the differences are stark.  About half of all Hispanics, for instance, would like to make abortion illegal. Fifty-five percent of black Democrats believe that a person’s gender is determined by their birth sex – a view that only 24 percent of white Democrats take.

 

These differences on abortion, gender, and sexuality reflect a larger divide in the party between secular and religious voters. White Democrats are heavily secular: only 22 percent attend religious services once a week, while 44 percent attend “seldom or never.” Nineteen percent of white Democrats are atheists. But among blacks and Hispanics – especially those who have less education – the picture is very different. Forty-seven percent of black Democrats attend religious services at least once a week and an additional 36 percent attend at least once a month. Seventy-six percent of black Democrats – but only 35 percent of white Democrats – say that religion is “very important” in their lives. Whether they draw from the progressive strands of black Protestant theology, socially conscious Catholicism, or another religious tradition that teaches concern for one’s neighbor, religion shapes their economic views in a way it does not for most white Democrats.  

 

The secular, rights-conscious, cultural liberalism of white Democrats is largely a mirror of the prevailing ideology at equally secular elite private colleges. A century ago, most of these colleges were bulwarks of liberal Protestantism, but their pluralistic, democratic values have now been thoroughly secularized and divorced from the religious traditions that initially shaped them. The percentage of white Democrats who seldom or never attend religious services (44 percent) just happens to be exactly the same as the percentage of Yale class of 2019 undergraduates who entered college identifying as “atheist, agnostic, or nonreligious.” And while this might be merely a coincidence, it seems to point to a larger reality: the views of white Democrats on cultural issues such as abortion, gender, and sexuality are almost identical to the views of the majority of elite college students, while the views of non-whites (who, on average, are much less likely to have attended a top-tier college or, in many cases, any college at all) often diverge radically. Highly educated cultural liberals often imagine that their progressive views resonate with lower-income racial minorities who want affordable healthcare, lower housing costs, and sustainable wages, but in many cases, they do not.  

 

To be sure, earning a degree from an Ivy League college or another highly selective institution does not necessarily mean that one is out of touch with the socially conservative values found among some in the working class. There is, of course, a sizable minority of cultural conservatives even at the most liberal colleges. Nor is it impossible for a Democratic candidate with an elite college education and progressive views on cultural issues to appeal to working-class voters: Bill Clinton (a graduate of Georgetown and Yale Law School) and Barack Obama (Columbia and Harvard Law School) both did this very effectively. But both Clinton and Obama were masters of expressing their liberal views in a religiously inspired language of cultural consensus that demonstrated respect for the values of socially conservative voters. Whether the current crop of young Democratic presidential contenders can do this effectively remains to be seen. The preference of African American voters and other people of color for an older white man with working-class roots over any of the younger African American or Hispanic candidates in the race suggests that, so far, they have not.     

 

If the younger generation of Democratic politicians would like to be the face of the party’s future, they may need to take a page from the party’s past and exchange their rights-based, cultural liberalism for a jobs-focused message that is sensitive to the social issue concerns of the millions of party members who have never set foot on an Ivy League campus.  The younger Democratic presidential candidates might be Ivy League graduates, but to win support from the rest of their party they will need to translate their ideas into a cultural vernacular that they probably did not learn in the classroom.  

A Nation Headed to Civil War: The Compromise of 1850

 

In 1850, the Union was proclaimed to have been saved again in a great compromise that removed slavery as a controversy from national politics. President Millard Fillmore declared it nothing less than “the final settlement.” The issue tearing the country apart, whether the vast territory conquered in the Mexican War would be slave or free, was no longer to be a matter of debate. “We have been carried in safety through a perilous crisis,” Franklin Pierce announced at his inauguration on March 4, 1853.

 

The Compromise of 1850 admitted California as a free state, confirmed Texas as a slave state, and avoided determining the status of New Mexico until far into the future. Only a few agitators trying to shield fugitive slaves from being returned to their masters under the new federal law continued to be nuisances. Slavery as a question that would divide the country was now safely consigned to the past, as it had been once before.

 

Most importantly, this new compromise left sacrosanct the Compromise of 1820, the Missouri Compromise, the original “final settlement.” The Missouri crisis had aroused all the issues and arguments revived in the crisis in the aftermath of the Mexican War. The admission of Missouri as a state would increase the proslavery bloc in the Senate to a four-seat majority. Its admittance would also establish a precedent for admitting further Western states as slave states. The Northern objection was mirrored in Southern fears that the entire West would be denied to slavery and the balance of power inevitably shifted. Secretary of State John Quincy Adams wrote in his diary that the Missouri problem was “a flaming sword . . . a mere preamble—a title page to a great tragic volume.” He believed it was based in the Constitution’s “dishonorable compromise with slavery,” a “bargain between freedom and slavery” that was “morally vicious, inconsistent with the principles upon which alone our revolution can be justified.” He prophesied that “the seeds of the Declaration are yet maturing” and that its promise of equality would become “the precipice into which the slave-holding planters of his country sooner or later must fall.” In the Senate, the Southerners’ anxiety that slavery might be prohibited in the territories assumed a hostility congealed into ideology against the egalitarian premise of the Declaration of Independence. Senator Nathaniel Macon of North Carolina, the former Speaker of the House, posed the question, “A clause in the Declaration of Independence has been read declaring that ‘all men are created equal’; follow that sentiment and does it not lead to universal emancipation?” The Declaration, Macon stated, “is not part of the Constitution or of any other book” and there was “no place for the free blacks in the United States.” Senator Henry Clay of Kentucky managed to hammer together a narrow majority for a compromise that brought in Maine as a free state to balance the slave state of Missouri and established a line restricting slavery north of 36°30’ latitude excepting Missouri. The debate inspired a sense of panic in Thomas Jefferson, retired at Monticello. “This momentous question, like a fire bell in the night, awakened and filled me with terror. I considered it at once as the knell of the Union.”

 

Jefferson’s nightmare hung over the Senate debate of the Compromise of 1850, filled with frightful images of death, premonitions of catastrophe, and curses of doom if slavery were allowed to persist as a vital issue. The Great Triumvirate of Henry Clay, Daniel Webster, and John C. Calhoun, the representative political men of their age, hurled lightning bolts from their Olympian heights. Henry Clay, young Abraham Lincoln’s “beau ideal of a statesman,” who invented the power of the Speaker of the House, who as a senator crafted the Compromise of 1820, who served as secretary of state, and who was nearly elected president, warned that the nation stood “at the edge of the precipice before the fearful and disastrous leap is taken in the yawning abyss below, which will inevitably lead to certain and irretrievable destruction.” Daniel Webster of Massachusetts, the Godlike Daniel, the voice of “liberty and Union, one and inseparable, now and forever,” whose framed picture hung in Lincoln’s law office, cautioned, “Secession! Peaceable secession! Sir, your eyes and mine are never destined to see that miracle. The dismemberment of this vast country without convulsion! . . . Sir, he who sees these States, now revolving in harmony around a common center, can expect to see them quit their places and fly off without convulsion, may look the next hour to see the heavenly bodies rush from their spheres and jostle against each other in the realms of space without producing a crash of the universe.” John C. Calhoun of South Carolina, whose stunning career included every office—congressman, senator, secretary of war, vice president, secretary of state—but the one he coveted most—president of the United States—sat wrapped wraithlike in a black cape on the Senate floor. The great nullifier, who insisted the states had preeminent authority over the federal government, objected to any compromise that would thwart the extension of slavery anywhere in the country, an “injustice” which he called the “oppression” of the South. “No, sir,” he prophesied, “the Union can be broken.” Calhoun’s acolyte, Jefferson Davis of Mississippi, in opposing the admission of California as a free state, threatened, “If, sir, this spirit of sectional aggrandizement, or if gentlemen prefer, this love they bear for the African race, shall cause the disruption of these states, the last chapter of our history will be a sad commentary upon the justice and the wisdom of our people.” Calhoun died less than a month after his final appearance in the Senate. Clay and Webster were dead within two years. The old order passed. By then Secretary of War Jefferson Davis was the power behind the president.

 

 

Excerpt from ALL THE POWERS OF EARTH by Sidney Blumenthal

Copyright © 2019 by Sidney Blumenthal. Reprinted by permission of Simon & Schuster, Inc., NY.

 

To hear Sidney Blumenthal discuss his work on his five-part biography of Abraham Lincoln and more, watch his interview with HNN editor Kyla Sommers. 

 

 

What We Can Learn About Surviving Frauds Like Trump from Titus Oates

 

Before there was Donald Trump there was Titus Oates. Known as Titus the Liar after he was finally revealed and reviled, Mr. Oates succeeded in roiling England for three painful years, 1678-1681. Almost single-handedly, he fabricated the now-infamous “Popish Plot,” which resulted in the execution of at least 15 innocent men (mostly peers of the realm, along with priests and even archbishops), the death of another 7 in prison, a genuine constitutional crisis, widespread riots, panic, dislocation, heightened distrust among neighbors, and religious hatred. In short, although history doesn’t literally repeat itself, sometimes it rhymes. 

 

Born in 1649, by his mid-twenties Titus Oates had accumulated a long history of failure, fabrication, and expulsions, along with a narrow escape from the gallows. As a youth, he was expelled from several schools, mostly for financial misbehavior, before entering Cambridge University, where he was also expelled after he reneged on paying a tailor whom he had engaged to make him a coat. He then faked a ministerial degree, masqueraded as an Anglican priest, and was ejected from that position for drunkenness, lewd behavior, and misusing congregation funds. He went back to his father’s residence, where he manufactured false charges against a local schoolmaster in hopes of acceding to his position. When the perjury was discovered, he was jailed, but he escaped to London and eventually shipped out as an Anglican chaplain aboard a naval vessel. Within a few months, he was caught at “buggery,” then a capital offense, but avoided execution because of his supposed religious vocation, although he was soon drummed out of the Royal Navy. 

 

Having returned to London, Oates was re-arrested on his earlier perjury charge but managed to escape once more. He briefly served as an Anglican chaplain once again, this time to an aristocratic family, but was soon sacked for “unsatisfactory behaviour.” Oates’s religious beliefs, if he had any, are unclear. He converted to Catholicism briefly, later claiming that he did so in order to go undercover and reveal the plot that he was soon to cook up out of thin air. As a putative Catholic, he wheedled his way into several schools in Europe, only to be kicked out of at least two, after which he pretended to have obtained a Doctorate in Catholic Theology – which was soon revealed to be bogus because he did not know any Latin. 

 

He returned to England, having devised details of a sensational plot – allegedly hatched in Rome and to be carried out by English Jesuits – to murder the Protestant English king, Charles II. In conjunction with one Israel Tonge, a fanatic anti-Catholic crusader, Oates managed to impress many officials with precise details of the assassination plans. One attempt was said to have been foiled when a musket jammed; a crack team of Jesuit assassins, armed with foot-long daggers, had allegedly been dispatched to murder the king on his daily walk in St. James Park; a group of Irish “ruffians” was waiting to accost him; and the queen’s doctor was said to be planning to poison him if all else failed. Charles himself was dubious, in part because Oates claimed to have met Don John of Austria, describing him as tall and fair, whereas Charles had actually met the Austrian nobleman and knew him to be short and dark. Nonetheless, Titus Oates proved remarkably persuasive to many in the king’s court and to the public at large. Things came to a head when he testified about this “plot” before an Anglican magistrate, Edmund Berry Godfrey, who was found murdered a month later. Oates immediately announced that the Catholics were responsible, setting off a frenzy of anti-Catholic panic in which Berry Godfrey virtually became a Protestant martyr. (The actual murderers were never identified.)

 

Mobs rampaged, burning effigies of the Pope and breaking into Catholic-owned stores. Oates was given leadership of a contingent of the King’s Militia, which entered Catholic homes, terrorizing the occupants and arresting suspects. Before the tumult was over, he had fingered hundreds of peers and prelates; Parliament had mandated that Catholics be forcibly relocated at least 16 kilometers from London; and a constitutional crisis arose because King Charles had no legitimate heirs and his brother, the Duke of York, being a Catholic, was considered an unacceptable successor. 

 

Within the first year of his colossal hoax, Oates had become the most popular man in the country, basking in the adulation of large crowds and proclaiming himself "The Saviour of the Nation." He also assumed the title of "Doctor," professing that he had earned the degree at Salamanca, undeterred by the fact that he had never been there. He was lodged at public expense at Whitehall, given a handsome stipend, dressed himself in fine episcopal attire, and was accorded an official bodyguard. 

 

Eventually, the fraud crumbled. Accumulated evidence of Oates’s lies, plus revulsion at the execution of many highly regarded persons, led to his unmasking. He was convicted of multiple perjuries, whipped through the streets of London, and imprisoned for the remainder of James II’s reign.

 

How did this gratuitous grifter, this frequent failure, this persistent perjurer and master of mendacity succeed in hoodwinking so many, and in turning England upside down? There were three main contributors: Titus Oates’s personal appeal, an inchoate fear of England’s Catholic minority, and the acquiescence of public officials, many of whom knew better but failed nonetheless to hold him to account. First, Oates was a gifted and charismatic orator, demagogically adroit at playing to the emotions of his followers. He had no source of income other than his personal brand, which he burnished at every opportunity. 

 

At the time, Catholics constituted only about one percent of the English population; overwhelmingly, they just wanted to practice their pre-Reformation religion, often in secret because of pre-existing prejudice against them. Nonetheless, there was widespread fear of Catholicism, even as people were often friends and neighbors of individual Catholics. By the latter half of the 17th century, history was casting a long shadow over England, notably the scalding memory of the nearly successful Gunpowder Plot of 1605, which had in fact been orchestrated by a small terrorist coterie of Catholics, and which, had it not been uncovered, would have blown up the Protestant English king James I and much of Parliament as part of a conspiracy to forcibly return England to Catholicism. There was also the terrifying Irish rising of 1641, in which thousands of Irish Protestants were slaughtered; a slogan promoted by Oates and his followers was “41 is come again.” Moreover, the Great Plague of London (1665) and the Great Fire of London (1666) had lent themselves to an earlier spate of anti-Catholic rumor-mongering.

 

On top of this loomed recollection of the Spanish Armada, as well as the fact that Protestantism, although successful, was geographically limited to northern Europe, while the great powers – France and Spain – were Catholic, as were Charles’s mistress, his wife, and his brother. Furthermore, Charles had attempted to ameliorate some of the more severe anti-Catholic laws of the time, while seeking accommodation with the rulers of France and Spain. Although he was definitely an Anglican, the Jesuits very much disliked him, reputedly calling him, among other things, the “Black Bastard.” 

 

And finally, there were members of Parliament, the clergy, judiciary, and the nobility who were reluctant to criticize Oates for fear of angry public reaction, and others, notably many in the newly formed Whig Party, who embraced his lies because they fed into their own agenda of suppressing Catholicism. It was not until the Roman Catholic Relief Act of 1829 that most of the discriminatory legislation passed because of Titus Oates’s malign influence was finally repealed. Oates himself didn’t go quietly: in 1699 he loudly disrupted the funeral of a woman who had forbidden him to preach at her demise, and in 1702 he was arrested for assaulting a woman with a cane. He died in 1705, largely forgotten and certainly not mourned. 

 

Before there was Donald Trump, there was Titus Oates – but England survived, thrived, and even became great. There have always been frauds, con men (con women, too), and truly dangerous, deranged characters who have sown chaos, pain, and despair. However, the true story of Titus Oates, although horrifying and downright infuriating, should give us hope that the US, too, can recover from You Know Who, just as England did. 

 

The U.S. a Christian Nation? Not According to the Founders!

 

George Washington may have said it best, if not first: “Religious controversies are always more productive of acrimony and irreconcilable hatreds than those which spring from any other cause.” To prevent such controversies, Washington ordered Continental Army commanders “to protect and support the free exercise…and undisturbed enjoyment of…religious matters."

 

But former attorney general Jefferson [“Jeff”] Beauregard Sessions, III, of Alabama, contends that Washington’s views were “directly contrary to the founding of our country.” And Vice-President Michael Richard Pence, a fervent church-goer who publicly proclaims his Christian beliefs whenever he can, insists the United States was “founded as a Christian nation.” 

 

Pence and Sessions are but two prominent Americans in and out of politics today who continue refueling a centuries-old controversy over the role of religion in American life.

 

Washington’s friend, the widely heralded polemicist Thomas Paine, tried to end the controversy. “I do not believe in…any church,” he declared. In a call to arms against what he called church-state tyranny in early America, he insisted that “every national church or religion accuses the others of unbelief; for my own part, I disbelieve them all.”

 

Both Benjamin Franklin and Thomas Jefferson agreed. President Jefferson denied that Jesus was “a member of the Godhead,” and Benjamin Franklin, a co-author of the Declaration of Independence with Jefferson, decried Christian church services for promoting church memberships instead of “trying to make us good citizens.” An outspoken Deist, Franklin criticized all religions for making “orthodoxy more regarded than virtue.” He insisted that man be judged “not for what we thought but what we did…that we did good to our fellow creatures.”

 

Most of America’s Founding Fathers echoed Franklin’s beliefs. America’s fourth President, James Madison, was raised an Anglican and was a cousin of Virginia’s Episcopal bishop. But he was a fierce proponent of church-state separation and fathered the Bill of Rights, whose opening words outlawed government “establishment of religion” and any prohibition of “the free exercise thereof.” Both Congress and all the states agreed. 

 

“It was the universal opinion of the [18th] century,” Madison wrote in 1819, “that civil government could not stand without the prop of a religious establishment and that the Christian religion itself would perish if not supported by a legal provision for its clergy.” But as President, Madison found that “the devotion of the people have been manifestly increased by the total separation of church from the state.”

 

Even the devout, church-going Congregationalist John Adams, who had signed the Declaration of Independence, inked his presidential signature on the 1796 Treaty of Tripoli, which affirmed to Americans and the world that “the Government of the United States of America is not, in any sense, founded on the Christian religion.” The 23 members present in the U.S. Senate (out of 32) ratified the document unanimously. 

 

That should have settled matters, but in the centuries since the founding, some Americans have persisted in claiming that the United States was founded as a Christian nation, ignoring, even scoffing at, the words of the Founders, the Constitution, and the Bill of Rights. 

 

The sole grain of truth to claims of governmental ties to Christianity in early America lies in the different religions established in each of the independent British-North American provinces before the birth of the United States. Although individual states retained state-supported religions well into the 19th century (four did so until after the Civil War), the ratification of the Constitution created an absolutely secular nation.

 

Indeed, each of the nation’s three founding documents—the Declaration of Independence, the Articles of Confederation, and the United States Constitution—carefully avoided all mention of Christianity or Christ. Article VI of the Constitution states, as dramatically as possible, that “no religious test shall ever be required as a qualification to any office or public trust under the United States” – hardly the hallmark of a “Christian” nation. 

 

To reaffirm that America was not a Christian nation, Congress and all the states added the First Amendment to the Constitution in 1791, reiterating the nation’s areligious character by barring government establishment of any and all religion.

 

Only the Declaration of Independence even mentions God, in a single ambiguous reference in the opening paragraph to what Deists rather than practicing Christians called “Laws of Nature and Nature’s God.” 

 

Like the founding documents, the collected letters, speeches, and papers of George Washington never invoked the name of Christ or Christianity and mentioned God only once, as he concluded his oath of office as first President of the United States and added, “So help me God.” Prior to that, he carefully omitted all references to God and Christ, appealing instead to “providence,” “destiny,” “heaven,” or “the author of our being” as sources of possible supernatural favor for himself and the nation.

 

“Providence has directed my steps and shielded me,” young Colonel Washington affirmed after escaping death in a fierce encounter in the French and Indian War. And as President, he wrote carefully worded letters affirming the nation’s areligious status and its promise of religious freedom to leaders of twenty-two religious groups—and atheists!

 

In a reaffirmation of his deep opposition—and that of all the Founding Fathers—to state-sponsored religion, Washington wrote a personal letter to members of the Jewish synagogue in Newport, Rhode Island, in 1790, restating the United States Government’s commitment to give “to bigotry no sanction, to persecution no assistance.” 

 

Again, the nation’s first President avoided all mention of God or Christ. 

 

Thomas Paine reinforced the thinking of Washington and America’s other Founders in his famed pamphlet Common Sense—the most widely read publication in the western world in the late 18th century after the Bible. Washington called Common Sense critical in convincing Americans of “the propriety of a separation [from Britain].”

 

A fervent proponent of Deism, Paine called the “connection of church and state adulterous.” He said such a connection in Britain and British-America had been designed to enrich both institutions and keep mankind in perpetual thrall by infecting men’s minds with the myth of the divine right of kings and hereditary rule. “Why,” Paine demanded, “should someone rule over us simply because he is someone else’s child?” Calling the notion absurd, he added, “Mingling religion with politics [should be] disavowed and reprobated by every inhabitant of America.” The Founding Fathers agreed.

 

John Adams disliked Paine intensely, but  nonetheless declared, “I know not whether any man in the world has had more influence on its inhabitants or its affairs for the last thirty years than Tom Paine. Call it then the Age of Paine.” He might have said, “The Age of Deism.”

1933 May Be Closer than We Think

 

On January 30, 1933, Adolf Hitler was appointed Chancellor of Germany, effectively ending the Weimar Republic, the nation’s second attempt at democracy. On January 20, 2017, Donald Trump was inaugurated President of the United States, effectively ending… well, what exactly?

 

Immediately after Trump’s ascension to office, many political commentators sought to fill in this blank with comparisons to the ill-fated Weimar Republic. Historians and other academics rejected the analogy as too facile. They pointed out that, unlike the United States, Germany had little experience with democracy. It had lost a major war and suffered a draconian peace settlement. Its economy had also been buffeted by rampant inflation, high unemployment and finally the Great Depression. Moreover, a large share of its population believed in conspiracy theories, including the infamous “stab in the back” legend that blamed the nation’s defeat in World War I on internal enemies such as Socialists, Communists and Jews.

 

While the contingent events leading to the rise of Adolf Hitler and the election of Donald Trump might seem to differ beyond the point of comparison, two and a half years of the latter’s presidency now force us to look deeper beneath the surface. Increasing cultural, social and political parallels between Weimar and America should give us serious concern in assessing the fate of these two democracies as part of an analogous historical phenomenon.

 

The sociologists Rainer Baum and Frank J. Lechner characterized pre-Hitler Germany as a “nation of moral strangers.” It was a country whose people could agree neither on the nature of a good society nor on the social relations and community that such a social order entailed. Germans generally divided into three closely bounded and often incompatible social and cultural milieus: liberal, social democratic and authoritarian corporatist. 

 

From an American perspective, the seemingly most alien of these milieus was the authoritarian corporatist, or what the historian Mack Walker characterized as the German mentality of hometowns. Hometowns, according to him, were communities of webs and walls that could be both physical and cognitive in character. The webs consisted of integrated and hierarchical social status groups or corporations, such as craftsmen, merchants, financiers and local government officials in cities and towns, and peasants and small farmers in the countryside. These groups earned their legitimate place in society through extensive training and socialization. They shared solidary, often inbred and exclusionary values in opposition to liberal individualism and socialist collectivism and were considered “rooted” like no others in the nation’s social fabric. The walls, in contrast, protected against those elements of society who were “rootless” and “disturbers” of the hometown community. They consisted primarily of the working class and the Jews, but also included immigrants, criminals and social deviants.

 

Hometown mentalities in the United States historically flourished in the ante-bellum South, with its belief in the principles of social honor and white superiority, its exclusion of millions of non-white slaves and its staunch opposition to Northern economic and political liberalism. The Civil War and Reconstruction were supposed to have brought an end to such particularistic and racist visions of the good society. But notions of the glorious lost cause of southern independence, underlying today’s overt and covert white nationalism and nativism, have proven that American webs and walls continue to flourish in our collective psyche. They exist literally in terms of building a physical barrier along our border with Mexico designed to keep out “rootless” and therefore dangerous immigrants. They also continue to exist mentally in the recent words of a President who can, without apparent penalty among his supporters, blithely tell women of color elected to the House of Representatives to “go back and help fix the totally broken and crime-infested places from which they came.”

 

The Weimar Republic tried to reconcile the values of liberalism, socialism and hometown corporatism in a single constitution. It proved to be a spectacular failure. In the words of Otto Kirchheimer, a contemporary jurist and political scientist, the effort resulted in a “constitution without decision,” one that did not contain “any values in whose name the German people can be in agreement.”  By its very nature, it did not encourage true democratic compromise and reconciliation among interested parties, but only winning and losing based on the political strength of competing social milieu, each seeking to impose its own worldview and material interests on its opponents.

 

In the United States our own revered Constitution is showing similar signs of cultural and ideological strain and conflict. Although it did not include socialist values, it did try to reconcile liberal and hometown visions of a good society in a great compromise over the existence of slavery. Its very federal foundations were designed to protect the hometown aspirations of a white nationalist South, by giving each state in the Union two senators regardless of population, creating an electoral college to elect the President and preserving the right of individual states to oppose federal authority through the so-called reserved powers clause of the Tenth Amendment. The result has been the thwarting of the popular will, most recently through the election of two Republican presidents who received fewer votes than their opponents, and the prospect of it happening again in 2020.

 

In fact, the supporters of hometown values in the United States—be they Donald Trump, the Republican Party or right-wing media commentators—have come to the same conclusion that their predecessors in the Weimar Republic reached. Under a liberal constitution they can neither win nor maintain political power. Even in the Reichstag parliamentary elections of March 1933, with National Socialists in power and the full force of the state’s coercive powers behind their campaign, Hitler could only garner 44% of the national vote.

 

Two factors in particular enabled the victory of hometown values and the destruction of liberal and social democratic ones in the Weimar Republic, and may yet do so in the United States. The first was the power and prejudices of the courts. Despite the socialist-democratic revolutions of 1918/19, very few judges from the German Empire were replaced. Educated in a hometown milieu and usually staunch opponents of parliamentary democracy, they exploited the process of legal and constitutional review to undermine democratic practices and procedures at both the national and state levels of government. They defined endemic domestic terrorism as the stepchild of the left and excused radical right-wing terrorism against the Republic as the legitimate outrage of national patriots. Even when Adolf Hitler staged a violent uprising in Munich in November 1923 against the Republic and was convicted of treason, he spent a mere 264 days of a five-year sentence in the relative comfort of Landsberg prison, where he composed Mein Kampf.

 

No one understands more fully the lesson of the judiciary in the Weimar Republic in preserving hometown political power than the Senate Majority Leader, Mitch McConnell. He has made it his primary mission to eradicate the “liberal bias” of the federal court system. He spectacularly violated accepted Senatorial practices by refusing to even meet with, let alone hold a hearing on, Judge Merrick Garland, President Barack Obama’s nominee for the Supreme Court. Since then he has been assiduously pursuing the appointment of extremely conservative, mostly white and male judges to the federal bench. According to a recent review in The Nation, Mitch McConnell has been able to confirm to date 123 federal judges, including 41 to the federal court of appeals, compared to only 19 circuit-court judges during a similar period under President Obama. These appointments were 78 percent male and 81 percent white, with an “unsettling number of them” having “earned their stripes as partisan think-tank writers, op-ed columnists, or even bloggers.” The vetting for most of these nominations has been through the ultra-conservative Federalist Society, while in March 2017 White House Counsel Donald F. McGahn II denied the more professional, if reputedly liberal, American Bar Association its previous special access to background information on judicial candidates prior to their nomination. Right-wing critics of the ABA have always chastised it for its “liberal” biases.

 

The second factor contributing to the victory of hometown values in the Weimar Republic, which eventually morphed into the “blood and soil” and Volksgemeinschaft of the Third Reich, was the expansion and use of the office of the President. Nothing enables an untrammeled misuse of executive power more than a compliant court system and an impotent legislature. Article 48 of the Weimar Constitution granted the President the right to take emergency measures in times of crisis and national emergency. While the Reichstag could rescind an emergency decree, it never did so. By the time of the economic Depression of the early 1930s its impotence as a legislative body had become a stark reflection of a German nation of “moral strangers.” It proved virtually incapable of agreeing on anything and ultimately consisted of a majority of elected parties staunchly opposed to the continued existence of democracy. As a political force it became totally irrelevant in the face of the expanding executive rule by the President and the Chancellor he appointed. In 1932 the Reichstag met for only 13 days in total, passing only five laws in the entire year.

 

Donald Trump, in his more than two years in office, has been busy crafting an American version of Article 48. He has discovered the possibility of governing without legislative approval. His tools have been the executive order, the declaration of a national emergency and the extension of executive privilege. He has steadfastly ignored subpoenas for members of his staff and government to testify before the House of Representatives. Legislative efforts to rein in his executive proclivities have proven futile in a badly fractured Congress, with the Republican-led Senate determined to deflect all efforts to hold the President and his staff publicly accountable. While Democrats have been able to seek succor in the courts to some degree, that opportunity is withering and dying as Mitch McConnell perfects his reshaping of the Federal judiciary in a hometown image.

 

On February 27, 1933, the German Reichstag, the physical symbol of the country’s democracy and the rule of the people, burnt to the ground. Hitler immediately blamed Communist agitators and used the national crisis as a springboard to dismantle the Republic. In short order he assumed virtual dictatorial powers by means of the Enabling Act, interned Communist leaders and members in concentration camps, excluded Jews from public service, outlawed trade unions and banned all remaining political parties except for National Socialism. By the summer of 1933 the Third Reich could no longer be deterred.

 

What might prove the tipping point for American democracy almost ninety years later? It could be a severe economic crisis, a war with Iran, another massive terrorist attack, or simply President Trump refusing to leave office after adverse election results in 2020, claiming that the outcome was rigged by unspecified “outsiders” seeking to destroy hometown America. Would the ideologically refashioned Federal courts, especially the Supreme Court, stand in his way? The Supreme Court has already intervened in the outcome of one presidential election with its Bush v. Gore decision halting the recount of ballots in Florida. Would the present Court, with its growing penchant for ignoring standing legal precedent, be willing to go even further this time around? And would a badly fractured Congress be able to act effectively, or would our democracy simply dissolve in a stalemate as it did at the close of the Weimar Republic?

 

To some, these questions might seem at best hypothetical, and at worst illusory. But the mere fact that they can now be seriously entertained in terms of the historical precedent of Germany’s Weimar Republic should give us pause. In today’s United States of America, 1933 may be closer than we think.

The Cuban Missile Crisis and the Trollope Ploy Myth

 

Response to Matthew Hayes: “Robert Kennedy and the Cuban Missile Crisis: A Reassertion of Robert Kennedy’s Role as the President’s ‘Indispensable Partner’ in the Successful Resolution of the Crisis,” History, The Historical Association and John Wiley and Sons Ltd (May 7, 2019), 473-503, and “RFK’s Secret Role in the Cuban Missile Crisis,” Scientific American (August 6, 2019).

 

I was naturally intrigued when I learned about a purportedly new take on Robert Kennedy’s role in the Cuban Missile crisis. The unique personal/official relationship between President John F. Kennedy and his younger brother Robert has been thoroughly explored in dozens of studies over the last half century. RFK’s “portfolio,” widely understood at the time, was that of JFK’s most trusted adviser and confidant—and, as Hayes suggests, “the president’s de facto chief of staff.” A different attorney general would likely not even have been invited to take part in secret discussions during a dangerous foreign policy crisis. The loyalty and trust between the Kennedy brothers will surely remain a one-off in the history of the American presidency. 

 

Matthew Hayes’ work confirms the already well-documented story of RFK’s unique role, especially his JFK-approved back-channel contacts with Soviet diplomats before, during and after the missile crisis; he emphasizes, however, the importance of the more than 3,500 recently declassified documents which confirm that the attorney general was overseeing interdepartmental planning for possible contingencies in Cuba—including “the installation of missile sites” and “warning his brother of the possibility over a year before the crisis.” [Scientific American 2 (5-page printout); hereafter SA] Hayes cites Cuba-related documents which undeniably confirm that RFK was not your conventional attorney general. These examples augment the historical record but fail to provide anything genuinely new about the bond between President Kennedy and the brother eight years his junior. [History 32-35, 38, 42; hereafter HY]  

 

Referring directly to the ExComm tapes, Hayes contends that “in the first days of the crisis” RFK “insisted that an invasion remain on the table and even pushed for a reduction in lead time required to initiate one. Until recently (italics added) this approach was held up as evidence for a belligerent, hawkish adviser, promoting the sort of military action that would have led to dangerous escalation.” (SA3) In fact, from 1962 to the declassification of the White House tape recordings in the late 1990s, historians took for granted that RFK was the top dove at the meetings—mainly because of his posthumous 1969 memoir, Thirteen Days (which has never been out of print). Hayes declares that: 

 

He saw his role as pressing for all alternatives, regardless of where they might lead. … he was instrumental in convincing other advisers of its [the naval blockade’s] merits and, ultimately, the president. In both cases he was able to do so because he was seen as balancing resolve with restraint, bridging the more forceful approach advocated by the military and Joint Chiefs with the optimistic diplomacy pushed by dovish advisers such as U.N. Ambassador Adlai Stevenson. [SA3]

 

The quote above is a historical rope of sand. RFK only briefly and reluctantly backed the blockade and continued to grumble about it well after the president had endorsed it; he certainly did not convince the JCS to support it: they never did. There is no escaping or rationalizing the facts—the tapes have irrefutably identified RFK as one of the most contentious ExComm hawks—from day one to day thirteen. Hayes is, in effect, turning the historiography of the missile crisis upside down, as if these new documents [“Until recently”] can somehow explain away the substance and tone of what Robert Kennedy actually and repeatedly said in the recorded meetings—but carefully concealed in Thirteen Days. RFK’s role as chair of the Special Group Augmented, even more thoroughly documented since 2012 (https://www.jfklibrary.org/asset-viewer/archives/RFKAG), is entirely consistent with his hawkish views in the ExComm meetings—in which he certainly did not reveal an “innate understanding of the missile crisis as more a political struggle than a military one, with its own limitations.” [SA2; HY480] Hayes’ nebulous claim that these “declassified private notes and a closer understanding of the brothers’ intimate relationship, now support a more holistic view of RFK,” fails to even dent the indisputable historical record on the White House tapes. 

 

RFK’s key responsibilities included chairing the Special Group Augmented, which coordinated Operation Mongoose in Cuba, overseeing industrial and agricultural sabotage, which some historians have called ‘state-sponsored terrorism,’ as well as attempts to assassinate Fidel Castro. Richard Helms, CIA deputy director for operations, recalled: “If anybody wants to see the whiplashes across my back inflicted by Bobby Kennedy, I will take my shirt off in public.” A senior Mongoose planner agreed, “That’s how he [RFK] felt about this stuff. It was unbelievable. I have never been in anything like that before or since and I don’t ever want to go through it again.” [Stern, Averting the Final Failure, 14; hereafter AV] Hayes never even mentions the Special Group Augmented.  

 

Hayes’ discussion of the “Trollope Ploy” (hereafter TP; a reference to a plot device in a 19th-century Anthony Trollope novel) is even more problematic. He explains the TP as “a bold strategy for navigating two different proposals from Khrushchev…within the space of a few hours.” The first (late on 10/26) promised to remove the missiles if the US pledged not to invade Cuba; the second (early on 10/27) asserted publicly on Moscow Radio that the missiles would be removed if the US withdrew the Jupiter missiles from Turkey. “RFK took hold of the situation,” Hayes concludes, “assuming the leadership mantle.” He and the president’s chief speechwriter, Ted Sorensen, went into a separate room and came up with what Arthur Schlesinger, Jr. called an idea of “breathtaking simplicity:” “we ignore the latest [10/27] Khrushchev letter [Hayes incorrectly substitutes “while barely acknowledging receipt of the second”] and respond to his earlier [10/26] letter’s proposal.” [SA4; Thirteen Days, 1971 edition, 77] “JFK approved the ploy,” and sent RFK to make what Hayes calls a “highly secret assurance to [Soviet Ambassador] Dobrynin that the missiles would be removed ‘at a later date.’” 

 

This account, however, is not what happened! The tapes reveal conclusively that JFK remained very skeptical and only grudgingly and unenthusiastically agreed “to try this thing [the TP],” but also demanded new contacts with Turkey and NATO to convince them to give up the Jupiters because Khrushchev “had moved on” and could not go back to his earlier demand for a non-invasion pledge after his public statement about a trade. The entire ExComm—very much including RFK—continued to vigorously oppose the trade. The real breakthrough did not occur until the late evening rump meeting (about 20 minutes) of seven ExComm members, chosen and invited by the president himself. (JFK failed to activate the tape recorder and we will never know if he acted deliberately or simply forgot.) Secretary of State Dean Rusk, finally acknowledging the president’s determination about giving up the missiles in Turkey, suggested requiring that the Soviets keep the swap secret; the president accepted this recommendation and everyone finally acquiesced—however reluctantly. The president, in short, never let go of “the leadership mantle.” As Barton Bernstein observed, “they were the president’s men and he was the president.” [AV369]

 

It was JFK himself who first utilized the TP myth. Just hours after Khrushchev had agreed on 10/28 to the removal of the missiles in Cuba, the president phoned his three White House predecessors (Eisenhower, Truman, and Hoover) and skillfully lied to them, claiming that Khrushchev had retreated from the 10/27 missile trade proposal and had agreed, in the end, to remove the Cuba missiles in exchange for a non-invasion pledge. Eisenhower, who had dealt with Khrushchev, was skeptical and asked if the Soviet leader had demanded additional concessions; JFK coolly repeated the contrived administration cover story. The same version was fed to a gullible press corps and quickly became the conventional wisdom, later enshrined in Thirteen Days. [AV388]

 

Hayes criticizes my work for “dismissing the accounts of early [missile crisis] historians such as Schlesinger as ‘profoundly misleading if not out-and-out deceptive.’” [HY476] This accusation is irresponsible as well as false. First, the quoted passage actually refers to one document from the first day of the ExComm meetings found in RFK’s papers by Schlesinger (granted special access by the family in the 1970s). Second, I explicitly warned readers that “Schlesinger could not have known the full context of the RFK quote” at the time because the tapes were still classified. My judgment has nothing whatsoever to do with the ‘early [missile crisis] historians.’ If there is deception here, the deception was neither Schlesinger’s nor mine. [AV34-5]

 

“Historians such as Sheldon Stern,” Hayes maintains, “have argued that President Kennedy ‘bore a substantial share of the responsibility’” for precipitating the crisis. Hayes, however, chooses to call the missile crisis one of the Kennedy administration’s “principal moments of glory” and “a heroic and ingenious defense against Soviet aggression.” [HY476]

 

This “moment of glory,” “heroic and ingenious” language is unprofessional advocacy, bordering on hagiography, and is particularly baffling because there is a huge amount of evidence (including in Soviet archives) which confirms Khrushchev’s claim that the missiles were sent to Cuba to defend Castro against a second US-backed invasion. Hayes, nonetheless, dances around RFK’s dominant role in the Special Group Augmented and Operation Mongoose, which in reality aimed “to undermine the Cuban regime and economy by blowing up port and oil storage facilities, burning crops (especially sugarcane) and even disabling or assassinating Castro himself. … It became the largest clandestine operation in CIA history up to that time, ‘involving some 400 agents, an annual budget of over $50 million.’” [AV15] Hayes acknowledges that RFK was the president’s “eyes and ears in Mongoose,” [HY495] but otherwise ignores RFK’s fervent leadership role in that effort. 

 

“Stern,” Hayes complains, “continues to quote a second-hand exchange between RFK and Kenneth O’Donnell, JFK’s special assistant and confidant during the crisis, to undermine the veracity of RFK’s memoir Thirteen Days.” After reading the manuscript, “O’Donnell is said to have exclaimed, ‘I thought your brother was president during the missile crisis!’, while RFK replied, ‘He’s not running [for office], and I am.’” Hayes insists that this account “by someone who didn’t participate in most of the ExComm meetings should surely not be given so much prominence.” [HY478] This is an apples and oranges argument: the remark is not about the meetings or the crisis, but instead about O’Donnell’s shrewd insight into RFK’s personal, political motives in writing his memoir. (Of the four people present, the surviving two I consulted vividly recalled and confirmed each other’s account.)

 

That ambition is precisely what O’Donnell, known for his candor and directness, immediately perceived and RFK promptly admitted. RFK initially intended this crisis memoir for publication during JFK’s 1964 reelection campaign, but changed his purpose after Dallas. Bobby’s ambition, in fact, had even surfaced during the crisis itself. On October 29, Ambassador Dobrynin gave the attorney general a letter from Khrushchev to the president which specifically mentioned the missile trade. RFK consulted with JFK and returned the letter, reminding Dobrynin that the swap was to remain secret—and explaining that he personally could not “risk getting involved in the transmission of this sort of letter, since who knows where and when such letters can surface or be somehow published—not now, but in the future…. The appearance of such a document could cause irreparable harm to my political career in the future.” [AV403] The O’Donnell/RFK exchange is an entirely legitimate nugget of historical evidence and Hayes’ objection is disingenuous special pleading. 

 

“Critics such as Stern,” Hayes continues, [HY483-4]

 

far from viewing RFK as a leader of the doves (through his support for the blockade route), point to the primary source material and advocate his role as a dangerous hawk advocating invasion from the outset. 

 

In evidence for this assertion, Stern directly quotes RFK: ‘We should just get into it, and get it over with and take our losses if [Khrushchev] wants to get into a war over this.’… Stern argues that RFK’s memoir of the crisis ‘was an effort to manipulate the history of the missile crisis and invent the past.’ A ‘consistently hawkish’ figure emerges from Stern’s analysis of RFK, ‘one in sharp contrast to his brother.’

 

I don’t “view” RFK as “the leader of the doves” because he was not; he accepted the blockade only after JFK publicly announced it. I plead guilty as charged to pointing “to the primary source material,” the tapes, to prove conclusively (not to “advocate”) that RFK was a hawk on the first day and was still pressing to “take Cuba back” militarily on the thirteenth day. The “consistently hawkish figure” that rankles Hayes was not invented by “Stern’s analysis” but derived from RFK’s own words captured on the ExComm tapes, words which he spun very differently in his memoir. The assertion that ‘I was there’ is most often a red flag for historical manipulation, not a superior form of validation. History based on individual memory rarely rises above the personal motives for writing it. Thirteen Days and the tape recordings cannot both be right, and there is absolutely no question which account is reliable. 

  

Hayes, however, cites a specific case to allege that “this analysis is skewed, for Stern quotes RFK out of context, paring back RFK’s words selectively to support his argument.” The indented quote below, he claims, “actually begins with a series of qualifications, as RFK tentatively hedges his comments.”

 

Now [think] whether it wouldn’t be the argument, if you’re going to get into it at all, whether we should just get into it, and get it over with, and take our losses. And if [Khrushchev] wants to get into a war over this . . . Hell, if it’s war that’s gonna come on this thing, he sticks those kinds of missiles in after the warning, then he’s gonna get into a war over six months from now, or a year from now…. [HY483]

 

Accusing a scholar of “selectively” using evidence “to support an argument” is a serious personal and professional accusation—especially when untrue. This passage is not, as Hayes is determined to “prove” in spite of the ExComm tapes, some one-off, devil’s advocate musing by Bobby before he settled on a dovish line; rather, it is typical of his approach through the entire crisis. I just relistened to this tape and there is no question that before the “get into it” comment RFK is overtly scoffing at all suggestions of more limited action (such as air strikes) rather than invasion. Indeed, adding the “Now [think]….” sentence makes no change whatsoever in the meaning of his remarks. He is not “tentatively” hedging anything. In fact, Bobby makes his position abundantly clear minutes later, suggesting that the administration could stage an incident that would justify military intervention: “You know, sink the Maine again or something.” I included the ‘sink the Maine’ statement later in my narrative – yet Hayes leaves it out entirely. A reader might reasonably ask just whose version is skewed and selective.

 

Equally important, the indented quote above first appeared in the 1997 May-Zelikow transcripts, which I was the first to publicly expose as seriously flawed and unreliable. (AV, Appendix, 427-439.) Nothing in the Hayes articles suggests that he is even aware of the ensuing controversy. No historian genuinely familiar with the crisis literature would trust the 1997 version, which the editors themselves finally acknowledged has been superseded by the much-improved 2001 Miller Center transcripts. 

 

Hayes also accepts RFK’s claim in Thirteen Days that “many meetings” of the ExComm took place “without the President.” [HY491] I listened to every recorded meeting numerous times over two years (including the crucial “post-crisis” meetings that continued into late November)—as well as checking passages in the original White House master recordings against the copies used for research and studying the minutes of the unrecorded meetings. JFK definitely attended every ExComm meeting, except during brief campaign trips to New England (10/17) and the Midwest (10/20).  

 

The November post-crisis lasted longer (32 days) and required more recorded meetings (24 vs. 19) than the iconic Thirteen Days. [AV403-12] The naval blockade remained in place and tensions remained high after 10/28. Negotiations at the UN broke down over Soviet resistance to removing the IL-28 nuclear bombers from Cuba and the deadlock was not resolved until 11/20. JFK then ordered the lifting of the blockade, but not before RFK persuaded him to drop the non-invasion pledge: “I don’t think,” RFK insisted, “that we owe anything as far as Khrushchev is concerned.” The president worried that it would “look too much like we’re welching” on our promise and added that retaining the pledge might “make it politically less difficult for Khrushchev to withdraw his conventional forces from Cuba.” In the end, however, JFK agreed to his brother’s tougher stance. Bobby was Bobby, hawkish to the last. Hayes never even mentions the November post-crisis—in effect leaving out everything after the 9th inning in the account of an extra-inning game—a fitting metaphor for these essays. [AV410]. (1)

 

(1) When I began listening to the tapes I did not expect that they would fatally undermine the veracity of Thirteen Days. I had worked in RFK’s presidential campaign, convinced that he was a very different man than he had been in 1962. However, as a historian, I had to confront the evidence on the tapes. I admired Bobby in 1968, and still do.  

 

Beyond the Vote, The Suffragists Helped Launch Modern Business Franchising

 

As we begin to celebrate the 100th anniversary of the 19th Amendment, which gave women the right to vote, we should also acknowledge the role of the suffrage movement in supporting the launch of modern franchising, begun by Martha Matilda Harper in 1891.

 

Harper was born near Oakville, Ontario, Canada, and at age seven she was bound out into servitude. For the next twenty-five years, she remained a servant, but was determined to find a path out of her poverty-stricken world. Fortunately, her last Canadian employer was a holistic doctor who taught her about healthy hair care, demonstrating the power of his proprietary hair tonic. As a result, Harper had Rapunzel-like floor-length hair. On his deathbed, the doctor bequeathed Harper the hair tonic formula. With it, she left Canada in 1882 and immigrated to Rochester, NY, a hotbed of entrepreneurial innovation and social advocacy.  

 

Harper remained a servant for six more years, until 1888, when she opened Rochester’s first beauty salon for women with her life savings of $360. That was the same year George Eastman launched the KODAK camera with one million dollars of venture capital. Harper located her salon in the most fashionable office building in Rochester, where people banked, visited art galleries, took music lessons, and conducted various business transactions. It happened that Susan B. Anthony’s trusted lawyer, former Congressman John Van Voorhis, had his office in that building.

 

Harper surely knew of that building from serving Rochester’s society folks and wisely chose to locate there. However, Daniel Powers, the building owner, was reluctant to allow Harper to start such a questionable business, fearing it would attract prostitutes and the wrong kind of women to his fancy building. Cleverly, Harper enlisted Van Voorhis to advocate for her and he succeeded. Van Voorhis additionally provided two key contacts to support Harper: his wife, a society lady, and his former client Susan B. Anthony.  Both ended up playing major roles in Harper’s success.

 

A keen observer and well taught in the art of pleasing, Harper invented the first reclining shampoo chair and cut out a neck rest along the sink’s rim to make hair shampooing a more comfortable process.  She also invited mothers bringing their children to music lessons next door to rest their weary feet in her salon. Astutely, she even posted a picture of her flowing floor-length hair on the exterior of her office door to attract customers.  With this inventiveness, she wowed Rochester women, recruiting Anthony and many others.

 

It is said that Anthony and Harper had long discussions about women’s rights and the cause of full equality during treatments. During those discussions, and with Anthony’s advocacy, suffragists began to patronize her shop and became a growing niche of Harper’s customer base. As Rochesterian Mrs. Josephine Sargent Force indicated, “I was brought up a suffragist and was interested in Miss Harper and her work and was interested in anything women were doing.”

 

Suffragists understood the bigger issues facing women. In the 1848 Declaration of Sentiments, Elizabeth Cady Stanton and others spoke about the restrictions on women’s education and job prospects. These limitations assured that women were destined to earn meager wages as servants or factory workers. In addition, the Declaration took on coverture, the legal doctrine that denied women control of their possessions, including their wages, children and property. Stanton and the suffragists were envisioning a world where women would be independent of such legal restrictions. Susan B. Anthony understood all of this. She understood the vote was simply a first step to emancipating women. Thus she declared, “Every woman needs a pocketbook,” suggesting that the control of money was essential for women’s equality.

 

Harper was likely influenced by this philosophy as she expanded economic opportunities for herself and other women. Enter Bertha Palmer, a Chicago socialite who was lured to Rochester to experience the Harper Shop. She loved the experience and told Harper that she wanted such a shop in Chicago in time for the 1893 Chicago World Exposition, where she would be the President of the Women’s Division. Boldly, Harper countered, informing Palmer that she would need to deliver a written commitment from 25 of Palmer’s best friends assuring future patronage at a Chicago Harper shop. Palmer delivered, and then Harper had to figure out how to expand.  

 

Cleverly, Harper used the Christian Science Church structure as a model she could replicate, with herself as the strong female leader, her Rochester headquarters like the Mother Church in Boston, and satellite shops around the world that followed Harper’s instructions and used her products. For the first 100 Harper shops, Harper put only poor women into ownership positions. She was thereby also a pioneer of social entrepreneurship. Ultimately, there were 500 such franchises around the world. British royalty, the German Kaiser, U.S. Presidents and their First Ladies, suffragists, and other luminaries were all loyal Harper customers. 

 

Harper’s success was recognized around the world. When she died in 1950, even the NY Times wrote a two-column obituary about her, citing her relationship with Anthony. Unfortunately, Harper’s achievements and the role of the suffragists in creating this new business model have been forgotten. It is time to credit Harper and the suffragists with the fastest-growing segment of retailing: franchising. As Susan B. Anthony understood, though, that was only part of the goal. The other was to assure that all women had their own purse for financial independence and choice. Perhaps by August 26, 2020, real pay equity for women and widespread recognition for the forgotten achievements of women like Harper will have been established. Then the ultimate goal of the suffragists will have been met and we can all cheer!

 

©Jane R. Plitt 2019

Sir Ian Kershaw on His Latest Book, Brexit, and the Future of Europe

Sir Ian Kershaw, FBA (born 29 April 1943) is a British historian and author whose work has chiefly focused on the social history of 20th-century Germany. He is regarded by many as one of the world's leading experts on Adolf Hitler and Nazi Germany, and is particularly noted for his monumental biographies of Hitler.

 

David O’Connor:  Your new book was a massive undertaking, covering so many issues in Europe’s post-war history.  How did you go about setting up a framework for your analysis?  

Sir Ian Kershaw: The first step was to acquire an overview of the period, its most important developments, changes and so on. Secondly, I then worked out the chapter divisions, and the subdivisions. Thirdly, I explored the most important literature on the relevant themes. Finally (and obviously the difficult part), I attempted the actual writing.

 

DO: You use the term “matrix of rebirth” to explain how Europe was able to recover so well from the devastation of World War II.  What are some of the key features of this “matrix,” and which do you consider the most important?  

IK:  The ‘matrix of rebirth’, as I called it, arose from the condition of Europe at the end of the Second World War. It comprised, as its fundamental premise, the elimination of German great-power ambitions. A second component was the territorial and geopolitical reordering of central and eastern Europe under the aegis of Soviet power. Thirdly, national interests were now subordinated to the interests of the two new superpowers – in western Europe the USA, in eastern Europe the Soviet Union. A fourth element was the extraordinary economic growth that, in western Europe, was a major contribution to the consolidation of pluralistic democracy. Finally, and perhaps the most important factor of all, the availability to both superpowers of a growing arsenal of devastating nuclear weapons acted as a vital deterrent to another war in Europe. 

 

DO: Many of the tensions that exist in the European Union today—especially on the issue of national sovereignty—were present from the beginning of European economic integration with the formation of the European Coal and Steel Community soon after World War II. Please explain some of the main motivations for integration and how objections to it were initially overcome.  

IK: It was widely felt in the post-war years that a new basis for friendship, cooperation and supranational organization was needed to overcome the extreme nationalism that had produced such catastrophic conflict, and to rule out any prospect of a return to war in Europe. Once the Soviet Union had replaced Germany as the major international threat in the eyes of western leaders, the path opened up for the first steps towards European integration to bolster security and promote prosperity. Alongside the idealism, the different but complementary national interests of France and West Germany were served by the creation of a common market in a trading bloc that also included the Netherlands, Belgium, Luxembourg and Italy. A tension between national and supranational interests was present from the start. But the economic success of the trading bloc in the early years overcame many of the objections, even if advances towards further integration were slow and often difficult.

 

DO: The division between Eastern and Western Europe is a prominent issue throughout your book.  The two sides had very different cultures, social structures, and economic and political systems, yet the Cold War was a remarkably stable period in European history.  What are some of the most important factors that contributed to this stability and absence of outright warfare?  

IK: Crucial, as already mentioned, was what came to be labelled ‘mutually assured destruction’ of the superpowers, both of which presided over immense nuclear capability. The nuclear deterrent was represented organizationally by the existence of NATO in western Europe and the Warsaw Pact in eastern Europe. Stability in the Cold War was always under the shadow of a potential nuclear conflict. But once the Berlin Wall was built in 1961 and the Cuban Missile Crisis the following year ended without catastrophe, the likelihood of nuclear confrontation in Europe greatly diminished. Meanwhile, the power of Soviet repression was sufficient to contain, if sometimes with difficulty, instability in the eastern bloc, and the West generally accepted that Soviet domination of eastern Europe could not be ended. This in itself contributed to stability in Europe.

 

DO: You cover a lot of what you call “impersonal dynamics” (demographics, economic growth, etc.) but in the book’s foreword you also note the importance of individual leaders who made important decisions that shaped their eras. One of the most prominent figures you examine is Konrad Adenauer. Please explain how he helped the Federal Republic of Germany become such a powerful force in Europe’s post-war economic recovery and an anchor in NATO.  

IK: Adenauer is certainly among the individuals who helped to shape Europe’s postwar history. He was crucial in ensuring that West Germany turned to the West and wedded its future to its membership of NATO, to west European integration and to friendship with the traditional enemy, France. What seems today to be an obvious step was at the time highly controversial, since the turn to the West ruled out re-unification as a realistic goal – something that was deeply unpopular with the opposition Social Democrats and much of the population, which preferred a re-unified and militarily neutral Germany to commitment to the American-dominated capitalist and militarized West.

 

DO: One of the key themes you explore is how Europeans were greatly affected by events outside of Europe in the post-war era.  Perhaps the most jolting and consequential example was the oil embargo imposed by Middle Eastern countries in the 1970s.  How did this shake the European economy and the confidence of Europeans in general?  

IK: The extraordinary economic boom that had lasted for more than two decades was already fading when Europe was hit by the first oil-shock in 1973 in the wake of the Yom-Kippur War in the Middle East, followed by a second after the Iranian Revolution of 1979. The double oil-crisis in countries by now so heavily dependent on oil led to high rates of inflation accompanied by a significant rise in unemployment. States struggled to adjust to an abruptly altered economic climate and increased political volatility. All at once, it seemed, the optimism that had characterized the first post-war decades had evaporated. The oil crises inaugurated a new era in Europe, east and west.

 

DO: The expansion of the European Community and later the European Union has often been controversial, whether it was the integration of Spain, Portugal, and Greece in the 1980s or the entry of former communist countries from Eastern Europe.  In each case, the European Community allowed for expansion even if the nations didn’t meet the economic standards needed for entry.  How did geopolitical considerations shape these decisions to ignore prescribed preconditions?  On balance, was the expansion beneficial?  

IK: Spain, Portugal and Greece had all recently emerged from dictatorship – in the first two cases lasting for decades – when they were integrated into the European Community. The political consideration was that integration would significantly help to consolidate democracy in those countries, and so it proved. A similar imperative lay behind the readiness to integrate former communist countries in eastern Europe. Here, too, the benefits have greatly outweighed the disadvantages of incorporating less developed economies, even though Hungary and Poland, especially, have come to pose some new challenges to the liberal values of the European Union.

 

DO: Mikhail Gorbachev is another one of the key figures who gets a lot of your attention in the book.  One of the most important parts of your analysis of the era of perestroika is the impact that the Chernobyl nuclear disaster had on Gorbachev.  How did Chernobyl affect his decisions at this critical point in history?

IK: No individual had a greater impact on European (and global) change in this era than Gorbachev. Chernobyl, little over a year after Gorbachev had acceded to power in the Soviet Union, convinced him that the Soviet system as a whole was rotten, and needed root-and-branch reform.

 

DO: In addition to explaining the roles of leading politicians you also provide examples of less well-known figures who played significant parts in momentous events, including a Polish priest named Jerzy Popiełuszko.  Who was he and how did he help strengthen opposition to the Polish government in the 1980s?  

IK: Popiełuszko was a young Catholic priest who had been vociferous in support of Solidarity, the trade-union opposition to the Polish regime which had been banned under the declaration of martial law imposed in December 1981. In October 1984 he was kidnapped and murdered by members of the state security police. The murder of Popiełuszko led to an enormous outburst of popular anger, reflected in the huge numbers attending his funeral. Indirectly, the reaction to the murder convinced the regime that concessions had to be made to the opposition. By 1986 an amnesty for all political prisoners arrested under martial law was granted.

 

DO: The fall of communism in Eastern Europe ushered in incredibly high expectations for Europe and indeed the world, yet despite the enthusiasm, there were many obstacles standing in the way of Eastern European prosperity and hopes for a peaceful transition.  Which of the former communist states did the best job making the change to democratic government and market-based economic systems? What made them successful?  Which of them fared the worst and why?  

IK: The difficult transition was best managed by the German Democratic Republic (though, of course, its incorporation into the Federal Republic made this a special case), and by Poland. In the latter case, the change to trade liberalization, convertible currency, a fully-fledged market economy, and extensive privatization took place extremely rapidly, through what was labelled ‘shock therapy’. Poland’s debts were effectively written off and the country benefited from assistance from the International Monetary Fund and the European Union. By 1992 Poland was recovering strongly, though experts differ over whether this was on account of the ‘shock therapy’ itself. The slowest countries to adapt were Romania, Bulgaria and Albania, which had been relatively backward under Communism, with poor infrastructure, low levels of industrialization, weak civic cultures, and high levels of corruption and clientelism.

 

DO: Why did you describe Helmut Kohl as a “disciple” of Adenauer?  How do you assess Kohl’s impact on Germany and on European integration?  

IK: Kohl was particularly keen to continue Adenauer’s policy of binding Germany to the West and consolidating the friendship with France as the basis of policy towards Europe. Kohl’s reputation as the Chancellor who brought about unification is guaranteed, and as such his impact on Germany was enormous. Probably, once the Wall had fallen, unification would have happened anyway in the relatively near future. But Kohl’s negotiations, especially with Gorbachev, were important. Kohl, like Adenauer, was a fervent advocate of European integration. His legacy here was the agreement that he and the French President, Mitterrand, reached at the Maastricht conference in 1991 to introduce a single European currency, the Euro.

 

DO: In the final chapter, which you call “Global Exposure,” you analyze several threats to twenty-first-century Europe that were the result of developments outside the continent, including the financial crash of 2008 and resulting recession, the rise of nationalism, mass migration of refugees from the Middle East, and the rise of Islamist terrorism.  We don’t have time to cover each so I’d like to focus on one and put it into historical perspective.  How is Islamist terrorism different from other terror campaigns in Europe like those of the IRA, ETA, and the Red Army Faction? 

IK: IRA and ETA terrorism, though the consequences for the victims were horrendous, had limited aims of national independence. The Red Army Faction’s nebulous objective was the destruction of capitalism and the West German imperialist-fascist state (as they saw it).  The terrorist attacks of these organisations were directed in the main at political, military and economic representatives of the states that they wanted to destroy – though, of course, many innocent bystanders were sometimes caught up in the attacks. Islamist terrorism, in contrast, had an essentially unlimited objective – the destruction of western values and their replacement by those of fundamentalist Islam. Civilians were directly targeted in order to make the greatest impact. And the perpetrators were ready to sacrifice their own lives for the cause. 

 

DO: When you concluded the book in August 2017, Brexit was a simmering issue, and you had some rather witty yet unkind things to say about then-Foreign Secretary Boris Johnson.  What are your thoughts about him now that he has become Prime Minister?  How do you see Brexit unfolding and does it pose an existential threat to the EU?  

IK: Johnson should not be taken for a fool because he sometimes acts like one. He is a clever, calculating politician. He hopes to attain legendary status in Britain as the savior of the Conservative Party who achieves Brexit and restores British greatness. He has become Prime Minister only on the votes of around 160,000 Conservative Party members and lacks any popular mandate. Nevertheless, he has surrounded himself with a cabinet committed to leaving the EU on 31 October, if need be without a deal – an outcome that experts see as damaging for the EU but far more so for the UK. Both the EU and the UK will survive even ‘no deal’, but lasting, and unnecessary, harm will have been done. How the political drama will unfold over the coming few weeks is impossible to foresee with any clarity.

 

DO: One of the striking things about your book is how well it illustrates the durability of the institutions that were created in the post-war era, despite facing numerous crises over the last seventy years.  Now with Russia’s meddling in the domestic politics of other countries, Eastern Europe’s reversion to authoritarianism and numerous other problems, are you optimistic about the future of the EU?  

IK: The EU has come a long way and, as Brexit shows, the networks built up over previous decades are extremely complex. What has been achieved will go a long way to sustaining the EU in the future. As it has done so often in the past, the EU will have to adapt to change and the current organizational framework may be reformed and in some ways reconstituted in years to come. But the prospects for the EU’s future remain bright, despite Brexit and other current economic and political problems. 

Hindu-Muslim clash 72 years after Britain left India

 

The partition of India by Britain in 1947 to create two independent countries wreaked havoc in human lives and misery. It killed two million people, according to various estimates, and displaced 14 million. Its legacy, the two siblings of midnight—nuclear-armed Pakistan and India, which have fought two major wars since the separation—are still at loggerheads. Was this inevitable?

 

This question assumes greater relevance today in light of India's recent decision to annex the Muslim-majority state of Kashmir. On August 5, keeping Kashmiri Muslim leaders under house arrest and deploying tens of thousands of soldiers in the heavily fortified region, Delhi snatched away Kashmir's special rights—its own flag, its own laws and property rights, granted to Kashmir by India's constitution—in a blitzkrieg exercise completed in a matter of hours.

 

Kashmir is a picturesque Himalayan region that encompasses roughly 135,000 square miles, almost the size of Germany, and has a population of about 18 million. India controls 85,000 square miles, Pakistan 33,000 and China 17,000. Both Pakistan and India claim the entire state as their own and have fought two major wars over it since the British left India in 1947.

 

In 1948, after the first war between the two nations, India raised Kashmir in the UN Security Council, which called for a referendum on the status of the territory. It asked Pakistan to withdraw its troops and India to cut its military presence to a minimum. A ceasefire came into force, but Pakistan refused to pull out its troops. Kashmir has remained partitioned ever since.

 

By scrapping Kashmir's special status and dividing the state into two, India's Prime Minister Narendra Modi has taken a dangerous step toward making India an ultra-nationalist Hindu nation. Pakistan's Prime Minister Imran Khan has threatened war again—even nuclear. China, which occupies parts of the state, denounced India's action as “unacceptable.” The warring nations might be just a miscalculation away from a nuclear winter.

 

U.S. SOLUTION WENT NOWHERE

 

The United States has pushed the warring neighbors since the Kennedy administration to make the existing division their permanent border, but the idea went nowhere because of a fatal flaw in it—it gives nothing to the victims of this tragedy, the Kashmiris. India loves the U.S. idea, but Pakistan wants no part of it, and the Kashmiris outright hate it. 

 

An examination of the major factors that led to the fateful partition of 14 August 1947 helps us understand what happened then and what is happening now. Apart from intricate socio-economic and political reasons, one thing that contributed heavily to the division was the mutual distrust between the Indian National Congress and the Muslim League, British India's two major political outfits. Congress leaders Jawaharlal Nehru and Sardar Vallabhbhai Patel both doubted the sincerity of their League counterparts Mohammad Ali Jinnah and Liaquat Ali Khan. Likewise, Jinnah and Liaquat never trusted Nehru and Patel.

 

U.S. diplomatic cables from New Delhi on conversations with these leaders during a crucial phase in India's freedom struggle give an interesting insight into what was behind the tragedy. One such cable came to the State Department on 14 December 1946 from Charge d'Affaires George Merrell, then the highest-ranking American diplomat in India, who reported on his talk with Nehru the night before. Interestingly, Merrell noted that Nehru in his remarks painted Jinnah as a Hindu and identified himself more closely with Muslims.

 

The United States pushed Britain, weakened by World War II, to leave India sooner. Washington feared that if the British prolonged their rule through repression, Indians would become radicalized and tilt toward communism. America wanted to keep India united, too. The Soviet Union, on the contrary, supported India's partition in an attempt to create multiple entry points to spread communism.

 

WAS JINNAH REALLY HINDU?

 

While talking with the U.S. diplomat, Nehru "embarked on restrained but lengthy attack on Jinnah who he said had Hindu background and lived according to Hindu law, Nehru himself being imbued with more Muslim culture, linguistically and in other ways, than Jinnah," Merrell wrote.

 

On Pakistan's creation, Nehru was baffled by Jinnah's posture. Congress had endeavored to learn what Jinnah wanted, but never received satisfactory replies. Jinnah never even adequately defined Pakistan. Nehru believed that Jinnah sought some changes, but did not want a democratic government. He argued that prominent Leaguers were landholders, who preferred antiquated land laws—that is, continued British rule.

 

The British, however, believed that Jinnah embraced the Pakistan idea for bargaining purposes, but by the mid-1940s the movement had gained such momentum that neither he nor anyone else could apply the brakes.

 

The crux of the internal problem that India faced before the partition stemmed from differences between Congress and League as to the conditions under which provinces would join or remain out of sub-federations in northwest and northeast India.

 

"I am confident that if the Indian leaders show the magnanimous spirit the occasion demands, they can go forward together on the basis of the clear provisions on this point contained in the constitutional plan proposed by the British Cabinet Mission last spring to forge an Indian federal union in which all elements of the population have ample scope to achieve their legitimate political and economic aspirations," Merrell wrote to Washington.

 

DID NEHRU FORESEE CARNAGE?

 

Britain wanted the two major political parties to jointly frame India's constitution as a prelude to independence. This idea resulted from the British Cabinet Mission to India in 1946. The mission proposed a united India with separate groupings of Muslim-majority and Hindu-majority provinces. These groupings would have given Hindus and Muslims parity in the Central Legislature. 

 

Congress abhorred the idea, and League refused to accept any changes to the plan. The parity that Congress was loath to accept formed the basis of Muslim demands for political safeguards built into post-British Indian laws to prevent absolute rule of Hindus over Muslims. With the talks at an impasse, the British proposed on 16 June 1946 to divide the country into a Hindu-majority India and a Muslim-majority Pakistan.

 

This resulted in an unprecedented bloodbath and mass migration. In the riots in the Punjab region alone, as many as half a million people perished, and 14 million Sikhs and Muslims were displaced. 

 

No one knows for sure whether Nehru anticipated the carnage. He should have, though, because his comrade, Maulana A. K. Azad, had cautioned that violence could erupt if India were divided. Nehru remained convinced that League would ultimately join the Constituent Assembly. 

 

He, however, doubted that League would ever work constructively in a coalition government in a free India. Congress never liked the Cabinet Mission proposal, but in the interest of a peaceful and fair settlement had formed the interim government before the partition. This decision was based on an understanding that League would cooperate. But League members said they joined the cabinet to fight. If they entered the Constituent Assembly, where Muslims held 73 seats against Congress's 208, "it would be with the purpose of wrecking it," Nehru vented.

 

NEHRU COULD HAVE PREVENTED PARTITION

 

Still, had Nehru accepted Jinnah's demand for parity in the federal legislature and regional groupings as outlined in the British Cabinet Mission plan, India would have remained united. He could have served India better by following President Abraham Lincoln's policy during the American Civil War.

 

One sticking point in the partition plan was the division of Bengal and Punjab, the two Muslim-majority states with a large number of non-Muslims. Regarding Bengal's status, on 11 December 1946, Merrell talked with Chakravarti Rajagopalachari, an interim cabinet member and a favorite of both Nehru and M. K. Gandhi, India's paramount independence leader. He told the envoy that "Congress could not possibly agree to [the] interpretation of cabinet proposals which would inevitably place millions of Hindus under Muslim rule, particularly in [the] Bengal-Assam group." 

 

Asked how the basis for a democratic government could be established as long as mutual distrust between Hindus and Muslims exemplified by this view persisted, Rajagopalachari evaded the issue.

 

The United States favored India's early emancipation and pushed Britain toward this end. Washington strove to persuade Nehru to accept the Cabinet Mission plan that envisaged a weak federal administration and strong regional governments for free India.

 

"We have found that a central [government] initially with limited powers gradually acquires, as experience demonstrates necessity therefor, the additional authority which it must have to meet problems of the Federal Union," the State Department advised Nehru. "Our hope that Congress accept clear implications Brit Cabinet Mission plan...on reciprocal undertaking by Muslim League to work loyally within [the] framework [of] Indian Federal Union, subject only to reopening constitutional issue after 10 years of experiment." 

 

MUSLIM LEAGUE DISTRUSTED CONGRESS

 

Muslim League's views on its difficulty with Congress were articulated by Liaquat Ali Khan during a discussion with Merrell on 27 December 1946. Muslims, Liaquat said, "would not agree to independence [from British rule] unless adequate safeguards for minorities were provided." 

 

He expressed grave doubts whether Congress would accommodate Muslims. "Liaquat ...discussed at length his conviction that Congress leaders have no intention of trying to work Cabinet mission plan conscientiously but are determined to seize power without regard for Muslim rights," Merrell wrote. 

 

As evidence of Nehru's lack of interest in Congress-League cooperation, Liaquat pointed out that Asaf Ali was appointed India's first ambassador to the United States without consulting League members of the interim government. Liaquat learned about the appointment from press reports he read in London. Asaf Ali, he said, did not command the respect or confidence of Muslim Indians. 

 

Furthermore, Liaquat added, as soon as League joined the interim government, he proposed that two League representatives—Begum Shah Nawaz, a Punjabi lawmaker, and Mirza Abol Hassan Ispahani, a Constituent Assembly member who later became Pakistan's first ambassador to Washington—be appointed to the UN delegation. Nehru refused on the ground that the number was limited to five and that appointing these two would mean replacing two who had already prepared themselves for work at the UN.

 

When League joined the interim government, Liaquat proposed that in the interest of efficiency and cooperation, questions concerning more than one department be discussed by the ministers concerned prior to full cabinet meetings, regardless of whether these ministers were Congress or League members. Nehru again refused, arguing it was preferable to thrash out all questions in full cabinet meetings. When Merrell asked whether all votes in cabinet meetings were along party lines, Liaquat answered in the affirmative.

 

In reply to a question from Merrell, Liaquat said he was convinced Gandhi had no desire for Hindu-Muslim cooperation; he was working for Hindu domination of India—to be attained through violence, if necessary. When the envoy further asked whether Liaquat believed that Gandhi's activities in East Bengal were a deliberate attempt to embarrass the Bengal government and to divert attention from Bihar, where communal violence had killed thousands of Muslims, he said "there was no question about it." 

 

Gandhi had gone to East Bengal to restore communal harmony after a series of massacres, rapes, abductions and forced conversions of Hindus as well as looting and arson of Hindu properties by Muslims in October–November 1946, a year before India won freedom. However, his peace mission failed to restore confidence among the survivors, who couldn't be permanently rehabilitated in their villages. Meanwhile, Congress accepted India's partition, and the mission and other relief camps were abandoned, making the bifurcation a permanent feature in South Asia.

 

MODI'S INDIA MIMICS HITLER'S GERMANY 

 

Following the partition, Kashmir won a special status as a precondition to joining India. By scrapping Kashmir's decades-old special autonomy, Modi has taken a risky step toward implementing the dream of a right-wing Hindu extremist, the late V. D. Savarkar. Sitting in a prison cell on the Andaman Islands in the Bay of Bengal in the mid-1920s, the convicted violent revolutionary turned nationalist drew up his solution to the vexing issue of India's minorities, much like Adolf Hitler's final solution to the Jewish question. It is interesting to note that both men came up with their ideas at almost the same time and under similar circumstances—both were in prison for political violence. 

 

In Savarkar's Hindudom, Muslims and Christians were unwelcome, as were the Jews in Hitler's Third Reich. Savarkar disliked Muslims and Christians because of their allegiance to Mecca and Rome; they worshiped foreign gods and had no cultural affinity with Hindustan. Buddhists and Sikhs were no longer as pure as Hindus, but they were still acceptable because their religions originated in Hindustan. Hitler branded Jews as Gemeinschaftsfremde (community aliens) and condemned them as communists who aspired to dominate the world.

 

Savarkar initially wanted to convert all Muslims and Christians back into Hinduism. But he faced a significant obstacle. He could convert them, but could not arbitrarily decide their caste. A Hindu must belong to a hierarchical caste, which he acquires through birth only. Hindu religion forbids assigning a caste. 

 

To overcome this barrier, he revised his idea. First, he came up with a new identity for himself: he was a Hindu, not an Indian. Then he figured that his motherland was Hindustan, not India. Hindustan extends from the Himalayas to the Indus River and boasts a 5,000-year-old rich culture that influenced a vast number of people from Greece to Japan. India, on the contrary, is a parochial concept that separates Hindus from their ancient heritage; it is championed by the nationalists who, unlike the orthodox Hindus, wanted an independent and united country for all Indians, regardless of their religion.

 

SAVARKAR'S VISION TAKES CENTER STAGE

 

Savarkar, an atheist who labeled his vision as nonreligious and cultural, was unwilling to give the Muslims a separate homeland next to his Hindustan. He feared that even though they were only 25 percent of the total population, they could still someday reconquer Hindustan if allowed to have their own country. He was very much aware that the Muslims were a small band, too, in 712 when they conquered India and eventually built a vast empire. 

 

He feared that next time they would be in a much stronger position to repeat their past success because they would be supported by other Muslim nations. To nip that possibility in the bud, he favored the creation of Israel. He saw the Jewish state as a barricade against the Muslim Arab world.

 

Savarkar dreaded a Muslim resurgence so much that he wanted British rule in India to continue. He sought only dominion status for Hindustan to keep it under the British military umbrella. Only Britain, he felt, was powerful enough to keep the Muslims at bay if they ever tried to invade Hindustan again.

 

But to his chagrin the nationalist tide swept India, as independence stalwarts like Gandhi, Nehru and Azad pressed the colonial power to quit. Savarkar's idea took the back seat, but remained very much alive, even though malnourished.

 

After Prime Minister Indira Gandhi's murder in 1984, the Indian National Congress party, the champion of secular India, fell on hard times; it had no comparably charismatic leader to carry forward the torch. Savarkar's followers gradually gained ground and picked Modi, who was once condemned globally as the mastermind behind a Muslim massacre in his home state of Gujarat, as the reincarnation of their guru.

 

Armed with a huge re-election victory in May, Modi moved full steam ahead to fulfill Savarkar's dream and appease his hardcore anti-Muslim saffron brigade. First, he nullified a centuries-old Muslim marriage law. India's constitution, however, protects the religious laws of other minority groups, and Modi did not touch them, showing his bias against Islam. Even the Mughals and the British left India's religious laws unchanged. India is a nation of 1.3 billion people, 14 percent of them Muslim and 2 percent Christian.

 

NIRVANA LIES IN SECRET PLAN

 

Modi’s highly controversial and dangerous power grab is unlikely to end the crisis. The nirvana lies in a blueprint that was secretly drafted a decade ago by two former leaders of India and Pakistan but never executed, because one of them was suddenly pushed out of office. 

 

The idea, developed by aides to former Prime Minister Manmohan Singh of India and former President General Pervez Musharraf of Pakistan through back-channel talks from 2004 to 2007, is the best plan produced in 70 years, a realistic, win-win approach for all three parties—India, Pakistan and Kashmir.

 

Under the plan, India and Pakistan would pull their soldiers out of Kashmir; Kashmiris would be allowed to move freely across the de facto border; Kashmir would enjoy full internal autonomy; and the three parties—India, Pakistan and Kashmir—would jointly govern the state for a transitional period. The final status would be negotiated thereafter. 

 

Given the region's history, the Musharraf-Manmohan concept is realistic. It gives the Kashmiris near independence, allows India to maintain sovereignty over Kashmir and lets Pakistan claim it has freed Kashmir from Hindu domination. Compromise is the art of politics, and India must not repeat Pakistan's mistakes in East Pakistan, which led to a war in 1971. Both India and Pakistan must dig themselves out of the mass hysteria of jingoism they have created during the past 70 years over Kashmir.

 

CHAUVINISM POISONS PUBLIC PERCEPTIONS

 

Pakistan's claim over Kashmir is more emotional than material. Pakistan was created based on the concept that Muslim-majority areas of British India would form the Muslim nation. If Pakistan gives up Kashmir, it will void the very ideology that supported its creation and pave the way for its eventual demise, with constituent parts going their own ways. Still, Pakistan has softened its position because it cannot match India's firepower and take over Kashmir by force; Islamabad now wants a face-saving solution that it can sell to the Pakistanis.

 

India, in the beginning, sought to keep Kashmir in its grip to prove that the two-nation theory was wrong. Some attribute it to Nehru's emotional attachment to Kashmir as his birthplace. But over the years India's mindset has taken a different twist. Now it is driven purely by hatred of Muslims, principally because Hindus were subjugated by Islamic invaders for 1,000 years; the orthodox Hindus think the Muslims have polluted Hindu culture. If they could, they would wipe out this black spot from the face of Hindustan. Because that is an impossible task, the radical Hindus want to take revenge by driving the Muslims out of India or making them subservient to Hindus.

 

Many Kashmiris, meanwhile, nurture a dream of an independent country of their own. They argue that the Kashmiris are the masters of their fate and that both Pakistan and India must respect their universal right of self-determination. This thinking ignores India's security concerns vis-a-vis China, and for this reason the vision of an independent Kashmir will remain elusive.

 

The main problem standing in the way of peace in Kashmir today is chauvinism in both India and Pakistan. It has cost tens of thousands of lives and the prosperity of both nations as well as their neighbors. Modi's Nazi-type extremist party has always opposed a negotiated settlement. It operates on a misguided dream of reuniting the subcontinent into one Hindu nation, if necessary through violence.

 

Because of this faulty doctrine, when Singh invited his predecessor, former Prime Minister Atal Behari Vajpayee, to lead the peace talks with Pakistan, Vajpayee refused, citing stiff opposition from the Bharatiya Janata Party. Indians have a hard time accepting a negotiated settlement because they hold the notion that Kashmir is already theirs, a notion bred by decades of hyper-nationalist propaganda in the news media.

 

To achieve lasting peace, the M-squared formula should be revived, even though doing so may be political suicide for anyone who dares, especially in India, where a hysteria of Hindu radicalism now reigns supreme. Still, one of the Himalayan gods must make the sacrifice for the sake of the people who have suffered too much for too long.

“The thinking power called an idea”: Thomas Jefferson and the Right to Patents

 

As the duties of the various members of a president’s cabinet were much less fixed than they are now, Jefferson was often asked by President Washington to do tasks that would seem strange for today’s Secretary of State. One such task was to oversee an office directing the granting of patents for new inventions.

 

Jefferson, no stranger to invention and a lover of ideas, was singularly qualified for the job. Secretary of the Treasury Alexander Hamilton and Attorney General Edmund Randolph assisted him. A patent act—“An Act to Promote the Progress of the Useful Arts”—was introduced in Congress in 1789, while Jefferson was still in France. After Jefferson’s return and acceptance of the position of Secretary of State in 1790, the act was passed (Apr. 10) and Jefferson was appointed head of the newly formed Board of Arts. If two of the three members of the board liked an idea, the patent was approved. Unlike most European offices, which tended to favor only ideas from the aristocracy, the board considered ideas from all citizens. “The United States consciously created patent and copyright institutions that were intended to function as the keystone of a democratic society,” writes B. Zorina Khan.

 

Despite democratizing invention, the committee granted relatively few patents, just three in the first year and 57 in the three-year period when the law was in effect. In Jefferson and the Rights of Man, Dumas Malone writes that “Guiding Jefferson while patents came to him for review was the belief that patents should be given to particular machines, not to all possible applications or uses of them; that mere change in material or form gave no claim; and that exclusive rights of an invention must always be considered in terms of the invention’s social benefit.” Examples of accepted patents include improved sail-cloths for ships, distilling techniques for alcohol, and fire retardants as well as an improved steamboat.

 

Jefferson was, without question, a natural choice to head the Board of Arts, as no other politician of his day patronized the sciences as he did. “To all who knew this man of limitless scientific curiosity and inventive mind,” continues Malone, “his official connection with the promotion of the useful arts must have seemed eminently appropriate.” Jefferson himself wrote to Monsieur l’Hommande (9 Aug. 1787), who had “found the means of preserving flour more perfectly than has been done hitherto”: “Every discovery which multiplies the subsistence of man, must be a matter of joy to every friend to humanity.”

 

Yet in another sense, it is bizarre that Jefferson headed an office issuing patents, given that he was always against monopolies of any sort. He considered monopolizing a useful idea to be a crime, especially in a developing country in great need of useful ideas. 

 

Yet praxis and duty to his country triumphed. In a letter to Madison (28 Aug. 1789), critiquing the Bill of Rights sent to him while he was still in France, Jefferson offered certain addenda to that bill. One addendum was a right to patent an idea, but for a limited period of time: “Monopolies may be allowed to persons for their own productions in literature & their own inventions in the arts, for a term not exceeding *** years but for no longer term & no other purpose.” He stated to Oliver Evans some two decades later (2 May 1807): “an inventor ought to be allowed a right to the benefit of his invention for some certain time. it is equally certain it ought not to be perpetual.” If perpetual, it would “embarrass society with monopolies for every utensil existing, & in all details of life.”

Jefferson’s cleanest expression of his views on patents came in a weighty letter to Isaac McPherson (13 Aug. 1813) about Oliver Evans’s proposed elevator patent—a string of buckets fixed on a leather strap, for drawing up water. Is Evans’s machine his own, “his invention,” or do others have right of usage? Jefferson was concerned with the machine itself, not its usage. If one person, for instance, received a patent for a knife that points pens, another could not receive a patent for the same knife for pointing pencils.

 

Jefferson begins by noting he has seen similar contraptions used by numerous others—he himself has “used this machine for sowing Benni seed also” and intends to have other bands of buckets in use for corn and wheat—and even notes that such an elevator was in use in ancient Egypt. He sums, “There is nothing new in these elevators but being strung together on a strap of leather.” If Evans is to be credited with anything new, “it can only extend to the strap,” yet even the leather strap was used similarly by a certain Mr. Martin of Caroline County, Virginia. There is, Jefferson is clear, nothing original in Evans’s machine.

 

Jefferson, however, had more to say: many believe that “inventors have a natural and exclusive right to their inventions,” which is “inheritable to their heirs.” Yet it “would be singular to admit a natural and even an hereditary right to inventors.” 

 

Why? “Whatever, fixed or movable, belongs to all men equally and in common, is the property for the moment of him who occupies it.” Yet when he relinquishes occupation, he relinquishes ownership. It would be strange to think that a person acquiring ownership of some property thus has a natural right to it. That would mean that no one has a right to the property after he perishes, and even more absurdly, that no one had a right to that property before he acquired it. “Stable ownership is the gift of social law,” and not of nature. The argument applies straightforwardly to ideas. “It would be curious then,” Jefferson adds, “if an idea, the fugitive fermentation of an individual brain, could, of natural right, be claimed in exclusive and stable property.” The argument for patenting ideas by appealing to nature is untenable.

 

Jefferson still has more to say. The analogy has its flaws. Ideas are singular. If there is anything that nature has made “less susceptible than all others of exclusive property, it is the action of the thinking power called an idea.” Each person possesses exclusively any idea so long as it is unshared. Once shared, it belongs to everyone.

 

Moreover, an idea shared is fully possessed by all who entertain it. “He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.” The same cannot be said for property shared. It is that power of an idea, to be shared without lessening its density, which makes it a special gift of nature for “the moral and mutual instruction of man.” He sums, “Inventions then cannot, in nature, be a subject of property.”

 

If there is no natural right for patenting an idea, it is now a matter of convention—that is, appeal to law. What sort of law ought there to be concerning patents? To Jefferson, the answer concerned the social benefits of patents.

 

England was the first country to patent ideas, and America copied it—an indication of some benefit to the patenting of ideas. Still, most nations thought that such monopolizing of ideas engenders “more embarrassment than advantage to society.” Moreover, Jefferson believed nations that did not monopolize inventions “are as fruitful as England in new and useful devices.” Patenting an idea did not seem to extend its benefits.

 

Though Jefferson expressly settles on no conclusion apropos the utility of patenting ideas, his appeal to nature indicates disrelish of the notion of social benefits of patents. Ideas are too powerful to be imprisoned by patents, and to patent a useful invention, for instance, is to prevent its full moral and mutual instruction. There is no better evidence of that than Jefferson’s own refusal to patent his plow moldboard, for which he was awarded a gold medal by the French Society for Agriculture.

 

Jefferson’s suggestion that ideas, because of their potential for social benefit, not be chained to patents but be freely shared, goes against the etymological legacy that ideas are personal. The Greek idea means “form,” “appearance,” “nature,” or “idea.” Its adjectival form, idios (m.) means “one’s own,” “personal,” or “peculiar.”

 

“The action of the thinking power called an idea” is a powerful, even intoxicating Jeffersonian sentiment. Ideas, thus, are potent and singular possessions. One, moved by an idea, is no less moved when he shares it with another, hence the warrant for sharing.

 

Jefferson’s own ideas on ideas, I maintain, are as pungent now as they were when he articulated them, and that pungency can nowise be diminished today by sharing them. They also signal much about the man who articulated them.

 

Just an idea!

How The Media Covered Woodstock – and How Woodstock Changed The Media


 

Initial media coverage of the Woodstock concert portrayed the event as a disaster in the making. However, a young generation of reporters saw the event differently. Harlan Lebo, author of 100 Days: How Four Events in 1969 Shaped America (Amazon, Barnes & Noble), describes how coverage of Woodstock set a tone for reappraising American perceptions of young people in the 1960s. This is the second in a two-part series on Woodstock. Click here to read the first. 

 

As the last stragglers from Woodstock drifted back to their normal routines on the third Monday in August 1969, reporters and editors in newsrooms across America struggled to characterize the era-changing events that had occurred over the weekend.    

At the offices of The New York Times, the mood was highly charged, with the staff and editors divided by the same types of conflicts Americans at large were experiencing in the wake of cultural change.

 

Over the weekend, most of America’s newspapers – drawing on articles written by national wire services – had covered Woodstock by focusing on the disaster angle, with sprinklings of drug overdoses and mud thrown in. Typical of the headlines was the banner across the top of the New York Daily News on Saturday morning: “Traffic Uptight at Hippiefest.” A caption for a photo that showed a choked road began “Go-Go Is a No-No.” 

 

Perhaps the most flagrant of these stories was the report distributed across the country by United Press International (UPI), which covered the story entirely in a negative light by focusing on the “mud, sickness, and thousands of drug overdoses.” UPI quoted “one far-out music lover” as saying, “I don’t know, man – this thing is just one bad trip.”

 

Reporters from the Times who were at Woodstock knew better. Writer Barney Collier fought by phone with the news desk to cover the event as an evolving cultural milestone.

 

“To me, it looked like an amazingly well-behaved bunch of folks,” Collier recalled in 2009.  “Every major Times editor up to and including executive editor James Reston insisted that the tenor of the story must be a social catastrophe in the making. 

 

“I had to resort to refusing to write the story unless it reflected to a great extent my on-the-scene conviction that ‘peace’ and ‘love’ was the actual emphasis, not the preconceived opinions of Manhattan-bound editors. 

 

“This caused a big furor at The Times in New York,” Collier recalled. “And this eventually went up to Reston. He said, ‘if that’s the way Barney sees it, that’s the way we’ll write it.’” Collier’s article ran as the lead story on page one of the Sunday Times, and was picked up by other papers across the country.

 

“Despite massive traffic jams, drenching rain storms, and shortages of food, water, and medical facilities, young people swarmed over this rural area today for the Woodstock Music and Art Fair,” Collier wrote.  “The young people came in droves, camping in the woods, romping in the mud, talking, smoking, and listening to the music.”

 

To Collier, “that was probably the most important thing I did – to get it to be seen as it was, rather than the preconceptions of a lot of editors back on the desks. After the first day’s Times story appeared on page one, the event was widely recognized for the amazing and beautiful accident it was.”

 

Conflict on the editorial page

 

But Collier and other New York Times writers who had covered the event had no control over the editorials that appeared in the paper.

 

That Monday, in a staff editorial titled, “Nightmare in the Catskills,” the Times editors – none of whom had first-hand knowledge of what had occurred in Bethel – blistered the Woodstock Festival.

 

“The dreams of marijuana and rock music that drew 300,000 fans and hippies to the Catskills had little more sanity than the impulses that drive the lemmings to march to their deaths in the sea,” the Times’ editors pontificated. “What kind of culture is it that can produce so colossal a mess?” 

 

Reporters at the Times rebelled against the editorial, some reportedly threatening to quit if the paper did not revise its view. And incredibly, the Times did something rare for the paper: it recanted – reluctantly.

 

On Tuesday, August 19, the editors ran a second editorial – this one entitled “Morning After at Bethel,” which toned down the rhetoric from the day before and offered a somewhat more thoughtful appraisal.

 

“Now that Bethel [New York] has shrunk back to the dimensions of a Catskill village,” the editorial began, “and most of the 300,000 young people who made it a ‘scene’ have returned to their homes, the rock festival begins to take on the quality of a social phenomenon.”

 

The attendees endured the discomforts, the Times wrote, “to enjoy their own society, free to exult in a life style that is its own declaration of independence.”

 

TV network news enlightens the nation

 

While newspapers across the country continued to focus on the concert-as-disaster-area and “hippiefest” in their coverage during the weekend, network television news programs were quick to pick up on the message of Woodstock. At that time, the media with the broadest reach – some 20 million households nightly – were the half-hour evening news shows aired by ABC, CBS, and NBC.  

 

Each network had a crew at the concert site on August 18 to wrap up the coverage. While the reporting touched on the logistical problems over the weekend, the correspondents – each with experience covering youth and the issues of the ‘60s – focused most of their attention on the message of Woodstock.

 

“This weekend says a lot about the youth of America,” said Lem Tucker from NBC, standing near the stage surrounded by a sea of refuse. “More than 350,000 people came looking for peace and music. Many said they learned a lot about themselves, and learned a lot about getting along together, and priorities. And for most, that alone makes it all worthwhile.” 

 

On the CBS Evening News with Walter Cronkite, reporter John Laurence delivered a commentary that looked past the drugs and the traffic.

 

“What happened this weekend may have been more than an uncontrolled outpouring of hip young people,” said Laurence. “What happened was that hundreds of thousands of kids invaded a rural resort area totally unprepared to accommodate them, among adults who reject their youthful style of life. And that somehow, by nature of old-fashioned kindness and caring, both groups came together, in harmony and good humor, and all of them learned from the experience.”

 

Laurence described Woodstock as “a revelation in human understanding.” The local – and older – eyewitnesses, Laurence explained, “had not been aware, as the kids are, of the gentle nature of kids to one another. These long-haired, mostly-white kids in their blue jeans and sandals, were no wide-eyed anarchists looking for trouble.

 

“So what was learned was not that hundreds of thousands of people can paralyze an area…but that in an emergency at least, people of all ages are capable of compassion,” concluded Laurence. “And while such a spectacle may never happen again, it has recorded the growing proportions of this youthful culture in the mind of adult America.”

 

The realities of change

 

How would media coverage of youth culture change after Woodstock?   

 

To Ken Paulson – in 1969 a fifteen-year-old fledgling music writer who four decades later would become editor of USA Today – the reporting on Woodstock and the internal strife it produced at The New York Times were vivid examples of the disconnection at many media outlets that would continue to fester in their coverage of youth culture in America.

 

“The news media didn't know how to cover a cultural event like Woodstock, and they had no appreciation of the art involved,” Paulson said. “This was no surprise. Newspapers across the country were staffed with people who grew up on Elvis, and it is a giant leap from Elvis to The Who.”

 

This lack of appreciation says much about the state of news coverage in the 1960s, but speaks even louder about the financial needs that were emerging for media in that era. During the 1960s, news organizations had to court the emerging baby boomers – a generation strong in numbers and in buying power – but not without significant challenges.  

 

“Media slowly began to realize that they needed younger readers to buy their publications and buy from their advertisers,” said Paulson. “But most aspects of counterculture – such as alternative lifestyles and social protest – just didn't lend themselves to advertising revenue or support for general-interest publications. 

 

One method to reach that growing audience was through coverage of music, which created opportunities for a new, more diverse generation of reporters.  

 

“Music coverage was the most visible form reflecting the culture at the time,” Paulson said.  “Woodstock inspired a reexamination among the nation's news media about how to cover these events – and how to do so without appearing to be totally out of touch.”

 

“As a result, in the early seventies, major publications hired young people who could write about young people's music, as well as film.” 

 

This lack of awareness of the contemporary scene among mainstream media outlets continues to shape coverage.

 

“It's not unusual for news organizations to be clueless about emerging cultural developments,” Paulson said, “especially when it involves the art of young people. I think the gap has intensified now – it’s just taken different forms today. For instance, it’s very difficult for mainstream news organizations to understand rap, or to truly appreciate hip hop.”

 

So although that gap has lingered in the fifty years since the summer of 1969, Woodstock was nevertheless a milestone – not just in coverage of the music scene, but also in broader media exploration of social and economic issues that affect younger audiences. After the Woodstock weekend, rock music and other topics concerning young people in the American experience were no longer oddities. It was clear that the future had arrived, when for three days, 400,000 people were part of an instant city that defined its own culture.

 

“Suddenly,” said Collier, “we began to realize that we were coming into a different world.”

 

At ABC News the night Woodstock ended, veteran journalist Howard K. Smith agreed.

 

“Over the last few days we’ve had a glimpse of our future,” Smith told his viewers, “and this is what it looked like.” 

Seeing Our Environment

Steve Hochstadt is a professor of history emeritus at Illinois College, who blogs for HNN and LAProgressive, and writes about Jewish refugees in Shanghai.

 

 

Two sets of opinions about our environment, the earth which makes our lives possible, are at war in our country. The scientific set is alarmed about the mounting effects of human activity on air, water, animal and plant life, and climate. As population and consumption grow, and industrial methods of doing everything proliferate, the earth has become unable to absorb the multiplying impact on its interlocking natural systems. The pollution of our water supplies, the increasing ferocity of storms, the warming of climate, the rising level of oceans, and the dying of species are already negatively affecting people around the globe. Projections of these trends into the near future predict severe problems for billions of people.

 

The ignorant set of opinions dismisses all evidence with stupid arguments. “The earth’s climate was much warmer long ago.” Yes, it was, before agriculture, before human life emerged, before millions of people lived on the edge of the oceans. “There is no scientific consensus.” Just a lie about the small number of isolated cranks who put forward specious contentions based on made-up evidence. “Computer models are unreliable.” You don’t need a computer to read a thermometer or see how the projections from 10 years ago have already come true. “The end times are coming, so don’t worry about climate change.” Religious dogma trumps science again.

 

Ignorant and stupid may be understatements. The political forces which have argued against doing anything to reduce our impact on the environment, and which now actively reverse previous efforts to protect the earth, deliberately lie about what has already happened. Republicans in Congress and the White House know that temperatures are rising. But they prioritize their own ideological short-term gains over the long-term prospects for our children and grandchildren. Their rich donors believe that their money can protect them against the disasters that will eventually befall the less wealthy, who have always borne the brunt of human-caused environmental disasters. The ignorant, stupid, dishonest set of opinions has been backed by billions of ideological dollars for decades.

 

Against the torrent of influence-buying, the willingness of ideologues like the Heartland Institute to twist the truth, and the self-interest of venal politicians, the honesty and empathy of someone like Greta Thunberg, the Swedish teenager who just sailed across the Atlantic to urge Americans toward bold action against climate change, stands little chance of success. It appears that no amount of evidence, neither scientific articles nor photographs of melting glaciers, can affect the deliberately ignorant.

 

The so-called age gap in climate consciousness might appear to be a hopeful sign for the future. A Gallup poll last year found that 70% of 18- to 34-year-olds worry about global warming, but only 63% of 35- to 54-year-olds, and 55% of people 55 and older. Nearly half of older Americans put themselves into the ignorant camp, not believing that most scientists agree that global warming is occurring, that global warming is caused by human activities, and that the effects of global warming have already begun. Maybe the key is that only 29% of older Americans think global warming will pose a serious threat in their lifetime. As Louis XV is supposed to have said, “Après moi le déluge” (“After me, the flood”).

 

Politics has an even stronger effect on beliefs about climate than age. The most ignorant Americans are older Republicans, less than half of whom believe global warming is occurring, and less than one-third of whom believe that most scientists agree about global warming.

 

But young people are also not that worried. Only half believe that global warming will pose a serious threat in their lifetimes, which extend well past 2050, the nightmare date by which climate across the globe will be unrecognizable.

 

Opposition to efforts to ameliorate climate change comes not only from conservative politicians. A couple in Missouri who wanted to install solar panels on their roof had to fight for years with local politicians and neighbors who didn’t like the look. In some classic cases of “not in my backyard,” people in the most liberal places refuse to accept minor lifestyle changes. An attempt to construct a wind farm off the shores of Nantucket Island near Cape Cod generated years of controversy, litigation, documentaries, books, and polls before it was eventually shelved. The most significant argument against the tall turbines, 15 miles offshore from Nantucket, was that they would spoil the view. A 2013 Massachusetts law that would have indexed the state gasoline tax to inflation was repealed by popular vote the next year.

 

Although my family believes I am a Luddite because of my reluctance to embrace cell phones, I blame their use for some of our environmental problems. I am constantly amazed when I walk around on a sunny day and most of the people I see are staring fixedly down at a tiny screen. A flock of ducks flies overhead, trees wave in the breeze, clouds march across the sky, but they earn not even a glance. The younger generations are turning away from the natural world in favor of virtual unreality. They may be watching videos of Greenland’s ice pack melting, but they miss what is happening to their own environment.

 

There is much discussion of the physical dangers of using smart phones while walking. I am more concerned about the intellectual danger of ignoring the physical environment during the short periods when most people are outside.

 

Some scientists worry about the “human costs of alienation from the natural world,” which has been labeled “nature deficit disorder.” Biologists identify “plant blindness,” “the inability to see or notice the plants in one’s own environment,” as one symptom. As our society has moved off the land into cities and suburbs, we have distanced ourselves from the natural world. Now the lure of rapidly changing images and instant communication distracts too many people from the slow degradation of the earth on which we stand.

 

The pace of environmental change is much faster than ever before, but still slow in terms of a human life span. It is difficult to convince anyone to accept something they don’t like now in order to prevent a catastrophe decades away.

 

Looking down at our phones, we won’t see the cliff ahead.

 

Steve Hochstadt

Springbrook WI

September 3, 2019

 

Thanks to my cousins, Roger Tobin and Saul Tobin.

Roundup Top 10!  

Lessons from the UN peacekeeping mission in Rwanda, 25 years after the genocide it failed to stop

by Samantha Lakin

Despite the broader mission’s many well-documented failings, peacekeepers took risks to save lives, going beyond official orders to protect innocent Rwandans.

 

As Hurricane Dorian Threatens Florida, Gov. DeSantis & Trump—Who Haven't Curbed CO2 Emissions—Should Resign

by Juan Cole

If you don’t recognize the cause of a problem, you can’t fix the problem.

 

 

To rescue democracy, we must revive the reforms of the Progressive Era

by Ganesh Sitaraman

The playbook for taming industrial capitalism already exists. It’s the essential starting point for reform today.

 

 

Police and punitive policies make schools less safe, especially for minority students

by Kathryn Schumaker

The increase in school security is directly linked to the rise of student activism that started to transform schools 50 years ago.

 

 

The Original Evil Corporation

by William Dalrymple

The East India Company, a trading firm with its own army, was masterful at manipulating governments for its own profit. It’s the prototype for today’s multinationals.

 

 

Not having kids is nothing new. What centuries of history tell us about childlessness today.

by Rachel Chrastil

The long history of childlessness can help us to debunk myths, tell our stories and expand the range of our possibilities.

 

 

When Henrietta Wood Won Reparations in 1878

by W. Caleb McDaniel

She sued the man who had kidnapped her into slavery, seeking damages and lost wages, and her case offers lessons for today’s debate.

 

 

Could footnotes be the key to winning the disinformation wars?

by Karin Wulf

More than ever, we need what this tool provides: accountability and transparency.

 

 

The Price of Self-Delusion

by Ronald Radosh

Paul Robeson, the towering figure of American arts, athletics, and civil rights activism, was also an unapologetic Stalinist. Failing to acknowledge this checkered legacy ultimately does a disservice to the goals he fought for.

 
Thou Shalt Not Ration Justice

 

With the passing of Norman Lefstein this week in Indiana, where he had served as Dean of the IU McKinney School of Law, the country has lost a champion of indigent defense reform whose initiatives and advocacy have shaped the national conversation for generations.

 

Norm often quoted the late federal judge Learned Hand: “If we are to keep democracy, there must be a commandment: Thou Shalt Not Ration Justice.” He argued that the promise of justice is hollow so long as public defense remains an unfunded mandate. The uneven dispensing of justice from state to state, county to county, and defendant to defendant has too often been the result.

 

Norm was no idle critic. He took his first case as a public defender in 1963, only six months after the Supreme Court’s landmark decision in Gideon v. Wainwright. In the early 1970s, he headed the Public Defender Service for the District of Columbia, an agency that quickly became, and remains, a national model for delivering public defender services. In the words of his colleague Barbara Babcock, “many of Norm’s ideas were novel for the time; today they are the hallmarks of excellence in a defender program.”

 

By the 1980s, Norm was an indispensable member of any state or national conversation about public defense. Until his dying day, he remained a fixture of the American Bar Association’s Standing Committee on Legal Aid and Indigent Defendants. There, he co-authored the Association’s touchstone pronouncements on indigent defense work, including its Standards for Providing Defense Services and the Defense Function, and its Ten Principles of a Public Defense Delivery System.

 

Norm pioneered some of the first attempts to collect comprehensive data on indigent defense around the nation. In the wake of the 1963 Gideon decision, he knew that states were underfunding public defense; he just couldn’t prove it. His 1982 study showed that defender systems in some states were funded at less than 5% of the D.C. public defender’s per-capita budget.

 

Throughout his life, Norm’s message on indigent defense policy was consistent: “Current financing is woefully insufficient,” and America’s legal system must do better. Reflecting on the 50th anniversary of Gideon, Norm quoted John Lennon’s “Imagine” and described himself as a “dreamer.” But Norm was remarkable precisely because he wasn’t just a dreamer; he was a “true believer.” From the courtroom to the classroom, Norm understood the importance of proof, and he led public defense, and public defenders, into the brave new world of evidence-based reform.

 

Norm’s work is unfinished. But every day, as they stand beside their clients in court, tens of thousands of lawyers carry on his legacy. At the Deason Center, located in the Dedman School of Law at Southern Methodist University in Dallas, we public defense researchers (and former public defenders) live and work in Norm’s long shadow. It is difficult to imagine what public defense will look like without him. But to honor him, we must try.
