Google Questions: articles brought to you by History News Network

Understanding America's History Of Gun Control

Receiving a breaking news alert about a mass shooting in the United States is no longer shocking, but anxiously anticipated. As gun violence and mass shootings grow ever more frequent, the question “Did you hear about that shooting?” is now often met with “Which one?” Americans have become shockingly desensitized to the constant violence caused by guns, and the significance of a mass shooting is appraised by the number of casualties inflicted rather than the frequency at which these shootings occur.

While all Americans mourn the lives lost, politicians remain divided about the best solution. Those who support gun control attribute the recent increase in mass shootings to the lack of federal regulations on the sale of guns. By contrast, those who advocate for gun rights argue that the Second Amendment guarantees their right to be armed and that removing weapons from American citizens would be unconstitutional. However, gun control in America has not always been a polarized, uphill battle. Historically, support for gun control has been largely influenced by how gun violence and gun ownership have affected and subsequently shaped our political and social spheres. Understanding the turbulent history of gun control in the United States can explain why Americans cannot agree on gun control legislation when it is most needed.

The origins of gun control date to 1934, when the violence of crime boss Al Capone prompted Congress to pass legislation requiring all gun sales to be recorded in a national registry as a way to regulate who owned firearms in the country. Fifteen years later, in 1949, the first mass shooting by a single gunman occurred in Camden, New Jersey, where Howard Unruh killed 13 people in his neighborhood, bringing large-scale awareness to gun violence in the United States. But it was the assassinations of President John F. Kennedy in 1963 and Rev. Martin Luther King Jr. in 1968 that sparked unprecedented public support for gun control. These events catalyzed the Gun Control Act of 1968, which became law on October 22, 1968. The federal law prohibited the sale of mail-order guns; banned all convicted felons, drug users, and those found “mentally incompetent” from owning a gun; and raised the age of legal handgun purchase to 21. Upon signing the act, President Lyndon B. Johnson stated: “We have been through a great deal of anguish these last few months and these last few years - too much anguish to forget so quickly… We have made much progress--but not nearly enough.”

However, President Johnson’s call for further regulation of gun sales in America did not come to pass. Rising crime rates in the 1960s generated widespread concern about violence in the United States. Many believe this fear of crime was compounded by racialized fears of black people with guns. For many, this intensified the perceived need to obtain a firearm for personal protection. As concerns for personal safety escalated, the National Rifle Association invoked the Second Amendment and used its considerable political influence to lobby against previously established gun control policy.

Surprisingly, the NRA was not always opposed to gun control legislation; in fact, it supported early gun control efforts, including the Gun Control Act of 1968. But as more and more individual Americans began buying firearms, the NRA lobbied to equate gun ownership with American freedom, interpreting the Second Amendment as guaranteeing every citizen the individual right to bear arms. The NRA sought to convince the public that owning a gun was more than just a way to ensure personal safety: it was patriotic and a constitutional right. This shift in the appeal of gun ownership, which stemmed from the NRA’s social and political influence, led to the 1986 Firearms Owners’ Protection Act. Ultimately, this law rescinded the majority of the gun control measures established in the Gun Control Act of 1968 and prohibited the previously implemented national registry of gun owners. This dramatic transformation in gun control legislation, combined with the growing patriotic sentiment surrounding gun ownership, coalesced into the polarized gun control issue the United States has come to recognize.

Since the 1986 Firearms Owners’ Protection Act, the NRA has fought hard to keep gun control to a bare minimum and has continued to promote this patriotic culture within communities that advocate for gun rights. It is important to note that in recent mass shootings, the majority of the weapons used were purchased legally, despite several gunmen having documented mental health issues or criminal histories. Although 89 percent of Americans support expanding federal background checks and all 19 Democratic candidates running for the presidency in 2020 support an assault weapons ban, Congress is seemingly unable, or unwilling, to pass gun control legislation.

Unfortunately, the topic of gun control in the United States is both painfully familiar and extremely taboo. It has permeated our everyday lives, yet discussing its history and complexity is avoided at the dinner table. But we need sufficient gun control legislation now. What is strikingly clear is that our current gun control legislation is simply inadequate at protecting Americans: it allows those who want to hurt or kill large numbers of people as quickly as possible to obtain the means to do so legally and with ease. Our history demonstrates that gun control legislation is not impossible to achieve, but it also warns that how we frame gun ownership shapes political outcomes. When we glorify guns, it is harder to pass new gun control legislation. We must learn from these historical trends and consider them as we continue to take steps toward implementing effective gun control today.

Why Do Jews Celebrate Hanukkah?

Hanukkah table. By MathKnight, own work, CC BY-SA 3.0

When asked which Jewish holidays they know, my students frequently name Hanukkah first. Indeed, it is probably the most public Jewish holiday in the U.S. Not only does it often coincide with Christmas, but there are also giant menorahs, delicious foods, and, of course, presents. But despite the ideal conditions for a great holiday (gifts, cuisine, and the right timing), this popular holiday had a rather slow start.

The eight-day festival of Hanukkah, “dedication” in Hebrew, celebrates the purification of the Temple in Jerusalem. The events that led to it are among the greatest puzzles of ancient Jewish history. At the time, Judea was part of the vast Seleucid kingdom ruled by Antiochus IV. His predecessors had no problem tolerating the diverse religious beliefs of their subjects. Yet in 167 BCE Antiochus banned Judaism and converted the Jerusalem Temple into a pagan shrine. To explain the king’s unprecedented actions, some scholars argue that he wished to unify his empire under one religion. Others suggest that Antiochus acted on the advice of Jewish elites who embraced Hellenistic culture and believed that strict observance of their ancestral religion led to a dangerous isolation from the rest of the world. According to yet another view, the king’s forceful prohibition of Judaism was a punishment for a Jewish uprising fueled by a separatist religious worldview.

One of the reasons for the lack of a clear-cut explanation of Antiochus’s policy is the scarcity of sources. Most of what we know about his decree, its implementation, and the subsequent Jewish revolt culminating in the first Hanukkah comes from two texts, the First and Second Books of Maccabees. Given the current popularity of this festival, it is surprising that neither of these Jewish books found its way into the Jewish Bible. Rather, we owe their preservation to Christian copyists. Of course, Jewish traditional lore also has stories about Hanukkah. Yet these are relatively few and late.

Curiously, among these late traditions is the one that most Jews are best familiar with—the story of the magical oil vessel known as the cruse. Found undefiled in the desecrated Temple, a jar with a mere day’s supply of sacred oil for the Temple candelabrum miraculously lasted for the eight days of purification. This celebrated miracle pales in comparison to the remarkable victories of the rebels led by Judas Maccabeus. However, since the action-packed accounts of the apocryphal Books of Maccabees were less known among later Jews, the miracle of the cruse, found in the authoritative Babylonian Talmud, became the hallmark of the divine presence in the Hanukkah story. 

The rededication of the Temple was undoubtedly an occasion for a celebration. However, there is a big difference between a one-time festival and an annual one. The Second Book of Maccabees opens with two letters by Jerusalem officials calling for Jews in Judea and Egypt to celebrate Hanukkah. The second letter is a masterfully constructed argument in support of the legitimacy of the new holiday. Pointing, among other things, to the eight-day long dedication of the First Temple by King Solomon, it concludes with a plea: “Please, celebrate the days.” It seems that not everyone was eager to add another festival to what was already a busy Jewish calendar.

Hanukkah’s slow start is not the only curious aspect of this festival. Some of its most cherished features have little to do with the first Hanukkah. The beloved potato pancakes, latkes, seem to have emerged as a Hanukkah dish among the poverty-stricken Jews of Eastern Europe only in the 19th century. The famous dreidel (a spinning top), a game embraced by many cultures throughout the ages, may have risen to prominence among Jewish communities in the German-speaking countries. The four Hebrew letters traditionally inscribed on a dreidel correspond to the first letters of the words “nothing,” “all,” “half,” and “put in” found on a German spinning top. Finally, the jam-filled donuts (sufganiyot in Hebrew), originally a German dessert, became part of Hanukkah festivities in Palestine only in the early 20th century. Still, who would want a Hanukkah packed with all these historical facts, but without latkes, sufganiyot, and dreidel? Not even this professor of Jewish history.

Is It Legal to Buy Automatic Weapons? Time Magazine reports:

Starting in the Prohibition era, Congress has restricted automatic weapons and gun ownership at key points. The National Firearms Act of 1934, the first federal gun-control law, did not outlaw automatic weapons, but it made them expensive and difficult to obtain. It required recording all sales in a national registry, along with a tax, a background check, and other requirements.

After the assassinations of President John F. Kennedy and the Rev. Martin Luther King Jr., President Lyndon Johnson spearheaded the Gun Control Act of 1968. It implemented many of the gun ownership restrictions still in place today (preventing felons and the mentally ill from buying guns and raising the handgun purchase age to 21, among other things), but it had an important effect on machine guns as well. The Gun Control Act prohibited selling imported guns to civilians “with no sporting purpose,” and automatic weapons were determined to fall in that category. So after this law was passed in 1968, average gun owners couldn’t legally buy imported automatic weapons.

The Firearm Owners Protection Act of 1986 was passed to prevent the federal government from creating a registry of gun owners, but it had an amendment tacked on that banned civilian ownership of machine guns manufactured after May 19, 1986. There are reportedly fewer than 200,000 machine guns in the U.S. that meet that criterion, according to Slate.

Why Did the Neo-Nazis in Charlottesville Chant “Blood and Soil”?

"Blut und Boden"-Symbol of the German "Reichsnährstand", Third Reich

Related Link: HNN's Google Questions Department

Recent events in Charlottesville have revived many old racial tropes and anti-Semitic slogans, most notable of which was “blood and soil” (German: Blut und Boden), a phrase unfamiliar to those without an interest in 19th and 20th century German political history. The phrase first came into popular use in late 19th century Germany, where nationalists adopted it as a populist slogan emphasizing the racial purity of the German people (blood) and their connection to a German homeland (soil). There is more to it, though: the slogan was meant to evoke romantic notions of the ideal German as both racially pure and intimately tied to the land, a sort of agrarian puritanism. Its use was two-pronged, meant both to mythologize the rural country German and to deemphasize the role of the urban elite in German society. Furthermore, the construction of the phrase (binary, terse, and deadly simple) had the effect of being a primal galvanizing call to all Germans.

Richard Walther Darré, a prominent Nazi official and eugenics thinker, is largely credited with popularizing the Blood and Soil movement in the early 20th century, which specifically aimed to resettle German citizens from the city to the countryside in the belief that there was a strong mutual relationship between the people (specifically Germans) and the land (Germany). This ideology placed a premium on peasant life and asserted that the purest Nordic blood coursed through the veins of German country farmers. In other words, the term implied that the peasantry represented German ethnicity in its most authentic form. As such, the primary objective of the Blood and Soil ideology was to ensure the preservation of pure German bloodlines and to use the military as a tool to defend the land so inextricably linked to those bloodlines. The Blood and Soil movement was founded on an ethno-nationalist ideology that later formed much of the philosophical basis for Nazi policies. According to Gerhard Weinberg, Professor Emeritus of History at the University of North Carolina at Chapel Hill and an expert on Nazi history, the phrase implied an extreme nationalism.

In the early 1930s, as Hitler and the Nazi Party rose to prominence, the “blood and soil” ideology directly influenced Hitler’s military conquests, particularly the invasion of large swaths of eastern Europe. As Professor Weinberg explains, the phrase “rather quickly came to be identified with the Nazis whose leader publicly in both speeches and writing advocated more wars at a time when most people outside Germany thought that what they called ‘The Great War’ (WW I) was quite enough.” Moreover, the Nazis used the ideology to tap into Germany’s economically ravaged rural class and to focus the blame for their hardships squarely on the shoulders of the elites of German society, particularly Jews. Again, this perception of racial minorities as threats to German heredity is very much in keeping with “blood and soil” ideology; as Weinberg notes, “the term blood … had a racial connotation from the beginning.”

Today the phrase has gained a foothold among both white nationalists and neo-Nazis and has been used as a rallying cry to highlight the concern, as they see it, that the United States is moving away from its traditional values and adopting those of the immigrants who have gradually populated the country over the last half century. The use of “blood and soil” by white nationalists may also be meant to draw a parallel between the ethnic ideal of the German rural class and their own ethnic ideal of a white American working class. In Charlottesville, with a Confederate statue at the center of the rally, chanting “blood and soil” was a clear public declaration of the protestors’ support for white supremacy as well as white racial protectionism.

What’s the Great American Eclipse?

Total solar eclipse, 1999, France. By Luc Viatour, CC BY-SA 3.0

As Americans across the country get ready to view the upcoming total solar eclipse, it is important to remember that humans didn’t always have the luxury of knowing when the sun would suddenly “disappear” in the middle of the day. Records of eclipses from cultures all around the world date back thousands of years. In fact, it is believed that the quest to understand this mystifying event is what led to modern astronomy.

From sky wolves to decapitated heads, there is no shortage of ancient myths about both solar and lunar eclipses. For centuries, indigenous people all over the world have tried to explain this seemingly unnatural phenomenon. According to Dr. E.C. Krupp, director of the Griffith Observatory, "Most of eclipse lore is based around the concept that there is something attacking the sun or the moon, and people have a role to play in stopping it.”

Despite major advances in science and technology since ancient times, people still assign religious significance to eclipses. Dr. Krupp is frequently asked if an eclipse poses a danger to pregnant women; the answer, of course, is no. In 2015, a couple of pastors pointed to a series of “blood moon” lunar eclipses falling close to Jewish holidays as evidence of an imminent apocalypse. While astrologers may not assign a religious significance to eclipses, some are quick to point out that major events happened on or near a solar eclipse, like the devastating earthquake that hit Haiti in 2010 and Princess Diana’s death in 1997.

While it may seem like a huge coincidence that some historic events coincide with eclipses, it is essential to remember that day-to-day life does not stop for cosmic events. It stands to reason that something significant would eventually occur on a day when an eclipse happens to take place. Additionally, it is not clear where people draw the line in associating an event with an eclipse: same day? Same week?

Even if a specific date does not hold particular cosmic significance, it’s worth contextualizing the few total solar eclipses documented on US soil. Compared to the rest of the world, the US has relatively few documented total solar eclipses. The earliest known account in the US (or, more accurately, in what would later become the US) appeared in the Pennsylvania Gazette on April 30, 1752. On the eve of the Civil War, the Baltimore Daily Exchange reported on the total solar eclipse of July 18, 1860, which was visible only from the West Coast. Interestingly, the last time a total solar eclipse spanned the contiguous states was June 8, 1918. While American soldiers were off fighting in the Great War, citizens on the home front were treated to a view of this unusual cosmic event.

Luckily, the next total solar eclipse is just around the corner for curious eyes across the contiguous United States. Ahead of the August 21st eclipse, National Geographic reminds Americans to keep their eyes safe: until the moon is completely blocking the sun, viewers should use glasses with special solar filters so as not to damage their eyes.

When Was the Department of Justice Established?

The Robert F. Kennedy Building, which serves as the headquarters of the U.S. Department of Justice. CC BY-SA 3.0

The Judiciary Act of 1789 established the US federal justice system. Most notably, it created the Supreme Court and the post of Attorney General (which began as a part-time job). As time went on, the responsibilities of these institutions grew to serve an expanding nation. The Supreme Court grew in membership from six to nine, and the Attorney General became the head of the Department of Justice.

The Department of Justice (DOJ) was created by the Act to Establish the Department of Justice in 1870. You would think we’d know why it was established. But we don’t. That is, we don’t know definitively. There are plenty of theories, though.

One widely accepted theory, reiterated by Fordham Law Professor Robert Kaczorowski, is that the DOJ was created to help deal with the general influx of litigation that occurred at the end of the Civil War. Another theory — backed by scholar and Stanford Law Professor Norman Spaulding in 2010 — is that the federal government created the DOJ during Reconstruction to help enforce former slaves’ civil rights. This narrative fits well with the DOJ’s goals in the 1960s under the leadership of Robert F. Kennedy, who fought to protect African Americans’ civil rights. But is this theory incomplete — or flat out wrong?

In 2014 Jed Handelsman Shugerman, scholar and Fordham Law Professor, provided an alternative interpretation of the DOJ’s purpose in post-Civil War America. Rather than emphasize the DOJ’s role in civil rights, Shugerman points to its role in the “professionalization of American legal practice.” At the time, it was thought that well-trained lawyers who practiced with non-partisan views were less likely to be corrupted. This same principle led Congressman Thomas Jenckes and his associates to create the New York City Bar Association in 1870 — the first of its kind — to ensure the faithful practice of the law. Today, the modern bar association continues to grow as the demand for lawyers rises. It may seem obvious now that lawyers should be trained and required to meet a high standard; however, that was not always the case. In the antebellum period lawyers (like Lincoln) typically trained by reading law on their own or with established attorneys rather than attending law school.

As the DOJ was forming, so was the idea of “bureaucratic autonomy and expertise.” Theoretically, having an independent or non-partisan Attorney General at the head of the DOJ made it more difficult for executive officers to circumvent the law and get away with it. This was a big change from the spoils system that prevailed prior to the Civil War.

Unlike earlier scholars, Shugerman argues that the DOJ wasn’t established to help advance the goals of Reconstruction. He found that there was “no mention of how the new department would help … enforce civil rights legislation. The members of the Joint Select Committee on Retrenchment generally were unsympathetic to Reconstruction and to civil rights enforcement.”

Today the Department of Justice is a colossus. Its fiscal year 2017 budget request is $29.0 billion, covering 118,110 positions, among them a Deputy Attorney General (a post established in 1950) and an Associate Attorney General (1977).

Who Was the Logan Act Named For and What Did He Do?

George Logan

The Logan Act bears the name of Dr. George Logan, a private citizen who took it upon himself to ease tensions between the United States and France after the XYZ Affair during the Adams Administration. While he was well received by the French, he did not fare as well back home. Understandably, the government did not approve of Logan’s attempts to undermine its authority.

In order to prevent such an incident from happening in the future, Congress passed the Logan Act in 1799. According to a report published by the Congressional Research Service, there have been numerous accusations, but no convictions under the Act. There was, however, one indictment. Francis Flournoy, a farmer from Kentucky, was indicted by a grand jury in 1803 for promoting the creation of a separate Western nation that would ally itself with France.

 

Text of the Logan Act

In the two centuries since the law originated, there have been numerous references to the Logan Act by the Department of State and judiciary officials. One accusation occurred in 1976, following former President Nixon’s visit to the People’s Republic of China. Accusing a political foe of violating the Logan Act is not uncommon: Ross Perot, Jim Wright, and Jesse Jackson are among the many political figures who have been accused in the media of violating the law.

Most recently, President Donald Trump’s national security advisor Michael Flynn was accused of violating the Logan Act. In January 2017, the Washington Post reported that Flynn had contact with Russian Ambassador Sergey Kislyak prior to Trump’s inauguration. Top Republican officials did not deny that Flynn contacted Kislyak; however, Vice President Mike Pence said that Flynn had addressed personal subjects unrelated to the US sanctions then in place against Russia.

After further investigation, Flynn’s defense was called into question by the acting attorney general. Flynn has since admitted that he cannot be sure that sanctions were not discussed and apologized to Pence for his misleading explanation of the events that transpired between himself and Kislyak.

House Minority Leader Nancy Pelosi was among the leading Democrats who demanded that Flynn be relieved of his duty as Trump’s national security advisor. Even though The Hill reported on the morning of February 13th that Flynn had no plans to resign from his post, he did just that later the same day. Given the Logan Act’s history of accusations without convictions, Flynn’s legal future is uncertain.

Which President Held the First News Conference?

Related Link: Presidential Press Conferences by Rick Shenkman

When one thinks of a presidential news conference, what comes to mind is the president on TV. But there were press conferences long before TV.

The press began to have a presence in the White House during the second Cleveland administration in the early 1890s. A group of correspondents had their own table inside the building. A Washington Star reporter by the name of William W. Price referred to it in a letter to the staff of President Grover Cleveland. Under President William McKinley, the area where reporters could roam was further expanded. By 1898, journalists were found sitting about on the White House porch, in the front lobby and on the landings, according to accounts of the time.

Woodrow Wilson, president from 1913 to 1921, held the first press conference in March 1913. Things did not go as planned. Wilson’s private secretary, Joseph Tumulty, advised the press in Washington that at 12:45 p.m. on March 15, 1913 the president would talk with them. The new president, expecting a small group, wanted to greet each man one by one to foster a personal relationship. (When President Teddy Roosevelt spoke to reporters, it was at his barber’s while he was getting a morning shave.) But when the Wilson conference began it was obvious he had miscalculated. One hundred twenty-five newsmen showed up. “I did not realize there were so many of you,” said Wilson. “Your numbers force me to make a speech to you en masse instead of chatting with each of you, as I had hoped to do, and thus getting greater pleasure and personal acquaintance out of this meeting.” The New York Times headline read: “Wilson Wins Newspaper Men.” From then on press conferences became a regular feature of presidential politics. All 17 of Wilson’s successors have held them.

But each president handled press relations differently. Calvin Coolidge, though known as Silent Cal, held the most press conferences: 521 sessions, or an average of 93 a year. Franklin Delano Roosevelt was famous for inviting reporters in for a chat around his Oval Office desk. He was on a first-name basis with reporters and often charmed them – to his great political benefit. Harry Truman also got along well with reporters, but wasn’t as agile as FDR. In a slip of the tongue, Truman gave his thoughts about the Republican senator from Wisconsin, Joseph McCarthy, telling the assembled press in March 1950: “I think the greatest asset that the Kremlin has is Senator McCarthy.”

The first televised press conference, during the Eisenhower administration, happened on January 19, 1955, but it wasn’t broadcast live. It was taped so the footage could be edited, and select clips were then distributed. Kennedy was more daring; he agreed to allow news conferences to be broadcast live. Comfortable with the medium, he turned in stellar performances marked by grace and wit. When difficult subjects came up he found clever ways to deflect the questions. As HNN reported years ago: Kennedy was the maestro. Anytime he wanted to get the Washington press chorus to sing his tune, he called a press conference. Peter Lisagor, the Chicago Daily News reporter, complained, “We were props in a show. We should have joined Actors Equity.” At one memorable press conference—his ninth in his first four months in office—JFK wittily remarked about his negative press coverage that he was “reading more and enjoying it less.”

Televised formal news conferences have receded in recent years as presidents have learned to use other formats to get their message out to the public:  Ronald Reagan inaugurated weekly radio broadcasts, Bill Clinton appeared on TV talk shows, and Donald Trump has turned to Twitter.  

Why Was the Minimum Wage First Established?

The minimum wage was conceived as a way to bolster wageworkers and decrease class stratification. It was first introduced in the United States through the Fair Labor Standards Act of 1938 (FLSA). Passed under President Roosevelt, the act established the first national minimum wage, 25 cents an hour. This set a floor on wages in the labor market and helped bring fairer labor standards to the country.

Classical economic theory suggests that the minimum wage would have a stabilizing effect on the economy. As economist Richard Freeman explains in his 1996 book, Uneven Tides: Rising Inequality in America, the minimum wage affects income disparity in three main ways, each distributing more earning power to people at the lower end of the economic spectrum. He refers to this as the “redistribution theory.”

First, an increased minimum wage raises the cost of producing goods and services, which results in higher prices. Everyone pays those higher prices, including the middle and upper classes, yet only low-wage workers simultaneously see their incomes rise, increasing their relative purchasing power. Second, higher wages decrease company profits while increasing the income of the poor. Third, increased wages also cost (some) jobs, often ones on the middle or higher end of the income spectrum. Overall, raising the minimum wage acts to decrease the wealth of the wealthier classes while increasing the wealth of lower-paid workers.

The FLSA was also found to temporarily harm regional economies. Economist John F. Moloney found in 1942 that after the law was implemented, southern plants experienced some adverse effects: “the value of output [was] 18 percent lower per plant and 21 percent lower per worker in the South than in the rest of the nation.”

Despite some negative regional effects, the FLSA empowered workers and decreased income inequality in America, as seen in the declining value of the Gini coefficient, a measure of inequality. Overall the first implementation of the minimum wage benefited American workers, especially when combined with increased labor demand that resulted from the onset of World War II.
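A brief aside on the measure itself, offered as a sketch rather than the Census Bureau's exact estimator: the Gini coefficient is 0 when every household earns the same amount and approaches 1 when a single household earns everything. With household incomes \(x_1, \dots, x_n\) and mean income \(\bar{x}\), one standard form is

\[
G = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} \lvert x_i - x_j \rvert}{2 n^{2} \bar{x}},
\]

so the declining values recorded in the Census Bureau's household tables (cited in the references below) indicate narrowing inequality.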

Despite Roosevelt’s intentions, the minimum wage did not continue to serve as a guaranteed economic floor for wageworkers. At the time, the official poverty line was determined by multiplying estimated food costs by three. At its high point, the federal minimum wage could support a family of three above the poverty line, but by the 1980s it could not even support a family of two. From January 1981 to April 1990, the federal minimum wage was not increased at all. In fact, over those nine years, the real value of the minimum wage, adjusted to 2012 dollars, decreased from $8.29 to $6.66.
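To see how the inflation adjustment behind those figures works, here is the standard conversion, sketched with the caveat that the exact CPI series used above is not specified: a nominal wage from year \(t\) is restated in 2012 dollars by scaling it by the ratio of price levels,

\[
w_{2012} = w_{t} \times \frac{\mathrm{CPI}_{2012}}{\mathrm{CPI}_{t}}.
\]

Because the nominal rate of $3.35 set in January 1981 stayed frozen for nine years while \(\mathrm{CPI}_{t}\) kept rising, the denominator grew and the real value fell, producing the slide from $8.29 to $6.66 described above.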

Surprisingly, poverty declined during this period. The number of people living under the poverty line during the 1980s peaked in 1983, when the minimum wage was worth approximately $7.59 in 2012 dollars. Why didn’t poverty increase? Because poverty reflects more than hourly wages; it involves a variety of factors, including federal assistance in the form of Food Stamps and welfare, as well as inflation and the unemployment rate.

REFERENCES

“Minimum Wage.” U.S. Department of Labor, Chart 1.

Moloney, John F. “Some Effects of the Federal Fair Labor Standards Act Upon Southern Industry.” Southern Economic Journal 9.1 (1942): 15–23.

Litwin, Benjamin S. “Determining the Effect of the Minimum Wage on Income Inequality” (2015). Student Publications. Paper 300.

Lee, David S. “Wage Inequality in the United States During the 1980s: Rising Dispersion or Falling Minimum Wage?” The Quarterly Journal of Economics 114.3 (1999): 977–1023.

“Persons Below Poverty Level in the U.S., 1975–2010.” Poverty and Income. Infoplease. Accessed 21 Jan 2016.

“Reflecting on SNAP: Purposes, Spending, and Potential Savings.” Brookings Institution. 08 May 2012.

“History of Washington Minimum Wage.”

U.S. Census Bureau. Historical Income Tables: Income Inequality. Table H-4, Gini Ratios for Households, by Race and Hispanic Origin of Householder. [Excel spreadsheet]. Accessed 21 Jan 2016.

Stack, Carol B. All Our Kin: Strategies for Survival in a Black Community. New York: Harper & Row, 1974.

Do Countries Have a Legal Obligation to Protect Refugees Fleeing Oppression?

A recent article in the New York Times reports that Donald Trump’s proposed ban on refugees from select countries “would be in blatant violation of international law, which requires countries to offer protection to all those fleeing war and persecution.” The international laws the NYT is likely referring to are the United Nations’ 1951 Convention Relating to the Status of Refugees and the 1967 Protocol Relating to the Status of Refugees.

The 1951 Convention followed the precedent set in 1948 by Article 14 in the Universal Declaration of Human Rights, which states that “everyone has the right to seek and to enjoy in other countries asylum from persecution.” Initially, the Convention’s purpose was to protect World War II refugees in Europe who were displaced prior to January 1, 1954. More than a decade later, the UN revisited the refugee crisis with the 1967 Protocol. The Protocol expanded the scope of refugee protection by amending the geographic and time constraints set by the 1951 Convention.

The Convention defines the term “refugee” as “someone who is unable or unwilling to return to their country of origin owing to a well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group, or political opinion.” It also details the protections that refugees should be granted in the country where they are seeking asylum, including the right to work, education, and freedom of religion.

It is difficult to determine whether the New York Times is correct in asserting that Trump’s refugee ban would violate international refugee law. The 1951 Convention left the process of determining whether a displaced person meets the criteria to be considered a refugee up to the states, as long as the process is “fair and efficient.” According to Dr. Craig Arceneaux, coordinator of the Model United Nations program and a Political Science professor at California Polytechnic State University, “the very definition of a refugee can be open to debate” because of the vague wording about who may currently fear persecution and how a country determines its source. Whether, for example, these laws apply to refugees fleeing a country on account of unequal gender rights is not clear.

In addition, the United Nations High Commissioner for Refugees (UNHCR) does not require that countries accept all refugees. A country may deny asylum if there is serious reason to believe that a person is guilty of, for example, a “crime against peace” or is “guilty of acts contrary to the purposes and principles of the United Nations.” Even if displaced persons are determined to be refugees, the UNHCR cannot force a country to accept them.

In general, according to James Milner, a political science professor at Carleton University and the co-author of UNHCR: The Politics and Practice of Refugee Protection, “states are bound by the principle of non-refoulement.” As defined by the UNHCR, non-refoulement is the practice of not expelling or returning “a refugee in any manner whatsoever to the frontiers of territories where his life or freedom would be threatened on account of his race, religion, nationality, membership of a particular social group or political opinion.” But the principle is violated frequently, and the burden of hosting is unevenly shared: just 10 countries host 60% of the world’s refugees.

As of April 2015, there are 145 parties to the 1951 Convention and 146 parties to the 1967 Protocol. The United States is among the few that only adopted the 1967 Protocol.

According to the UN, “As long as people continue to be persecuted, there will be a need for the 1951 Convention and its 1967 Protocol.” These laws paved the way for additional refugee legislative action like the 1969 OAU Refugee Convention in Africa and the 1984 Cartagena Declaration in Latin America. The future of international refugee law will develop as global conflicts continue to rise and fall.

Why Do We Vote on Tuesdays?

For the average working American, voting on a Tuesday can be a major inconvenience to the routine business week. In a TED Talk about Election Day, Jacob Soboroff features interview clips of prominent politicians who are incapable of explaining why the American people vote on Tuesdays. Rick Santorum, Ron Paul, Newt Gingrich, and John Kerry – all of them had trouble answering this question. Fortunately, a quick review of America’s agrarian roots provides the answer to this mystifying question.

The tradition of voting in presidential elections on Tuesdays in November began in 1845. Before then, Congress gave states the power to hold elections at any time within a 34-day window before the first Wednesday in December. This system had many flaws; for example, early voting in some states frequently affected late voting in others.

In Democracy Despite Itself: Why a System That Shouldn’t Work at All Works So Well, Danny Oppenheimer and Mike Edwards explain that Congress took a number of factors into consideration before sanctioning an official Election Day in 1845. First, legislators wanted a date after the fall harvest but before the cold set in, to accommodate the large number of farmers who could not take time off from tending their crops. For the religious population, Congress tried to keep Election Day from falling on the Sabbath or All Saints Day. Oppenheimer and Edwards also note that Congress wanted to avoid Market Day, which typically fell on Wednesdays. Lastly, the 1st of November was eliminated because it was a popular time for men to balance their account books. To avoid these conflicts and satisfy the majority of eligible voters, Election Day was set as the first Tuesday after the first Monday in November.

In a nation that values tradition over change, it is difficult to modify the election process. According to the 1845 standard of living, the placement of Election Day was extremely practical; however, in the modern era, not all people believe that this is still the case. Why Tuesday?, a non-partisan organization, is one of many that is committed to creating a dialogue about present voting conditions. Its ultimate goal is to make it easier for Americans to vote.

In a study by the International Institute for Democracy and Electoral Assistance, eligible voters were asked why they were unable to vote on Election Day. This report, along with U.S. Census data and a study by the Pew Research Center, shows that the number one reason voters gave for failing to cast a ballot was a scheduling conflict with work or school. This may account for America’s legendary low turnout rates. In the 2014 midterm elections, barely 36 percent of eligible voters voted. Turnout is so low in the United States that it ranks 138th out of 172 nations, dead last among the G7 countries.

There are three main alternatives to voting on the first Tuesday after the first Monday in November. The most popular solution among the states is some form of early voting that does not require a physical presence in a voting booth on Tuesday; all but fourteen states have established some form of early voting. Another popular solution is to move Election Day from the business week to the weekend. Voting on Saturday or Sunday is not a unique idea; in fact, five G7 countries, all with higher turnout rates than the United States, vote on the weekend. A bill to establish weekend voting has been introduced in Congress, but has failed to pass. Lastly, some organizations and individuals, like Senator Bernie Sanders, suggest that Election Day be made a national holiday to ensure that everyone has a chance to vote.

The debate over Election Day transcends political party affiliation. Former Secretary of State Hillary Clinton and former Arkansas Governor Mike Huckabee have both stated their willingness to move Election Day to the weekend. President Barack Obama is one of many politicians who endorse policies that would make it easier to vote. While some politicians support the date change, others do not. Before Vice President Joe Biden is willing to commit to a change, he wants to see more conclusive data supporting the relationship between turnout and Election Day. Former Speaker of the House Newt Gingrich has stated that he does not believe voter turnout is related to the date on which America votes.

It is clear that America in 2016 is vastly different from America in 1845. The number of eligible voters has increased immensely since then; citizens can now vote regardless of race, sex, or socioeconomic status. As a nation, we have greater access to information and transportation. It is now up to Congress to decide whether to keep or abolish the agrarian tradition and determine a new fate for Election Day.

Works Cited

Andrews, Evan. “Election 101: Why do we vote on a Tuesday in November?” History Channel (accessed September 19, 2016).

Oppenheimer, Danny and Mike Edwards. Democracy Despite Itself: Why a System That Shouldn’t Work at All Works So Well (Cambridge: MIT Press, 2012).

Pintor, Rafael López, Maria Gratschew, and Kate Sullivan, “Turnout Rates from Comparative Perspective,” International Institute for Democracy and Electoral Assistance (accessed September 19, 2016).

Simmons-Duffin, Selena, “Why Are Elections On Tuesday?” NPR (accessed September 19, 2016).

Voting and Registration, United States Census Bureau (accessed September 19, 2016).

Why Tuesday? Frequently Asked Questions (accessed September 19, 2016).

Why Tuesday? Quotes (accessed September 19, 2016).

Who Was the Red Baron?

Richthofen's all-red Fokker Dr.I.

The World War I German flying ace who became known as the Red Baron received his nickname after painting his plane bright red. While he was often eager for the bloodshed he knew was necessary to bring down his enemies, his willingness to place a stone on the grave of an Allied soldier who fell at his hands reveals a more compassionate side that was often lost amid the horrors of total war.

The Red Baron was born Manfred von Richthofen in the Silesian town of Kleinburg, near Breslau, in 1892. He was from a well-to-do noble family, and his father was a former officer in the Uhlan Regiment. At the young age of 11, Richthofen, along with his two brothers, entered the Cadet Institute at Wahlstatt. He proved to be a talented gymnast and also became a skilled hunter, a fact that foreshadowed his ruthless and calculated approach to warfare.

Discontented with the assignments he was given while serving in the infantry on the Western Front, Richthofen applied for a transfer to the Imperial Air Service, saying, “I have not gone to war to collect cheese and eggs, but for another purpose.” There was little reason to hope that such a transfer would be approved. Much to Richthofen's surprise and delight, he was allowed to join the 69th Flying Squadron as a reconnaissance flier in May of 1915. Richthofen's love for the thrill of flying is evident from his description of his first flight as an observer:

"It was a glorious feeling to be so high above the earth, to be master of the air. I didn't care a bit where I was, and I felt extremely sad when my pilot thought it was time to go down again."

A few months later, in August, Richthofen met the renowned German flying ace Oswald Boelcke. Richthofen began training to become a pilot later that year and passed his examination on Christmas Day. The two returned to the Somme, where the pupil served in one of the expert's newly formed squadrons. Richthofen idolized Boelcke, writing that the leader's words were "taken as Gospel" by his men. Ironically, he would receive no credit for his first kill, which occurred in April of 1916 while he was flying over Verdun.

Soon after Boelcke's death at the end of October, Richthofen encountered the "British Boelcke," Major Lanoe George Hawker, a British flying ace who had been responsible for the demise of many German planes. After a fierce battle, Richthofen shot Hawker down and claimed his most important victory to date. Continued success quickly allowed him to form his own group, which became popularly known as "The Flying Circus." When Richthofen was alerted to a bombing raid that several British opponents had planned for him, he refused to leave and instead dined with his senior pilots as they prepared to wait out the ambush in their dugout bomb shelter. Richthofen became a desperately needed symbol of hope for the Germans, whose prospects for victory were becoming increasingly dismal.

In July of 1917, the seemingly invincible Richthofen was wounded when a bullet struck him in the head and splintered part of his skull. Even though he continued to suffer from debilitating headaches, Richthofen returned to work within two months and continued to be remarkably successful. Richthofen would ultimately go on to be credited with eighty victories during the war, a number that placed him on the same level as the leading British flying ace of the day.

On April 21, 1918, Richthofen was shot and killed while taking part in a mission over the Somme region in France, the day after he secured that eightieth victory. The identity of the person who fired the fatal shot remains unclear. Many suspect the Canadian captain Arthur R. Brown, who went on the attack after Richthofen began to target Brown's less experienced friend Wilfred May. Others believe that one of the Australian troops stationed on the ground fired the shot. Regardless of who was responsible, there was a deep respect among the Allied soldiers for the enemy flying ace who had finally met his match. The British buried the man who had been responsible for the deaths of their comrades with an elaborate funeral and full military honors.

Some have speculated that Richthofen did not in fact achieve all of the victories that were attributed to him. However, because German and British records include so many of the same downed planes, many historians are inclined to believe that the Red Baron's legendary success was not hyperbole.

Who Are the Masons?

Prudence, charity, obedience. These might sound like the virtues of medieval monks, or simply the qualities any parent hopes to teach his children. Yet they are in fact the values advocated by the presider at a nineteenth-century Masonic initiation ceremony.

Although the Freemasons are often considered members of a secret society, a more accurate description of the organization would be a fraternity. Individual chapters of Masons are known as lodges, and a Grand Lodge oversees the smaller lodges in a given area. Unlike individual chapters of college fraternities, which usually adhere to standards set forth by a larger national organization, Grand Lodges are independently run and do not adhere to any international standards. Masons are initiated into the organization by completing a series of degrees, and important officers include the president and master of ceremonies. It was the organization's use of certain symbols and the secretive nature of its initiation ceremony that shrouded it in mystery. Yet not all of the symbols associated with the Masons are used solely by them. The pyramid with the eye on the one dollar bill? Not necessarily a strictly Masonic symbol. And the 33 on Rolling Rock beer bottles that was thought to refer to the thirty-third degree of Scottish Freemasonry? A printing error.

While the precise origin of Freemasonry is not certain, it seems that the organization emerged out of medieval stonemason guilds. Many guilds during this time were formed by professionals who were able to work and remain in one town or city for most of their lives. This was often not possible for stonemasons, who traveled from place to place in search of new and promising building ventures. Therefore these masons, many of whom were illiterate, began using a secret password to identify themselves as members of a guild that needed to be unusually far-reaching. The first known document that discusses Freemasonry is the Regius Poem, a six-hundred-year-old poem in verse which explores the origins of masonry as a craft and lists the qualities a good mason should possess. Above all, the poem reveals the importance of loyalty in the eyes of Masons.

The year 1717 marked the founding of the first Grand Lodge in London. Grand Lodges followed in Ireland in 1725 and Scotland in 1736. Some believe that such lodges were meant to provide spaces where individuals could meet without fear in a kingdom that was still reeling from the religious upheaval of the English Civil War. In addition to serious meetings, socializing, forming friendships and taking part in philanthropic activities were essential components of the Masonic experience. It is likely this openness to others and their ideas that attracted renowned philosophers and writers such as Voltaire, John Locke, and Johann Wolfgang von Goethe, as well as George Washington, Paul Revere, and Benjamin Franklin, who was the head of the Pennsylvania chapter in America. The profoundly influential Supreme Court Justice John Marshall was also a Mason.

Since the beginning, Masonic organizations have often been at odds with organized religion. One chief objection of religious officials was that mingling with members of such varied backgrounds would lead Masons, who inclined toward cosmopolitanism, to disregard matters of religious doctrine. Religious officials also worried that the acceptance of Freemasonry could encourage the formation of potentially subversive and dangerous secret organizations in the future. Interestingly, Freemasonry was first condemned by Protestant nations, while the Catholic Holy Roman Emperor Francis I took steps to protect the organization in his realm. Such imperial support nevertheless did not prevent Pope Clement XII from issuing a papal decree denouncing the Masons in April of 1738, and the Catholic powers of Spain, Portugal and Italy soon implemented laws curbing Freemasonry. Papal documents reaffirming the Catholic Church's opposition to Freemasonry were issued as recently as 1983 under Cardinal Joseph Ratzinger, who would later become Pope Benedict XVI.

The churches' opposition is somewhat ironic considering that the only requirement to become a Mason is that the individual profess a belief in some Supreme Being; indeed, the discussion of one's religious and political views is prohibited during meetings precisely to prevent disputes among members. It is also ironic given the Masonic connection between stonemasonry and the building of Solomon's Temple. In any case, the objections of religious organizations did not entirely prevent lodges from expanding. The willingness of lodges to admit individuals of many different religious backgrounds would have been especially appealing to members influenced by the ideas of the Enlightenment.

Women remain unable to become formal members of many lodges; nevertheless, their influence in society should not be overlooked. In France, lodges of adoption first offered female relatives of Masons the chance to obtain degrees similar to those obtained by their husbands or fathers. These lodges would become the forerunners of mixed lodges and, eventually, of lodges meant exclusively for women. While Mason women were by no means given complete liberty to defy set gender roles in these early lodges of adoption, they were given new freedoms that were seen as compatible with their unique identity as women. For example, women were sometimes responsible for directing induction meetings, and bylaws issued by the Grand Orient in France did not distinguish between rituals for men and women. Although participation in Masonic social activities became the chief avenue of involvement for many women after the reign of Napoleon, many para-Masonic orders continued to support these lodges. In 1894, Maria Deraismes successfully created a new obedience in which women and men would be initiated in the same way. The efforts of Freemason women have in many ways set a precedent for modern women's movements.

In the early years of the United States, the public grew warier of the Masons after Captain William Morgan disappeared upon his release from a New York prison in 1826, reportedly dragged into a carriage. It was rumored that Morgan's captors were Masons who feared that Morgan and a fellow publisher, David Miller, planned to publish an unflattering "tell-all" about Masonic corruption. In the wake of the suspicion and hysteria that followed Morgan's disappearance, the Anti-Masonic Party was formed, and its candidate William Wirt became the first third-party presidential candidate in United States history. Despite this period of fear in the United States, Masonic lodges around the world continued to gain members, and the establishment of military lodges proved an especially efficient way of spreading Freemasonry throughout the vast British Empire. The appeal of belonging to a Masonic lodge is easy to understand when one considers that many members lived abroad in colonies such as India or Australia: membership provided an immediate source of support for anyone living so far from home.

There are as many as six million Masons in the world today. There have been, and continue to be, occasional disputes between competing lodges, and three of the most prominent French lodges are still not recognized by the United Grand Lodge of England. Certain lodges are more secretive than others: in France, Masons tend to focus on their interior spiritual journeys and are less inclined to openly discuss their membership. Still, while lodges around the world operate according to different rules, their involvement has been invaluable in the establishment of schools, orphanages, and other important public institutions. Lodges continue to be guided by the Enlightenment principle of respect for human dignity that the earliest lodges hoped to embody. That is one principle no Mason would want to keep secret.

5 Quick Questions About: The Free State of Jones

HNN's 5 Quick Questions gives readers a brief background on hot topics in the news. In this edition historian Victoria E. Bynum tells us the real story behind Matthew McConaughey’s new movie, Free State of Jones. Bynum is a Distinguished Professor Emeritus of history at Texas State University, San Marcos. She is the author of Unruly Women: The Politics of Social and Sexual Control in the Old South and The Long Shadow of the Civil War: Southern Dissent and Its Legacies. Her book, The Free State of Jones: Mississippi's Longest Civil War, chronicles the life of Newt Knight and the fellow rebels who denounced the Confederacy and declared allegiance to the Union. In this interview, Bynum discusses Knight’s motivation to rebel and what he was able to accomplish during and after the Civil War.

1. Sussan Ayala Rodriguez: What was the 'Free State of Jones'?

Victoria Bynum: For many years, the term “Free State of Jones” has been used to designate the Jones County uprising against the Confederacy that took place in Mississippi between 1863 and 1865. It refers particularly to the period from February to April 1864, when Knight Company deserters were described as having taken over Jones County, leaving it “free” of civil (Confederate) government.

The phrase itself is said to have originated around the time the county was founded (1826) because of its “freedom” from all but a rudimentary government.

2. What motivated Newton Knight to rebel against the Confederacy?

In his 1895 deposition to Congress in support of his compensation claim, Newt Knight claimed that he opposed secession before the war and that he voted for his county’s anti-secession delegate. In a 1921 interview with Meigs Frost of the New Orleans Item, Newt reiterated that he supported the Union before the war began. A reluctant soldier by his own testimony, he cited passage of the Twenty-Negro Law as what convinced him to desert, once and for all, this “rich man’s war and poor man’s fight.”

Newt’s son, Tom Knight, cited personal motives as well as Unionism for Newt’s desertion. Tom claimed that Newt’s brother-in-law was abusing his family, and that Newt’s wife, Serena, had written Newt to tell him that the brother-in-law had given their only horse to the Confederacy.

It’s likely that both political and personal reasons motivated Newt to desert. Voters in Jones County elected a cooperationist delegate to the Mississippi secession convention. Military records confirm that Newt deserted directly following the battle of Corinth and passage of the Twenty-Negro law, as he claimed. There are also corroborating (published, but undocumented) claims that Newt murdered his brother-in-law for passing information about deserters to local Confederate authorities.

3. What damage did Knight do to the infrastructure of Jones County, Mississippi?

Official letters and reports among Confederate officers indicate that by early 1864, Jones County deserters, estimated variously at 300, 500, and 1,000 men strong, had seized control of Jones County’s government. They were reported to have chased tax officials out of town and to have killed several Confederate officials.

Even after the devastating raid by Col. Robert Lowry in mid-April 1864, during which many Knight Company men fled to the Union in New Orleans after other band members were executed or forced back into the Confederate Army, deserters were still feared by Jones County officials. In June 1864, Ellisville enrolling officer B. C. Duckworth described them to Gov. Clark as “thinned out,” thanks to the Confederate raid, but also stated that “We have not had a Justice Court Since the war commenced and if a man is found dead, the civil authorities pays no attention to it any more than if it was a dog.” Duckworth’s continued fear of deserters was expressed in his final sentence:

“. . . . Retain the contents [of this letter] as I am in a settlement that I am afraid to speak my sentiments on the account of the Deserters.”

4. In the movie we see slaves join his rebellion, but was Knight a supporter of abolition?

Several Jones County Knights, beginning with patriarch John “Jackie” Knight, were slaveholders. Newt’s father Albert, however, refused to own slaves, and Newt followed suit. In an obvious reference to him, Anna Knight, a younger member of Newt’s mixed-race family, wrote in 1951 that her grandmother (Rachel) and family “went with one of the younger Knights (Newt) who did not believe in slavery.” There is no evidence, however, that Newt publicly supported the abolition of slavery before or during the Civil War.

Newt Knight’s views on slavery seemed to have evolved over time. Around 1892, he credited northern abolitionists with stimulating the South’s “common people” to steal slaves and lead them to freedom via the Underground Railroad. He did not include himself as one who participated in the Underground Railroad, but in hindsight wished that nonslaveholding farmers had risen up and killed the slaveholders rather than be “tricked” into fighting their war.

5. What became of him after the Civil War?

Newt Knight remained politically active for about twenty years after the Civil War. In late 1865, he was appointed relief commissioner for the destitute of Jones County. In that position, he carried out several tasks assigned by U.S. military officers that thwarted the power of local pro-Confederate citizens.

On July 6, 1872, under the administration of Republican Governor Adelbert Ames, Newt was appointed deputy U.S. Marshal for the Southern District of Mississippi. On March 18, 1875, Gov. Ames appointed him Colonel of the 1st [Colored] Regiment Infantry of Jasper County. Newt’s last documented political appointment was in 1884 as a supervisor of elections for Albritton’s precinct, Jasper County, MS.

Between 1870 and 1900, Newt unsuccessfully filed several claims for federal compensation on behalf of himself and 54 members of the Knight Company for their wartime service to the Union. Gov. Adelbert Ames and Republican Senator Blanche K. Bruce, among others, supported his claims before Congress.

Until his death in 1922, Newt continued to farm and tend to his ever-growing family. Around 1908, he contributed land to the building of a private school for his descendants, who were forbidden by law to attend white public schools.

Who Are the Gypsies?

Gypsies: tramps and thieves? They have been described in literature as pernicious and mutinous robbers, swindlers willing to lie about others' futures in order to make money. In the seventeenth century, Juan de Quinones went so far as to call their customs evil and depraved. Even now, individuals are often warned to be especially careful when buying something from a Gypsy. But a more careful study of history calls this perception of Gypsies, many of whom prefer to be called Romani or Roma, into question.

Despite the stereotypes frequently attached to them, the people known as Gypsies have a long and varied history. Believed by some Europeans to have come from Egypt, the Gypsies were a nomadic people whose migratory lifestyle made it easier for Europeans to view them as a people close to nature but without a home. That perception has helped perpetuate stereotypes to this day.

The first document concerning the Spanish Gypsies dates to 1425, when King Alfonso V of Aragon granted Gypsies safe passage throughout his realm, especially for the purpose of pilgrimage. This all changed with the accession of Ferdinand and Isabella, Los Reyes Catolicos, to the Spanish throne. In keeping with the action they had taken against the Jews, Ferdinand and Isabella decreed in 1499 that any Gypsies who did not seek official permission to stay within the kingdom must leave or be severely punished, with a minimum of one hundred lashes for the first offense. Laws passed over the next three centuries were designed to eradicate Gypsy culture: Gypsies were prohibited from using their own language and made to wear the same clothing as their fellow Spaniards.

During the period of the Spanish Inquisition, Gypsies were stigmatized as heretics. As Spanish explorers made inroads into the New World, Gypsies were depicted in literature as cannibals, much like the supposedly "savage" Native American tribes encountered in America. Most commonly, authors described Gypsies as thieves willing to take advantage of vulnerable members of society. As a result they were even subjected to forced labor on ships, a travesty that continued through the end of the 18th century.

Yet even as the gitanos, as the Romani are known in Spain, endured such oppressive conditions under the law, they paradoxically became irreversibly associated with both Spanish culture and the exotic Orient. Few were aware that distinct groups of Gypsies had emerged throughout Europe, as the Spanish and Oriental association proved the most fascinating. In The Zincali, George Borrow records his surprise at learning that Gypsies in Spain and Gypsies in England do not use the same familial naming systems.

The growing interest in Gypsy culture during the nineteenth century is perhaps nowhere more evident than in French author Prosper Mérimée's novella Carmen. The novella, which would become the basis for the more famous opera by Georges Bizet, tells the story of the enchanting Gypsy woman Carmen and the hidalgo Don Jose, who becomes consumed with jealousy and kills her. Carmen was a femme fatale, a fiercely independent woman who embodied the opposite of the Victorian ideal, and so she became the image of the Gypsy in the minds of many Europeans. The unusual style of music and dance embraced by the Romani intrigued many Europeans, composers in particular. Debussy, among others, was profoundly influenced by Gypsy flamenco music and cante jondo, the vocal music that Spanish author Federico Garcia Lorca recognized as a deeply meaningful part of Spain's rich cultural heritage.

Significant change took place when Francisco Franco became dictator of Spain following the Spanish Civil War in 1939. Cinema under the dictatorship tended to emphasize the potential of the gitanos to be assimilated into Spanish society, as is evident in films such as Carmen de La Triana (1938); certain laws passed during this period, however, were used against the Spanish Gypsies even though they were not directed specifically at them. Nevertheless, gitanos living under Franco fared much better than the Romani of Eastern Europe during World War II. As many as two hundred thousand Gypsies who were not exempted for reasons of assimilation or "pure blood" died in the camps of Franco's former ally Adolf Hitler. Early in the war, Gypsy camps in Austria known as Zigeunerlager became temporary holding places for Gypsies during a brief period when deportations into the Reich were suspended. Many Gypsies died in the abysmal conditions of these camps, conditions the Nazis then pointed to as proof of the Gypsies' moral inferiority.

Nazi crimes against the Romani were not recognized as such for several decades. However, progress is gradually being made toward a deeper understanding of the Romani people. The 1975 Spanish documentary "Los Gitanos" was unique in its use of interviews with Gypsies who offered their own insight into their way of life, and works written by Gypsies such as Jose Heredia Maya have garnered increased interest in recent years. The Union Romani works internationally to promote understanding of Gypsy culture and to protect Gypsies from discrimination. Several distinct groups who identify as Gypsies came to the United States during the 1800s, and as many as one million live here today. These include Rom from Austria-Hungary and Russia, Ludar from Romania, and Romnichels from England. This diversity is a true testament to the many distinct Gypsy communities that have formed throughout history. The recognition that "Gypsy" describes a heterogeneous, worldwide group of people will perhaps encourage others to look beyond the traditional stereotypes and consider the unique history of different groups of Gypsies around the world.

Why Does the Democratic Party Have Superdelegates?

“The Ballot Box” by Thomas Nast

Superdelegates are the current focus of both liberal and conservative political commentators due to their central role in the 2008 and 2016 Democratic nomination contests. Where did these superdelegates come from? Why does the Democratic Party have superdelegates?

The rise of the superdelegates began in 1968, after a highly contentious, and at times violent, Democratic nomination process. Then-Vice President Hubert Humphrey won the Democratic presidential nomination without participating in a single primary election. This outcome disturbed many Democrats, particularly supporters of the anti-Vietnam War candidacies of Senators Eugene McCarthy and Robert F. Kennedy, who believed that the nominee represented neither the majority of the party nor its priorities, especially with regard to the war. Republican nominee Richard Nixon defeated Humphrey, and the Democrats entered a period of introspection, trying to identify what had gone wrong in the nomination process.

The McGovern-Fraser Commission was formed in 1968 to create a nomination process that would include more minority voices of the Democratic Party. The new process would also limit the influence of the party elites, who held the levers of power in 1968, which allowed Humphrey to become the nominee without a single primary vote. The result of the Commission’s work was a process that included open delegate selection, required a proportion of a state’s delegation to include racial minorities and women, and limited the number of delegates named by the State Democratic Committee to 10 percent of the total. Following the McGovern-Fraser Commission’s reforms, the number of states holding primaries to choose delegates increased each election cycle. Primaries allowed for party members, including party activists, to be more involved in the selection of delegates to the National Convention.

Despite Jimmy Carter's success in the 1976 general election, the Democrats still struggled to balance the more progressive party activists against the party's elite. This push-and-pull culminated in the contested nomination fight of 1980, when Senator Edward Kennedy challenged the re-nomination of Jimmy Carter. Carter won that fight but went on to lose to Ronald Reagan in the general election. While Senator Kennedy was proud that the Democrats had held a contested and more competitive convention than the Republicans, the underlying problem remained: the process was producing Democratic nominees who could not beat the Republicans. Many of the party's elite, including members of Congress, felt they needed a bigger role in the nomination process for the party to remain competitive.

After Carter's defeat, the Hunt Commission set out to write the delegate rules for the 1984 nomination process. The new rules created a class of "unpledged" delegates, who would be members of their respective state's delegation to the National Convention regardless of which candidate they supported. These delegates did not have to declare a candidate preference until the national convention and could change their preference there. They are called "superdelegates" because their selection to the national convention is not tied to whom they support for President; rather, it depends on who they are and what they mean to the Democratic Party, unlike "pledged" delegates, who cannot be chosen without declaring whom they support.

The driving idea behind the creation of this new group of delegates was to prevent highly contested nomination processes from producing a non-competitive candidate at the expense of the Party as happened in 1968 and 1972. As Commission Chairman Jim Hunt noted in a speech at the JFK School of Government in 1981: “We would then return a measure of decision-making power and discretion to the organized party and increase the incentive it has to offer elected officials for serious involvement.” Party elites would keep the long-term health and goals of the party in mind when casting their votes as unpledged delegates, preventing more embarrassing general election losses to the Republican Party.

Since so-called superdelegates were first used at the 1984 Democratic Convention, their share of the total delegate count has generally grown, though it fluctuates from cycle to cycle. Unpledged delegates now include the Democratic President or Vice-President, Democratic members of the US House of Representatives and Senate, Democratic state Governors, former Democratic Presidents and Vice-Presidents, former Democratic Leaders of the Senate, former Democratic Speakers and Minority Leaders of the House, and former DNC chairs. Originally, about two-thirds of Democratic members of the House of Representatives were considered superdelegates, a number that represented a compromise between the activist and establishment factions of the party when superdelegates were introduced for the 1984 National Convention.

According to Rules 8.A and 8.B (1984) of the Delegate Selection Rules, the House and Senate Democratic Caucuses chose 60% of their total membership to be superdelegates. In 1988, the DNC's Fowler Fairness Commission increased the share to 80% of the combined members of the House and Senate, and in 1996 the DNC's Rules & By-Laws Committee extended it to 100%. Because the number of Democrats in Congress and of Democratic Governors varies by election year, the superdelegate count fluctuates as well. In 1984, about 14% of the total delegates were superdelegates; in 2008, about 19% (796); and going into 2016, about 15% (712).

Superdelegates first drew national attention during the 2008 Democratic primaries, when Senators Hillary Clinton and Barack Obama competed for the nomination and remained close in the pledged delegate count up to the National Convention. In a recent interview with CNN's Jake Tapper, DNC Chairwoman Debbie Wasserman Schultz argued, "Unpledged delegates exist really to make sure that party leaders and elected officials don't have to be in a position where they are running against grassroots activists." Liberal and conservative critics who question the role of superdelegates in the presidential nomination process have used her comments to support their arguments.

Many party activists, in both the 2008 and 2016 primary seasons, have argued that the superdelegates are undemocratic and lack legitimacy because, unlike pledged delegates, they are not elected by the people. Regarding superdelegates and their role in the 2008 primary elections, former Representative and Vice-Presidential candidate Geraldine Ferraro stated, "These superdelegates, we reasoned, are the party's leaders. They are the ones who can bring together the most liberal members of our party with the most conservative and reach accommodation." Contrary to Ferraro, then-Senator Obama remarked, "The American people are tired of politics that is dominated by the powerful, by the connected." He nevertheless eventually gained enough superdelegate votes to secure the party's presidential nomination.

Article Citations

Piroth, Scott. “Selecting Presidential Nominees: The Evolution of the Current System and Prospects for Reform”. Social Education 64, no. 5 (September 2000). http://www.uvm.edu/~dguber/POLS125/articles/piroth.htm (accessed May 01, 2016).

Edwards III, George C., Martin P. Wattenberg, and Robert L. Lineberry, Government in America: People, Politics, and Policy (Addison-Wesley Educational Publishers Inc., 2002), 268.

CNN Politics, “Democratic Party Convention Rules Changes”. http://www.cnn.com/ALLPOLITICS/1996/conventions/chicago/facts/rules/index.shtml (accessed May 03, 2016).

Smith, Steven S. and Melanie J. Springer. “Choosing Presidential Candidates.” In Reforming the Presidential Nomination Process, edited by Steven S. Smith and Melanie J. Springer 1-22. Washington, DC: Brookings Institution Press, 2009. 6.

Ferraro, Geraldine A, “Got a Problem? Ask the Super,” New York Times, February 25, 2008. http://www.nytimes.com/2008/02/25/opinion/25ferraro.html?_r=0 (accessed May 05, 2016).

Kamarck, Elaine. “A History of ‘Super-Delegates’ in the Democratic Party.” https://www.hks.harvard.edu/news-events/news/news-archive/history-of-superdelegates (accessed April 30, 2016).

Democratic National Committee, 9-10.

Stanek, Becca, “Superdelegates, explained”, The Week, April 4, 2016, http://theweek.com/articles/615261/superdelegates-explained (accessed May 05, 2016) and Nagourney, Adam and Carl Hulse, “Neck and Neck, Democrats Woo Superdelegates,” New York Times, February 10, 2008. http://www.nytimes.com/2008/02/10/us/politics/10superdelegates.html (accessed May 05, 2016).

Debbie Wasserman Schultz, interview by Jake Tapper, February 11, 2016. http://www.cnn.com/videos/tv/2016/02/11/the-lead-talking-dem-tone-debbie-wasserman-schultz.cnn (accessed May 05, 2016).

Allen, Mike, “Obama claims delegate lead”, Politico, February 06, 2008. http://www.politico.com/story/2008/02/obama-claims-delegate-lead-008358 (accessed May 05, 2016); and New York Times, “Results: Democratic Delegate Count”, http://politics.nytimes.com/election-guide/2008/results/delegates/index.html (accessed May 05, 2016).

Huffington Post, August 3, 2010. http://www.huffingtonpost.com/2010/08/03/superdelegates-retained-b_n_669171.html

Schor, Elana and Dan Glaister, “Superdelegates switching allegiance to Obama” The Guardian, February 22, 2008. http://www.theguardian.com/world/2008/feb/23/uselections2008.barackobama (accessed May 05, 2016).

Michael, Terry. “The Democratic Party’s Presidential Nominating Process”. March 2000. http://terrymichael.net/PDF%20Files/DNC_PrezNomProcess.pdf (accessed May 10, 2016).

How Many Contested Conventions Have There Been?

Attendees at the 1952 convention

The growing possibility of a contested Republican convention in July has drawn new interest in the history of contested party conventions: are they common or unusual? The clear-cut conclusion is that, historically if not recently, they are closer to the norm.

Between 1840 and 1952, ten Republican conventions, fifteen Democratic conventions, and three Whig conventions went to multiple ballots; thirteen of those twenty-eight nominees won the Presidency, while the other fifteen lost the White House. It should be pointed out that the Democratic Party had more contested conventions because of the two-thirds rule, in effect from the first Democratic National Convention in 1832 until 1936, which required a nominee to win two-thirds of the delegates rather than a simple majority. Among the Democrats listed here, only Adlai Stevenson in 1952 was spared that hurdle, one the Whigs and Republicans never had to face.

Nineteen of these twenty-eight contested conventions occurred in the 19th century, between 1840 and 1896, a tumultuous and divided era in American politics when Presidential elections were often very close. Three Whig Party nominees had contested nomination battles over twelve years: William Henry Harrison in 1840; Zachary Taylor in 1848; and Winfield Scott in 1852, with Scott the only loser of the Presidency. Six Republican nominees had to fight for the Presidential nomination over 32 years: John C. Fremont in 1856; Abraham Lincoln in 1860; Rutherford B. Hayes in 1876; James A. Garfield in 1880; James G. Blaine in 1884; and Benjamin Harrison in 1888, all winning except for Fremont and Blaine.

At the same time, ten Democratic nominees engaged in battles for their party's nomination over 52 years: James K. Polk in 1844; Lewis Cass in 1848; Franklin Pierce in 1852; James Buchanan in 1856; Stephen Douglas in 1860; Horatio Seymour in 1868; Samuel Tilden in 1876; Winfield Scott Hancock in 1880; Grover Cleveland in 1884; and William Jennings Bryan in 1896, with only Polk, Pierce, Buchanan and Cleveland going on to occupy the White House.

Then from 1912 to 1952, another nine contested conventions occurred with multiple ballots, and we see four Republican nominees having a struggle for the nomination of their party, including Charles Evans Hughes in 1916; Warren G. Harding in 1920; Wendell Willkie in 1940; and Thomas E. Dewey in 1948, with only Harding winning the Presidency. Meanwhile, five Democratic nominees fought for their party’s nomination, including Woodrow Wilson in 1912; James Cox in 1920; John W. Davis in 1924; Franklin D. Roosevelt in 1932; and Adlai Stevenson in 1952, with only Wilson and FDR winning the Presidency.

So the thirteen nominees in contested conventions who won the Presidency were William Henry Harrison in 1840; James K. Polk in 1844; Zachary Taylor in 1848; Franklin Pierce in 1852; James Buchanan in 1856; Abraham Lincoln in 1860; Rutherford B. Hayes in 1876; James A. Garfield in 1880; Grover Cleveland in 1884; Benjamin Harrison in 1888; Woodrow Wilson in 1912; Warren G. Harding in 1920; and Franklin D. Roosevelt in 1932. Therefore, two Whigs, five Republicans, and six Democrats were elevated to the White House. The fifteen losing candidates included one Whig, five Republicans, and nine Democrats.

In the 112 years between 1840 and 1952 there were 29 presidential elections, and in twenty of them at least one party's convention failed to produce a nominee on the first ballot: slightly more than two-thirds of the time. Both parties had multiple ballots to select nominees in 1848, 1852, 1856, and 1860, before the Civil War; in 1876, 1880, and 1884, during the Gilded Age; and in 1920. Twelve of the fifteen national elections between 1840 and 1896, all but 1864, 1872, and 1892, featured contested conventions. Then from 1912 to 1952, over eleven election cycles, all but three (1928, 1936 and 1944) were years of contested conventions. Interestingly, contested conventions were avoided in the three consecutive election cycles of 1900, 1904, and 1908.

The contested conventions with the most ballots required were the 1924 Democratic convention which took 103 ballots to nominate John W. Davis; the 1860 Democratic convention which took 57 ballots at Charleston and two more in Baltimore to nominate Stephen Douglas in a bitterly divided party in which Southern Democrats had walked out; the 1852 Democratic convention which took 49 ballots to nominate Franklin Pierce; the 1912 Democratic convention which took 46 ballots to nominate Woodrow Wilson; the 1920 Democratic convention which took 44 ballots to nominate James Cox; the 1880 Republican convention which took 36 ballots to nominate James A. Garfield; the 1868 Democratic convention which took 22 ballots to nominate Horatio Seymour; the 1920 Republican convention which took 10 ballots to nominate Warren G. Harding; and the 1844 Democratic convention which took 9 ballots to nominate James K. Polk. Five of these nine nominees went on to become President, including Democrats James K. Polk, Franklin Pierce, and Woodrow Wilson; and Republicans James A. Garfield and Warren G. Harding. Notice that the Democrats had seven of these nine most contested conventions, and both Republicans in such situations won the White House.

Since the last truly contested convention in 1952, three later conventions have been memorable, although not technically contested. The 1976 Republican convention is remembered because Gerald Ford only narrowly edged out Ronald Reagan, but Ford was still able to win on the first ballot. The same applies to the tumultuous 1968 Democratic convention, where Hubert Humphrey won on the first ballot over Eugene McCarthy and George McGovern. And Ted Kennedy's challenge to Jimmy Carter at the 1980 Democratic convention did not prevent Carter from being nominated, although Carter went on to lose the Presidency partly as a result of the intraparty split. Notice that all three of these somewhat contentious conventions led to the defeat of the party's Presidential candidate, including two sitting Presidents, Ford and Carter, running for reelection!

What's a Lame Duck?

According to Google Trends, every two years there is a surge in search interest for the term “lame duck.” These surges coincide with presidential and congressional elections in the United States. As we approach Election Day this November, the spike in interest has quickly returned. Everyone is asking: what is a lame duck?

Merriam-Webster defines a lame duck as “an elected official or group continuing to hold political office during the period between the election and the inauguration of a successor.” This is generally regarded as the original meaning of the term, under which President Obama does not technically become a lame duck until November 8, 2016. Jordan Weissmann, senior business and economics correspondent for Slate, has accordingly argued that Obama is currently a second-term president, not a lame duck.

Using this definition, a president is a lame duck for only a little over two months, from the election in November until the inauguration on January 20. The period used to be much longer: until the 20th Amendment to the Constitution was ratified in 1933, inauguration day fell on March 4. The amendment moved up the inauguration date expressly to shorten the time the incumbent served as a lame duck.

The two-term tradition established by George Washington fostered the perception that all presidents in their second term were, in effect, lame ducks, because they weren't expected to run for re-election. Then Democrat Franklin Roosevelt shattered the two-term tradition by running for a third and then a fourth term. Republicans afterwards proposed the 22nd Amendment to the Constitution, limiting presidents to two terms; it was ratified in 1951, and we were back to where we started after George Washington. Since then, all presidents in their second term, Republican and Democratic alike, have worked hard to demonstrate that though they might not be eligible for a third term, they were by no means lame. Their political opponents, of course, have sought to shape the perception that they were.

This is what Republicans have been doing in Barack Obama's final years in office, much to the administration’s frustration. This past October, President Obama, speaking at a Democratic National Committee fundraiser in Chicago, said, “About a year and a half ago, people were saying I was a lame duck. We've been flapping our wings a lot over that year and a half.”

Although presidents usually find it difficult to rack up accomplishments in their final year in office, President Obama has been successful in doing so. In many ways his final year is proving to be one of his most consequential. James Goldgeier, dean of the School of International Service at American University, recently told Politico Magazine that Obama’s final year will be remembered for building relations with Cuba, the nuclear deal with Iran, and major agreements on trade and climate change.

Republicans have tried to stigmatize Obama as a lame duck to curb his power. Their most energetic use of the phrase came in the wake of the death of Supreme Court Justice Antonin Scalia in February 2016. Within hours, Senate Majority Leader Mitch McConnell announced that he would refuse to hold hearings on any nominee President Obama submitted to the Senate for “advice and consent.” The reason, he suggested, was that President Obama was now a lame duck.

Are the Republicans right to call Obama a lame duck? Their use of the term departs from the dictionary definition, but politicians frequently define words in their own way when it's convenient. That is true of Democrats and Republicans alike.

Why is the Republican symbol an elephant? Why is the Democratic symbol a donkey?

Thomas H. Nast

It is not because humans are, by nature, political animals, although some such as Aristotle have argued that this is the case.

The association of these two animals with the dominant political parties in the United States can actually be attributed largely to the work of the German-born cartoonist Thomas Nast, whose drawings helped even illiterate voters in the 19th century understand politics. At the age of six, Nast moved from Germany to New York City, where he studied art. After working for both Frank Leslie's Illustrated Newspaper and the New York Illustrated News, he took a job with Harper's Weekly in 1862. His cartoons had the power to change public opinion: President Lincoln considered Nast's illustration "Compromise with the South" a contributing factor in his re-election in 1864.

"Compromise with the South"

The two cartoons that were responsible for perpetuating the images of the Democratic Donkey and Republican Elephant appeared in Harper's Weekly in the 1870s. In a cartoon published on January 15, 1870, Nast depicted a donkey kicking a dead lion that is identified as Lincoln's late secretary of war, Edwin Stanton. The donkey was associated with "Copperheads," a group of Northern Democrats who opposed the Civil War. The use of the donkey imagery reflected Nast's hostility to the Copperheads. Donkeys had been popularly associated with the Democratic Party since Andrew Jackson, who had been referred to as a jackass by his opponents during the 1828 campaign. Nast's continued association of the donkey with Democratic organizations represented a revival of this symbol.

Cartoon depicting the Democratic Donkey, Thomas Nast, 1870

In 1874 Nast published a cartoon that was meant to dispel concerns that Republican President Ulysses S. Grant, who was then in his second term, might run for a third and become a dictator.  The Democratic Party press had been playing on voters’ fears. In this cartoon, Nast depicts an elephant fleeing in fear from a donkey that is wearing a lion's skin.

"The Third Term Panic"

The elephant had been associated with the Republican Party since it was shown celebrating Union victories in an advertisement that appeared in an 1864 issue of the newspaper Father Abraham. In creating this cartoon, it is possible that Nast took inspiration from the phrase "seeing the elephant." This phrase was commonly used by men traveling West during the Gold Rush of 1849, and it can refer both to participation in combat and the completion of difficult goals, especially those that are achieved in foreign places. It is also possible that a powerful but potentially dangerous animal simply seemed to Nast to be the best choice.

Unfortunately for Nast, his effort to influence readers with his cartoons proved less successful during the 1874 campaign, when the Republicans lost the majority in the House of Representatives. Nast responded by publishing a cartoon in which an elephant is caught in a trap that was set by a donkey, the Capitol Building looming in the background.

Post-campaign Cartoon, Thomas Nast, 1874

Only the Republican Party has officially endorsed the symbol Nast popularized. While Democrats might not want to embrace an association with an animal stereotyped as stubborn, neither creature is an especially flattering model. Andrew Jackson, however, managed to associate the donkey with positive qualities such as loyalty and perseverance, and politicians today likewise try to identify with the most positive qualities of their party's "mascot."

References

Blitz, Matt. "How a Donkey and an Elephant Came to Represent Democrats and Republicans." Today I Found Out. April 4, 2014. Accessed March 10, 2016.

Botkin, B.A. "The Elephant." In A Treasury of American Folklore. New York: Crown Publishers, 1944. Accessed March 10, 2016. Jonah World.

Conti, Gerald. "Seeing the Elephant." Civil War Times Illustrated. June 1984. Accessed March 10, 2016. Jonah World.

Miller, Fred. "Political Naturalism." Stanford Encyclopedia of Philosophy. Last modified 2011. Accessed March 10, 2016.

Stamp, Jimmy. "Political Animals: Republican Elephants and Democratic Donkeys." Smithsonian, October 23, 2012. Accessed March 10, 2016.

"Thomas Nast Biography." The Ohio State University Billy Ireland Cartoon Library and Museum. Last modified 2002. Accessed March 10, 2016. 
