Extra, Extra: History Repeats Itself
Railing against iPods, blogs, sitcoms, DVDs, the Internet, and the general scourge of popular culture, Emory Professor of English Mark Bauerlein, in the January 6 Chronicle of Higher Education, provides a barrage of statistics to show that today’s youth are ignorant of history. But before Bauerlein dons sackcloth and ashes over Homer Simpson’s progeny, he would be well advised to do his own history homework first.
“Ignorance of U.S. History Shown by College Freshmen” trumpeted the New York Times headline on April 4, 1943, a day when the main story reported that General George Patton’s troops had overrun Erwin Rommel at El Guettar. Providing support for Allan Nevins’s claim that “young people are all too ignorant of American history,” the survey showed that a scant 6% of the 7,000 college freshmen could identify the 13 original colonies, while only 15% could place McKinley as president during the Spanish-American War. Fewer than a quarter could name two contributions made by either Abraham Lincoln or Thomas Jefferson.
Often, students were simply confused. Abraham Lincoln “emaciated the slaves” and, as first president, was father of the Constitution. One graduate of an eastern high school, responding to a question about the system of checks and balances, claimed that Congress “has the right to veto bills that the President wishes to be passed.” According to students, the United States expanded territorially by purchasing Alaska from the Dutch, the Philippines from Great Britain, Louisiana from Sweden, and Hawaii from Norway. A Times editorial excoriated these “appallingly ignorant” youth. “Either the college freshman, recently out of high school, were poorly prepared on the secondary level,” surmised Times reporter Benjamin Fine, “or they had forgotten what they learned about United States history.” And, of course, there were no iPods to kick around back in 1943.
Similar hand-wringing came into fashion just in time for the nation’s Bicentennial, this time with Bernard Bailyn doing the Simon-says. With the aid of the Educational Testing Service (ETS), the Times surveyed nearly two thousand freshmen on 194 college campuses. On May 2, 1976, the results rained down on the Bicentennial parade: “Times Test Shows Knowledge of American History Limited.”
The fact is that teenagers have never done well on the tests that Professor Bauerlein uses to worry himself into a tizzy. Fortunately, we can take solace in the fact that historical knowledge, to use Michael Schudson’s apt phrase, “seeps into the cultural pores” even if such knowledge is not “readily retrievable by seventeen-year-olds answering a quiz” (see Schudson’s Watergate in American Memory, Basic Books, 1992). Large-scale tests may tell us what we already know: that young people (and most adults) become confused staring down ETS’s version of historical knowledge. But to assume that multiple-choice tests constitute the alpha and omega of historical knowledge thwarts any serious investigation of American intellectual life and culture.
Bauerlein puts great faith in the results of the National Assessment of Educational Progress (NAEP) examinations in history. But as I have shown elsewhere (“Crazy for History,” Journal of American History, March 2004), the NAEP is rigged. Items that students overwhelmingly get right during the pilot phase of the exam are eliminated from the item bank in favor of those that create a statistical “spread” in the distribution. So don’t hold your breath, Professor Bauerlein. No matter how many iPods you succeed in decommissioning, you won’t wake up to a headline that reads, “US Students Score Well on New History Test.” The outcome of such tests is as fixed as a Vegas slot machine. In this rare case, history does repeat itself.
In that spirit, let me end with a test question of my own:
Identify the source of the following characterization of student knowledge: “Surely a grade of 33 in 100 on the simplest and most obvious facts of American history is not a record in which any high school can take pride.”
Is this yet another indictment of today’s O.C.-addled, Eminem-intoxicated, “tight-chill-dude”-blathering students? Sorry. What about those zoned-out hippies from the ’60s? Wrong again. No, we would have to go all the way back to 1917, when the first “objective” history test was given to American high school and college students, for the source of this quotation. And think for a moment who went to high school and college in Texas in 1917.
Instead of blaming students, maybe it is time for us -- as teachers -- to rethink our own broken methods. Now that’s something to put out on a podcast.
Irfan Khawaja - 8/4/2006
I have no ax to grind about the validity of NAEP results, and I'm very interested to read your paper, but the reason you give here to assert that "NAEP is rigged" is not a good one.
I worked for NAEP off and on between 1999 and 2005, mostly as an editor, but also as a scoring leader for one iteration of their US history scoring. I happen to know the entire NAEP staff personally, I've scored and done fact-checking for items, and I've seen on a first-hand basis how the tests are put together and scoring is done.
The reason you give for excluding questions at the pilot-test level is just one of several reasons for excluding questions. The other reasons for exclusion are not consistent with your case. Some questions are excluded because they're too hard. Others because they're too easy. Others because they're unclear. Others because the scoring can't consistently be operationalized. Others because they raise "sensitivity" complaints. Etc. That's not "rigging"; it's fine-tuning. Watching the NAEP process up close, I often found myself wishing academia were half as rigorous.
If there is "rigging" involved here, the problem for your thesis is that the rigging is not consistently in one direction. It's all over the place.
In any case, the NAEP database is almost literally an open book. (I know, because I helped edit parts of it for online release.) To sustain the claim that NAEP US History is "rigged" in the sense you describe, we'd need evidence that the exam is too hard for the relevant grade level, i.e., that it systematically sets out to exclude what students know. I'd challenge the historians here to take a look at the NAEP US history exams and judge for themselves. In what respect does the result of the NAEP process show evidence of "rigging"? Or more precisely: where is the evidence that NAEP is rigged so as to produce the result that students come out looking ignorant?
Maybe the answer is in that paper you mention, but from my (admittedly limited) experience at NAEP, I don't see it.
I'll put a link to NAEP in a separate post.
Irfan Khawaja - 8/4/2006
Here is a link to the main US history page for the NAEP assessment:
The relevant links within the preceding link are on the right side of the page, 2/3 of the way down (one link goes to "questions", the other goes to the "data tool").
Irfan Khawaja - 8/4/2006
I'll read the JAH paper, but on distractors, I'd just say that sometimes Prof. Wineburg's criticism applies, but sometimes it doesn't. It depends on the item and on the distractors.
The purpose of a distractor is to provide an answer that has the semblance of correctness without being correct. It's a test of whether the test-taker can distinguish the right answer from one that approximates correctness but is decidedly incorrect.
That's a useful skill to have, and it is in fact used on doctoral students at oral exams. The examiner asks a question and asks the examinee which of a range of possible answers is the right one, where the incorrect ones are distractors designed to approximate correctness.
In fact, the distractor technique harks back past the 1930s to the Socratic dialogues, where Socrates uses it all the time, and has a good rationale for doing so. In my experience, sometimes NAEP did, too.
Lendol Calder - 1/14/2006
McClay says: "[Memory] requires that we possess stories and narratives..." If a metanarrative of American history is now deeply problematic, how much can we expect students to recall in the way of "essential" information? Has anyone ever sat down with other historians and attempted to draw up a list of items that every student should know? Some years ago I tried to do this with some colleagues; it was an illuminating exercise. Our different interpretations of the course of American history left us with a very short list of items, short enough that when the exercise was over, we laughed and said, It would be silly to test for this! So we shelved the idea of assessing students' knowledge of history with a multiple-choice competency exam. When historians of the future write a story about this moment in time, I wonder if they will invoke the decline of metanarrative to explain the renewed interest in attending to the cognitive habits of historical mindedness?
Sam Wineburg - 1/10/2006
Thank you, Jonathan. Much of this has to do with the psychometrics of item analysis and its folkways, and with what are known as p values (probability values) for the items – a sense of the efficiency of the "distractor" (or non-keyed) answers. I try to explain such abstruse matters in lay persons' terms in my JAH paper, and suggest that interested readers go there for more information. On a moral level, I would ask why historians would want to be involved in trying to "distract" students by writing "good" multiple-choice items, i.e., those in which the "distractors" work (e.g., snare unsuspecting students with the 'almost-right' answer). To me, the entire enterprise is fraught with 1930s-era assumptions about human abilities that few of us subscribe to today. If multiple-choice items are so vaunted, why don't we use them when it matters most: at our doctoral exams?
Jonathan Rees - 1/10/2006
I understand what you mean by "rigged" because I read the JAH article you cite, and I remember it goes well beyond your book in explaining the problems with the NAEP. It was a huge eye-opener, and I don't think you're doing yourself justice in the short sentence you have here summarizing that argument.
Perhaps a better explanation would help for those who haven't read it or don't remember.
Sam Wineburg - 1/10/2006
I think we are in agreement – it would be wonderful if students emerged from high school with some core knowledge that informs their decision-making as citizens. Right now, that is often not happening. One (but not the only) cause surely must be today’s textbooks, paper behemoths exceeding 1000 pages, that provide little sense of peaks and valleys, little sense of when to delve deeply and when to dart quickly. I address this issue in the J. American History piece referred to in my blog entry. I can do no better than quote the crystalline prose of Wilfred McClay:
Memory is most powerful when it is purposeful and selective. It requires a grid, a pattern of organization, a structure within which facts arrange themselves in a particular way, and thereby take on significance. Above all, it requires that we possess stories and narratives that link facts in ways that are both meaningful and truthful, and provide a principle of selection—a way of knowing what facts are worth attending to. . . . We remember those things that fit a template of meaning, and point to a larger whole. We fail to retain the details that, like wandering orphans, have no connection to anything of abiding concern. . . . The design of our courses and curricula must be an exercise in triage, in making hard choices about what gets thrown out of the story, so that the essentials can survive. . . . We need to be willing to identify those things that every American student needs to know and insist upon them . . . while paring away vigorously at the rest.
Stephen Tootle - 1/10/2006
In the past we didn't have to listen to young people-- and they couldn't vote.
Robert KC Johnson - 1/10/2006
I quite agree. But I do think that one (not the only) task of HS history is to ensure that all students come away from the course with a basic and broad factual knowledge of key events and people in US history. Otherwise, once these students get to college, they're not prepared for college-level History work.
This is what troubles me about the stats that Bauerlein cites, although I disagree with his interpretation of the cause.
Caleb McDaniel - 1/10/2006
In constructing a dichotomy between "student connectedness" outside of the classroom and academic engagement, I think Bauerlein also misses an opportunity to see how his observations about student priorities might be pedagogically useful. If students do have a greater facility than ever before for forging social networks online, then perhaps it would be worthwhile for teachers to consider using such networks more often as sites of academic discussion, whether through electronic forums on Blackboard, group blogs, or online chatting to supplement office hours. Such pedagogical adjustments would not just be a case of caving to popular culture: they would be a way of using skills that students already have to give them skills that they do not.
Sam Wineburg - 1/10/2006
To restrict the range of options to either conventional standardized tests or loosey-goosey dispositions is impoverished indeed. The range is much broader than such a binary. For example, what characterizes the AP test for most students is not its MC items, but the fact that they have to interpret, think about, and synthesize 8-13 documents in an analytic essay. This is what we provide to our wealthy (in material or cultural capital) students; for the rest, we doom them to an assembly-line education. For an interesting comment on this (which does, and does not, align with my thesis) see Fareed Zakaria's Newsweek column, Jan. 9, contrasting Singapore and the US.
Robert KC Johnson - 1/10/2006
I agree with Ralph that Bauerlein's attacks on youth culture aren't convincing. But the data he cites are surely troubling.
Standardized tests are not the best way to determine how much history is learned in high school, but given what I've seen from the "dispositions" movement in teacher-training programs, I'd shudder to think of what sort of HS history curriculum we'd see without them.
Ralph E. Luker - 1/9/2006
Sam, I've been reading your book, so I wasn't surprised by the 1917 statistics. Still, I'm not sure what to make of them (and the periodic results that we've had in the interval), except that today's students are not notably _more_ uninformed about fairly basic American history than their grandparents and great-grandparents were. The fact that a broader segment of the population is included in the data in the later years is an important one. Still, for all their relativizing of Bauerlein's hysteria, the data sets do suggest that there's a failure of the promise of a democratic education, don't they?