
May 14, 2009

The Two Cultures, 50 years later




7 May was the 50th anniversary of C. P. Snow's famous lecture The Two Cultures. Snow, a novelist who had studied science and held technology-related government positions, decried the cultural rift between scientists and literary intellectuals. Snow's argument, and his sociopolitical agenda, were complex (read the published version if you want the sense of it; educational reform was the biggie), but, especially post-"Science Wars", the idea of two cultures resonates beyond its original context. The current version of the Wikipedia article says:

The term two cultures has entered the general lexicon as a shorthand for differences between two attitudes. These are

  • The increasingly constructivist world view suffusing the humanities, in which the scientific method is seen as embedded within language and culture; and
  • The scientific viewpoint, in which the observer can still objectively make unbiased and non-culturally embedded observations about nature.
That's a distinctly 1990s and 2000s perspective.

Snow's original idea bears only scant resemblance to the scientism vs. constructivism meaning. As he explained, literary intellectuals (not entirely the same thing as humanities scholars) didn't understand basic science or the technology-based socioeconomic foundations of modern life, and they didn't care to. Novelists, poets and playwrights, he complained, didn't know the second law of thermodynamics or what a machine-tool was, and the few that did certainly couldn't expect their readers to know.

Humanistic studies of science (constructivist worldview and all) would have undermined Snow's argument, but humanists were only just beginning to turn to science as a subject for analysis. (Kuhn's Structure of Scientific Revolutions was not published until 1962. Structure did mark the acceleration of sociological and humanistic studies of science, but was actually taken up more enthusiastically by scientists than humanists. Constructivism in the humanities only became widespread by the 1980s, I'd guess, and the main thrust of constructivism, when described without jargon, is actually broadly consistent with the way most scientists today understand the nature of science. It's not nearly so radical as the popular caricature presented in Higher Superstition and similar polemics.) Rather than humanists understanding the scientific method or scientists viewing their work through a sociological or anthropological lens, Snow's main complaint was that scientific progress had left the slow-changing world of literature and its Luddite inhabitants behind (and hence, scientists found little use for modern literature).

Snow wrote that "naturally [scientists] had the future in their bones." That was the core of the scientific culture, and the great failing of literary culture.

Looking back from 2009, I think history--and the point in it when Snow was making his argument--seems very different from the way it looked to him. Who, besides scientists, had the future in their bones in 1959? In the 1950s academic world, literature was the pinnacle of ivory tower high culture. Not film, not television, certainly not paraliterary genres like science fiction or comic books. History of science was a minor field that had closer connections to science than to mainstream history.

Today, in addition to scientists, a whole range of others are seen as having "the future in their bones": purveyors of speculative fiction in every medium; web entrepreneurs and social media gurus; geeks of all sorts; venture capitalists; kids who increasingly demand a role in constructing their (our) own cultural world. The modern humanities are turning their attention to these groups and their historical predecessors. As Shakespeare (we are now quick to note) was the popular entertainment of his day, we now look beyond traditional "literary fiction" to find the important cultural works of more recent decades. And in the popular culture of the 1950s through to today, we can see, perhaps, that science was already seeping out much further from the social world of scientists themselves than Snow and other promoters of the two cultures thesis could recognize--blinded, as they were, by the strict focus on what passed for high literature.

Science permeated much of anglophone culture, but rather than spreading from high culture outward (as Snow hoped it might), it first took hold in culturally marginal niches and only gradually filtered into insulated spheres of high culture. Science fiction historians point to the 1950s as the height of the so-called "Golden Age of [hard] Science Fiction", and SF authors could count on their audience to understand basic science. Modern geek culture--and its significance across modern entertainment--we now recognize, draws in part from the hacker culture of 1960s computer research. Feminists and the development of the pill; environmentalists; the list of possible examples of science-related future-making goes on and on, but Snow had us looking in the wrong places.

Certainly, cultural gaps remain between the sciences and the humanities (although, in terms of scholarly literature, there is a remarkable degree of overlap and interchange, forming one big network with a number of fuzzy divisions). But C. P. Snow's The Two Cultures seems less and less relevant for modern society; looking back, it even seems less relevant to its original context.




More Comments:


Andrew D. Todd - 5/20/2009

Ah, my further comment is below. I was asleep at the switch while posting, and pushed the wrong button.


Andrew D. Todd - 5/20/2009

In the first place, "science fiction" is something of a misnomer. Perhaps it should be "technology fiction," because the emphasis is not primarily on science-for-the-sake-of-knowing, but on applied science. Charles Darwin on the Beagle doesn't really qualify. Nor does someone who spends a year following a troop of monkeys around and describing their ecology and sociology.

I should explain that my research in the history of computing is mostly in the period 1940-1980, not in the age of the internet. I used the Babbage Center Oral Histories for the earlier period, and trade magazines and professional journals when they became available, circa 1960. The percentage of women in many aspects of computer programming, broadly speaking, ran about 25-35%, at least three or four times greater than that in engineering. At this date, this representation included bachelor's degrees in computer science. The comparatively low rate of women earning advanced degrees in computer science was an anomaly. One point of caution is that one should not equate academic Computer Science with computer programming. Computer Science is something of a failed discipline, in the sense that it was never able to dominate its industry in the way that Mechanical Engineering dominated the machine-building industries. People who spoke about Computer Science as a profession usually had the ulterior motive of banning programming by people who did not have Computer Science degrees. There were always vast numbers of people who learned what Computer Science they needed to know, and started programming, but resisted indoctrination, so to speak.

It also depends on whether you view Computer Science from the standpoint of biology, or from the standpoint of engineering. I was originally trained in a branch of mechanical engineering, Engineering Science, back in the early 1980's, and I pulled strings to be allowed to take a sophomore-level Computer Science sequence. This was just before personal computers became widely available, and a lot of the difficulty of covering the material had to do with what one might describe as "friction," meaning, for example, that the keypunches used for writing programs on punched cards were about as difficult to use as a Linotype (one couldn't see what one was typing). They were considerably more difficult to use than a typewriter, and there weren't enough of them, so that one had to wait until the middle of the night. The vending machines in the computer labs sold blank punch cards in packets of fifty for a quarter (like selling paper one page at a time), which gives one some idea of the contemporary scale of programming. When I talked my way into the Computer Science sequence, I had taken five programming courses in the engineering school, for about fifteen quarter-hours (ten semester-hours), and had written less than five hundred lines of code, distributed over at least a dozen programs. That was enough to get me a license to break the rules, because other people had done still less. Obviously, the rules changed when one had a computer of one's own. The material could have been covered much faster if the instructor had been in the position to assume that the students would be able to try things out during lunchtime. A whole series of factors like this meant that the subject matter of computer science was tending to vanish down into high school. Under these circumstances, nine semester-hours of college courses could be enough to teach most of the useful techniques of Computer Science.

In approximately 1980-85, there was a discontinuity in Computer Science, due to the advent of the personal computer. Big machines went into economic decline, as work was offloaded onto cheaper little computers. Big machine companies downsized. As the little computers grew, there was a disconnect between commercial development and research. The designers of a microcomputer with, say, a million transistors would inevitably look backwards about twenty years to a proven mainframe design of the same size, as codified in undergraduate textbooks, not to current research, which was likely to be about computers with a billion transistors or more. Young men like Bill Gates realized that they did not have to stay in school or go to work for big companies, that they could just take coursework assignments and commercialize them. If I were to pick one iconic event for academic computer science, it would be the University of Michigan's transfer of its computer science department from the liberal arts college to the engineering school, and John Holland's subsequent employment difficulties, which led to his becoming involved with the Santa Fe Institute in self-defense.

I suspect that what happened after 1985 was the "failed science" phase. On a lot of campuses, computer science was simply being absorbed into Electrical Engineering. They came up with a major called Computer Engineering, which was effectively a sub-major within Electrical Engineering, with a few computer science courses added. Alternatively, Computer Science could be subsumed into Applied Mathematics and Statistics, but that was less common. Under the circumstances, Computer Science was more than usually susceptible to being taken over by people who needed the Green Card. This happened in large sections of science and engineering anyway. Whole departments became occupied by people from Mumbai or Taipei, whose outlook was essentially Victorian, who still regarded the abolition of arranged marriages and dowry as a major step forward.

See my previous comment:
http://hnn.us/readcomment.php?id=111041&bheaders=1#111041
in:
http://hnn.us/roundup/entries/40331.html

Now, the origins of Women's Science Fiction belong to the period before 1980, not afterwards. Ursula Le Guin did her most creative work circa 1970. The period after 1980 is the period in which Marion Zimmer Bradley was running her writing school, and publishing anthologies of her students' work. I have located examples of ironic fantasy fiction in a computer trade journal, Datamation, from the 1970's. Parenthetically, Datamation was increasingly written and edited by women.

Here is my considered take on the "woman question" in computers. I have to say that the position taken by professional feminists is excessively simplistic:

http://rowboats-sd-ca.com/adtodd1a/free_am.htm
http://rowboats-sd-ca.com/adtodd1a/sm_3_fr.pdf


Sage Ross - 5/18/2009

Thanks for the enlightening comment!

Indeed, SF was a marginal genre in the supposed Golden Age. And as the alternative formulation goes, "the Golden Age of Science Fiction is 13".

I like your point about widespread illiteracies, but I would put it differently. It's no longer really possible to be literate in the full range of important cultural expressions (or even anywhere close, without devoting oneself full-time to achieving such literacies), but certain literacies (scientific among them) are the closest things that remain to the sort of modestly coherent cultural baseline that could be expected of earlier generations of literary intellectuals. And "media" (the catchall for these domains of literacy), broadly speaking, is more demonstrably related to power than anything else.

One of my favorite children's books is Dr. Seuss's "The Butter Battle Book", a wonderful example of the 'two levels' approach.

(I take exception to characterizing Harry Potter as bad children's books, but they do indeed read more or less the same to children and adults. But they aren't the kind of books one reads to one's child.)

I'm not sure what to make of your linkage of soft science fiction, information sciences, and women's science fiction, but I see some problems with it. Computer science is among the most male-dominated fields (along with physics, but not the SF mainstays of astronomy and astrophysics), numbers-wise. Biology in general actually reached parity recently in terms of undergraduates and grad students, if I recall correctly. But it's dangerous to think of biology in general (or molecular biology in particular) as an information science, especially if one wants to characterize it as a women's field. Molecular biology, as imagined and created by scientists in the 1950s and 1960s as an information science, has been one of the most self-consciously masculine and testosterone-fueled sciences, in distinction to the types of biology that molecular biologists denigrated and tried to feminize (i.e., anything remotely smacking of natural history). (From this perspective, an ecologist isn't a rugged outdoorsman, he's a stamp collector; information-based molecular biology has too much math, physics and chemistry for him.) The reconfigurations of biology since the 1960s aren't easy to make sense of in broad cultural terms, but I don't think it works to equate information sciences with a) soft SF, and b) women's SF, at least without a whole lot of caveats.


Sage Ross - 5/18/2009

Well, this was a history of science course at Yale, with a number of outspoken history of science majors. Probably not safe to extrapolate too far.

Times have surely changed, but the devolution you discuss probably still happens quite often when Snow is assigned.


Ben W. Brumfield - 5/18/2009

I'm surprised you're able to discuss Snow in a classroom without seeing the discussion devolve into science & engineering majors accusing humanities & social science majors of laziness and lack of intellectual curiosity, while the h/ss majors accuse the s/e majors of "vocational" focus and lack of intellectual curiosity.

Maybe times have changed.


Andrew D. Todd - 5/17/2009

There is an interesting book, L. Sprague de Camp and Catherine Crook de Camp, _Science Fiction Handbook: Revised: How to Write and Sell Imaginative Stories_ (1975). On page 69, they cite market research done by John W. Campbell of _Astounding Science Fiction_ in the late 1930's, and in 1949. The typical reader was a thirty-year-old male scientist or engineer with a college degree. Alternatively, this could be viewed as a bimodal distribution, consisting of high school or college students in science and engineering on the one hand, and somewhat older qualified scientists and engineers, say about forty years old, on the other. We are talking about people who were far from being representative of the general population. Oddly enough, the science fiction writers, being a bit older, and pre-GI-bill, had less impressive formal educational credentials than the readers had.

Isaac Asimov, one of the great hard science fiction writers, wrote not only science fiction but also a line of popular science paperbacks, covering exciting new areas such as modern physics, biochemistry and molecular genetics, neurology, etc., at a level more accessible than, say, Scientific American, notwithstanding that Scientific American had color illustrations and the books didn't. As a teenager, back in the early 1970's, I was given these books by my parents, even while I was covering the standard high-school science curriculum. If you view science fiction as children's books, there is a technique of writing children's books. They have to be written at two levels, one for the adult reading aloud, and the other for the child being read to. Otherwise, the child senses that the adult is bored, more or less at the level that a dog senses things, and tunes out in sympathy. Good children's books are full of things which the adult understands, but the child doesn't need to. You can see this fairly clearly in a book like _The Wind in the Willows_ (1908). For example, in the Wild Wood, the Mole meets a rabbit who is experiencing the classic symptoms of acute shell-shock. This was before the First World War, so I suppose Kenneth Grahame must have borrowed from Stephen Crane's _Red Badge of Courage_. If you view science fiction in fathers-and-sons terms, one contained text is written for someone about fourteen years old, and the other contained text is written for an adult who happens to be scientifically educated.

Bad children's books (e.g., Harry Potter, the Stratemeyer Syndicate stuff--Tom Swift, Hardy Boys, Nancy Drew, etc.) are not written at two levels. They are written down to what so-called experts think children can understand. The basic fallacy in this approach is that it assumes that the child's parents are illiterate wretched peasants. At a certain level, the whole point of the public school system, in the age of Jane Addams, was to remove the child from the home and prevent the parents from exploiting the child as cheap labor. The school was a kind of orphanage on the installment plan, and it produced things like the Stratemeyer Syndicate.

Soft science fiction is archetypically women's science fiction, and it has linkages to the information sciences. I don't know about biochemistry, but computer science is a comparatively feminine field, at times, and under the right circumstances, approaching equal representation. Here's a piece I put up a couple of years ago:

http://hnn.us/readcomment.php?id=105241&bheaders=1#105241
in:
http://hnn.us/blogs/comments/35114.html#comment


To: Ralph M. Hitchens: I think you have to realize that people in high places are not just scientifically illiterate. They are artistically illiterate, and literarily illiterate, and philosophically illiterate, etc. They are power junkies, in short, with very little interest in anything which is not demonstrably related to power.


Sage Ross - 5/16/2009

Textbook authors expect their readers to understand the science, and they spend a lot of time explaining it (even the basics their readers are supposed to already know). I think the same or similar dynamic is at work in a lot of hard SF. Sometimes the infodumps serve to show science-literate readers where the science *fiction* begins.

But I feel bad invoking the hard/soft distinction in the first place, since it's mostly lost meaning in recent SF. The often-preferred modern terminology is telling: "SF" or "speculative fiction" rather than "science fiction", or worse "sci fi". (Both the ontological and cultural borders between science and not-science are much more porous, it turns out, than scientists, philosophers and SF authors thought in the 1950s.)


Jonathan Dresner - 5/15/2009

I've sometimes managed to integrate a discussion of speculative fiction into my 20th century culture discussions. There is still a distinction in the field between the "soft" SF, in which the science has a kind of magic handwaving feel to it, and "hard" SF which is based on real science or something very close to it.

I don't know whether the Golden Age writers really expected their readers to understand the science: their characters spend an awful lot of time explaining it to each other.


Ralph M. Hitchens - 5/15/2009

I think Snow's "Two Cultures" is still useful, at least as metaphor. Several years ago I folded some words from Snow into a briefing on cybersecurity, to illustrate the fact that people in high places making government policy on IT and telecommunications issues were completely lacking (and generally uninterested) in learning much technical detail.


Sage Ross - 5/15/2009

Rob, thanks.

The last time I was in a class that used The Two Cultures (the only time, actually), something similar happened. It's also a good screen for who actually does the (fairly short) reading, since Snow's description is so different from the way people experience the sciences vs. humanities cultures in universities today.

It might be useful for a course to read Snow against selections from Shapin's The Scientific Life. As opposed to some sort of hybrid scientific-literary culture filling the role of the yearned-for bridge, has business become the third culture that mediates between (and threatens to encompass) the sciences and humanities?


Rob MacDougall - 5/15/2009

Great post, Sage.

In the history of/and science & technology course I just co-taught, we began the year by discussing Snow's two cultures and questioning the difference between them. (A difference, we tried to show, that has only existed for a short historical time. Did Isaac Newton or Ben Franklin have to choose between the two cultures?) I fear Snow became a bit of a punching bag as the year went on, every time we wanted to argue for greater interconnection between the disciplines.

I'm glad to see you include "geeks of all sorts" in your list of people with the future in their bones. The trope has been overdone, but I do think there's something to the idea (espoused by Kevin Kelly et al. at Wired) that "geek culture" in some ways fulfills Snow's call for a kind of third culture that can bridge the old gap between the two cultures.