[Cross-posted to Cliopatria & Digital History Hacks]
I've been invited to join the crack team of bloggers at Cliopatria, so I will be cross-posting here and at Digital History Hacks from time to time. I'm excited by the opportunity to develop a series of posts on a topic of general interest to historians, while keeping enough technical content to satisfy my regular readers. So... let's build a time machine!
At some point in the early nineties I copied down a quote by Loren Eiseley in a commonplace book:
A man who has once looked with the archaeological eye will never quite see normally again. He will be wounded by what other men call trifles. It is possible to refine the sense of time until an old shoe in the bunch of grass or a pile of nineteenth-century beer bottles in an abandoned mining town tolls in one's head like a hall clock. This is the price one pays for learning to read time from surfaces other than an illuminated dial. It is the melancholy secret of the artifact, the humanly touched thing. (The Night Country, 1971, p. 81)
I made a note of the source, but not how I came upon it. I know I wasn't reading Eiseley's work because I used to keep lists of the books that I read. At the time I was studying linguistics and cognitive science, and in the early summer of 1994 I dipped into ecological anthropology. I assume that I came across the quote then. In truth, I don't remember the context as clearly as that makes it sound: I'm making inferences from my old notebooks and from Usenet posts that have been archived online for 15 years. Reading through those old posts reminds me of what I was doing at the time, although I remember being quite a bit cooler than some of my posts make me sound. I wish that that were my own melancholy secret, but at some point in the 1990s I realized that everything that I had ever typed into a computer was going to be saved forever and eventually made available to everyone.
The Eiseley quote stuck with me, and occasionally I would imagine what it would be like to have an 'archaeological eye.' Being given more to science fiction than fantasy, I tended to imagine a mechanism or instrument or device of some sort, rather than a magical object like a crystal ball. Now at this point I should probably stop and reassure you that I know that it may well be impossible to build a time machine in general, and that it is certainly impossible for me to build one. But I think it can sometimes be quite productive to start with something that you know is impossible, and think through some of the implications anyway. As a genre, fiction is ideally suited to this kind of gedankenexperiment; academic monographs less so. Blogs lie somewhere in between. As my fellow Cliopatrian Timothy Burke once wrote, a blog is an ideal "place to publish small writings, odd writings, leftover writings, lazy speculations, half-formed hypotheses." Plus, time machines are a heck of a lot of fun.
When most people think of a time machine, I suspect they probably imagine something like the H. G. Wells version: jump in, set the dial to whenever, hit a button and you are there. This kind of time machine allows (or requires) you to alter the course of events. Sometimes the results are tragic. In the classic Ray Bradbury story "A Sound of Thunder," one of the characters steps on a prehistoric butterfly and changes the future decidedly for the worse. Sometimes the results are comic, as in Connie Willis's re-take of Jerome K. Jerome. A skeptic might point out that if this kind of time travel were ever going to be possible, we'd already be surrounded by people whizzing back from the future to take our fresh water or oxygen, or buy stock in Google, or exhort their younger selves to study harder, or whatever. For historians, the real problem with being able to alter the past is that it would seem to allow for Bill & Ted-style rewriting on a grand scale, and thus make history utterly pointless. The mutability of history, after all, crucially depends on the immutability of the past.
In fact, physicists are split on the possibility of time travel. Some of those who think time travel might be possible suggest that there could be some law of physics that prevents the creation of weird causal loops--you know, the kind where you go back in time to become your own great-great-grandfather or -mother. Stephen Hawking, for example, postulates a "chronology protection conjecture." (For more, see the article by Paul Davies in Scientific American or his subsequent book.) So when I think of an 'archaeological eye' I usually imagine something more voyeuristic: the ability to see or hear or in some way measure the events of the past without affecting the outcome.
Years later, let's say around Y2K, I was studying history. Reading Carlo Ginzburg's essay "Clues" reminded me of the Eiseley quote once again. Wouldn't it be cool to write a history based on virtuoso readings of material evidence? (Like Ginzburg, I read a lot of Sherlock Holmes as a kid.) Unfortunately, the only thing that I was arguably a virtuoso at reading was books, and even that was a stretch. Fortunately I was also reading the work of New Institutional Economists at the time. My head was full of ideas of information costs and transaction costs. Since it costs something to learn something, we can never know very much. I had about the same chance of learning to read old shoes or nineteenth-century beer bottles as I did of learning to read sheet music: fairly low. Choosing to specialize in reading one kind of material evidence would preclude learning to read an almost infinite number of other kinds of traces.
What to do? The key word is 'specialize'. As with other kinds of work, there is a division of interpretive labor. In order to make use of material trace evidence, you don't necessarily need to be able to read it yourself, you simply need to be able to find someone who can. With the traditional tools of scholarship it would have been very difficult to assemble a synoptic view of other people's reconstructions of the past from physical evidence. The emergence of search engines like Google drastically lowered those information costs, however. If you type interpret "wear marks" into Google, you will find a reference to a 1958 paper in the British Chiropody Journal on using shoe wear marks to diagnose foot troubles. You'll find a white paper on how to use scattered light to assess surface and bulk defects in various materials, a paper on the use-wear of stone tools, and so on. You'll find, in other words, a world of chiropodists, materials scientists, forensic scientists, engineers, archaeologists and thousands of other kinds of specialists busy reconstructing the past from its material traces. These are people in search of a usable past. They care about past events because they have consequences in the present, and the only way they can access that past is by looking for its indexical signs. These experts don't always agree with one another; the mutability of history also depends on the fact that learning is costly. But since our environment is composed entirely of survivals from the past, it is a kind of time machine, constantly transporting everything from some past into the present. It is one kind of time machine that is worth having... even if it does seem to work in one direction only and is remarkably difficult to use. (For more on the idea of the environment as an archive of material traces see my new book The Archive of Place.)
Next time: the archive as time machine.
Our story so far: even though we know that it's probably impossible, we've decided to think through the problem of building a time machine. In the last episode we decided that we wouldn't want one that allowed us to rewrite the past willy-nilly... because what would be the point of history then? It turned out, however, that the world itself is a pretty awesome time machine, tirelessly transporting absolutely everything into the future. Today we look at the archive widely construed: one small portion of the world charged with the responsibility of preserving our collective representational memory.
As every schoolboy used to know (at least back when there were 'schoolboys' who knew the Classics), Thucydides wanted his work to be "judged useful by those inquirers who desire an exact knowledge of the past as an aid to the interpretation of the future ... an everlasting possession, not the showpiece of an hour." The fact that we know this twenty-five centuries later speaks pretty well for the potential of preserving representations for long periods of time. Precisely because they can be readily transferred from one material substratum to another, written words, well, remain. Of course, since languages change over time there can be difficulties of decipherment or translation, and exactly which words survive can be a real crap shoot.
With the relatively recent spread of optical, magnetic, and other media, it became necessary to archive media readers, too. The endurance of the written word (or new cousins like photographs and phonographic records) now also depended on devices to amplify, transduce or otherwise transform signals into a form that is visible or audible to human users. Along with the obsolescence of media, librarians and archivists now had to worry about the obsolescence of reading devices.
Enter the computer. Representations are now being created in such quantity that the mind boggles, and they can be transformed into one another so easily that we've taken to referring to practically all media as simply "new." This, of course, confronts librarians and archivists with a class of problems we could also refer to as "new." My students and I were talking about this in my digital history grad class a few weeks ago. How do we store all of this born-digital material in a form that will be usable in the future, and not just the showpiece of an hour? One possibility, technically sweet but practically difficult, is to create emulators. The archive keeps only one kind of machine: a general-purpose computer that is Turing-equivalent to every other. In theory, software that runs on the general-purpose machine can emulate any desired computer.
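To make the idea concrete, here is a sketch of what an emulator is at heart: a fetch-decode-execute loop. The four-instruction machine below is invented purely for illustration; emulating a real computer differs mainly in scale, not in kind.

```python
# A minimal sketch of emulation: a fetch-decode-execute loop for an
# imaginary accumulator machine. The instruction set is invented for
# illustration only.

def emulate(program, memory):
    acc = 0   # accumulator register
    pc = 0    # program counter
    while pc < len(program):
        op, arg = program[pc]
        if op == 'LOAD':      # copy a memory cell into the accumulator
            acc = memory[arg]
        elif op == 'ADD':     # add a memory cell to the accumulator
            acc += memory[arg]
        elif op == 'STORE':   # write the accumulator back to memory
            memory[arg] = acc
        elif op == 'PRINT':   # display the accumulator
            print(acc)
        pc += 1

# Add the contents of cells 0 and 1, store the result in cell 2, print it.
memory = [2, 3, 0]
emulate([('LOAD', 0), ('ADD', 1), ('STORE', 2), ('PRINT', None)], memory)
```

Software preserved for the general-purpose machine in the archive would then run inside loops like this one, no original hardware required.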
My students are most familiar with systems that emulate classic video and arcade games, so that framed our discussion. One group was of the opinion that all you need is the 'blueprint' to create any technological system. Another thought that you would be losing the experience of what it was like to actually use the original system. (Here I should say that I'm solidly in the latter camp. No amount of time spent on the CCS64 emulator can convey the experience of cracking open the Commodore 64 power transformer and spraying it with compressed air so it wouldn't overheat and crash the machine while you were hacking.)
More than this, however, the idea that a blueprint is all you need to recreate a technical system shows how much more attention is focused on the ghost than on the machine these days. The showiness of new, endlessly plastic media obscures their crucial dependence on a systematic colonization of the nanoscale. I might be able to read a microfiche with sunlight and some strong lenses, but never a DVD. The blueprint for a DVD reader is completely useless without access to some of the most advanced fabrication techniques on the planet. So we're in the process of creating all this eternally-new stuff, running on systems whose lifecycles are getting shorter every year. What would Thucydides say?
Next time: how and why to send messages way into the future.
Tags: Cliopatria | gedankenexperiment | time machines
Last week some time, my eyes popped open in the middle of the night and I realized that it's been quite a while since I blogged. I was too tired to get up and rectify the situation, but, of course, that didn't stop me from lying there half-awake and thinking about blogging. My mind turned to the fact that I've been even more remiss about cross-posting to Cliopatria from time to time. I imagined that some Cliopatrians (O.K., Ralph E. Luker) were probably posting more than a hundred times for each one time that I managed to.
From there I got to thinking about the students in my digital history grad class. They have to blog as the written component of their coursework. Although I'm very explicit about my preference for quality over quantity, you'd think that they would be motivated to produce approximately the same amount of written work as one another. Nevertheless, I had a sense that there could easily be an order of magnitude difference in output between the most- and least-frequent posters. I tried to visualize what the distributions would look like: probably a power law. Since that night, I've had a chance to check. The figure below shows the number of times that various members of Cliopatria and of my grad class posted between the beginning of September and now.
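If you'd like to make the same kind of figure for your own group blog, the tally behind it takes only a few lines. Here's a rough sketch, assuming you've already reduced your data to one author name per post; the names and counts below are invented placeholders.

```python
from collections import Counter

import matplotlib.pyplot as plt

# Tally posts per author and plot the rank-frequency distribution.
# On log-log axes, a power law shows up as a roughly straight line.
# One entry per post; the data here are made up.
posts = ['a'] * 104 + ['b'] * 31 + ['c'] * 12 + ['d'] * 5 + ['e'] * 3

counts = sorted(Counter(posts).values(), reverse=True)
ranks = range(1, len(counts) + 1)

plt.loglog(ranks, counts, 'o')
plt.xlabel('rank of poster')
plt.ylabel('number of posts')
plt.show()
```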
I think most academics, including my students, quickly learn that they have strong preferences for some kinds of writing rather than others. One person likes to write abstruse monographs, one popular books, one carefully-crafted essays. Some of us have found that we're able to blog and some people seem to be especially good at it. There's an ecology of scholarly production, and we all have to find our niche.
So I was lying there thinking about blogs and I realized that it reminded me of something, what was it? Oh yeah, frog communication. (It was the middle of the night.) Many years ago I read an utterly charming paper on the subject in Scientific American, and it's stuck with me (Peter M. Narins, "Frog Communication," Sci Am, Aug 1995, 78-83). In its efforts to attract females, the male coqui, a tiny Puerto Rican frog, makes a chirping call that is louder than a jackhammer. This raises many questions, not the least of which is "how [does] such a small creature protect itself from its own racket?" The answer turns out to be a fascinating lesson in evolution and engineering, so be sure to read the paper. What's interesting from the point of view of blogging, or scholarly production more generally, is that the frogs also have a special neural mechanism that follows the periodic calls made by other creatures, predicts windows of relative silence, and allows them to blast their own calls into the gaps.
Now based on my own experience to date, I rarely blog in response to external factors. Instead, I blog when I can get up the gumption to do so. Like many scholars, I've grown used to the idea that when you write something, you're adding it to a body of knowledge that is growing, if not monotonically, at least pretty steadily. On that view, the relative timing of different contributions doesn't matter so much, unless you're in a race for the Nobel prize or something. As historians, we can usually afford to take the long view.
Frogs, however, don't take the long view. As Charles F. Hockett argued in another classic Scientific American article, human language is apparently unique among animal communication systems because it allows us "to talk about things that are remote in space or time (or both) from where the talking goes on" ("The Origin of Speech," Sci Am, Sep 1960, 89-96). For the frog, there's right here, right now, give or take a few hundred milliseconds to squeeze in the call where it is most likely to be heard.
Thinking about blogging as a contribution to an infinite archive pushes us a bit too close to the frog's view of the world for comfort. Imagine having to squeeze your post in right here, right now, the only place where it has a hope of making any difference for anybody. The history blogosphere is already too vibrant, too far-flung for most people to monitor effectively. As more voices are added to the cacophony it's going to become harder and harder to be heard. How can we hope to get it right? Here's where we have a real advantage over the frog. We have the ability to create machines which simulate neural and evolutionary processes. Imagine the blogger of the future, augmented by an artificial system that monitors discourse, predicts gaps and pops in your contribution when and where it's most likely to be cited. Over time, the system learns what you are capable of, and becomes more effective at getting your message out. Does that sound crazy? Ribbit!
Tags: blogs | Cliopatria | Eleutherodactylus coqui | findability | machine learning
One of the distinctions that applied mathematicians make is between linear and nonlinear problems. In a linear problem, you have a set of variables that you can tweak, and as you adjust each variable you can get ever closer to an optimal configuration. Using techniques such as linear programming, it is straightforward to determine precisely how many scoops of raisins to put in your box of bran, or how many Cherries will make a Garcia. Many problems, alas, don't admit of this kind of solution. In the days before digital everything, it was all too common to futz around with the brightness knob, color balance, rabbit ears, and position of pets and small children to try and get a TV signal that didn't look like it was being relayed from the dark side of the moon. The slightest change could make things drastically better or worse, with no apparent logic.
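Before tackling the jagged stuff, it's worth seeing how tame the linear case is. Here is a toy linear program using scipy's linprog solver; the recipe numbers are invented, and the only point is that a linear objective plus linear constraints can be handed straight to an off-the-shelf routine.

```python
from scipy.optimize import linprog

# A toy linear program with invented numbers: choose scoops of raisins (r)
# and bran (b) to maximize a 'tastiness' score, 3r + 2b, subject to
#   r + b <= 10   (scoops that fit in the box)
#   r     <= 4    (raisin supply)
# linprog minimizes, so we negate the objective to maximize.
result = linprog(c=[-3, -2],
                 A_ub=[[1, 1], [1, 0]],
                 b_ub=[10, 4],
                 bounds=[(0, None), (0, None)])

print(result.x)   # optimal scoops: 4 raisins, 6 bran
```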
The trouble with nonlinear problems is that you pretty much have to get every variable right at the same time. Think of the space of all possible states of your problem as a kind of dark landscape, and the optimal solution as the highest point in that space. Linear problems have smooth landscapes. If you start groping your way up a hill, you end up at the top and that's the best you can do overall. Nonlinear problems have jagged landscapes. It is easy to feel your way up a low peak and get stuck there, unaware of higher peaks elsewhere.
There are different methods for solving nonlinear optimization problems; one of the more popular makes use of genetic algorithms. First you find a way of representing all of the possible solutions to your problem. In the TV example, you might want to represent the angle of each of the two antennas, the xy coordinates of the napping cat, the rotational angle of the brightness knob, and so on. A list of each of these variables is known as a genome, and a list of particular values as a genotype. Generate a small random population of genotypes, and test each one to see how good it is. This test is called the fitness function. In our example, it is the person sitting on the couch shouting "not bad," "pretty good" or "awful" each time an adjustment is made. Once you know how well each of your solutions performed, you make a new generation of solutions by mutating and recombining the genomes of your old ones. Over time, the fitness of the population increases, and the artificial selection mechanism eventually finds solutions that are near optimal. (If you want to start programming your own GAs, I recommend Mitchell's Introduction and Goldberg's Genetic Algorithms as good places to start.)
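Here is a minimal sketch of the loop just described. Since we can't put a reader on the couch, the fitness function below simply rewards genotypes close to a hidden optimal setting; the population size, mutation rate and so on are plausible but arbitrary choices.

```python
import random

GENOME_LENGTH = 5   # e.g., two antenna angles, cat x, cat y, brightness
OPTIMUM = [0.3, 0.7, 0.1, 0.9, 0.5]   # hidden from the algorithm

def fitness(genotype):
    # The couch critic in numerical form: higher is better.
    return -sum((g - o) ** 2 for g, o in zip(genotype, OPTIMUM))

def mutate(genotype, rate=0.1):
    return [g + random.gauss(0, 0.05) if random.random() < rate else g
            for g in genotype]

def crossover(a, b):
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

# Generate a small random population of genotypes...
population = [[random.random() for _ in range(GENOME_LENGTH)]
              for _ in range(50)]

# ...then test, select, recombine and mutate, generation after generation.
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    offspring = [mutate(crossover(random.choice(parents),
                                  random.choice(parents)))
                 for _ in range(40)]
    population = parents + offspring

print(max(population, key=fitness))   # should end up close to OPTIMUM
```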
One of the perennial tragedies of academia is that we constantly pretend that our careers or those of our students are linear optimization problems. Grades are the most obvious way that we do this. Students learn that their mark on one test is independent of their mark on another, that it is better to have a high GPA than to risk taking hard courses that interest them, that exploration and failure will usually be punished. Teachers justify marks by appealing to rubrics, bemoaning grade inflation and students "who look good on paper." Too many of us think of a good career in terms of lines on a CV, a list of so many independent accomplishments, each of which can be attained and then forgotten.
On a rainy day in 1992, I wandered into a Vancouver technical bookstore on my way home from school. I think I was probably avoiding a problem set or some other homework, as I've never been very good at doing what I should be doing rather than what I want to be doing. Anyway, I remember finding a copy of John Holland's Adaptation in Natural and Artificial Systems on the shelf of new releases and really wanting to buy it. I stood in the store holding the book for the longest time. It was more than I could afford, it was a distraction from my school work, I had a bad habit of buying books and losing interest in them. I had been doing a lot of exploring and a fair bit of failing. I finally made the decision that was, in context at least, sub-optimal. I bought the book and went home to read it rather than doing my schoolwork.
I often tell my students that they should follow their curiosity, take chances and not be afraid to fail. You never really know what whim, what chance encounter or distraction is going to change your life. In my case, I read a lot of science fiction and graphic novels and ate a lot of guacamole. I played role playing games and got married early and happily. I watched TV. I got bad grades in linear algebra and analysis, but I liked math enough to keep trying until I got better at it. And my first published work was on a subject that was novel and trendy enough that my reputation as an up-and-coming researcher outweighed my uneven transcript: genetic algorithms. It's tempting to look back at that moment in the bookstore as a crucial inflection point in my life, but that would be too linear. The choices that we make affect our fitness, but never in a way that makes it easy to assign credit or blame.
Tags: feedback | genetic algorithms | nonlinear optimization | pedagogy
Introductory lessons teach you how to
- install Zotero, the Python programming language and other useful tools
- read and write data files
- save web pages and automatically extract information from them
- count word frequencies
- remove stop words
- automatically refine searches
- make n-gram dictionaries
- create keyword-in-context (KWIC) displays
- make tag clouds, and
- harvest sets of hyperlinks
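To give a taste of the style, here is the kind of thing the frequency and stop word lessons build up to, compressed into one rough sketch. The URL is a placeholder, and the tag-stripping and stop word list are far cruder than what the lessons actually teach.

```python
import re
import urllib.request
from collections import Counter

# Fetch a web page, strip the markup crudely, count word frequencies,
# and remove stop words. Placeholder URL; the real lessons do each of
# these steps more carefully.
url = 'http://www.example.com/'
html = urllib.request.urlopen(url).read().decode('utf-8', errors='ignore')

text = re.sub(r'<[^>]+>', ' ', html)          # remove tags
words = re.findall(r'[a-z]+', text.lower())   # tokenize

stopwords = {'the', 'of', 'and', 'a', 'to', 'in', 'is', 'that', 'it'}
frequencies = Counter(w for w in words if w not in stopwords)

print(frequencies.most_common(10))
```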
The Programming Historian is a work-in-progress. We are constantly adding new material, much of it driven by reader request. Upcoming topics will include indexing, scraping projects, simple spiders, mashups and much more.
Given that relatively few of our colleagues are familiar with digital history yet--and that those of us who practice some form of it aren't sure what to call it: digital history? history and computing? digital humanities?--it may seem a bit perverse to start talking about computational history. Nevertheless, it's an idea that we need, and the sooner we start talking and thinking about it, the better.
From my perspective, digital history simply refers to the idea that many of our potential sources are now available online. It is possible, of course, to expand this definition and tease out many of its implications. (For more on that, see the forthcoming interchange on "The Promise of Digital History" in the September 2008 issue of The Journal of American History.) To some extent we're all digital historians already, as it is quickly becoming impossible to imagine doing historical research without making use of e-mail, discussion lists, word processors, search engines, bibliographical databases and electronic publishing. Some day pretty soon, the "digital" in "digital history" is going to sound redundant, and we can drop it and get back to doing what we all love.
Or maybe not. By that time, I think, it will have become apparent that having networked access to an effectively infinite archive of digital sources, and to one another, has completely changed the nature of the game. Here are a few examples of what's in store.
Collective intelligence. Social software allows large numbers of people to interact efficiently and focus on solving problems that may be too difficult for any individual or small group. Does this sound utopian? Present-day examples are easy to find in massive online games, open source software, and even the much-maligned Wikipedia. These efforts all involve unthinkably complex assemblages of people, machines, computational processes and archives of representations. We have no idea what these collective intelligences will be capable of. Is it possible for an ad hoc, international, multi-lingual group of people to engage in a parallel and distributed process of historical research? Is it possible for a group to transcend the historical consciousness of the individuals that make it up? How does the historical reasoning of a collective intelligence differ from the historical reasoning of more familiar kinds of historian?
Machines as colleagues. Most of us are aware that law enforcement and security agencies routinely use biometric software to search through databases of images and video and identify people by facial characteristics, gait, and so on. Nothing precludes the use of similar software with historical archives. But here's the key point. Suppose you have a photograph of known provenance, depicting someone in whom you have an interest. Your biometric software skims through a database of historical images and matches your person to someone in a photo of a crowd at an important event. If the program is 95% sure that the match is valid, are you justified in arguing that your person was in the crowd that day?
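To see where a number like that 95% comes from, here is the matching step in caricature. Real biometric systems reduce each face to a feature vector and call two faces a match when the vectors are sufficiently close; the vectors and the threshold below are invented for illustration.

```python
import numpy as np

def cosine_similarity(a, b):
    # 1.0 means the feature vectors point the same way; 0.0 means unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented feature vectors: one from your photograph of known provenance,
# two extracted from faces in a crowd photo in the archive.
known_person = np.array([0.2, 0.9, 0.4, 0.1])
crowd_faces = {'face_17': np.array([0.21, 0.88, 0.41, 0.12]),
               'face_18': np.array([0.9, 0.1, 0.3, 0.7])}

for label, vector in crowd_faces.items():
    score = cosine_similarity(known_person, vector)
    if score > 0.95:   # the threshold is a choice, not a fact
        print(label, 'matches with similarity', round(score, 3))
```

Notice that the threshold is set by the researcher, which is exactly why the evidentiary question doesn't go away.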
Archives with APIs. Take it a step further. Most online archives today are designed to allow human users to find sources and read and cite them in traditional ways. It is straightforward, however, for the creators of these archives to add an application programming interface (API), a way for computer programs to request and make use of archival sources. You could train a machine learner to recognize pictures of people, artifacts or places and turn it loose on every historical photo archive with an API. Trained learners can be shared amongst groups of colleagues, or subject as populations to a process of artificial selection. At present, APIs are most familiar in the form of mashups, websites that integrate data from different sources on-the-fly. The race is on now to provide APIs for some of the world's most important online archival collections.
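In code, "a way for computer programs to request archival sources" can be as simple as the sketch below. The endpoint and the fields in the response are hypothetical; each real archive's API defines its own.

```python
import json
import urllib.parse
import urllib.request

# Ask a (hypothetical) photo archive's API for records matching a query,
# the way a human user would fill in a search form.
query = urllib.parse.urlencode({'keyword': 'coal mining', 'decade': '1890'})
url = 'http://archive.example.org/api/photos?' + query

with urllib.request.urlopen(url) as response:
    records = json.loads(response.read())

for record in records:
    print(record['identifier'], record['caption'])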
Models. Agent-based and other approaches from complex adaptive systems research are beginning to infiltrate the edges of the discipline, particularly amongst researchers more inclined toward the social sciences. Serious games appeal to a generation of researchers that grew up with not-so-serious ones. People who might once have found quantitative history appealing are now building geographic information systems. In every case, computational processes become tools to think with. I was recently at the Metropolis on Trial conference, loosely organized around the 120 million word online archive of the Old Bailey proceedings. At the conference, historians talked and argued about sources and interpretations, of course, but also about optical character recognition and statistical tables and graphs and search results generated with tools on the website. We're not yet at a point where these discussions involve much nuanced analysis of layers of computational mediation... but it is definitely beginning.
Tags: computational history | digital history
Like many people who blog at Blogger, I was recently notified by e-mail that my blog had been identified by their automated classifiers "as a potential spam blog." In order to prove that this was not the case, I had to log in to one of their servers and request that my blog be reviewed by a human being. The e-mail went on to say "Automatic spam detection is inherently fuzzy, and occasionally a blog like yours is flagged incorrectly. We sincerely apologize for this error." The author of the e-mail knew, of course, that if my blog were sending spam then his or her e-mail would fall on deaf ears (as it were)... you don't have to worry about bots' feelings. The politeness was intended for me, a hapless human caught in the crossfire in the war of intelligent machines.
That same week, a lot of my e-mails were also getting bounced. Since I have my blog address in my .sig file, I'm guessing that may have something to do with it. Alternatively, my e-mail address may have been temporarily blocked as the result of a surge in spam being sent from GMail servers. This to-and-fro, attack against counter-attack, Spy vs. Spy kind of thing can be irritating for the collaterally damaged, but it is good news for digital historians, as paradoxical as that may seem.
One of the side effects of the war on spam has been a lot of sophisticated research on automated classifiers that use Bayesian or other techniques to categorize natural language documents. Historians can use these algorithms to make their own online archival research much more productive, as I argued in a series of posts this summer.
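For readers who haven't seen one, here is a naive Bayes classifier reduced to its core. The training data are invented placeholders; the same dozen lines, trained on labeled documents from your own research, will sort sources into whatever categories you care about.

```python
import math
from collections import Counter, defaultdict

# Training data: invented placeholders standing in for labeled documents.
training = [('buy cheap pills now', 'spam'),
            ('cheap mortgage rates', 'spam'),
            ('minutes of the township council', 'ham'),
            ('annual report of the mining company', 'ham')]

word_counts = defaultdict(Counter)   # per-class word frequencies
class_counts = Counter()             # documents per class
for text, label in training:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocabulary = {w for counts in word_counts.values() for w in counts}

def classify(text):
    # Pick the class with the highest log prior plus log likelihood,
    # with add-one smoothing for words unseen in training.
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for word in text.split():
            score += math.log((word_counts[label][word] + 1)
                              / (total + len(vocabulary)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify('cheap pills'))            # spam
print(classify('report of the council'))  # ham
```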
In fact, a closely related arms race is being fought at another level, one that also has important implications for the digital humanities. The optical character recognition (OCR) software that is used to digitize paper books and documents is also being used by spammers to try and circumvent software intended to block them. This, in turn, is having a positive effect on the development of OCR algorithms, and leading to higher quality digital repositories as a collateral benefit. Here's how.
- Computer scientists create the CAPTCHA, a "Completely Automated Public Turing test to tell Computers and Humans Apart." In essence, it shows a wonky image of a short text on the screen, and the (presumably human) user has to read it and type in the characters. If they match, the system assumes a real person is interacting with it.
- Google releases the Tesseract OCR engine that they use for Google Books as open source. On the plus side, a whole community of programmers can now improve Tesseract OCR. On the minus side, a whole community of spammers can put it to work cracking CAPTCHAs.
- In the meantime, a group of computer scientists comes up with a brilliant idea, the reCAPTCHA. Every day, tens of millions of people are reading wonky images of short character strings and retyping them. Why not use all of these infinitesimal units of labor to do something useful? The reCAPTCHA system uses OCR errors for its CAPTCHAs. When you respond to a reCAPTCHA challenge, you're helping to improve the quality of digitized books.
- The guys with white hats are also using OCR to crack CAPTCHAs, with the aim of creating stronger challenges. One side effect is that the OCR gets better at recognizing wonky text, and thus better for creating digital books.
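Putting Tesseract to benign work takes only a few lines. This sketch assumes the pytesseract wrapper and the Pillow imaging library are installed, along with the Tesseract engine itself; the filename is a placeholder.

```python
from PIL import Image
import pytesseract

# Run the open-source Tesseract engine over a scanned page image and
# print whatever text it recognizes. Placeholder filename.
page = Image.open('scanned_page.png')
print(pytesseract.image_to_string(page))
```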
Tags: machine learning | optical character recognition (OCR) | Turing test
The Stock Market Skirt is a robot of sorts. Created a number of years ago by Toronto-based media artist Nancy Patterson, it consists of a party dress on a dressmaker's mannequin and a number of monitors displaying stock tickers. As prices fluctuate, "these values are sent to a program which determines whether to raise or lower the hemline via a stepper motor and a system of cables, weights and pulleys attached to the underside of the skirt. When the stock price rises, the hemline is raised; when the stock price falls, the hemline is lowered." I can only assume that the edge of the dress is rumpled up on the floor these days, and that the motors are somewhat the worse for wear.
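The control logic, at least, is easy to imagine. Here's a caricature of it: the price range, the number of motor steps and the pretend ticker feed are all invented, and the actual motor interface is waved away as a print statement.

```python
def hemline_steps(price, low=20.0, high=120.0, max_steps=400):
    """Map a stock price in [low, high] to a motor position in [0, max_steps]."""
    fraction = max(0.0, min(1.0, (price - low) / (high - low)))
    return int(fraction * max_steps)

current = 0
for price in [45.0, 52.5, 38.0]:   # a pretend ticker feed
    target = hemline_steps(price)
    delta = target - current        # positive means raise the hemline
    print('move stepper motor', delta, 'steps')
    current = target
```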
The exhibit, of course, is a playful reinterpretation of George Taylor's hemline index. In the 1920s, Taylor, an economist at the Wharton School, observed that skirt lengths were correlated with the state of the economy. Since then, the observation has proven relatively robust, and these days has been extended into many other domains, like music and movie preferences, the water content in foods, and even the shapes of Playboy playmates.
I think the stock market skirt is a great example of what I call a "history appliance." The idea is supposed to be whimsical: what if a device could dispense historical consciousness the way a tap dispenses water? I've found that academic historians have a much harder time entertaining this question than public historians do. After all, the latter have a long tradition of trying to build events, exhibits and situations that communicate interpretations of the past in ways that supplement the written word. A diorama, for example, represents the past faithfully along some dimensions, but not all. You can do scientific tests on an artifact--if it isn't a fake, its material substance can be informative about past events. (Ditto if it is a fake.) You can't necessarily do scientific tests on a diorama, and yet it is possible for it to communicate information about the past veridically.
For a historian, the correlation between stock prices and hemlines raises questions of agency, and we feel comfortable exploring those on paper. Nothing foregrounds agency like a robot, however, and historians shouldn't shy away from building them into their historical interpretations.
Tags: history appliances | public history | thing knowledge
In December 2004, I bought a copy of Joe Martin's Tabletop Machining to see what would be involved in learning how to make clockwork mechanisms and automata. It was pretty obvious that I had many years of study ahead of me, but I had just finished my PhD and knew that publishing it would take a few years more. So I didn't mind beginning something else that might take ten or fifteen years to master. Since then, I've been reading steadily about making things, but it wasn't until this past fall that I actually had the chance to set up a small Lab for Humanistic Fabrication and begin making stuff in earnest. Since it's December again, I thought I'd put together a small list of books to help other would-be humanist makers.
- Alexander, Christopher. Notes on the Synthesis of Form (Harvard, 1964).
- Ball, Philip. Made to Measure: New Materials for the 21st Century (Princeton, 1999).
- Barrett, William. The Illusion of Technique (Anchor, 1979).
- Basalla, George. The Evolution of Technology (Cambridge, 1989).
- Bryant, John and Chris Sangwin. How Round is Your Circle? Where Engineering and Mathematics Meet (Princeton, 2008).
- Dourish, Paul. Where the Action Is: The Foundations of Embodied Interaction (MIT, 2004).
- Edgerton, David. The Shock of the Old: Technology and Global History since 1900 (Oxford, 2006).
- Frauenfelder, Mark and Gareth Branwyn. The Best of MAKE (O'Reilly, 2007).
- Gershenfeld, Neil. Fab: The Coming Revolution on Your Desktop--from Personal Computers to Personal Fabrication (Basic, 2007).
- Gordon, J. E. Structures: Or Why Things Don't Fall Down (Da Capo, 2003).
- Gordon, J. E. The New Science of Strong Materials: Or Why You Don't Fall through the Floor (Princeton, 2006).
- Harper, Douglas. Working Knowledge: Skill and Community in a Small Shop (Chicago, 1987).
- Igoe, Tom. Making Things Talk: Practical Methods for Connecting Physical Objects (Make Books, 2007).
- Ingold, Tim. The Perception of the Environment: Essays on Livelihood, Dwelling and Skill (Routledge, 2000).
- Marlow, Frank M. Machine Shop Essentials (Metal Arts, 2004).
- Martin, Joe. Tabletop Machining (Sherline, 1998).
- McDonough, William and Michael Braungart. Cradle to Cradle: Remaking the Way We Make Things (North Point, 2002).
- Mims, Forrest M., III. Electronic Sensor Circuits and Projects (Master Publishing, 2004).
- Mims, Forrest M., III. Science and Communication Circuits and Projects (Master Publishing, 2004).
- Molotch, Harvey. Where Stuff Comes From: How Toasters, Toilets, Cars, Computers and Many Other Things Come to Be As They Are (Routledge, 2005).
- Napier, John. Hands (Princeton, 1993).
- Oberg, Erik, et al. Machinery's Handbook, 28th ed. (Industrial Press, 2008).
- O'Sullivan, Dan and Tom Igoe. Physical Computing: Sensing and Controlling the Physical World with Computers (Thomson, 2004).
- Polanyi, Michael. Personal Knowledge (Chicago, 1974).
- Powell, John. The Survival of the Fitter (Practical Action, 1995).
- Pye, David. The Nature and Art of Workmanship (A&C Black, 2008).
- Rathje, William and Cullen Murphy. Rubbish! The Archaeology of Garbage (University of Arizona, 2001).
- Schon, Donald A. The Reflective Practitioner: How Professionals Think in Action (Basic, 1984).
- Sennett, Richard. The Craftsman (Yale, 2008).
- Slade, Giles. Made to Break: Technology and Obsolescence in America (Harvard, 2007).
- Sterling, Bruce. Shaping Things (MIT, 2005).
- Suchman, Lucy. Human-Machine Reconfigurations: Plans and Situated Action (Cambridge, 2006).
- Thackara, John. In the Bubble: Designing in a Complex World (MIT, 2006).
- Thompson, Rob. Manufacturing Processes for Design Professionals (Thames & Hudson, 2007).
- Woodbury, Robert S. Studies in the History of Machine Tools (MIT, 1973).
Tags: bricolage | critical technical practice | DIY | fabrication | humanism