Technology and Politics: Neither good, nor bad, nor neutral
tags: democracy,technology,digital data,Mel Kranzberg,Great Firewall
The Economist concluded its “Technology and Politics” special report on March 26 with: “‘Technology is neither good nor bad; nor is it neutral,’ said the late Melvin Kranzberg, one of the most influential historians of machinery. The same is true for the internet and the use of data in politics: it is neither a blessing, nor is it evil, yet it has an effect. But which effect? And what, if anything, needs to be done about it?”
As a historian of technology, I was, of course, thrilled to see one of the field’s founders cited. But I was also slightly irritated to find Mel described as a historian of machinery, the equivalent of describing an Economist correspondent as someone who writes for a living. While both descriptions are accurate, they significantly understate the reality.
Mel, one of the founders of the Society for the History of Technology (SHOT), would have been amused and would have used the occasion to show how the history of technology encompasses not just machinery but a much broader set of practices and activities that affect every aspect of our lives. Indeed, the “Technology and Politics” article takes his “Big Tent” perspective to show how the technologies of digital information are changing politics and governing on multiple levels. Like any technology, the real benefits come not from the actual hardware and software but from the “wetware” – how individuals and institutions organize and operate them.
One key conclusion is the importance of creating a framework of regulations, assumptions, and restrictions to strengthen democratic norms and not authoritarian regimes. Governments, especially authoritarian regimes, are increasingly effective in their abilities to collect, analyze, communicate, contaminate, and control digital data.
Often, those out of power or lower in the power structure are the first to employ new technologies to change the status quo. They have less invested in the existing information infrastructure. Communications technologies enable them to organize collective action locally and at a distance.
The excitement about the role played by digital media in the 2009-10 Iranian Green Movement and the 2011 Arab Spring has ample precedents in the antiestablishment roles played by cellphones in the 1992 Thai Black May protests, fax machines in the 1989 Tiananmen Square protests in China and the heated 1990-92 political debates in Saudi Arabia, and photocopiers and printing presses in the 1980s rise of Solidarity in Communist Poland. More recently, the gory Daesh online videos and Donald Trump’s command of Twitter demonstrate how smart use of new media can shape public perceptions and debate.
The telephone and the fax machine were each heralded as revolutionary in their time for enabling people to organize and communicate far more effectively and inexpensively than before. Those of a certain age may remember the telephone tree, where each person contacted agreed to call several others, quickly spreading the message. Tweeting is much faster and easier.
But governments quickly found digital data easier to track and analyze. Indeed, intercepting a fax or electronic message provided an exact copy of the original without the potential of incorrectly transcribing a phone conversation. Surveillance can occur in real time.
Possibly more ominous than surveillance are the state attempts to reshape reality, not just by censorship but by drowning out dissenting opinions, promoting skewed perspectives, and attacking opponents. Unlike Communist propaganda of the Cold War, the tools and techniques are far more nimble, reflecting both technological advances and, more importantly, more thoughtful application of them.
China’s Great Firewall provides an effective model for internet censorship, with its monitors proving savvier and more selective in what they censor. Gary King and his Harvard colleagues have estimated that bloggers working for the Chinese government generate approximately 450 million messages annually to distract and divert the public from what the state considers dangerous topics. Similarly, in a variation of Gresham’s law, the Russian government and its fellow travelers have effectively deployed disinformation, spambots, and trolls to overwhelm good information with bad data in its propaganda offensive against Ukraine and the West.
These efforts (sadly not marked ‘fake’) decrease the value of social media in multiple ways, including the time needed to discard those messages, the erosion of trust, the increase in skepticism, and the creation of alternate realities. None of this is new – “Astroturf” campaigns in the 1990s appeared spontaneously generated but in reality were orchestrated by PR firms working for clients. The German government subsidized some French newspapers in the 1930s to encourage public dissatisfaction with the French government and society.
By deft editing of the Ems telegram, German chancellor Otto von Bismarck manipulated French public opinion to push France into the 1870 Franco-Prussian War, which ended disastrously for France. And who envisioned this better than British science fiction writer Arthur C. Clarke, whose 1960 story “I Remember Babylon” imagined a state subverting a foe by broadcasting sex, scandal, and violence?
Digital data, like its analog counterparts, needs an ethical framework to guide how people, organizations, and governments use it. The Economist urged greater discussion about data ownership, access, transparency, and the range of possible private-public relationships. And that requires the active involvement of citizens.
Ultimately, technologies limit and enable, but people make the decisions and rules. As Mel Kranzberg’s fourth law stated, “Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions.”