
Sep 21, 2004

Panopticons: Security, Privacy, Opportunity




There was a time when police did not exist, and there was a time when police existed but were very rare. In those times, crime was largely prevented by a community's intense awareness of its own members' activities, needs and resources. Someone cheating or stealing would be recognized as such very quickly, and a strong sense of communal responsibility would require any such observers to intervene. Overwhelming force, of course, could override the control of the community, but not its self-awareness.

Still, as communities grew in size and became more interconnected, more formal structures of control grew. Some societies -- Early Imperial China being the best example -- formalized communal self-policing with collective punishments; others simply allowed communities to continue to govern themselves as long as they paid tribute to overlords who, in theory, protected them from outside predation. In the largest communities, towns and cities, 'neighbor' no longer necessarily meant someone who knew everything about you, or felt any responsibility to you. This is where police forces were developed, from the Scythian slave-archers of Athens on; sometimes self-governance remained the norm, as in the medieval cities where guilds policed their districts. But that operated on a very small scale compared to the cities as a whole.

Even the development of police forces did not significantly affect the anonymity of urban life or the relative freedom with which crime could occur in densely populated areas. Social values against dishonesty, violence, and theft were battered by the success of liars, brutes and thieves; the very concept of social order was threatened by populations of rootless, poor, desperate people. Urban renewal helped -- massive infrastructure projects in the West; the occasional fire or earthquake in Japan; etc. -- by clearing out the slums and allowing for a planned and structured urban environment, at least for a time. But the chief tool was more and more police.

Eventually, though, there is a limit to how many police a community can or will support. At that point, technological shifts expanded the capacity of police: transportation to extend patrols; communications to allow faster responses; filing systems -- paper, numeric, digital -- to aid investigations and the processing of suspects; weapons systems (including defensive tools like bulletproof vests and non-lethal techniques) which allow police to present a credible threat and do their job without becoming victims of violent crime themselves; and social and psychological theory to address the root causes of crime. All of these things streamlined policing -- industrialized it, if you will.

All of this is well and good, but it is remote surveillance which is the wave of the present and future. Though humans can only be in one place at a time, they are capable of processing multiple streams of information, particularly visual information, simultaneously -- a legacy of our hunter-gatherer heritage, compensation for our atrophied sense of smell. This is why building security is so often centralized, with a camera bank and a single security person. Alarm systems are a robotic version of the surveillance camera: eyeless, mechanical sentries that monitor constantly.

Now, though, the robots have eyes, and what's more, they have something resembling our hunter-gatherer brains: Chicago is going to adapt anti-fraud techniques from Las Vegas and anti-terror techniques from London into an integrated surveillance camera information processing system:

Sophisticated new computer programs will immediately alert the police whenever anyone viewed by any of the cameras placed at buildings and other structures considered terrorist targets wanders aimlessly in circles, lingers outside a public building, pulls a car onto the shoulder of a highway, or leaves a package and walks away from it. Images of those people will be highlighted in color at the city's central monitoring station, allowing dispatchers to send police officers to the scene immediately. . . .
Many cities have installed large numbers of surveillance cameras along streets and near important buildings, but as the number of these cameras has grown, it has become impossible to monitor all of them. The software that will be central to Chicago's surveillance system is designed to direct specialists to screens that show anything unusual happening.
. . . video images will be instantly available to dispatchers at the city's 911 emergency center, which receives about 18,000 calls each day. Dispatchers will be able to tilt or zoom the cameras, some of which magnify images up to 400 times, in order to watch suspicious people and follow them from one camera's range to another's.
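
The article does not say how the vendor's software actually works, but the behaviors it lists -- lingering, circling, abandoned packages -- are the sort of thing a simple rule applied to tracked objects can flag. Purely as an illustration (the track format, thresholds, and function names below are my own inventions, not anything from the Chicago system), a minimal "loitering" alert might look like this:

```python
from dataclasses import dataclass

# Hypothetical data structure: one tracked person or vehicle, with the
# timestamped positions a single camera's tracker has assigned to it.
@dataclass
class Track:
    track_id: int
    camera_id: str
    samples: list  # (timestamp_seconds, x, y) points in the camera image

def is_loitering(track: Track, window_s: float = 120.0, max_radius: float = 40.0) -> bool:
    """Flag a track that stays within a small radius for a long time.

    window_s and max_radius are illustrative guesses, not real tuning values.
    """
    if len(track.samples) < 2:
        return False
    t_first = track.samples[0][0]
    t_last = track.samples[-1][0]
    if t_last - t_first < window_s:
        return False  # not present long enough to count as "lingering"
    xs = [x for _, x, _ in track.samples]
    ys = [y for _, _, y in track.samples]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    # If every sample stays near the track's average position, call it loitering.
    return all(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 <= max_radius
               for _, x, y in track.samples)

def alerts(tracks):
    """Yield (camera_id, track_id) pairs a dispatcher's screen would highlight."""
    for track in tracks:
        if is_loitering(track):
            yield (track.camera_id, track.track_id)
```

A real system would run rules like this over hundreds of cameras at once, with far more sophisticated tracking and several behaviors besides loitering, but the basic shape is the same: many tracks in, a handful of highlighted alerts out for a human dispatcher to judge.
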
Leaving aside the technical aspects (how many false alarms are they going to get from the system, and how will they handle them without losing legitimate threats?) and the civil liberties aspects (yes, it is intrusive; is it more or less intrusive than posting police officers in all the places they now have cameras?), it is clear that this melding of policing and computing is a logical next step, and likely to become common in the near future, unless the system is a disaster which can't be blamed on human [operator; Thanks, Oscar] error or slightly under-resourced infrastructure.
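
To see why the false-alarm question matters, a back-of-the-envelope calculation helps. The numbers below are invented purely for illustration, not taken from the Chicago system, but they show how a detector that is wrong only a tiny fraction of the time can still bury dispatchers in alerts when genuine threats are rare:

```python
# Invented numbers, chosen only to illustrate the base-rate problem.
observations_per_day = 1_000_000   # people and vehicles the cameras track daily
true_threats_per_day = 1           # genuinely dangerous events among them
false_positive_rate = 0.001        # system wrongly flags 0.1% of innocent behavior
true_positive_rate = 0.95          # system catches 95% of real threats

false_alarms = (observations_per_day - true_threats_per_day) * false_positive_rate
real_alerts = true_threats_per_day * true_positive_rate

print(f"False alarms per day: {false_alarms:,.0f}")    # ~1,000
print(f"Real alerts per day:  {real_alerts:.2f}")      # ~0.95
print(f"Alerts that matter: {real_alerts / (real_alerts + false_alarms):.2%}")  # ~0.09%
```

On those (made-up) assumptions, the monitors would have to sift roughly a thousand false alarms for every genuine one -- which is exactly the triage problem that will determine whether the system is judged a success or a disaster.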

One clear effect of this in the near and medium term is the development of both technologies and experts in computerized surveillance. Not quite police officers, but not simple dispatchers, the technicians who monitor the system will have to evaluate threats quickly and accurately, but will not be "on the street" where they might get valuable feedback on their assessments (though neither will they need to meet the physical requirements of policing). Though I dabbled in computer programming in the charmingly simple 1980s, I cannot imagine the logic engines, image-processing algorithms, and raw computing power that would allow computers to do what they say the system will do: clearly new techniques and technologies are at play, which not only will spawn subfields of their own as these systems spread, but which will probably have spin-offs we can only begin to imagine (remote wildlife monitoring; artificial intelligence; prosthetic eyes; robotics; etc.). New industries, new fields of employment, new problems.

Update: My father, who has been programming for longer than I've been alive -- including real-time image processing and multi-system integration in complex environments -- sent this response:

I hope that the technology we currently have will live up to the expectations of those employing it, but I have my doubts. I say this not as a concerned, private citizen, but as a very active member of the self-same technological community that's developing the products they hope to employ. I know what the current state of computer software development is, not as a customer in a supermarket who sees an occasional wrong price flicker across the screen, but as an active practitioner of that art. And yes, it's still art, it's not even engineering, let alone science yet.

If I have any concerns about this project, it's not as someone who sees personal liberty being eroded, but as a technocrat who's aware of how bad the skills of even average programmers are. Nevertheless, it's a step in the right direction toward using technology to substitute for body count on our ever diminishing police forces.

As Oscar Chamberlain reminded me in the comments, all computer errors are human errors. My father taught me many years ago that computers are not smart: they will do whatever you tell them to, no matter how dumb, how self-defeating, how wrong. We have to be the smart ones.




More Comments:


Jonathan Dresner - 9/22/2004

Asimov's stories weren't really set in cities (Asimov's Earth in the age of robots was mostly factories, laboratories, corporate offices and suburbia, plus the space colonies).

But then, Will Smith's movie isn't really set in an Asimov story, either.

Chicago is a fantastic place to premiere this technology, though. It's high profile and large enough to draw in federal money, but not central enough to people's awareness to seem like a front-line act of desperation; people there have some reason to be nervous about terrorism (as opposed to, say, Cedar Rapids, where I spent 9/11/2001 calming students [and myself] by discussing the unlikelihood of attacks on Quaker Oats) and less dramatic crime; it doesn't have as strong a liberal/civil liberties community as NYC, SF, LA; Illinois is a swing state....


Jonathan Dresner - 9/21/2004

You're right: 'operator error' is more the term I was looking for.


Oscar Chamberlain - 9/21/2004

I think you are right about the police/robot trend. However, I take issue with this statement:

"it is . . . likely to become common in the near future, unless the system is a disaster which can't be blamed on human error or slightly under-resourced infrastructure."

Robot error, like computer error, is all human error. I'm not just being pedantic. Humans set up these systems, monitor these systems, and continually interact with these systems at multiple levels, from minor alterations to doing the legwork that computers can't do now. If there is a disaster, humans will be responsible for it.


W. Caleb McDaniel - 9/21/2004

It seems, retrospectively, strangely appropriate that the movie "I, Robot" was set in Chicago.