TCS Daily


Outlaws and Databases

By Arnold Kling - December 9, 2002 12:00 AM

A folk song that was popular in my childhood describes a utopia of ineffective jails and crippled policemen.

"In the Big Rock Candy Mountain the cops have wooden legs"
--"The Big Rock Candy Mountain" (attributed to Harry McClintock, popularized by Burl Ives)

The notion that we are better off with impaired police is widespread, particularly with regard to surveillance and databases. The recent revelation that the Defense Department has a project headed by John Poindexter to create a surveillance database called the "Total Information Awareness System" provoked a hostile response from William Safire and the Washington Post editorial page, among others.

I agree with Greg Buete, who is willing to approach the concept of a nationwide surveillance database with an open mind. I would argue that such a database, if properly regulated, would be far less threatening to our freedom and privacy than the likely alternatives.

If we don't want government security agencies to know too much, we could pass laws banning anyone with an IQ over 85 from playing any role in homeland security. The way I see it, prohibiting a surveillance database is like enacting an IQ limit for our security systems.

Our Founding Fathers had a better idea for keeping individual government agencies under control. They set up a system of Constitutional checks and balances. We should embed database usage in a system of checks and balances, rather than engage in a Big Rock Candy Mountain fantasy that we can ban it altogether.

Security, Statistics, and Economics

We can usefully apply the terminology of statistics and economics to the problem of security, although we will not be able to quantify the values involved. My hope is that this will tone down the rhetoric (including my own), and lead to a more rational discussion.

The trade-off between security and freedom is part of a general framework of two types of errors, or risks. A Type I error is a deadly terrorist attack. We can think of the cost of Type I errors as the number of deaths. If we multiply the probability of a Type I error by the likely cost, we have a measure of the expected number of deaths. This is, of course, impossible to measure precisely, but conceptually the benefit of a better security measure is a reduction in the expected number of deaths, presumably achieved by reducing the probability of a successful attack.

A Type II error is the abuse of power by government officials. Ordering IRS audits of political opponents is one example. There are many ways in which government officials could be tempted to engage in blackmail, intimidation, or suppression in order to further their personal interests. Once again, we can (conceptually) evaluate a security measure's effect on Type II errors by multiplying the probability of errors by their magnitude.
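To make the arithmetic concrete, here is a minimal sketch in Python. The probabilities and magnitudes are hypothetical placeholders, not estimates of anything; the point is only the form of the calculation, expected cost equals probability times magnitude, applied to both types of error.

    # Expected-cost arithmetic for the two error types.
    # All numbers are hypothetical placeholders, not estimates.

    def expected_cost(probability, magnitude):
        # Expected cost = probability of the error times its magnitude.
        return probability * magnitude

    # Type I: a successful terrorist attack, magnitude measured in deaths.
    type1 = expected_cost(probability=0.01, magnitude=3000.0)

    # Type II: abuse of power by officials, magnitude in comparable units.
    type2 = expected_cost(probability=0.05, magnitude=200.0)

    print(type1, type2)  # 30.0 expected deaths, 10.0 expected abuse-cost units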

In the absence of intelligence, the authorities will commit both Type I and Type II errors. They will let terrorists operate unimpeded; moreover, when the authorities flail blindly, as in the "dragnets" that were an exercise in futility when the D.C. sniper was at large, civil liberties are most at risk. A database, which enables the authorities to sort suspects from innocent people, is needed in order to minimize both types of errors. Indeed, it was the fingerprints of John Malvo in a database that enabled the sniper investigators to finally break the case.

(However, credit card companies were better equipped than law enforcement agencies to deal with the sniper suspects. Routine screening caused a credit card to be canceled after one $11 charge for gasoline. The police, on the other hand, let the snipers' car go several times after stopping it, including once when it was clear that one of the suspects had been sleeping in it.)

Another element of homeland security is cost. Costs include not only security expenditures but also such items as the costs to passengers of having to wait in line at airports.

Thus, we have three objectives for any security measure. One objective is to reduce Type I errors. Another objective is to minimize Type II errors. A third objective is to minimize costs.

Whenever we can introduce a security measure that improves one objective without making either of the other objectives worse, this can be called an efficient security measure. For example, one might argue that allowing pilots to be armed is an efficient security measure. It could reduce Type I errors, without appearing to impose costs or increase the probability of Type II errors.
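One way to make "efficient" precise is the economist's notion of dominance: a new measure is an efficient improvement if it is no worse on any of the three objectives and strictly better on at least one. A sketch, with invented scores (lower is better on every axis):

    # Pareto test: a new measure is an efficient improvement over the old
    # if it is no worse on any objective and strictly better on at least one.
    # The scores below are invented for illustration; lower is better.

    from typing import NamedTuple

    class Measure(NamedTuple):
        name: str
        type1: float  # expected cost of terrorist attacks
        type2: float  # expected cost of abuse of power
        cost: float   # direct costs, e.g. expense and passenger delay

    def efficient_improvement(new: Measure, old: Measure) -> bool:
        no_worse = (new.type1 <= old.type1 and
                    new.type2 <= old.type2 and
                    new.cost <= old.cost)
        better = (new.type1, new.type2, new.cost) != (old.type1, old.type2, old.cost)
        return no_worse and better

    status_quo = Measure("status quo", 30.0, 10.0, 100.0)
    armed_pilots = Measure("armed pilots", 25.0, 10.0, 100.0)
    print(efficient_improvement(armed_pilots, status_quo))  # True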

Similarly, a passenger-screening program that is database-driven could be efficient. High-risk passengers would be allowed to fly, but not together and not without a sky marshal. It is likely that such a program would reduce Type I risk dramatically, at far less cost than our current policies of extensive search and random screening.

The main point of introducing this economic and statistical terminology is to emphasize that we have to choose from among imperfect solutions. Of course, a surveillance database will not work perfectly. However, neither does any other security measure. The criteria for evaluating a security measure should be its costs and benefits. The calculus is bound to be imprecise, but in a world of imperfect alternatives it is irrational to reject one particular approach just because it is imperfect.

Weapons Under Moore's Law

You will never see effective gun control. The NRA is not to blame (or to thank, depending on your point of view). Neither is the Constitution. Technological trends are at work.

In the future, the only law that will govern armaments is Moore's Law. Applied to weapons, it says that they will get smaller, cheaper, and more varied in form. This phenomenon will pose a challenge not just for disarmament advocates, but for all of us.

Ray Kurzweil, in The Age of Spiritual Machines, forecasts how the nature of warfare will change as Moore's Law progresses.

"[in 2009] warfare is dominated by unmanned intelligent airborne devices. Many of these flying weapons are the size of small birds, or smaller...[in 2019] Most flying weapons are tiny - some as small as insects - with microscopic flying weapons being researched..."

Even today, our biggest fears are of small weapons, including biological weapons and "suitcase nukes." On airplanes, we are afraid of tweezers. It is no accident that terrorism is emerging as a significant threat. In our "faster, better, cheaper" world, powerful capabilities accrue to decentralized organizations and even small groups.

We simply have to get over the notion that we can prevent violence by controlling weapons. We are destined to live in a world with an endless variety of arms that are inexpensive and concealable. This trend, as undesirable as it may be, is unavoidable.

With weapons becoming more powerful and more difficult to detect, the phrase "guns don't kill, people kill" is going to become the only realistic principle for security. In order to avert catastrophic attacks, we will have to focus our attention on villains and potential villains.

One might argue that our focus on villains should take us overseas. Certainly it is true today that our most dangerous enemies are in other countries. However, once we make foreign soil inhospitable, terrorists will find that their best sanctuary is here, inside our porous borders. Moreover, as weapons continue to improve, the number of organized terrorists that it takes to plan and execute a major attack will shrink, so that going after large groups overseas will cease to be sufficient.

Once we realize how vulnerable we are (and it may take one or more additional major attacks to bring us to that point), the public will not be able to tolerate the thought of enemies in our midst. Fear of the unknown could lead to vigilante attacks on illegal immigrants or massive internment of feared ethnic groups. These are the sorts of Type II errors that are far worse than a database that is developed under a system of checks and balances.

Get It Right

The concerns about a surveillance database include:

  • the possibility that hackers will gain access
  • the possibility that officials with access to the database will abuse the power they gain from having the information
  • the possibility that it will be used to pursue petty offenses

My suggestions for minimizing these risks are:

  • Articulate clear rules for database use, including Constitutional protections.
  • Develop a robust system for monitoring database use for compliance with the rules.

As an example of a Constitutional protection, I would propose that no one be allowed to search the database for information about a specific person without a warrant. My data would have the same Fourth Amendment sanctity as my house.

On the other hand, searches based on general criteria that are not presumed to apply to one person would be allowed. This would permit searches for the purpose of screening airline passengers. You could run an entire list of passengers through the database, and any high-risk passengers would be flagged.
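As a sketch of how software might enforce this distinction (the class and function names here are my own invention, not a description of any actual system): a targeted lookup demands a warrant, while a batch screen applies general criteria to a whole manifest and returns only the flagged names.

    # Sketch of the warrant rule. All names and the risk-score scheme are
    # hypothetical, invented for illustration.

    class WarrantRequired(Exception):
        pass

    class SurveillanceDB:
        def __init__(self, risk_scores):
            self._risk = risk_scores  # name -> risk score in [0, 1]

        def lookup_person(self, name, warrant=None):
            # Targeted search on one specific person: no warrant, no answer.
            if warrant is None:
                raise WarrantRequired("no warrant presented for " + name)
            return self._risk.get(name, 0.0)

        def screen_manifest(self, manifest, threshold=0.8):
            # General-criteria search: flag high-risk names on a whole list.
            return [p for p in manifest if self._risk.get(p, 0.0) >= threshold]

    db = SurveillanceDB({"passenger_17": 0.9, "passenger_18": 0.1})
    print(db.screen_manifest(["passenger_17", "passenger_18"]))  # ['passenger_17']
    # db.lookup_person("passenger_17")  # raises WarrantRequired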

I would propose that use of the database be restricted to attempts to prevent mass murder. It would not be available to track down people with unpaid parking tickets, "deadbeat dads," or other potential targets. Perhaps legislators will want to add crimes other than terrorism to the list of uses for the database. This should be done very carefully, perhaps requiring more than just a simple majority vote. As I pointed out in a previous essay, many of our laws would have to be reconsidered if technology suddenly made enforcement easier.

Use of the database should be heavily monitored. Any access of the database should generate an audit trail that is examined by representatives of the legislative branch, the judicial branch, and public watchdog groups, such as the American Civil Liberties Union and the Electronic Frontier Foundation. This breadth of monitoring is what I would count on to insure against abuse of power.
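Here is one way the audit requirement could be wired in, again only a sketch with invented names: every access, targeted or batch, appends a record of who asked, when, and for what stated purpose before the query is answered, and it is this log that the legislative, judicial, and watchdog reviewers would examine.

    # Sketch of an append-only audit trail: every access is logged before it
    # is answered, and reviewers read the log. All names are hypothetical.

    import datetime

    audit_log = []  # in practice, an append-only store open to overseers

    def audited(query_type):
        def wrap(fn):
            def inner(user, purpose, *args, **kwargs):
                audit_log.append({
                    "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                    "who": user,
                    "query": query_type,
                    "purpose": purpose,
                })
                return fn(*args, **kwargs)
            return inner
        return wrap

    @audited("batch_screen")
    def screen(manifest):
        return [p for p in manifest if p.startswith("HIGH_RISK")]  # placeholder criterion

    screen("agent_42", "flight 1440 pre-boarding check", ["HIGH_RISK_A", "ordinary_B"])
    print(audit_log[-1]["who"])  # agent_42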

The entire system should be audited and monitored by rotating and competing teams of computer security experts. They would identify vulnerabilities and track down anyone who attempts unauthorized access.

Brin's Metaphors

One of the most persuasive books on this topic is The Transparent Society, by David Brin. In correspondence while I was preparing this article, Brin was kind enough to send the following:

Essentially, my argument has always been that it is futile to try and stymie elites from seeing.

We are all monkeys who instinctively rely on sight... have you ever tried to poke a stick in the eye of a really BIG monkey? He won't let you.

But a big monkey will, reluctantly, let you look at him. It's the same with elites. We will never be able to blind them. But we can use modern tools to strip them naked so they remain accountable. So we can supervise their activities and make certain we are not threatened.

To use another animal metaphor, I have no objections to improving the vision of our guard dog... giving him more power to see and detect threats. What we DON'T want is for the dog to start feeling uppity and become a wolf. That means we should be concentrating on equipping the beast with a choke chain for us to yank hard in supervision, not on blinding the beast.

What bugs me is that none of our civil liberties protectors... like William Safire... seem to get this. They rail and rail against technology, as if hollering - or any amount of legislation - will stop the cameras from getting smaller/cheaper/lighter/better every year, or stop the databases from getting faster/smarter/more-pervasive. Not once have any of them put forward a recommendation that would actually have the effect they are calling for.

But this fad of Luddism DOES serve to distract from the thing they should be demanding, new openness and accountability laws. New procedures for supervision and transparency. New professional codes. An office of Inspector General of the United States, with real independence and teeth. Citizen oversight panels.

This kind of aggressive Looking Back will ensure good behavior and that we are left alone, no matter how much the dog sees.

Responsible Citizens

As responsible citizens, we should not respond with an automatic "No" when a government agency attempts to build a database. We should discard the fantasy of a Big Rock Candy Mountain in which the cops have wooden legs and limited information.

What we should be doing instead is insisting that safeguards be in place to ensure that a database is not abused. Before we implement a surveillance database, we need to set up a system of checks and balances that holds the people involved accountable and keeps the database's uses regulated and monitored. Let us design the Constitutional and legal protections as carefully as our Founders designed the three branches of government and the Bill of Rights. And let us ensure that at every step, including implementation, a broad range of citizens is involved in the process.

Unfortunately, the Bush Administration does not understand that open, accountable government is a prerequisite for the use of surveillance and databases. The Administration's predilection for secrecy and contempt for the Freedom of Information Act make it difficult for anyone to support increased use of surveillance and databases. In my view, the Administration will be making progress toward obtaining consent for broader surveillance when it takes steps to encourage, rather than discourage, more systematic scrutiny and greater openness of the agencies involved in homeland security.

Over the long term, however, we need to confront the issues posed by trends in technology. The costs of surveillance are falling. The costs of data storage and analysis are falling. Meanwhile, our security risks are rising. We need to deal with the situation openly and directly. Our chances of preserving freedom are better if we use databases than if we rely on alternative measures. When databases are outlawed, only outlaws will have databases.

 
