TCS Daily

The Eyes Have It

By Russell Seitz - May 19, 2003 12:00 AM

It was a black day for opticians when Zeus dispatched Hermes to clobber Argus of the Thousand Eyes, but the best spies in the sky still have more than two. Modern climatology benefits from the constant vigilance of an orbiting constellation of staring infrared sensors that downlink data at blinding speed. Ron Bailey's recent piece, Science Steals A Base, arose from the most recent headaches of getting all of them up to 20/20 performance as thermometers.

Sensors seldom stare through cloudless skies, and geometry can impact sensor readings. It's hard to tell on the fly if the last passing cloud was very like a whale, a camel, a hawk or a handsaw. Yet the value of climate models depends on their accuracy in depicting the world. Bailey's piece drew this comment: "the current models are not up to snuff until the grid size reaches 75 km [the] 300 km grid size produces ... numerical diffusion, and it is massive... these models... must be viewed with nothing but derision..."

This stern fulmination is both acute and crucial to understanding the history of the climate change debate. At its sporadic best, that history is the chronicle of good models driving the bad into extinction, a saga as deserving of applause as any other authentic scientific breakthrough. Yet it will take a while to disremember all the hype that has gone before, because the polemic abuse of global systems models is a long-running bipartisan tragedy.

As bad as coarse-grid 3-D models are, worse 2-D schemes and one-dimensional models with no grid at all preceded their use. For decades, all but a few climate modelers spent more time complaining about their inadequacies and demanding funds for their improvement than militating for changes in climate policy.

They had cause for modesty: practitioners of physics can treat less rigorous disciplines as little more than higher forms of stamp collecting. The most simplistic exercises in climate modeling have earned some very faint praise, and the extremes of reductionism some bald damnation. Yet despite the owlish reservations of most 20th century climatologists, a vocal few demanded changes in society predicated on little data and less capacity to interpret it.

The late physics Nobel laureate Richard Feynman was no enemy of simple experiments in the real world. He elegantly deconvoluted the Challenger disaster using a glass of ice water to demonstrate the O-ring stiffening at the root of the problem. But he drew the line at one-dimensional models being used to forecast the fate of the Earth. The impudent rendering of the dynamic complexity of its atmosphere and oceans as a line segment arising from a featureless sphere earned this judgment: "You know, I don't think these guys know what they are talking about."

That was in 1986. Other powerful minds concurred that until the lack of resolution that divorced climate models from reality was remedied, their predictive value would remain limited. Their candor was rewarded by increased research funding, but the greatest advances arose from the explosion of computational power in the 1990s. Now the problem is coming full circle. Improvements in climate models can only result in improved predictions when they are provided with a nourishing and balanced diet of weather data, because in the real world clouds are not 75-kilometer squares. The modelers have another two orders of magnitude to go in improving spatial resolution, and the room for improvement in how fast models run is open-ended.
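To give a rough sense of why those two orders of magnitude are so expensive, here is a back-of-the-envelope sketch (my illustration, not the author's arithmetic). It assumes the cost of a 3-D simulation grows roughly with the cube of the horizontal refinement factor: two factors for the extra grid columns, and one for the shorter time step a finer grid demands; real models refine vertically and differ in their numerics, so treat the numbers as order-of-magnitude only.

```python
def relative_cost(old_spacing_km: float, new_spacing_km: float) -> float:
    """Approximate cost multiplier for refining the horizontal grid spacing.

    Assumes cost ~ (refinement factor)**3: twice for the added columns in
    the two horizontal dimensions, once for the shorter time step that a
    finer grid requires for numerical stability.
    """
    refinement = old_spacing_km / new_spacing_km
    return refinement ** 3

# From the 300 km grids of the comment quoted above toward cloud-resolving scales:
for target_km in (75, 7.5, 0.75):
    print(f"300 km -> {target_km} km grid: roughly {relative_cost(300, target_km):,.0f}x the work")
```

By this crude measure, two more orders of magnitude in spatial resolution translate into something like a million times more arithmetic, not a hundred.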

But the allocatable resources are not. At the limit, the problem of realistic cloud modeling in both time and space resembles a parable in a Borges story. He wrote of a monarch so obsessed with maps that he decrees that a one-to-one scale chart of his empire be drawn on canvas overhead. The task is begun, but the attempt to complete its infinite detail is overwhelming, and all perish in the shade of the emperor's new shroud.

All honest modelers confess to being statisticians; when they do know what they're talking about, it's important to consider what they still don't have to say. Increasing the number of pixels on a computer screen makes the display of vivid and detailed movies possible, but if the original image was shot in grainy black and white by a slow and shaky camera, the most advanced DVD technology can't fix the problem. The importance of satellite weather data is that the coverage it provides is not just accurate, but dense and steady in time as well as space. Too dense, in fact, for many models to make full use of. It is still necessary to distribute things like thunderstorms and even hurricanes at random, and have them play hopscotch as they jump from pixel to pixel, instead of gliding across the map.

Climate happens one time at a time - without enough stamps, you don't have a collection. Providing complex models with enough information to allow forecasting requires a lot more measurements than a casual observer might suppose. It takes a lot of data to sustain a trillion operations per second. If that many numbers could talk, their first words would be "Feed Me!"

Computer models have clocks that determine how fast the animation runs. To keep things realistic, the data has to be gathered in at least twice as fast as the fastest 'action' that will appear on the face of screens to come. This idea about sampling speed is called the Nyquist criterion. Fail to fulfill it, and the model will part company with the real world. A similar notion applies to the number of places the data comes from: what if some of the grid elements in a model of the globe have no weather stations?
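To make that sampling rule concrete, here is a minimal sketch (an illustration of the Nyquist idea, not anything from the article): a signal has to be sampled at more than twice its frequency, or the samples alias into a slower, spurious signal.

```python
import math

def sample(cycles_per_day: float, samples_per_day: float, n: int) -> list[float]:
    """Sample a pure sine wave of the given frequency at the given rate."""
    return [math.sin(2 * math.pi * cycles_per_day * k / samples_per_day)
            for k in range(n)]

# A toy "weather" signal that repeats once per day. Eight samples a day
# capture the cycle; one sample a day (below the Nyquist rate of two per
# day) aliases it to a flat line, and a model fed those numbers would
# conclude there is no daily cycle at all.
well_sampled  = sample(cycles_per_day=1.0, samples_per_day=8.0, n=8)
under_sampled = sample(cycles_per_day=1.0, samples_per_day=1.0, n=8)

print("8 per day:", [round(x, 2) for x in well_sampled])
print("1 per day:", [round(x, 2) for x in under_sampled])
```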

Software can fudge by swiping numbers from nearby stations, but borrowing data erodes the Nyquist criterion. It's worse than the 'garbage in, garbage out' phenomenon - you can't recycle noise. Let software dwell on the decaying signal-to-noise ratio of an imploding data spectrum, and before long you might as well be looking at a model of Mars.
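As a hypothetical sketch of that fudge (my example, not the author's software), consider filling grid cells that have no station by copying the nearest cell that does: the filled map looks complete, but every borrowed value is redundant, so the effective sampling density has not improved at all.

```python
def fill_by_borrowing(readings: dict[int, float], n_cells: int) -> list[float]:
    """Fill every cell of a transect with the reading from the nearest observed cell."""
    observed = sorted(readings)
    return [readings[min(observed, key=lambda i: abs(i - cell))]
            for cell in range(n_cells)]

# Stations exist only at cells 0, 5 and 9 of a ten-cell transect.
stations = {0: 14.0, 5: 9.5, 9: 17.0}
print(fill_by_borrowing(stations, 10))
# Ten numbers come out, but only three of them are measurements; whatever
# happened between the stations is invisible, and any noise in the three
# real readings is now smeared across the whole row.
```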

The Nyquist criterion is why the weather of a century ago can't be resurrected from old reports using state-of-the-art simulations. To a modern model's ravenous appetite, reams of Victorian data don't add up to an hors d'oeuvre. Parts of Europe had closely spaced weather stations a century ago; the Arctic had but dozens. Climate models with grids that coarse look less like maps than paintings by Mondrian, with predictions ranging from impressionistic to surreal. When they can be made to run at all, the forecast tends towards nightmares followed by partial clearing with occasional scattered hallucinations.

That's why we depend of necessity on such proxies as tree rings, gases in glaciers, and stable isotopes in corals to inform our picture of this planet's past, and look to meandering clouds on Neptune and the fluctuating polar caps of Mars for insight into the climate of this solar system as a whole.

With artificial intelligence in such short supply, a good model is a terrible thing to waste. It is crucial to the credibility of earthly scientific endeavor that today's weather satellites be calibrated with exquisite objectivity as well as precision. They watch the whole world's weather, and the eyes of the world are on those who watch over them.
