One part of the scientific mythos states that two groups of scientists, using the same information and knowledge of physical laws and principles, will come to identical conclusions. This notion is based on the premise that scientists analyze and process information in a dispassionate, rational and bias-free manner, unlike their counterparts in the arts, humanities, industry and government.
If this myth were even approximately true, why is there so much confusion and conflict over scientific issues such as the long-term effects of releasing increasing amounts of greenhouse gases into the atmosphere or the size of the ozone hole over Antarctica? How can we explain why scientists looking at the same data can come to different conclusions on these life-and-death issues? To solve this puzzle, we have to look a bit harder at how scientists make predictions about atmospheric events.
Climate forecasting methods come in two fundamentally different flavors, which can be labeled "Bottom Up" and "Top Down." Let's look at the Bottom-Up approach first.
The Bottom-Uppers measure observable quantities such as temperature, air pressure and wind speeds at a particular location in space and time. Then, they predict what will happen in the next instant using various physical and chemical laws. These laws are encoded in a computer program known as a "general circulation model," a reference to the circulation patterns of the winds and oceans that affect climate.
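To make the Bottom-Up recipe concrete, here is a deliberately toy sketch (in Python, with invented grid sizes and constants; a real general circulation model solves full fluid-dynamics equations on a three-dimensional global grid). It steps "measured" temperatures forward in time using a single physical law, the diffusion of heat:

```python
import numpy as np

# Toy Bottom-Up forecast: start from "measured" temperatures on a grid of
# points and step them forward using a physical law -- here, heat diffusion.

n_points, dx, dt, diffusivity = 50, 1.0, 0.1, 0.5   # illustrative values only

# Initial condition: a warm bump on an otherwise uniform 15-degree line.
x = np.arange(n_points)
temperature = 15.0 + 10.0 * np.exp(-((x - 25.0) ** 2) / 50.0)

for step in range(1000):
    # Discrete heat equation: each point relaxes toward its neighbors
    # (np.roll gives periodic boundaries).
    neighbors = np.roll(temperature, 1) + np.roll(temperature, -1)
    temperature += dt * diffusivity * (neighbors - 2.0 * temperature) / dx**2

print(f"After 1000 steps: min {temperature.min():.2f}, max {temperature.max():.2f} degrees")
```

Everything hard about the real problem -- winds, oceans, clouds, chemistry -- has been stripped away; what remains is the loop at the heart of the method: observe, apply the law, advance one instant, repeat.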
There are two principal difficulties in using such a method to predict atmospheric properties. The first is that the equations embodied in the computer programs are chaotic: Tiny differences in the starting conditions snowball into wildly different outcomes. The flapping of the proverbial butterfly's wings in Brazil today can percolate into a tornado in Kansas -- but had the butterfly kept its wings still, an entirely different outcome might have unfolded.
The same is true about predicting climate changes: Even if the initial information about, say, temperature is measured very accurately and plotted very precisely on a high-resolution grid of points, predictions will soon become unreliable -- and much sooner than the 50- to 100-year time frame that is being used in the global warming debate.
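The butterfly metaphor comes from Edward Lorenz's stripped-down model of atmospheric convection, and a few lines of code suffice to watch the effect happen. In this illustrative sketch (the time step and starting values are arbitrary choices), two runs of the Lorenz equations that begin one part in a hundred million apart soon bear no resemblance to each other:

```python
import numpy as np

# Sensitive dependence on initial conditions in the Lorenz system, the
# simplified convection model from which the butterfly metaphor comes.

def step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    velocity = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * velocity  # crude Euler step, adequate for illustration

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])  # differ by one part in a hundred million

for n in range(1, 4001):
    a, b = step(a), step(b)
    if n % 1000 == 0:
        print(f"t = {n * 0.01:4.0f}: trajectories differ by {np.linalg.norm(a - b):.5f}")
```

Shrinking the initial error buys surprisingly little: Because the gap grows exponentially, a thousandfold improvement in the starting measurement extends the usable forecast horizon by only a fixed increment, not a thousandfold.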
An added difficulty with the Bottom-Up approach is that while the equations governing the formation of day-to-day weather are pretty well understood, we have much less confidence in the analogous equations regulating the climate. For instance, there is a strong consensus that the interaction between the air and sea ice is important for determining climate change. But no one seems to know exactly how sea ice actually modifies the climate.
The Top-Down view of climate forecasting, on the other hand, ignores the basic physics of the atmosphere, focusing instead on actual observations of past climatic states. The basic idea of this approach is that future climatic states are already contained in the past, and that if we are clever enough to statistically process data about the climate from earlier decades and centuries, we can successfully predict what the climate will do in coming years. But there were no meteorologists recording measurements in biblical times. So where does the Top-Down climatologist obtain this information?
The Top-Downers have been very clever in gathering information about the climate in past centuries: They have examined ship captains' logs, measured the tree rings of the giant sequoias of Northern California, and even sampled ancient atmospheres from bubbles preserved in pieces of amber (a fossil resin) dating from the Jurassic Period. All these sources and more are grist for the climatologists' mill. But Top-Down prediction also has its pitfalls.
For example, the information about past climate states from a piece of amber is pretty meager. And it's not all that accurate, either. Ditto for temperature measurements from tree rings. If the statistical methods have to operate on this sort of incomplete and inaccurate data, we can expect only very general, qualitative kinds of predictions.
In addition, the statistical tools themselves are not beyond reproach. Basically, they are the same types of methods that people use to predict the rise and fall of stock prices, and we know that in this field (which has infinitely more and better data) the methods are hardly foolproof (or universally accepted).
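To see how little machinery may be involved, consider an illustrative sketch of the simplest such tool: fit a straight-line trend to a past temperature series by least squares and extrapolate it forward. The numbers below are synthetic stand-ins for a real proxy record, and the method, note, contains no physics at all:

```python
import numpy as np

# Illustrative Top-Down "forecast": fit a straight-line trend to a past
# temperature series by least squares and extrapolate it into the future.
# The series below is synthetic -- a stand-in for a real proxy record.

rng = np.random.default_rng(0)
years = np.arange(1900, 1998)
temps = 14.0 + 0.005 * (years - 1900) + rng.normal(0.0, 0.2, size=years.size)

slope, intercept = np.polyfit(years, temps, 1)  # degree-1 least-squares fit

for year in (2000, 2050, 2097):
    print(f"{year}: projected {slope * year + intercept:.2f} degrees")
```

The projection is only as good as the hidden premise that whatever produced the past trend keeps operating unchanged -- which is exactly the pitfall taken up next.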
Moreover, it's difficult to predict truly novel climatic states from looking at the past, because the statistical methods basically assume that the underlying mechanism generating these states has not changed. But the dinosaurs didn't have to worry about factories spewing carbon dioxide into the air, nor did they concern themselves with gases from cans of hair spray eating away at the ozone layer above the Earth. Climatic mechanisms have changed -- a lot! So it makes one wonder if data on ancient climatic states has much, if any, relevance to what we can expect to see in the future.
All this shows the difficulty of making predictions, whether you start from basic physical principles and let the climate emerge or begin with actual measurements of the climate and try to infer where it's heading.
The question of how much trust we can place in these computer-cum-mathematical models ultimately revolves around two issues: What question do we want the model to answer, and how accurate must that answer be? By definition, no model is perfect, nor should we expect it to be. The resolution of both these issues ultimately rests on expert human judgment; in short, mathematics plus computing does not equal magic.
John Casti is a mathematician at the Santa Fe Institute, a center devoted to the study of complex systems. His books include "Searching for Certainty: What Scientists Can Know About the Future" (Morrow, 1991) and "Would-Be Worlds" (Wiley, 1997).