Here is a very interesting analysis by Professor Briggs (mathematics/statistics) of the agreement among climate models and how these sometimes concordant results should be interpreted. As noted, the analysis is somewhat technical, but it is well worth reading because it highlights the problems with this ”agreement” and what it actually means.
See also some of my earlier posts on climate models: Basic Greenhouse Equations ”Totally Wrong” (another talk from the conference in New York), The Great Global Warming Hoax, Mera om Klimat modellernas falsarium, Klimatmodellernas falsarium, and Klimatmodellernas skojeri – Fel på 100 - 300%!
At least as interesting as the analysis itself are all the comments on it, in which, among others, people responsible for these models take part. Read this very interesting discussion as a complement to Briggs' analysis. The discussion is very illuminating about all the problems and simplifications these models are burdened with.
Quote from Gavin (who works on one of these models and defends their importance):
”The models are based on similar underlying assumptions (conservation of energy, momentum, radiative transfer etc.) but which are implemented independently and with different approximations. If you ask the question, what are the consequences of the underlying assumptions that are independent of the implementation, you naturally look for metrics where the models agree. Those metrics can be taken as being reflective of the underlying physics that everyone agrees on. This is clearly not sufficient to prove it ‘true’ in any sense (there may be shared erroneous assumptions), but it clearly must be necessary.”
”There are hundreds of interesting metrics, and no one model is the best at all of them. Instead most models are in the top 5 for some and in the bottom 5 for others.”
”i) does model agreement imply ‘truth’? Truth is in quotes because neither you nor I can ever define the true state of the climate (or any observed feature within it), and so every statement is about an approximation to the real state of the world. Specifically, take the situation of stratospheric chemistry in the early 1980s. Most of the chemistry involved in ozone depletion was known and all these models agreed that the decline in strat. ozone would be smooth but slow (in the absence of CFC mitigation). They were all wrong. The decline in strat ozone in the Antarctic polar vortex was fast and dramatic. The missing piece was the presence of specific reactions on the polar stratospheric clouds that enhanced by orders of magnitude the processing of the chlorine. Thus the model agreement did demonstrate the (correct) implications of the (known) underlying chemistry, but obviously did not get the outcome right because the key reactions didn’t turn out to be included by any model. Hence agreement, while necessary, is not sufficient.
iii) IPCC has hundreds of graphs showing different model metrics, and rightly so. Most of the important impacts are in some way tied to the global mean temperature change, and so that is used as a useful shorthand. But don’t confuse iconisation of specific graphs with a real statement about importance. Would you pick the one model that has the best annual mean temperature, or the best seasonal cycle or the best interannual variability, in Europe? in N. America? in Africa? I guarantee no one model is ‘best’ on all those metrics.
iv) It is not a priori obvious that the mean of multiple models should outperform the best of any individual one. This remains an unexplained but interesting result. The upshot is that you can treat the model ensemble like a random sample to reduce errors. Of course all of the models are biased (as is the mean, but less so) and if I suggested anything different, I apologise.”
Quote from Andrew:
”This brings me to one of the major problems I have with the models. They have different input values of very basic variables, like climate sensitivity, yet they can all be made to fit the observed changes in surface temperature. How can this be? The reason is pretty obvious, actually: the models all did so not because they are all correct (which is impossible) but because they were ~made~ to. Every modeler knew the answer ahead of time. They use ”aerosols” and ”ocean delay” as highly ”adjustable” fudge factors. Natural forcings are also unknown, and can be ”adjusted”. The models can match history not because they are good models (they aren’t) but because they have been ~made~ to do so. On the other hand, if you test the models with measurements other than those they were adjusted to fit, they almost invariably fail miserably, every one of them, to match what we see there.
If every model agrees, it probably is because they are all doing the same thing wrong.”
Quote from Mike D:
”Gavin makes the excellent point, attributed to George Box, that ”all models are wrong, but some are useful.” The usefulness of models falls into two broad classes: theory and prediction. Theoretical models attempt to map known physical, chemical, and biological relationships. Predictive models (sometimes called ”black box”) attempt to make accurate predictions.
There is a strong tendency to confuse or combine these utilities, and that is true in any modeling (my specialty is forest growth and yield models). Proponents of theoretical models are often adamant that their models are best (a value judgement) and insist that they be used in predictive situations. Predictive modelers, in contrast, may use crude rules of thumb that are unattractive to theoreticians, but predictive modelers emphasize that their goal is accurate prediction.
Hence the assertion that models are wrong must also be bifurcated. Theoretical models are wrong if the theories behind them are invalid. Predictive models are wrong if they make poor predictions. It is easy (but not useful) to confuse these wrong-itudes.
The best weather prediction models are more empirical than theoretical. They look at current conditions (fronts, pressure gradients, jet streams, etc.) as they are cadastrally arrayed across the globe, and compare those to past dates when the same or very similar arrays occurred. Then the weather outcomes of the similar past conformations are examined and used to predict the immediate future weather. Not much theory to that, more of a data mining of the past; hence the descriptor ”empirical.”
Climate models are much more theoretical because we basically lack empirical data about past climate. Some attempts are made to use proxies, sunspot data, Milankovitch cycles, etc. but the data are sparse and time frames vary widely. In general we can predict a decline in temperatures and a return to Ice Age conditions based on fairly good evidence at long time scales, but when and how that slide will occur is imprecise at short time scales. When theoretical GHG ”forcings” are included in climate models, empiricism is almost completely absent.
So we are in a situation where theoretical climate models are being used to make short-term predictions. Further, those predictions have generated some fairly Draconian suggested measures that are extremely distasteful, at least to many people. More taxes, less freedom, ”sacrifices”, economic disruptions etc. are being recommended (imposed) based on the predictions of theoretical models. Political ”solutions” to fuzzy predictions from ”wrong” and improperly classed models are greatly feared, and I think properly so.
The discourse cannot help but become impolite in this situation. Neither ”side” is immune. How much better it would be if we realized that we cannot predict the climate (in the short term) and instead prepared to be adaptable to whatever happens, while preserving (enhancing) as much freedom, justice, and prosperity as we possibly can.”
The analysis can be found here:
Read other bloggers' opinions on <a href="http://bloggar.se/om/milj%F6" rel="tag">miljö</a>
Why multiple climate model agreement is not that exciting
April 8th, 2008
There are several global climate models (GCMs) produced by many different groups. There are a half dozen from the USA, some from the UK Met Office, a well known one from Australia, and so on. GCMs are a truly global effort. These GCMs are of course referenced by the IPCC, and each version is known to the creators of the other versions.
Much is made of the fact that these various GCMs show rough agreement with each other. People have the sense that, since so many ”different” GCMs agree, we should have more confidence that what they say is true. Today I will discuss why this view is false. This is not an easy subject, so we will take it slowly.
Suppose first that you and I want to predict tomorrow’s high temperature in Central Park in New York City (this example naturally works for anything we want to predict, from stock prices to number of people who will vote for a certain USA presidential candidate). I have a weather model called MMatt. I run this model on my computer and it predicts 66 degrees F. I then give you this model so that you can run it on your computer, but you are vain and rename the model to MMe. You make the change, run the model, and announce that MMe predicts 66 degrees F.
Are we now more confident that tomorrow’s high temperature will be 66 because two different models predicted that number?
Obviously not: changing the name does not change the model. Simply running the model twice, or a dozen, or a hundred times, does not give us any additional evidence than if we only ran it just once. We reach the same conclusion if instead of predicting tomorrow’s high temperature, we use GCMs to predict next year’s global mean temperature: no matter how many times we run the model, or how many different places in the world we run it, we are no more confident of the final prediction than if we only ran the model once.
So Point One of why multiple GCMs agreeing is not that exciting is that if all the different GCMs are really the same model but each just has a different name, then we have not gained new information by running the models many times. And we might suspect that if somebody keeps telling us that ”all the models agree” to imply there is greater certainty, he either might not understand this simple point or he has ulterior motives.
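Point One can be made concrete with a toy sketch (illustrative only; the ”model”, the seed values and the error sizes are all invented here):

```python
import random

truth = 12.0  # the future temperature we are trying to predict


def forecast(seed):
    """A toy 'model': the truth plus an error fixed by the model's internals (the seed)."""
    rng = random.Random(seed)
    return truth + rng.gauss(0, 2)


# Ten genuinely different models: their errors differ and carry real information
independent = [forecast(seed) for seed in range(10)]

# "MMatt" renamed ten times over: one model run under ten different names
renamed = [forecast(42) for _name in range(10)]

print(max(independent) - min(independent))  # positive spread
print(max(renamed) - min(renamed))          # 0.0: renaming adds nothing
```

However many names the second ”ensemble” carries, its members are numerically identical, so its spread is exactly zero and running it again tells us nothing new.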
Are all the many GCMs touted by the IPCC the same except for name? No. Since they are not, we might hope to gain much new information from examining all of them. Unfortunately, they are not, and cannot be, that different either. We cannot here go into detail of each component of each model (books are written on these subjects), but we can make some broad conclusions.
The atmosphere, like the ocean, is a fluid and it flows like one. The fundamental equations of motion that govern this flow are known. They cannot differ from model to model; or to state this positively, they will be the same in each model. On paper, anyway, because those equations have to be approximated in a computer, and there is not universal agreement, nor is there a proof, of the best way to do this. So the manner each GCM implements this approximation might be different, and these differences might cause the outputs to differ (though this is not guaranteed).
The equations describing the physics of a photon of sunlight interacting with our atmosphere are also known, but these interactions happen on a scale too small to model, so the effects of sunlight must be parameterized, which is a semi-statistical semi-physical guess of how the small scale effects accumulate to the large scale used in GCMs. Parameterization schemes can differ from model to model and these differences almost certainly will cause the outputs to differ.
And so on for the other components of the models. Already, then, it begins to look like there might be a lot of different information available from the many GCMs, so we would be right to make something of the cases where these models agree. Not quite.
The groups that build the GCMs do not work independently of one another (nor should they). They read and write for the same journals, attend the same conferences, and are familiar with each other’s work. In fact, many of the components used in the different GCMs are the same, even exactly the same, in more than one model. The same person or persons may be responsible, through some line of research, for a particular parameterization used in all the models. Computer code is shared. Thus, while there are some reasons for differing output (and we haven’t covered all of them yet), there are many more reasons that the output should agree.
Results from different GCMs are thus not independent, so our enthusiasm generated because they all roughly agree should at least be tempered, until we understand how dependent the models are.
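The effect of this dependence can be sketched with a toy simulation (all numbers invented): give every ”model” the same shared error, standing in for shared code and shared assumptions, plus its own idiosyncratic error, and watch what averaging does.

```python
import random

random.seed(1)
truth = 12.0
shared_error = 1.5  # error common to all models (shared components, shared assumptions)

# 100 "models": the shared error plus an independent idiosyncratic error each
predictions = [truth + shared_error + random.gauss(0, 1.0) for _ in range(100)]
ensemble_mean = sum(predictions) / len(predictions)

# The idiosyncratic parts average out; the shared part does not.
print(ensemble_mean - truth)  # stays near 1.5 however many models we add
```

Averaging more and more dependent models drives down only the independent part of the error; the part they have in common survives in the ensemble mean no matter how large the ensemble grows.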
This next part is tricky, so stay with me. The models differ in more ways than just the physical representations previously noted. They also differ in strictly computational ways and through different hypotheses of how, for example, CO2 should be treated. Some models use a coarse grid point representation of the earth and others use a finer grid: the first method generally attempts to do better with the physics but sacrifices resolution, the second method attempts to provide a finer look at the world, while typically sacrificing accuracy in other parts of the model. While the positive feedback in temperature caused by increasing CO2 is the same in spirit for all models, the exact way it is implemented in each can differ.
Now, each climate model, as a result of the many approximations that must be made, has, if you like, hundreds (even thousands) of knobs that can be dialed to and fro. Each twist of a dial changes the output, so tweaking these dials is a necessary part of the model-building process. Much time is spent tuning the models so that they reproduce, at least roughly, the past, already-observed climate. Thus, the fact that all the GCMs can roughly represent the past climate is again not as interesting as it first seemed. They better had, or nobody would seriously consider the model a contender.
Reproducing past data is a necessary but not sufficient condition that the models can predict future data. Thus, it is also not at all clear how these tweakings affect the accuracy in predicting new data, which is data that was not used in any way to build the models, that is, future data. Predicting future data has several components.
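The tuning problem can be illustrated with a deliberately crude sketch (the ”sensitivity” values, the ”aerosol” offset and the forcing numbers are all invented): two toy models with very different sensitivities are each tuned, via one adjustable offset, to reproduce the observed record, yet they disagree sharply about the future.

```python
# Pretend truth: temperature responds to forcing with sensitivity 0.5
past_forcing = [0.0, 0.2, 0.4, 0.6, 0.8]
observed = [14.0 + 0.5 * f for f in past_forcing]


def tuned_model(sensitivity):
    """A toy GCM with one knob: an 'aerosol' offset tuned to match the record."""
    offset = observed[-1] - (14.0 + sensitivity * past_forcing[-1])
    return lambda f: 14.0 + sensitivity * f + offset


low, high = tuned_model(0.3), tuned_model(0.9)

# Both "hindcast" the end of the observed record perfectly...
print(low(0.8), high(0.8))  # both ~14.4, equal to observed[-1]

# ...but diverge for a hypothetical future forcing of 2.0:
print(low(2.0), high(2.0))  # ~14.76 vs ~15.48
```

Matching the past record here says nothing about which sensitivity is right, because the tunable offset absorbs the difference; only data the knobs were never fitted to can separate the two.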
It might be that one of the models, say GCM1, is the best of the bunch in the sense that it matches most closely future data. If this is always the case, if GCM1 is always closest (using some proper measure of skill), then it means that the other models are not as good; they are wrong in some way, and thus they should be ignored when making predictions. The fact that they come close to GCM1 should not give us more reason to believe the predictions made by GCM1. The other models are not providing new information in this case. This argument, which is admittedly subtle, also holds if a certain group of GCMs is always better than the remainder of models. Only the close group can be considered independent evidence.
Even if you don’t follow, or believe, that argument, there is also the problem of how to quantify the certainty of the GCM predictions. I often see pictures like this:
Each horizontal line represents the output of a GCM, say predicting next year’s average global temperature. It is often thought that the spread of the outputs can be used to describe a probability distribution over the possible future temperatures. The probability distribution is the black curve drawn over the predictions, and neatly captures the range of possibilities. This particular picture looks to say that there is about a 90% chance that the temperature will be between 10 and 14 degrees. It is at this point that people fool themselves, probably because the uncertainty in the forecast has become prettily quantified by some sophisticated statistical routines. But the probability estimate is just plain wrong.
How do I know this? Suppose that each of the eight GCMs predicted that the temperature will be 12 degrees. Would we then say, would anybody say, that we are now 100% certain in the prediction?
Again, obviously not. Nobody would believe that if all GCMs agreed exactly (or nearly so) that we would be 100% certain of the outcome. Why? Because everybody knows that these models are not perfect.
The exact same situation was met by meteorologists when they tried this trick with weather forecasts (this is called ensemble forecasting). They found two things. First, the probability forecasts made by this averaging process were far too sure: the probabilities, like our black curve, were too tight and had to be made much wider. Second, the averages were usually biased, meaning that the individual forecasts should all be shifted upwards or downwards by some amount.
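The too-sure behaviour falls out of a toy simulation (numbers invented): when the members of an ensemble share a common error component, the ensemble spread covers the truth far less often than its width suggests.

```python
import random

random.seed(2)
trials = 10_000
hits = 0
for _ in range(trials):
    truth = random.gauss(0, 1)     # the quantity being forecast
    shared = random.gauss(0, 0.8)  # error component shared by all eight models
    ensemble = [truth + shared + random.gauss(0, 0.3) for _ in range(8)]
    if min(ensemble) <= truth <= max(ensemble):
        hits += 1

# If the truth behaved like a ninth exchangeable draw, it would fall inside
# the ensemble range about 7/9 of the time; shared errors cut this badly.
print(hits / trials)  # well under the 7/9 benchmark
```

The ensemble looks tight and confident, but whenever the shared error pushes every member to the same side of the truth, the truth falls outside the whole spread, which is exactly the over-confidence the meteorologists found.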
This should also be true for GCMs, but the fact has not yet been widely recognized. The amount of certainty we have in future predictions should be less, but we also have to consider the bias. Right now, all GCMs are predicting warmer temperatures than are actually occurring. That means the GCMs are wrong, or biased, or both. The GCM forecasts should be shifted lower, and our certainty in their predictions should be decreased.
All of this implies that we should take the agreement of GCMs far less seriously than is often supposed. And if anything, the fact that the GCMs routinely over-predict is positive evidence of something: that some of the suppositions of the models are wrong.