In the first part (see my post: The UN Climate Change Numbers Hoax eller IPCC:s lögn!) I went through the IPCC's reports and showed how a very, very small number of scientists (5, yes, 5) ultimately reviewed Working Group 1's report - ”The Physical Science Basis”. This group (WG 1) is of fundamental importance, since it is the group that produces the scientific data on which the later analyses and commentary in the other two Working Groups are based.
So not 2,500, as the IPCC, Al Gore et consortes so stubbornly claim when they try to silence every critical voice and declare ”The debate is over”.
I also showed how the IPCC has totally abused the so-called peer review process. The IPCC has turned the entire normal scientific peer review process upside down, so that ”editors” can - and often do - reject or alter comments from the scientists. These ”editors” systematically delete, or refuse to approve, changes that do not conform to the ”Summary for Policymakers”.
The IPCC DEMANDS - it truly is a world turned upside down - that the 11 ”scientific” chapters MUST AGREE with the Summary for Policymakers. So these scientific chapters MUST WAIT until the Summary for Policymakers has been published BEFORE they may be published.
In other words: the policy is decided first, and then the science has to adapt to it. If this were a serious process, the scientific chapters would be written first and THEN summarized.
So that is how things stand with the IPCC and its ”scientific” reports. And with the ”consensus” that is said to prevail! And with the claim that a FULL 2,500 scientists - sorry, 5 - stand behind these assertions.
In truth, the entire handling of the IPCC's reports is a political and scientific scandal of gigantic proportions. And NATURALLY the mass media do NOT report on this, or on the quicksand these reports are built on.
But it gets worse! Yes, that is actually true, hard as it may be to believe. Isn't it fantastic how ”scientific” it all is?
In this part we will examine the methods the IPCC uses for its cocksure ”predictions” and claims about the future.
Professor Kesten Green (Business and Economic Forecasting Unit, Monash University) and Professor Scott Armstrong (The Wharton School, University of Pennsylvania) are among the leading figures and pioneers in developing methods for, and research on, scientifically based forecasting.
Armstrong has, among other things, produced the ”bible” of the field - Principles of Forecasting (2001): ”the work of 40 internationally-known experts on forecasting methods and 123 reviewers who were also leading experts on forecasting methods. The summarizing process alone required a four-year effort”. http://www.forecastingprinciples.com/armstrongpublishersinformation.pdf
As a result of all this research and all these method studies, a list of 140 ”forecasting principles” has been agreed upon, which must be followed (at least for the most part) if forecasts are to be considered scientifically based - and if those forecasts are to have any credibility.
Here is a little more about the methods:
Armstrong and Green have now applied these principles in an audit of the IPCC's reports, to see how much science actually lies behind the IPCC's claims and predictions.
Predictions which the IPCC uses to scare the life out of people, and which our politicians and mass media slavishly follow. ”The debate is, as they say, over and there is nothing to discuss”, as the IPCC, Al Gore et consortes claim in order to silence their critics.
The result is a total demolition of the so-called scientific methods the IPCC has used. In other words, IF SUCH A THING IS POSSIBLE, the scientific and political scandal is EVEN WORSE!
With this, the scandal has gone beyond ”gigantic proportions”. The only question is what epithet to attach to a scandal in which so many are involved - and from which so many profit by keeping this farce and hysteria going.
And as usual, it is ordinary people who lose out. They are scared out of their wits by this doomsday hysteria and given a guilty conscience over something they cannot influence. And in the end they pay for ”the whole party” - trillions of kronor (1,000,000,000,000) that could have been used to solve REAL problems HERE AND NOW!
Quotes from the analysis:
”In 2007, the Intergovernmental Panel on Climate Change’s Working Group One, a panel of experts established by the World Meteorological Organization and the United Nations Environment Programme, issued its Fourth Assessment Report. The Report included predictions of dramatic increases in average world temperatures over the next 92 years and serious harm resulting from the predicted temperature increases. Using forecasting principles as our guide we asked: Are these forecasts a good basis for developing public policy? Our answer is ”no”.
To provide forecasts of climate change that are useful for policy-making, one would need to forecast (1) global temperature, (2) the effects of any temperature changes, and (3) the effects of feasible alternative policies. Proper forecasts of all three are necessary for rational policy making.
The IPCC WG1 Report was regarded as providing the most credible long-term forecasts of global average temperatures by 31 of the 51 scientists and others involved in forecasting climate change who responded to our survey. We found no references in the 1056-page Report to the primary sources of information on forecasting methods despite the fact these are conveniently available in books, articles, and websites. We audited the forecasting processes described in Chapter 8 of the IPCC’s WG1 Report to assess the extent to which they complied with forecasting principles. We found enough information to make judgments on 89 out of a total of 140 forecasting principles. The forecasting procedures that were described violated 72 principles.
Many of the violations were, by themselves, critical.
The forecasts in the Report were not the outcome of scientific procedures. In effect, they were the opinions of scientists transformed by mathematics and obscured by complex writing. Research on forecasting has shown that experts’ predictions are not useful in situations involving uncertainty and complexity. We have been unable to identify any scientific forecasts of global warming. Claims that the Earth will get warmer have no more credence than saying that it will get colder.”
So, of the 140 ”forecasting principles”, it was ONLY possible to rate 89, because of the deficient methodology, sourcing etc. in the IPCC's report.
And of those 89 principles, the IPCC VIOLATES 72!
Most of these principles are decisive and critical if a method or forecast is to be considered scientific at all, and if its results are to be credible or have any relevance!
So NOT ONLY do the IPCC's ”editors” rule and direct so that the ”scientific” chapters AGREE with the Summary for Policymakers, systematically deleting, or refusing to approve, changes that do not conform to it.
It now also turns out that the whole methodology behind the ”predictions” and claims in the report is worthless, since the authors have not even followed the most elementary requirements for the work to be considered to have ANY scientific value at all!
Here are some of the fundamental principles that the IPCC VIOLATES:
”As a result, those who forecast in ignorance of the forecasting research literature are unlikely to produce useful predictions. Here are some well-established principles that apply to long-term forecasts for complex situations where the causal factors are subject to uncertainty (as with climate):
- Unaided judgmental forecasts by experts have no value. This applies whether the opinions are expressed in words, spreadsheets, or mathematical models. It applies regardless of how much scientific evidence is possessed by the experts.
Among the reasons for this are:
a) Complexity: People cannot assess complex relationships through unaided observations.
b) Coincidence: People confuse correlation with causation.
c) Feedback: People making judgmental predictions typically do not receive unambiguous feedback they can use to improve their forecasting.
d) Bias: People have difficulty in obtaining or using evidence that contradicts their initial beliefs. This problem is especially serious for people who view themselves as experts.
- Agreement among experts is weakly related to accuracy. This is especially true when the experts communicate with one another and when they work together to solve problems, as is the case with the IPCC process.
- Complex models (those involving nonlinearities and interactions) harm accuracy because their errors multiply. Ascher (1978), refers to the Club of Rome’s 1972 forecasts where, unaware of the research on forecasting, the developers proudly proclaimed, ”in our model about 100,000 relationships are stored in the computer.” Complex models also tend to fit random variations in historical data well, with the consequence that they forecast poorly and lead to misleading conclusions about the uncertainty of the outcome. Finally, when complex models are developed there are many opportunities for errors and the complexity means the errors are difficult to find. Craig, Gadgil, and Koomey (2002) came to similar conclusions in their review of long-term energy forecasts for the US that were made between 1950 and 1980.
- Given even modest uncertainty, prediction intervals are enormous. Prediction intervals (ranges outside which outcomes are unlikely to fall) expand rapidly as time horizons increase, for example, so that one is faced with enormous intervals even when trying to forecast a straightforward thing such as automobile sales for General Motors over the next five years.
- When there is uncertainty in forecasting, forecasts should be conservative.
Uncertainty arises when data contain measurement errors, when the series are unstable, when knowledge about the direction of relationships is uncertain, and when a forecast depends upon forecasts of related (causal) variables. For example, forecasts of no change were found to be more accurate than trend forecasts for annual sales when there was substantial uncertainty in the trend lines (Schnaars and Bavuso 1986). This principle also implies that forecasts should revert to long-term trends when such trends have been firmly established, do not waver, and there are no firm reasons to suggest that they will change. Finally, trends should be damped toward no-change as the forecast horizon increases.”
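The point about exploding prediction intervals can be made concrete with a minimal sketch (my own illustration, not taken from Armstrong and Green): for the simplest uncertain process, a random walk whose steps have standard deviation sigma, the forecast-error variance after h steps is h times sigma squared, so the 95% prediction-interval half-width grows with the square root of the horizon.

```python
import math

def pi_half_width(sigma, horizon, z=1.96):
    """95% prediction-interval half-width for a random walk:
    the forecast-error variance after h steps is h * sigma**2."""
    return z * sigma * math.sqrt(horizon)

# Even modest step noise gives an interval ten times wider at a
# 100-step horizon than one step ahead.
for h in (1, 10, 100):
    print(h, round(pi_half_width(0.1, h), 2))
```

At a 100-step horizon the interval is ten times as wide as at one step, which is why even ”modest uncertainty” yields enormous intervals for long-range forecasts.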
THE FORECASTING PROBLEM
In determining the best policies to deal with the climate of the future, a policy maker first has to select an appropriate statistic to use to represent the changing climate. By convention, the statistic is the averaged global temperature as measured with thermometers at ground stations throughout the world, though in practice this is a far from satisfactory metric (see, e.g., Essex et al., 2007). It is then necessary to obtain forecasts and prediction intervals for each of the following:
1. Mean global temperature in the long-term (say 10 years or longer).
2. Effects of temperature changes on humans and other living things.
If accurate forecasts of mean global temperature can be obtained and the changes are substantial, then it would be necessary to forecast the effects of the changes on the health of living things and on the health and wealth of humans.
The concerns about changes in global mean temperature are based on the assumption that the earth is currently at the optimal temperature and that variations over years (unlike variations within days and years) are undesirable.
For a proper assessment, costs and benefits must be comprehensive. (For example, policy responses to Rachel Carson’s Silent Spring should have been based in part on forecasts of the number of people who might die from malaria if DDT use were reduced).
3. Costs and benefits of feasible alternative policy proposals.
If valid forecasts of the effects of the temperature changes on the health of living things and on the health and wealth of humans can be obtained and the forecasts are for substantial harmful effects, then it would be necessary to forecast the costs and benefits of proposed alternative policies that could be successfully implemented.
A policy proposal should only be implemented if valid and reliable forecasts of the effects of implementing the policy can be obtained and the forecasts show net benefits.
Failure to obtain a valid forecast in any of the three areas listed above would render forecasts for the other areas meaningless. We address primarily, but not exclusively, the first of the three forecasting problems: obtaining long-term forecasts of global temperature.”
The scale used to judge whether a method or forecast is scientific - that is, whether it follows these 140 principles - runs from -2 to +2 for each of the 140 principles. ”A rating of +2 indicates the forecasting procedures were consistent with a principle, and a rating of -2 indicates failure to comply with a principle. Sometimes some aspects of a procedure are consistent with a principle but others are not. In such cases, the rater must judge where the balance lays.”
The average rating for the whole IPCC report lies between -1.37 and -1.35. How is that for ”science”!
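As a sketch of how such an average comes about (the numbers below are hypothetical, chosen only to land near the audit's reported average - they are not the actual per-principle ratings):

```python
# Hypothetical ratings for the 89 rated principles on the audit's -2..+2
# scale: 72 violations (60 clear, 12 partial) and 17 neutral-to-compliant.
ratings = [-2] * 60 + [-1] * 12 + [0] * 9 + [1] * 5 + [2] * 3

mean_rating = sum(ratings) / len(ratings)
print(round(mean_rating, 2))  # → -1.36, i.e. far toward "failure to comply"
```

On a scale where +2 means full compliance, an average near -1.36 sits close to the ”failure to comply” end.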
(A detailed breakdown of the figures is available here: http://www.forecastingprinciples.com/Public_Policy/Forecasting_Audit_combined2.pdf)
And if we then look in detail at which scientific principles the IPCC violates:
”Of the 140 principles in the Forecasting Audit, we judged that 127 were relevant for auditing the forecasting procedures described in Chapter 8. The Chapter provided insufficient information to rate the forecasting procedures that were used against 38 of these 127 principles. For example, we did not rate the Chapter against Principle 10.2: ”Use all important variables.” At least in part, our difficulty in auditing the Chapter was due to the fact that it was abstruse. It was sometimes difficult to know whether the information we sought was present or not.
Of the 89 forecasting principles that we were able to rate, the Chapter violated 72. Of these, we agreed that there were clear violations of 60 principles.
Principle 1.3 ”Make sure forecasts are independent of politics” is an example of a principle that is clearly violated by the IPCC process. This principle refers to keeping the forecasting process separate from the planning process. The term ”politics” is used in the broad sense of the exercise of power. David Henderson, a former Head of Economics and Statistics at the OECD, gave a detailed account of how the IPCC process is directed by non-scientists who have policy objectives and who believe that anthropogenic global warming is real and dangerous (Henderson 2007).
The clear violations we identified are listed in Table 1.
Table 1. Clear Violations
- Describe decisions that might be affected by the forecast.
- Prior to forecasting, agree on actions to take assuming different possible forecasts.
- Make sure forecasts are independent of politics.
- Consider whether the events or series can be forecasted.
Identifying Data Points
- Avoid biased data sources.
- Use unbiased and systematic procedures to collect data.
- Ensure that information is reliable and that measurement error is low.
- Ensure that the information is valid.
- List all important selection criteria before selecting methods.
- Ask unbiased experts to rate potential methods.
- Select simple methods unless empirical evidence calls for a more complex approach.
- Compare track records of various forecasting methods.
- Assess acceptability and understandability of methods to users.
- Examine the value of alternative forecasting methods.
Implementing Methods: General
- Keep forecasting methods simple.
- Be conservative in situations of high uncertainty or instability.
Implementing Quantitative Methods
- Tailor the forecasting model to the horizon.
- Do not use ”fit” to develop the model.
Implementing Methods: Quantitative Models with Explanatory Variables
- Apply the same principles to forecasts of explanatory variables.
- Shrink the forecasts of change if there is high uncertainty for predictions of the explanatory variables.
Integrating Judgmental and Quantitative Methods
- Use structured procedures to integrate judgmental and quantitative methods.
- Use structured judgments as inputs of quantitative models.
- Use prespecified domain knowledge in selecting, weighing, and modifying quantitative models.
- Combine forecasts from approaches that differ.
- Use trimmed means, medians, or modes.
- Use track records to vary the weights on component forecasts.
- Compare reasonable methods.
- Tailor the analysis to the decision.
- Describe the potential biases of the forecasters.
- Assess the reliability and validity of the data.
- Provide easy access to the data.
- Provide full disclosure of methods.
- Test assumptions for validity.
- Test the client’s understanding of the methods.
- Use direct replications of evaluations to identify mistakes.
- Replicate forecast evaluations to assess their reliability.
- Compare forecasts generated by different methods.
- Examine all important criteria.
- Specify criteria for evaluating methods prior to analyzing data.
- Assess face validity.
- Use error measures that adjust for scale in the data.
- Ensure error measures are valid.
- Use error measures that are not sensitive to the degree of difficulty in forecasting.
- Avoid error measures that are highly sensitive to outliers.
- Use out of sample (ex-ante) error measures.
- (Revised) Tests of statistical significance should not be used.
- Do not use root mean square error (RMSE) to make comparisons among forecasting methods.
- Base comparisons of methods on large samples of forecasts.
- Conduct explicit cost-benefit analysis.
- Use objective procedures to estimate explicit prediction intervals.
- Develop prediction intervals by using empirical estimates based on realistic representations of forecasting situations.
- When assessing PIs, list possible outcomes and assess their likelihoods.
- Obtain good feedback about forecast accuracy and the reasons why errors occurred.
- Combine prediction intervals from alternative forecast methods.
- Use safety factors to adjust for overconfidence in PIs.
- Present forecasts and supporting data in a simple and understandable form.
- Provide complete, simple, and clear explanations of methods.
- Present prediction intervals.
Learning That Will Improve Forecasting Procedures
- Establish a formal review process for forecasting methods.
- Establish a formal review process to ensure that forecasts are used properly.
We also found 12 ”apparent violations”. These principles, listed in Table 2, are ones for which one or both of us had some concerns over the coding or where we did not agree that the procedures clearly violated the principle.
Table 2. Apparent Violations
- Obtain decision makers’ agreement on methods.
Structuring the Problem
- Identify possible outcomes prior to making forecast.
- Decompose time series by level and trend.
Identifying Data Sources
- Ensure the data match the forecasting situation.
- Obtain information from similar (analogous) series or cases. Such information may help to estimate trends.
Implementing Judgmental Methods
- Obtain forecasts from heterogeneous experts.
- Design test situations to match the forecasting problem.
- Describe conditions associated with the forecasting problem.
- Use multiple measures of accuracy.
- Do not assess uncertainty in a traditional (unstructured) group meeting.
- Incorporate the uncertainty associated with the prediction of the explanatory variables in the prediction intervals.
- Describe your assumptions.
Finally, we lacked sufficient information to make ratings on many of the relevant principles. These are listed in Table 3.
Table 3. Lack of Information
Structuring the Problem
- Tailor the level of data aggregation (or segmentation) to the decisions.
- Decompose the problem into parts.
- Decompose time series by causal forces.
- Structure problems to deal with important interactions among causal variables.
- Structure problems that involve causal chains.
Identifying Data Sources
- Use theory to guide the search for information on explanatory variables.
- Obtain all the important data.
- Avoid collection of irrelevant data.
- Clean the data.
- Use transformations as required by expectations.
- Adjust intermittent series.
- Adjust for unsystematic past events.
- Adjust for systematic events.
- Use graphical displays for data.
Implementing Methods: General
- Adjust for events expected in the future.
- Pool similar types of data.
- Ensure consistency with forecasts of related series and related time periods.
Implementing Judgmental Methods
- Ask experts to justify their forecasts in writing.
- Obtain forecasts from enough respondents.
- Obtain multiple forecasts of an event from each expert.
Implementing Quantitative Methods
- Match the model to the underlying phenomena.
- Weigh the most relevant data more heavily.
- Update models frequently.
Implementing Methods: Quantitative Models with Explanatory Variables
- Use all important variables.
- Rely on theory and domain expertise when specifying directions of relationships.
- Use theory and domain expertise to estimate or limit the magnitude of relationships.
- Use different types of data to measure a relationship.
- Forecast for alternative interventions.
Integrating Judgmental and Quantitative Methods
- Limit subjective adjustments of quantitative forecasts.
- Use formal procedures to combine forecasts.
- Start with equal weights.
- Use domain knowledge to vary weights on component forecasts.
- Use objective tests of assumptions.
- Avoid biased error measures.
- Do not use R-square (either standard or adjusted) to compare forecasting models.
- Ensure consistency of the forecast horizon.
- Ask for a judgmental likelihood that a forecast will fall within a pre-defined minimum-maximum interval.
Learning That Will Improve Forecasting Procedures
- Seek feedback about forecasts.
Some of these principles might be surprising to those who have not seen the evidence - ”Do not use R-square (either standard or adjusted) to compare forecasting models.” Others are principles that any scientific paper should be expected to address - ”Use objective tests of assumptions.” Many of these principles are important for climate forecasting, such as ”Limit subjective adjustments of quantitative forecasts.”
Some principles are so important that any forecasting process that does not adhere to them cannot produce valid forecasts. We address four such principles, all of which are based on strong empirical evidence. All four of these key principles were violated by the forecasting procedures described in IPCC Chapter 8.
Consider whether the events or series can be forecasted (Principle 1.4)
This principle refers to whether a forecasting method can be used that would do better than a naïve method. A common naïve method is to assume that things will not change.
Interestingly, naïve methods are often strong competitors with more sophisticated alternatives. This is especially so when there is much uncertainty. To the extent that uncertainty is high, forecasters should emphasize the naïve method. (This is illustrated by regression model coefficients: when uncertainty increases, the coefficients tend towards zero.) Departures from the naïve model tend to increase forecast error when uncertainty is high.
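The claim that departures from the naïve model increase error when uncertainty is high can be illustrated with a small simulation (my own sketch, not code from the analysis): on pure random walks, where every apparent trend in the history is noise, fully extrapolating the recent trend produces larger forecast errors than simply forecasting no change.

```python
import random

random.seed(0)

def mean_abs_error(trend_weight, n_series=2000):
    """Mean absolute 5-step-ahead error over simulated random walks.

    trend_weight = 0.0 is the naive no-change forecast; 1.0 fully
    extrapolates the trend observed over the last five steps.
    """
    total = 0.0
    for _ in range(n_series):
        steps = [random.gauss(0, 1) for _ in range(25)]
        walk = [sum(steps[:i + 1]) for i in range(25)]
        history, future = walk[:20], walk[20:]
        recent_trend = (history[-1] - history[-6]) / 5  # avg recent step
        forecast = history[-1] + trend_weight * recent_trend * len(future)
        total += abs(forecast - future[-1])
    return total / n_series

naive_err = mean_abs_error(0.0)
trend_err = mean_abs_error(1.0)
print(naive_err < trend_err)  # the naive forecast wins on these series
```

Here the trend follower's error is the naïve error plus an independent source of noise, which is exactly the ”coefficients tend towards zero” effect described above.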
In our judgment, the uncertainty about global mean temperature is extremely high. We are not alone. Dyson (2007), for example, wrote in reference to attempts to model climate that ”The real world is muddy and messy and full of things that we do not yet understand.” There is even controversy among climate scientists over something as basic as the current trend. One researcher, Carter (2007, p. 67) wrote:
…the slope and magnitude of temperature trends inferred from time-series data depend upon the choice of data end points. Drawing trend lines through highly variable, cyclic temperature data or proxy data is therefore a dubious exercise. Accurate direct measurements of tropospheric global average temperature have only been available since 1979, and they show no evidence for greenhouse warming. Surface thermometer data, though flawed, also show temperature stasis since 1998.
Global climate is complex and scientific evidence on key relationships is weak or absent. For example, does increased CO2 in the atmosphere cause high temperatures or do high temperatures increase CO2? In opposition to the major causal role assumed for CO2 by the IPCC authors (Le Treut et al. 2007), Soon (2007) presents evidence that the latter is the case and that CO2 variation plays at most a minor role in climate change.
Measurements of key variables such as local temperatures and a representative global temperature are contentious and subject to revision in the case of modern measurements because of inter alia the distribution of weather stations and possible artifacts such as the urban heat island effect, and are often speculative in the case of ancient ones, such as those climate proxies derived from tree ring and ice-core data (Carter 2007).
Finally, it is difficult to forecast the causal variables. Stott and Kettleborough (2002, p. 723) summarize:
Even with perfect knowledge of emissions, uncertainties in the representation of atmospheric and oceanic processes by climate models limit the accuracy of any estimate of the climate response. Natural variability, generated both internally and from external forcings such as changes in solar output and explosive volcanic eruptions, also contributes to the uncertainty in climate forecasts.
The already high level of uncertainty rises rapidly as the forecast horizon increases. While the authors of Chapter 8 claim that the forecasts of global mean temperature are well-founded, their language is imprecise and relies heavily on such words as ”generally,” ”reasonably well,” ”widely,” and ”relatively” [to what?]. The Chapter makes many explicit references to uncertainty. For example, the phrases ”. . . it is not yet possible to determine which estimates of the climate change cloud feedbacks are the most reliable” and ”Despite advances since the TAR, substantial uncertainty remains in the magnitude of cryospheric feedbacks within AOGCMs” appear on p. 593.
In discussing the modeling of temperature, the authors wrote, ”The extent to which these systematic model errors affect a model’s response to external perturbations is unknown, but may be significant” (p. 608), and, ”The diurnal temperature range… is generally too small in the models, in many regions by as much as 50%” (p. 609), and ”It is not yet known why models generally underestimate the diurnal temperature range.” The following words and phrases appear at least once in the Chapter: unknown, uncertain, unclear, not clear, disagreement, not fully understood, appears, not well observed, variability, variety, unresolved, not resolved, and poorly understood.
Given the high uncertainty regarding climate, the appropriate naïve method for this situation would be the ”no-change” model. Prior evidence on forecasting methods suggests that attempts to improve upon the naïve model might increase forecast error.
To reverse this conclusion, one would have to produce validated evidence in favor of alternative methods. Such evidence is not provided in Chapter 8 of the IPCC report.
We are not suggesting that we know for sure that long-term forecasting of climate is impossible, only that this has yet to be demonstrated. Methods consistent with forecasting principles such as the naïve model with drift, rule-based forecasting, well-specified simple causal models, and combined forecasts might prove useful. The methods are discussed in Armstrong (2001). To our knowledge, their application to long-term climate forecasting has not been examined to date.
Keep forecasting methods simple (Principle 7.1)
We gained the impression from the IPCC chapters and from related papers that climate forecasters generally believe that complex models are necessary for forecasting climate and that forecast accuracy will increase with model complexity. Complex methods involve such things as the use of a large number of variables in forecasting models, complex interactions, and relationships that employ nonlinear parameters.
Complex forecasting methods are only accurate when there is little uncertainty about relationships now and in the future, where the data are subject to little error, and where the causal variables can be accurately forecast. These conditions do not apply to climate forecasting. Thus, simple methods are recommended.
The use of complex models when uncertainty is high is at odds with the evidence from forecasting research (e.g., Allen and Fildes 2001, Armstrong 1985, Duncan, Gorr and Szczypula 2001, Wittink and Bergestuen 2001). Models for forecasting variations in climate are not an exception to this rule. Halide and Ridd (2007) compared predictions of El Niño-Southern Oscillation events from a simple univariate model with those from other researchers’ complex models. Some of the complex models were dynamic causal models incorporating laws of physics. In other words, they were similar to those upon which the IPCC authors depended. Halide and Ridd’s simple model was better than all eleven of the complex models in making predictions about the next three months. All models performed poorly when forecasting further ahead.
The use of complex methods makes criticism difficult and prevents forecast users from understanding how forecasts were derived. One effect of this exclusion of others from the forecasting process is to reduce the chances of detecting errors.
Do not use fit to develop the model (Principle 9.3)
It was not clear to us to what extent the models described in Chapter 8 (or in Chapter 9 by Hegerl et al. 2007) are either based on, or have been tested against, sound empirical data. However, some statements were made about the ability of the models to fit historical data, after tweaking their parameters. Extensive research has shown that the ability of models to fit historical data has little relationship to forecast accuracy (See ”Evaluating forecasting methods” in Armstrong 2001.) It is well known that fit can be improved by making a model more complex. The typical consequence of increasing complexity to improve fit, however, is to decrease the accuracy of forecasts.
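The fit-versus-forecast point can be demonstrated in a few lines (my own illustration, with made-up noise values): a polynomial that fits eight noisy observations around a flat truth perfectly still produces a far worse one-step extrapolation than the plain mean, even though its in-sample ”fit” is flawless.

```python
# Eight made-up noisy observations around a flat truth of zero
# (fixed values so the example is reproducible).
xs = list(range(8))
ys = [0.5, -1.2, 0.3, 0.8, -0.4, 1.1, -0.9, 0.2]

def lagrange(x, xs, ys):
    """Evaluate the polynomial passing through every data point:
    a perfect in-sample 'fit'."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

simple_forecast = sum(ys) / len(ys)      # simple model: the mean (≈ 0.05)
complex_forecast = lagrange(8, xs, ys)   # perfect-fit model (≈ 142.7)

# The perfect fit extrapolates wildly; the simple model stays near the truth.
print(abs(simple_forecast) < abs(complex_forecast))  # → True
```

Improving fit by adding flexibility means fitting the noise, and the noise is precisely what will not repeat in the future.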
Use out-of-sample (ex ante) error measures (Principle 13.26)
Chapter 8 did not provide evidence on the relative accuracy of ex ante long-term forecasts from the models used to generate the IPCC’s forecasts of climate change. It would have been feasible to assess the accuracy of alternative forecasting methods for medium- to long-term forecasts by using ”successive updating.” This involves withholding data on a number of years, then providing forecasts for one-year ahead, then two-years ahead, and so on up to, say, 20 years. The actual years could be disguised during these validation procedures.
Furthermore, the years could be reversed (without telling the forecasters) to assess back-casting accuracy. If, as is suggested by forecasting principles, the models were unable to improve on the accuracy of forecasts from the naïve method in such tests, there would be no reason to suppose that accuracy would improve for longer forecasts. ”Evaluating forecasting methods” in Armstrong 2001 provides evidence on this principle.”
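The ”successive updating” procedure the authors describe can be sketched as follows (a minimal illustration with invented data, not the authors’ code): hold back the future, forecast h steps ahead from each successive origin, and average the absolute errors, with the naïve no-change method as the benchmark to beat.

```python
def rolling_origin_mae(series, forecast_fn, horizon, min_train):
    """Successive updating: from each origin, withhold the future,
    forecast `horizon` steps ahead, and average the absolute errors."""
    errors = []
    for origin in range(min_train, len(series) - horizon + 1):
        train = series[:origin]
        prediction = forecast_fn(train, horizon)
        actual = series[origin + horizon - 1]
        errors.append(abs(prediction - actual))
    return sum(errors) / len(errors)

def naive(train, horizon):
    """The no-change benchmark: forecast the last observed value."""
    return train[-1]

# Invented demonstration data; a real audit would use the historical series.
data = [3.0, 3.2, 2.9, 3.1, 3.4, 3.0, 3.3, 3.1, 2.8, 3.2]
print(round(rolling_origin_mae(data, naive, horizon=2, min_train=4), 3))  # → 0.18
```

Any candidate model would be scored by the same routine; unless it beats the naïve benchmark ex ante, there is no reason to trust its longer-range forecasts.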
More quotes from the analysis:
”The many violations provide further evidence that the IPCC authors were unaware of evidence-based principles for forecasting. If they were aware of them, it would have been incumbent on them to present evidence to justify their departures from the principles. They did not do so. We conclude that because the forecasting processes examined in Chapter 8 overlook scientific evidence on forecasting, the IPCC forecasts of climate change are not scientific.”
”Bryson (1993) wrote that while it is obvious that when a statement is made about what climate will result from a doubling of CO2 it is a forecast, ”I have not yet heard, at any of the many environmental congresses and symposia that I have attended, a discussion of forecasting methodology applicable to the environment” (p. 791).”
”Using the titles of the papers, we independently examined the references in Chapter 8 of the IPCC Report. The Chapter contained 788 references. Of these, none had any apparent relationship to forecasting methodology.”
”Research since 1980 has provided much more evidence that expert forecasts are of no value. In particular, Tetlock (2005) recruited 284 people whose professions included, ”commenting or offering advice on political and economic trends.” He asked them to forecast the probability that various situations would or would not occur, picking areas (geographic and substantive) within and outside their areas of expertise. By 2003, he had accumulated over 82,000 forecasts. The experts barely if at all outperformed non-experts and neither group did well against simple rules.
Comparative empirical studies have routinely concluded that judgmental forecasting by experts is the least accurate of the methods available to make forecasts. For example, Ascher (1978, p. 200), in his analysis of long-term forecasts of electricity consumption, found that to be the case.”
”Experts’ forecasts of climate changes have long been newsworthy and a cause of worry for people. Anderson and Gainor (2006) found the following headlines in their search of the New York Times:
Sept. 18, 1924 MacMillan Reports Signs of New Ice Age
March 27, 1933 America in Longest Warm Spell Since 1776
May 21, 1974 Scientists Ponder Why World’s Climate is Changing: A Major Cooling Widely Considered to be Inevitable
Dec. 27, 2005 Past Hot Times Hold Few Reasons to Relax About New Warming
In each case, the forecasts behind the headlines were made with a high degree of confidence.”
Some quotes about the climate models that the IPCC and the Global Warming hysterics love so much AND ON WHICH THIS ENTIRE FANTASTIC HYSTERIA IS BASED. Note that many of these critical comments come from co-authors of the IPCC report!
”The methodology for climate forecasting used in the past few decades has shifted from surveys of experts’ opinions to the use of computer models. Reid Bryson, the world’s most cited climatologist, wrote in a 1993 article that a model is ”nothing more than a formal statement of how the modeler believes that the part of the world of his concern actually works” (pp. 789-790). Based on the explanations of climate models that we have seen, we concur.
While advocates of complex climate models claim that they are based on ”well established laws of physics”, there is clearly much more to the models than the laws of physics otherwise they would all produce the same output, which patently they do not. And there would be no need for confidence estimates for model forecasts, which there most certainly are. Climate models are, in effect, mathematical ways for the experts to express their opinions.”
”In a wide-ranging article on the broad topic of science and the environment, Bryson (1993) was critical of the use of models for forecasting climate. He wrote:
…it has never been demonstrated that the GCMs [General Circulation Models] are capable of prediction with any level of accuracy. When a modeler says that his model shows that doubling the carbon dioxide in the atmosphere will raise the global average temperature two to three degrees Centigrade, he really means that a simulation of the present global temperature with current carbon dioxide levels yields a mean value two to three degrees Centigrade lower than his model simulation with doubled carbon dioxide. This implies, though it rarely appears in the news media, that the error in simulating the present will be unchanged in simulating the future case with doubled carbon dioxide. That has never been demonstrated; it is faith rather than science.” (pp. 790-791)
Balling (2005), Christy (2005), Frauenfeld (2005), and Posmentier and Soon (2005) each assess different aspects of the use of climate models for forecasting and each comes to broadly the same conclusion: The models do not represent the real world sufficiently well to be relied upon for forecasting.
Carter, et al. (2006) examined the Stern Review (Stern 2007). They concluded that the authors of the Review made predictions without reference to scientific validation and without proper peer review.
Pilkey and Pilkey-Jarvis (2007) examined long-term climate forecasts and concluded that they were based only on the opinions of the scientists. The scientists’ opinions were expressed in complex mathematical terms without evidence on the validity of the chosen approach. The authors provided the following quotation on their page 45 to summarize their assessment: ”Today’s scientists have substituted mathematics for experiments, and they wander off through equation after equation and eventually build a structure which has no relation to reality (Nikola Tesla, inventor and electrical engineer, 1934).” While it is sensible to be explicit about beliefs and to formulate these in a model, forecasters must also demonstrate that the relationships are valid.
Carter (2007) examined evidence on the predictive validity of the general circulation models (GCMs) used by the IPCC scientists. He found that while the models included some basic principles of physics, scientists had to make ”educated guesses” about the values of many parameters because knowledge about the physical processes of the earth’s climate is incomplete. In practice, the GCMs failed to predict recent global average temperatures as accurately as simple curve-fitting approaches (Carter 2007, pp. 64 – 65). They also forecast greater warming at higher altitudes in the tropics when the opposite has been the case (p. 64). Further, individual GCMs produce widely different forecasts from the same initial conditions and minor changes in parameters can result in forecasts of global cooling (Essex and McKitrick, 2002).
Interestingly, when models predict global cooling, the forecasts are often rejected as ”outliers” or ”obviously wrong” (e.g., Stainforth et al., 2005).
Roger Pielke Sr. (Colorado State Climatologist, until 2006) gave an assessment of climate models in a 2007 interview:
”You can always reconstruct after the fact what happened if you run enough model simulations. The challenge is to run it on an independent dataset, say for the next five years. But then they will say ”the model is not good for five years because there is too much noise in the system”. That’s avoiding the issue then. They say you have to wait 50 years, but then you can’t validate the model, so what good is it?
…Weather is very difficult to predict; climate involves weather plus all these other components of the climate system, ice, oceans, vegetation, soil etc. Why should we think we can do better with climate prediction than with weather prediction? To me it’s obvious, we can’t!”
In his assessment of climate models, physicist Freeman Dyson (2007) wrote:
I have studied the climate models and I know what they can do. The models solve the equations of fluid dynamics, and they do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry and the biology of fields and farms and forests. They do not begin to describe the real world that we live in.
Bellamy and Barrett (2007) found serious deficiencies in the general circulation models described in the IPCC’s Third Assessment Report. In particular, the models (1) produced very different distributions of clouds and none was close to the actual distribution of clouds, (2) parameters for incoming radiation absorbed by the atmosphere and for that absorbed by the Earth’s surface varied considerably, (3) did not accurately represent what is known about the effects of CO2 and could not represent the possible positive and negative feedbacks about which there is great uncertainty. The authors concluded:
The climate system is a highly complex system and, to date, no computer models are sufficiently accurate for their predictions of future climate to be relied upon. (p. 72)
Trenberth (2007), a lead author of Chapter 3 in the IPCC WG1 report wrote in a Nature.com blog ”… the science is not done because we do not have reliable or regional predictions of climate.”
Taylor (2007) compared seasonal forecasts by New Zealand’s National Institute of Water and Atmospheric Research (NIWA) with outcomes for the period May 2002 to April 2007. He found NIWA’s forecasts of average regional temperatures for the season ahead were 48% correct, which was no more accurate than chance. That this is a general result was confirmed by New Zealand climatologist Jim Renwick, who observed that NIWA’s low success rate was comparable to that of other forecasting groups worldwide. He added that ”Climate prediction is hard, half of the variability in the climate system is not predictable, and so we don’t expect to do terrifically well.”
Renwick is a co-author with Working Group I of the IPCC 4th Assessment Report, and also serves on the World Meteorological Organization Commission for Climatology Expert Team on Seasonal Forecasting. His expert view is that current GCM climate models are unable to predict future climate any better than chance (New Zealand Climate Science Coalition 2007).”
Some more interesting quotes:
”Trenberth (2007) and others have claimed that the IPCC does not provide forecasts but rather presents ”scenarios” or ”projections.” As best as we can tell, these terms are used by the IPCC authors to indicate that they provide ”conditional forecasts.”
Presumably the IPCC authors hope that readers, especially policy makers, will find at least one of their conditional forecast series plausible and will act as if it will come true if no action is taken. As it happens, the word ”forecast” and its derivatives occurred 37 times, and ”predict” and its derivatives occurred 90 times in the body of Chapter 8.”
”In order to audit the forecasting processes described in Chapter 8 of the IPCC’s report, we each read it prior to any discussion. The chapter was, in our judgment, poorly written. The writing showed little concern for the target readership. It provided extensive detail on items that are of little interest in judging the merits of the forecasting process, provided references without describing what readers might find, and imposed an incredible burden on readers by providing 788 references. In addition, the Chapter reads in places like a sales brochure. In the three-page executive summary, the terms ”new” and ”improved” and related derivatives appeared 17 times. Most significantly, the chapter omitted key details on the assumptions and the forecasting process that were used. If the authors used a formal structured procedure to assess the forecasting processes, this was not evident.”
”It is difficult to understand how scientific forecasting could be conducted without reference to the research literature on how to make forecasts. One would expect to see empirical justification for the forecasting methods that were used. We concluded that climate forecasts are informed by the modelers’ experience and by their models, but that they are unaided by the application of forecasting principles.”
”To provide forecasts of climate change that are useful for policy-making, one would need to prepare forecasts of (1) temperature changes, (2) the effects of any temperature changes, and (3) the effects of feasible proposed policy changes. To justify policy changes based on climate change, policy makers need scientific forecasts for all three forecasting problems. If governments implement policy changes without such justification, they are likely to cause harm.”
”Public policy makers owe it to the people who would be affected by their policies to base them on scientific forecasts.”
Here are some quotes about the analysis:
Climate scientist Jos de Laat of the Royal Dutch Meteorological Institute wrote of the paper:
”I very much agree with your statement that ‘the forecasts in the report … present the opinions of scientists transformed by mathematics and obscured by complex writing’, I don’t think that many climate scientists are willing to admit this… I was quite surprised, even a little bit disturbed, to learn that there exists a research field devoted to the science of prediction. I have a formal education in climate science (University degree, BS in physics, MS in Meteorology and Oceanography, PhD in climate science), so I’ve been around for some time now, yet I don’t recall anyone ever mentioning your research area.”
Forecaster Kjell Stordahl, in a note for the Oracle, observed that the IPCC’s physical science report presents temperature data in a selective fashion and that the GCM models they rely on for forecasting exclude important variables such as solar activity. Regarding the validity of the IPCC forecasts, he wrote:
”I have added the real temperature data 2000-2007 in black as a horizontal line in the figure. We know for sure that [concentrations of CO2 in the atmosphere] have increased compared with the 2000 level. Hence [the "constant concentrations"] scenario is not relevant in comparisons. However, we see that the real temperature 2000 -2007 is lower than IPCC’s temperature forecasts and the confidence/uncertainty limits even for this scenario.
The conclusion based on these comparisons shows that IPCC’s 2000 forecasts have been significantly too high so far.”
”Another very difficult problem for IPCC is the quantification of forecasting uncertainties. So far IPCC has not been able to quantify these. In chapter 10 (Figure 10.5) in Physical Science Basis report, 2007, IPCC shows what they call ”ensemble” simulations which are based on ”independent” simulations from different climate models. Hence, the variations in the model forecasts illustrate the uncertainties in the temperature forecasts. This is of course not an acceptable way to describe forecast uncertainties.
IPCC states several places in their Physical Science Basis report, 2007, that they have not been able to include water vapour and clouds in an acceptable way in the climate models. The absorption ability of water vapour/clouds is much higher than the absorption ability of the greenhouse gases. Therefore, the different IPCC climate models are biased and the ”independent” simulations will not illustrate the confidence limits. In addition IPCC has not assigned or quantified probabilities to the span made by the different simulations.”
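Stordahl's point, that the spread among ”independent” simulations sharing a common bias says nothing about the true forecast uncertainty, can be shown with made-up numbers. The values below are purely illustrative, not actual model output:

```python
import statistics

# Hypothetical: three model simulations that share a common bias
# (e.g. all mishandle the same cloud feedback), versus the truth.
truth = 0.0
simulations = [1.0, 1.1, 0.9]  # invented values, all biased high

ensemble_mean = statistics.mean(simulations)
spread = statistics.stdev(simulations)          # what the "ensemble" shows
actual_error = abs(ensemble_mean - truth)       # what really matters

print(f"ensemble spread: {spread:.2f}")         # prints 0.10
print(f"actual error:    {actual_error:.2f}")   # prints 1.00
# The inter-model spread (0.10) reveals nothing about the shared bias (1.00):
# agreement among models is not a confidence interval.
```

If every model omits or misrepresents the same factor, the simulations cluster tightly around the wrong answer, and their spread understates the real uncertainty by an order of magnitude in this toy case.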
Dear Swedish people: you have been DOUBLY duped by the Global Warming hysterics. And these people want to use TRILLIONS of kronor (1,000,000,000,000) of YOUR MONEY to solve a ”problem” THAT DOES NOT EXIST and THAT MIGHT PERHAPS OCCUR IN 100 YEARS!
So do not let these charlatans EMBEZZLE your money and enrich themselves in the process. It is time to speak up, loudly and clearly, so that our politicians understand that this is serious!
As I noted in my first report (The UN Climate Change Numbers Hoax eller IPCC:s lögn!): it is time to put a stop to this century's greatest scientific and political scandal, in any category.
So what does one say now that the scandal is of an EVEN GREATER MAGNITUDE than anyone thought possible?
Well, that it is time to take to the streets and shout out what is so OBVIOUS, namely that ”the Emperor has no clothes”. That is, to expose the whole political, mass-media and scientific environmental complex that drives this unscientific hysteria in its OWN interest. Make no mistake about that!
Many of the people who sing along in the hallelujah chorus are well aware that there is NO scientific basis behind the whole Global Warming hysteria. They do it anyway because they have a vested interest: careers, grants, incomes, companies, increased sales, media attention, etc.
Which makes it all the more cynical!
There are plenty of problems (and people are dying from them TODAY), including real environmental problems, that need to be solved HERE AND NOW. Spend these gigantic sums on those instead of chasing phantoms THAT MIGHT PERHAPS OCCUR IN 100 YEARS!
Future generations will not forgive the politicians, mass media and scientists who took part in this cynical betrayal. For they will be sitting with the answer sheet in hand, and will find it very hard to understand this costly swindle. And that it was allowed to go on for so long!
The full report is available here:
GLOBAL WARMING: FORECASTS BY SCIENTISTS VERSUS SCIENTIFIC FORECASTS