
Predictability and Risk

Nearly a year ago, I thought I might write something down about how predictability works in the climate sense, as opposed to, say, weather forecasts. Suffice it to say, I never got around to it. When I saw Roger Pielke's blog I started reading it because I agree with his basic premise about the importance of other parts of the earth system in making good climate predictions (see this post for a recent example or this older one). But over time, as I read his group blog entries, the pattern of nonsense about climate predictability got worse and worse, and I stopped reading it regularly. I got really fed up when I read the guest article on predictability by Henk Tennekes, since it follows on from Pielke's statement that predictions of regional and global climate change are not science.

I was fed up enough that I had started bookmarking some material and was going to write something. But I don't have to: James Annan has done a fabulous job in three posts.

I liked the first post because it makes really clear the distinction between predictive uncertainty that exists because of inherent unpredictability (he gives the example of coin tosses; I've used dice) and uncertainty that exists because of a lack of knowledge about the system (which is mostly what Pielke is on about). It's much better than my efforts.
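To make that distinction concrete, here is a minimal Python sketch of my dice version of the argument; the loaded-die weights are made up purely for illustration and none of this comes from James's posts.

```python
import random

random.seed(1)

# Inherent (aleatoric) uncertainty: even knowing the die is perfectly fair,
# the next roll cannot be predicted; all we can state is P(any face) = 1/6.
def fair_die():
    return random.randint(1, 6)

print("one roll of a fair die:", fair_die(), "(the next roll is still unpredictable)")

# Lack-of-knowledge (epistemic) uncertainty: we don't know whether this die
# is loaded, but that uncertainty shrinks as we collect more evidence.
def mystery_die():
    # hypothetical loaded die: face 6 comes up twice as often as any other face
    return random.choices([1, 2, 3, 4, 5, 6], weights=[1, 1, 1, 1, 1, 2])[0]

for n in (10, 100, 10000):
    rolls = [mystery_die() for _ in range(n)]
    print(f"after {n:6d} rolls, estimated P(6) = {rolls.count(6) / n:.3f}")

# The estimate converges towards the true value (2/7), so the second kind of
# uncertainty is reducible; the roll-to-roll unpredictability never goes away.
```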

I liked the second post because it nails the issue of misusing observations to validate climate predictions, and the third really puts it to bed. Why oh why does Pielke not get it?

The bottom line here is that we make the best predictions we can, and we undoubtedly need to produce the best possible indication of our uncertainty in those predictions. At which point, as James and others say, we're into a betting situation. So what should we do?

In terms of weather, the Met Office recommends, as a general guide, that one should take action when the probability of an event exceeds the ratio of protective costs to losses (C/L). It's a simple betting argument: we do the thing that minimises our potential losses.
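A minimal sketch of that cost/loss rule follows; the gritting scenario and all of the numbers are mine, for illustration only, and are not taken from any Met Office guidance.

```python
def should_act(p, cost, loss):
    """Acting always costs `cost`; doing nothing costs `loss` with probability p.
    Expected losses are cost vs p * loss, so act whenever p exceeds cost / loss."""
    return p > cost / loss

# Illustrative numbers: protecting (say, gritting a road) costs 10 units,
# while the loss if the event happens and we did nothing is 100 units,
# so the break-even probability is C/L = 0.1.
C, L = 10.0, 100.0
for p in (0.05, 0.10, 0.30):
    print(f"P(event) = {p:.2f}: expected loss if idle = {p * L:5.1f}, "
          f"cost of acting = {C:.1f} -> act? {should_act(p, C, L)}")
```

The break-even point is exactly p = C/L: below it, the expected loss of doing nothing is smaller than the cost of protecting; above it, protecting is the cheaper bet.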

Now the reality is that something might be happening to the climate (ok, is!). There might be some costs associated with climate change. It's logical to try and estimate the probability of various possible futures, because in betting terms we've already made the bet (our planet is on the line), so the only real question is what to do about it. We need to estimate that C/L ratio. Sure, we can argue that we can never know enough to calculate the odds, but using that as an argument for doing nothing is the same as saying we're sure we will suffer no losses. Now how certain can we be of that?

Update (25/01/2006): James is still writing posts on predictability; the latest one covers how to interpret statements like "tomorrow there is a 70% chance of rain".

Update (02/03/2006): James is still writing posts. The first is yet more on the comparison between Bayesian and frequentist uncertainty, with an excellent example from his back garden. The bottom line of the second is that the verification problem for climate simulations is somewhat harder than for weather forecasting, because really the only way of having any confidence is to use a procedure called cross-validation - which relies on holding back some data from the methodology used to construct the model, and then testing the model against that withheld data ... but it's darn hard to hold back observations from the model builders ...
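As a toy illustration of the cross-validation procedure (entirely synthetic data and a deliberately trivial "model", nothing taken from James's post), this sketch withholds the last twenty years of a fake record from the fitting step and scores the fit only on those withheld years.

```python
import random

random.seed(2)

# Build a fake 100-year "observational" record with a weak trend plus noise.
years = list(range(1900, 2000))
obs = [0.01 * (y - 1900) + random.gauss(0.0, 0.2) for y in years]

# Hold back the last 20 years from the model-building step entirely.
train_y, train_obs = years[:80], obs[:80]
test_y, test_obs = years[80:], obs[80:]

# "Build the model" on the training period only; here the model is just an
# ordinary least-squares linear trend fitted by hand.
n = len(train_y)
ybar = sum(train_y) / n
tbar = sum(train_obs) / n
slope = (sum((y - ybar) * (t - tbar) for y, t in zip(train_y, train_obs))
         / sum((y - ybar) ** 2 for y in train_y))
intercept = tbar - slope * ybar

# Verification happens only against the data the model builders never saw.
errors = [(slope * y + intercept) - t for y, t in zip(test_y, test_obs)]
rmse = (sum(e * e for e in errors) / len(errors)) ** 0.5
print(f"out-of-sample RMSE on the withheld 20 years: {rmse:.3f}")
```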

Categories: climate environment

This page last modified Thursday 02 March, 2006
DISCLAIMER: This is a personal blog. Nothing written here reflects an official opinion of my employer or any funding agency.