Contributed by Maria-Helena Ramos and Florian Pappenberger
In hydrology, many operational forecasters receive weather forecasts from a meteorological service without in-depth explanations of where these forecasts come from (there are a few laudable exceptions, usually in organisations where the hydrological and weather forecasts are issued by the same institution, or where a concept of Public Weather Advisors exists – see, for instance, the UK).
We can imagine that weather forecasters are so used to their forecasting practices that they forget that users may not be acquainted with numerical weather prediction models or with how models create forecasts. In addition, they have to deal with a large user community, from forecasts on TV or radio, to energy companies, public health (e.g. heat waves) and flood forecasting centres.
A short course (or some hours of internet navigation) can fill the knowledge gap, and any user of weather forecasts can quickly grasp the basic concepts behind computational fluid dynamics, 4D variational data assimilation or isobar charts – out of which she or he has to distil the knowledge that is actually required.
The question then seems to be solved. But then the operational hydrologist discusses this recently acquired knowledge with the meteorologist, who says: “Well, you know, model outputs are not everything. Our forecasts are then reviewed and the forecaster’s ‘best judgement’ is used to produce the final forecast that will be sent out to specific users and to the public”.
And now the question is: Where does this ‘expertise’ come from? If the forecast is not a model output, what is it then? What finally is an ‘expert forecast’?
From raw model outputs to ‘expert forecast’
About a month ago, Dan Satterfield published an interesting post (The Great Facebook Blizzard – Storms and Rumors of Storms) on his blog at the AGU Blogosphere, where the question “Is Posting Raw Model Data A Mistake?” is raised. The comparison drawn there between a ‘model forecaster’ and ‘a meteorologist’ indicates that forecasting is more than conveying model outputs.
Let’s take a hydrologic example, that of EDF’s developments to build a streamflow forecasting system (you can read more about their forecasting system here). Building on many years of practice in hydrological forecasting, developments to formalize forecast uncertainty began by exploring human expertise and forecasters’ capacity to translate forecast uncertainty into statistical confidence intervals (Houdant, 2004).
The initial idea was, first of all, to train operational forecasters to give estimations of the 10th and 90th percentiles of streamflow forecasts (i.e., the values below which 10% and 90% of river flow observations fall, provided the system is reliable). Exercises were carried out to make forecasters express numerically their intrinsic perception of the uncertainty associated with a future river flow condition. They were asked questions like: “When you forecast 10 mm at a given place, what uncertainty do you associate with this value?”
In other words, forecasters were asked to no longer give a unique forecast value, but to provide scenarios, namely an ‘average’ scenario, a ‘low’ scenario (the 10th percentile) and a ‘high’ scenario (the 90th percentile). This way of expressing forecasts through quantiles or scenarios pushed forecasters to give a more formal indication of forecast uncertainty. The approach required ‘probability reasoning’: forecasters had to provide confidence intervals for their forecasts. They had to be trained to give the 10th and 90th percentiles in order to answer the question users typically asked them, i.e., “what is the forecast peak flow of a basin or inflow to a reservoir?”
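To make the idea concrete, here is a minimal sketch of how the three scenarios could be derived from an ensemble of streamflow forecasts. The ensemble values below are purely illustrative, not EDF data, and this is only one possible way of obtaining the quantiles (a forecaster’s subjective estimate need not come from an ensemble at all):

```python
import numpy as np

# Hypothetical ensemble of streamflow forecasts (m3/s) for one lead time.
ensemble = np.array([112.0, 95.0, 130.0, 101.0, 88.0,
                     124.0, 140.0, 97.0, 109.0, 118.0])

# The three scenarios forecasters were asked to provide:
low = np.quantile(ensemble, 0.10)      # 'low' scenario: 10th percentile
average = np.quantile(ensemble, 0.50)  # 'average' scenario: the median
high = np.quantile(ensemble, 0.90)     # 'high' scenario: 90th percentile

print(f"low={low:.1f}, average={average:.1f}, high={high:.1f} m3/s")
```

If the forecast is reliable, the observed flow should fall below the ‘low’ scenario in about 10% of cases and above the ‘high’ scenario in about 10% of cases.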
At EDF, the discussions about the implementation of a probabilistic system were conducted until 2005–2006, when it was decided to make it the rule at the forecasting centers: from then on, they would always communicate forecast intervals/quantiles to the users, hoping that forecasters would be able to ‘subjectively’ calibrate the confidence intervals they were associating with their forecasts. The notion of ‘subjective probability’ in forecasts (Murphy and Daan, 1984), based on the expertise applied by the forecaster, was introduced into the practice of operational forecasting (Garçon et al., 2009). Training and case-study analyses were considered efficient means to help forecasters ‘calibrate’ their subjective probabilities and avoid over-confidence (underestimation of total uncertainties). Illustrations of the role of human forecast expertise in producing and communicating uncertain forecasts can be seen here and here.
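A simple way to check for the over-confidence mentioned above is to count, over an archive of past forecasts, how often the observation actually fell inside the stated 10th–90th interval. The sketch below uses invented, illustrative numbers (not any real forecast archive) just to show the calculation:

```python
import numpy as np

# Hypothetical archive: for each past event, the forecaster's stated
# 10th and 90th percentile bounds and the observed flow (m3/s).
lower = np.array([80, 60, 120, 45, 90, 70, 55, 100, 85, 65])
upper = np.array([140, 110, 200, 90, 160, 130, 95, 170, 150, 115])
observed = np.array([130, 75, 210, 60, 120, 180, 88, 150, 95, 112])

# For a reliable 10th-90th interval, about 80% of observations should
# fall inside the bounds; a much lower empirical rate signals
# over-confidence (intervals that are too narrow).
inside = (observed >= lower) & (observed <= upper)
coverage = inside.mean()
print(f"empirical coverage: {coverage:.0%} (nominal: 80%)")
```

In training and case-study analyses, a summary statistic like this gives forecasters direct feedback on whether their subjective intervals are well calibrated.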
What interaction can we expect between model outputs and ‘expert forecasts’?
In an interview conducted with forecasters at EDF in Grenoble, their point of view was clear: to help make human expert forecasts reliable, and to facilitate the routine production of such forecasts, it is essential to provide forecasters with objective probability forecasts (automatically produced by the probabilistic forecasting system). It was said: “to effectively enhance the ability of expertise, it is necessary to develop interactive tools that give the forecaster the ability to simply change scenarios of precipitation and flow generated automatically: the forecaster must be able, in a few ‘clicks’, to narrow the dispersion of the ensemble forecasts or to redirect it towards higher or lower values” (free translation, not verbatim).
We must then understand that an ‘expert forecast’ (in France, we say ‘prévision expertisée’) contains the forecaster’s best judgement about the upcoming situation. This may mean that at least one type of goodness (according to Murphy’s definition in his 1993 paper), Type I: Consistency (correspondence between forecasts and judgements), might be at a high level, although, as also reminded by Murphy, “since forecaster’s judgements are, by definition, internal to the forecaster and unavailable for explicit evaluation, the degree of correspondence between judgements and forecasts cannot be assessed directly”.
One of the issues is actually to pin the process down and make it transparent: but is that nearly impossible?
We come back here to a point we once raised in a paper by asking ourselves and the reader if ‘communicating uncertainty in hydro-meteorological forecasts was mission impossible’. At that time, we concluded by saying that “from [our] experience, there is an optimistic temptation to bet on a negative answer: the mission is not impossible, at least not in its absolute terms, although the tasks to be executed might be difficult to accomplish”.
Faulkner et al. (2007) called for a translational discourse between science and professionals; so far, this discourse has not even reached the stage of a toddler. The discussion is definitely not yet closed!
The recent post by Tom and the discussions that followed it show very clearly that ‘consistency’ can still be a full topic of investigation!