Can understanding how a forecast is used help improve the science?

Liz Stephens is a HEPEX guest columnist for 2014. This is Liz's first column.

Increasingly, the weather forecast enterprise is seen as a ‘partnership’ which enables a two-way flow of information both to help forecasters improve decision-making, and to provide feedback from decision-makers into the forecasting institutions. But how can an understanding of how the forecast is used help to improve the science?

[Image: Flood in Reading]

Anders Persson has described how, for organisations with an extremely high or low protection cost, a probability forecast would be of no benefit. Rayner et al. (2005) (paywall) demonstrated this nicely for water resources management on the US Pacific Coast: given the high cost of failing to supply water, managers would always take the conservative option regardless of the forecast. For such an end-user, developments to the forecast system may have little influence on its usability.

But there are situations where a forecast could have unquestionable benefit, yet it is not used. At a meeting at ECMWF shortly after Typhoon Haiyan, some of the audience voiced their frustrations that nothing seemed to have been done in advance of landfall, despite it being well forecast days in advance. Perhaps in such situations the usability of the forecast is dictated by political factors, but is there anything that we as scientists can do to improve usability?

Such frustrations are the motivation for my current research, which addresses the usability of the Global Flood Awareness System (GloFAS) for humanitarian response. I hope that through better understanding user requirements I will be able to carry out a more targeted evaluation and development of the forecast system, and in turn, this will benefit the users of the forecast.

But what sort of information from end-users can be fed back into the science? Do any readers of the HEPEX blog have any examples of how they have changed the focus of their research following something they have learnt from their end-users?

Next post: 04 April 2014.

Liz will be contributing to this blog over the year. Follow her columns here.

3 thoughts on “Can understanding how a forecast is used help improve the science?”

  1. What sort of information from end-users can be fed back into the science? One of the unique features at ECMWF, since its start in 1979, has been the daily monitoring of the forecast system by the Meteorological Operations Section, or “Met Ops” for short. Its “Daily Report” is read by the scientific staff, who are on the lookout for interesting problems to solve.

    Comments or complaints from users of the forecasts in the Member States are mostly referred to “Met Ops” for further investigation. But as with the police, where many alarms from the public turn out to be false, a lot of the feedback from the Member States unfortunately tends to be based on far too few cases; the “power of randomness” is often underestimated. But the reports that were well founded were numerous enough to provide valuable input, and from my time in “Met Ops” I remember many useful comments, in particular when we had introduced a new model version:

    1. In the early 90’s, while mingling at an “ice-breaking” session during an ECMWF conference, we heard how pleased the Member States were with a new model version. But the Finnish delegate was not 100% happy: there was a tendency to cool the temperature at the end of the 10-day period. To be honest, we didn’t at first put too much weight on this comment, until we heard the same from the Turkish delegate. Finland and Turkey were then the two most eastern countries among the Member States, so perhaps there was something to look into? Yes, it turned out that the model had a systematic tendency to “retrogress”, drifting the large-scale weather patterns to the west and too often advancing the cold air from Siberia westward. This was corrected by some changes in the numerical scheme, if I remember correctly.

    2. In early 1995, during a cold outbreak over Europe, there were complaints from the Member States about too-cold temperature forecasts. This easily happens if the forecast cloud amount is underestimated, and “Met Ops” needed more time to ascertain any possible bias. However, when Harry van Otter from “Meteo Consult” angrily rang up, the whole ECMWF scientific staff was mobilised and a serious bug was soon found. Just as latent heat is released when water vapour condenses into water, so does water release heat when it freezes into ice. And our new soil moisture scheme had not taken this heating into account!

    3. Something similar happened in January 2011, when the Directors in the Nordic countries put their weight behind complaints from the forecasters about too-cold 2-m temperatures. In this case it led to important new insights into the model’s treatment of supercooled water in low-level clouds.

    Like a retired police commissioner, I could keep on for hours with more stories like these from my “case book”: as when we created a lake in Saudi Arabia, had ice on Lake Tanganyika, or maintained small sandy deserts along the Baltic coast . . .

    At ECMWF we tried to encourage the Member States, and sometimes also non-Member States, to create similar “Met Opses” at their institutes to monitor their own NWP systems. But it always failed because of the rivalry between modellers and forecasters; the former regarded the model as their “baby” and were unwilling to let the latter into its “care”.

  2. Liz,

    Interesting post. More from an operational delivery perspective: we’ve been running a series of ‘science behind the service’ workshops with responders in Scotland recently. This followed a particular flooding incident involving a difficult river forecasting situation, in which responders wanted to evacuate a care home, yet we didn’t consider the risk warranted it. Although no products or services have changed since these sessions, they have helped generate an understanding of the complex forecasting that supports warning decisions. If anything has changed, it’s the guidance offered to the responders about risk and uncertainty.

    1. I really like the concept of “science behind the service” workshops. I do believe Liz is right in saying that “usability of [a] forecast … dictated by political factors” can happen, but I also think that these things are not always so separate. You may have the political decision that there should be only one warning chain and information service, which will of course affect how widely forecasts are available. Indeed, there may be good scientific arguments for that too, so science informs politics and politics informs science. Having said that, there is not always a reasonable balance, as we can currently see in the discussion in the UK on dredging rivers for flood protection.
