Category: verification

What to look for when using forecasts from NWP models for streamflow forecasting?

By Durga Lal Shrestha, James Bennett, David Robertson and QJ Wang, members of the CSIRO Columnist Team

There have been a few posts on NWP performance lately, and so we thought we'd add our perspective. We've been working closely with the Bureau of Meteorology to extend their new 7-day deterministic streamflow forecasting service to an operational ensemble streamflow forecasting service. One of the fundamental choices we have to make is the source of quantitative precipitation forecasts (QPFs). This is not…

Read More

FcstVerChallenge: will you join the HEPEX team?

You may have read Florian's recent post on the WMO's "forecast verification challenge". In short: the WMO's World Weather Research Programme has set a challenge to develop new user-oriented verification scores, metrics, diagnostics or diagrams. Entries have to be submitted by the end of October, and the winning entry will be awarded a "keynote" presentation at the 2017 WMO verification meeting in Geneva, as well as free entry to that event. Some HEPEX-ers got together last week and…

Read More

A user-oriented forecast verification metric competition

Forecast performance is one of the central themes not only in day-to-day weather forecasting, but also in HEPEX. It is so important that we have devoted an entire chapter of our science and implementation plan to it (see here). In particular, I often forward the link to these blog posts when explaining (or trying to explain) forecast properties to a forecast user. Nevertheless, many of the scores remain abstract. Whilst a forecast bias may still be…

Read More
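As an aside to the excerpt above, and to make "scores" a little less abstract: the sketch below (not from the post, with purely hypothetical numbers) computes two of the simplest verification quantities a forecast user might start from, relative bias and mean absolute error, for a handful of forecast/observation pairs.

```python
# Hypothetical forecast/observation pairs (mm of rainfall), for illustration only.
import numpy as np

forecasts = np.array([12.0, 3.5, 0.0, 20.1, 7.3])
observations = np.array([10.2, 4.0, 0.5, 25.0, 6.8])

# Relative bias: do the forecasts over- or under-estimate the observations on average?
relative_bias = (forecasts.sum() - observations.sum()) / observations.sum()

# Mean absolute error: the typical magnitude of a forecast error.
mae = np.abs(forecasts - observations).mean()

print(f"relative bias: {relative_bias:+.1%}, MAE: {mae:.2f} mm")
```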

How good is my forecasting method? Some thoughts on forecast evaluation using cross-validation based on Australian experiences

Contributed by David Robertson, James Bennett and Andrew Schepen, members of the CSIRO Guest Columnist Team

As hydrological forecasting researchers, we are often excited when we develop new methods that lead to forecasts with smaller errors and/or more reliable uncertainty estimates. So how do we know whether a new method truly improves forecast performance? The true test of any forecasting method is, of course, how it performs for real-time applications. But this is often not possible, and we have to…

Read More
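To illustrate the cross-validation idea raised in the excerpt above, here is a minimal leave-one-year-out sketch; the data, the straight-line "model" and the error score are placeholders for illustration, not the methods described in the post.

```python
# Leave-one-year-out cross-validation, sketched with placeholder data and a
# placeholder "model" (a straight-line fit). Each year is withheld in turn,
# the model is fitted on the remaining years, and the withheld year is used
# only to measure the error.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1990, 2010)
predictor = rng.gamma(2.0, 50.0, size=years.size)                 # e.g. seasonal rainfall
streamflow = 0.8 * predictor + rng.normal(0.0, 10.0, years.size)  # e.g. seasonal streamflow

errors = []
for i in range(years.size):
    train = np.arange(years.size) != i                 # all years except the withheld one
    slope, intercept = np.polyfit(predictor[train], streamflow[train], 1)
    prediction = slope * predictor[i] + intercept
    errors.append(abs(prediction - streamflow[i]))

print(f"leave-one-year-out mean absolute error: {np.mean(errors):.1f}")
```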

Analogues are the new deterministic forecasts

by Marie-Amélie Boucher, a HEPEX 2015 Guest Columnist

According to Krzysztofowicz (2001), "Probabilistic forecasts are scientifically more honest [than deterministic forecasts], enable risk-based warnings of floods, enable rational decision making, and offer additional economic benefits." More than 10 years later, I think most people (and especially members of the HEPEX community!) agree with this statement. But what about different types of probabilistic/ensemble forecasts? How do they compare to one another? In the literature, it is very common to use deterministic forecasts…

Read More
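As a side note to the excerpt above: one common way to put ensemble and single-valued forecasts on the same footing is the continuous ranked probability score (CRPS), which reduces to the absolute error when the "ensemble" has only one member. The sketch below uses a minimal empirical CRPS estimator and purely hypothetical numbers; it is only an illustration of the comparison, not the analysis in the post.

```python
# Empirical CRPS for one forecast: E|X - y| - 0.5 * E|X - X'|.
# For a single-valued (deterministic) forecast it reduces to the absolute error.
import numpy as np

def crps_ensemble(members, obs):
    members = np.asarray(members, dtype=float)
    term1 = np.abs(members - obs).mean()
    term2 = np.abs(members[:, None] - members[None, :]).mean()
    return term1 - 0.5 * term2

observed = 14.0
ensemble = np.array([9.0, 12.5, 13.0, 15.5, 18.0, 21.0])  # six hypothetical members
deterministic = 16.0                                       # a single-valued forecast

print(f"ensemble CRPS:      {crps_ensemble(ensemble, observed):.2f}")
print(f"deterministic CRPS: {crps_ensemble([deterministic], observed):.2f} (= absolute error)")
```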