An invitation: help HEPEX program apps to teach the value of probabilistic forecasts

Contributed by Florian Pappenberger, Andy Wood, Maria-Helena Ramos, Schalk-Jan van Andel, Louise Crochemore, and Louise Arnal

This is a blog for people who usually do not read this blog. This is a blog which asks for help so that our learning and teaching games and tools become more useful and widespread. This is a blog hoping that we can find some enthusiastic volunteers.

The more people know how to use probabilistic forecasts, the better their decisions will be, and the lower the impacts of extreme weather and hydrological events (floods, droughts, etc.).

Forecasts of weather, water, fire and many other phenomena are uncertain — people instinctively know this. For example, if I forecast a maximum of 15 degrees Celsius for tomorrow, you probably know it will not be exactly 15 degrees Celsius but something around that number (and you may add an uncertainty range from your experience).

Most weather and water forecast centres also understand this and produce not just a single forecast but a set of forecasts. For instance, the Global Flood Awareness System generates 51 scenarios (51 different forecasts) of what the future could look like. This enables it to issue a probabilistic forecast, which means (for the above example) that you can estimate a certain percentage chance of the temperature being above or below 15 degrees Celsius.
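The arithmetic behind such a statement is simply counting: the forecast probability of exceeding a threshold is the fraction of ensemble members that exceed it. A minimal sketch in Python (the member values below are invented for illustration; real systems such as GloFAS use 51 members):

```python
# Estimate an exceedance probability from an ensemble: the fraction of
# members above a threshold. The values below are invented, not real data.
def prob_above(members, threshold):
    """Fraction of ensemble members strictly above `threshold`."""
    return sum(1 for m in members if m > threshold) / len(members)

# A toy 10-member temperature ensemble around a 15-degree central forecast
ensemble = [13.8, 14.2, 14.5, 14.9, 15.0, 15.1, 15.4, 15.8, 16.3, 17.0]
print(prob_above(ensemble, 15.0))  # → 0.5 (5 of the 10 members are above 15)
```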

Despite this uncertainty, people still need to take decisions. An example of a simple decision is whether or not to take an umbrella when you leave the house. A more complex one may be whether you should stop the London Underground because of a risk of flooding along the Thames.

Probabilistic forecasts enable better decisions because you know what the uncertainties are, whereas a single forecast gives you no indication of them.
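One classic way to make this concrete is the cost-loss model (a standard textbook illustration, not a rule from any particular HEPEX game): take protective action, at cost C, whenever the forecast probability p of the adverse event satisfies p > C/L, where L is the loss suffered without protection.

```python
# Cost-loss decision rule: protect whenever the expected loss p*L
# exceeds the protection cost C. A single deterministic forecast gives
# no probability p to plug in; an ensemble does.
def should_protect(p, cost, loss):
    """True if the expected loss from doing nothing exceeds the cost of protecting."""
    return p * loss > cost

# Umbrella example: carrying one "costs" 1 unit of inconvenience, getting
# soaked "costs" 10 units, so it pays to take it whenever p > 1/10.
print(should_protect(0.3, cost=1, loss=10))   # → True
print(should_protect(0.05, cost=1, loss=10))  # → False
```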

Forecast users can strengthen their understanding of the value of probabilistic forecasts, and their skills in using them, through forecasting games. There are many examples:

  • controlling a reservoir on a river to avoid droughts (here),
  • exploring the economic value of uncertain forecasts (here),
  • managing floods (here),
  • weather roulette (here) which evaluates the information in forecasts.

HEPEX has designed and implemented a number of such games (see here) but, to date, except for our first try at an online version of the “Pay for a forecast” game (here), they are paper- or presentation-based, and we would like them to reach a wider audience.

The challenge today is:

  1. Can you improve the existing ideas?
  2. Could you design a game where different players compete against each other? (*)
  3. Most importantly, can you transform the paper-based version into apps (web or phone) to reach a wider audience?

We are also interested in gathering ideas on how to set up a project (volunteer, student, commercial), which kinds of partners to involve, and how to get funding (volunteer-based, crowd-funding, H2020 and the like, commercial, etc.) to enhance our tools for training and teaching the value of probabilistic forecasts.

Please contact us or any of the HEPEX co-chairs by Friday 28th April if you could contribute to engineering web apps in support of the HEPEX initiative, or if you are interested in participating in project proposals on the topic.

You can also come and meet us at the EGU poster session of the ensemble session in Vienna. Our HEPEX poster will be displayed on Friday, 28th April, with attendance time from 17:30 to 19:00 in Hall A.

(*) In the Weather Roulette game, for instance, you would need to decide whether (and how) the app sets the odds for different outcomes. The goal of each player is to win the most in the weather roulette casino shared by everyone. Alternatively, players could each have their own casino and bet in the casinos of other players. Examples of the questions a designer would face include: on what forecasts should the odds be based, and what information is made available to the players so that they can decide how much to bet on different outcomes?
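To make that design space tangible, here is one possible (entirely assumed) answer to those questions, sketched in Python: the casino sets decimal odds as the inverse of its own forecast probabilities, and a player spreads her bankroll across outcomes in proportion to her own probabilities.

```python
# One hypothetical weather-roulette design (an illustration, not the
# HEPEX game's actual rules): house odds are the inverse of the house
# forecast probabilities; players stake proportionally to their own.
def fair_odds(house_probs):
    """Decimal odds implied by the house probabilities."""
    return {outcome: 1.0 / p for outcome, p in house_probs.items()}

def proportional_bets(bankroll, player_probs):
    """Spread the bankroll over outcomes in proportion to the player's probabilities."""
    return {o: bankroll * p for o, p in player_probs.items()}

house = {"rain": 0.25, "no rain": 0.75}    # casino's forecast
player = {"rain": 0.40, "no rain": 0.60}   # player's forecast

odds = fair_odds(house)
bets = proportional_bets(100, player)

# If it rains, the player collects the rain stake times the rain odds:
print(bets["rain"] * odds["rain"])  # → 160.0
```

A player whose forecast is sharper than the house's expects to grow her bankroll over many rounds; that expected growth is precisely the "information in forecasts" the game evaluates.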

Posted in activities, projects

Competition: How would you explain your work if you knew only 200 simple words?

Contributed by Louise Arnal, Rebecca Emerton, Liz Stephens, Hannah Cloke

It is not always easy to explain what you work on, especially when you have to avoid using jargon specific to your field. Yet, this is something that almost all of us have to do from time to time. It is important to be able to explain your research simply in order to communicate effectively with scientists in other fields as well as with businesses, policy makers and the public.

So we thought we’d have some fun with this and run a competition designed to really test how simply you can explain a common theme of all of our work: “Ensemble hydrological forecasting”.

Here is your challenge: using only the 200 most commonly used words of the English language (listed below), you will have to explain what “Ensemble hydrological forecasting” is.

To help you out a little bit, you’re also allowed to use the word “water”. You can make words plural and use punctuation, but you cannot conjugate verbs. You can write as much or as little as you need to explain the concept.
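Checking an entry against these rules is easy to automate. A small sketch (the tiny ALLOWED set below is a stand-in for illustration; the real competition uses the 200-word list published with the post):

```python
import re

# Stand-in for the competition's 200-word list (plus "water").
ALLOWED = {"water", "is", "not", "one", "a", "of", "many", "guess",
           "about", "how", "much", "will", "go", "down", "the"}

def check_entry(text):
    """Return the words in `text` not covered by the allowed list.
    Plural forms ending in -s are accepted, per the rules."""
    bad = []
    for w in re.findall(r"[a-z]+", text.lower()):
        base = w[:-1] if w.endswith("s") and w[:-1] in ALLOWED else w
        if base not in ALLOWED:
            bad.append(w)
    return bad

print(check_entry("Water is not one guess."))       # → []
print(check_entry("Ensemble forecasting is hard"))  # → ['ensemble', 'forecasting', 'hard']
```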

An ECMWF surprise prize is waiting for the winner!

Submit your answer in the comment box below, starting the sentence with “Ensemble hydrological forecasting is…”.

The competition will be open until 21 March 2017, after which we will put the answers to a vote to choose a winner, who will receive a prize from the team at ECMWF.

Below is a list of words that you are allowed to use, in alphabetical order. In the first comment to this post, you will find an example if you’re struggling to get started.

Good luck!

These are the words you can use:

Posted in activities | 24 Comments

2018 HEPEX workshop, Melbourne, Australia: Breaking the barriers

Contributed by James Bennett (CSIRO) and local organizers team

In the afterglow of the highly successful 2016 HEPEX workshop in Quebec City, Canada, the planning for the next HEPEX workshop in 2018 in Melbourne, Australia is underway.

Melbourne is far away for many HEPEXers, so we thought we would give an early warning of this workshop to give you all some planning time. With this in mind, we plan to hold the workshop on February 6-8, 2018.

This is the height of summer in Melbourne, and we hope it will coax a few cold northerners to the antipodes. Melbourne is a thriving modern city, with a number of major research and operational centres interested in hydrometeorological ensemble forecasting (e.g. the Bureau of Meteorology, the University of Melbourne, Monash University and the CSIRO).

The theme for the workshop is ‘breaking the barriers’, to highlight current challenges facing ensemble forecasting researchers and practitioners and how they can be (and have been!) overcome. We wish to highlight the following topics:

  • using ensemble forecasts to improve decisions in practice,
  • extending forecasts in space (including to ungauged areas) and across lead-times, from short-term to sub-seasonal to seasonal forecast horizons,
  • using ensemble forecasts to maximise economic returns from existing water infrastructure (e.g. reservoirs), even as inflows and demand for water change,
  • using ensemble forecasts to improve environmental management of rivers,
  • applying ensemble forecasts for agriculture,
  • searching for better/new sources of forecast skill,
  • balancing the use of dynamical climate and hydrological models with the need for reliable ensembles,
  • communicating forecast quality and uncertainty to end users.

More generally, we welcome contributions on new and improved ensemble hydrological prediction methods, as well as the application of existing methods in practical and operational settings.

As before, the Melbourne 2018 workshop will go for 3 days and include both oral and poster presentations on all aspects of hydrological ensemble prediction. We will give an update with abstract submission dates and more information – stay tuned!

Posted in announcements-events, meetings

Meeting user needs for sub-seasonal streamflow forecasts in Australia

By Tongtiegang Zhao, Andrew Schepen and Q.J. Wang, members of the CSIRO Columnist Team

Good streamflow forecasts allow water management agencies to make better decisions and achieve more efficient water use. Currently, the Australian Bureau of Meteorology provides seasonal forecasts of three-month-total streamflow for over 200 gauging stations around Australia. Forecast users, particularly water management agencies, also require sub-seasonal streamflow forecasts, so that they can better plan short-term water use. Our recent study responds to this user need by testing ensemble sub-seasonal to seasonal streamflow forecasting for 23 case study catchments around Australia (Figure 1).


Figure 1: Location map of the 23 case study catchments around Australia

We apply the Bayesian joint probability (BJP) modelling approach to predict monthly streamflow three months ahead. The predictors are one-month antecedent streamflow and climatic indices, including El Niño Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD). In BJP, streamflow and climate variables are first normalised through data transformations. The transformed variables are then assumed to follow a multivariate normal distribution.
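The mechanics can be sketched numerically. The snippet below is only a cartoon of the idea (a log transform standing in for BJP's actual transformations, synthetic data, a single predictor), not the Bureau's implementation: transform to near-normality, fit a bivariate normal, then condition on the observed predictor to obtain an ensemble forecast.

```python
import math
import random

random.seed(1)

# Synthetic (antecedent flow, next-month flow) pairs in original units
xs = [random.lognormvariate(2.0, 0.3) for _ in range(500)]
pairs = [(x, x * random.lognormvariate(0.0, 0.2)) for x in xs]

# 1) Transform both variables (here: log) so they are roughly normal
tx = [(math.log(x), math.log(y)) for x, y in pairs]

# 2) Fit a bivariate normal: means, variances and covariance
n = len(tx)
mx = sum(x for x, _ in tx) / n
my = sum(y for _, y in tx) / n
vx = sum((x - mx) ** 2 for x, _ in tx) / n
vy = sum((y - my) ** 2 for _, y in tx) / n
cxy = sum((x - mx) * (y - my) for x, y in tx) / n

# 3) Condition on the observed predictor and sample an ensemble
def forecast(predictor, n_members=51):
    """Sample the conditional normal, then back-transform each member."""
    lx = math.log(predictor)
    mu = my + cxy / vx * (lx - mx)      # conditional mean
    sd = math.sqrt(vy - cxy ** 2 / vx)  # conditional standard deviation
    return [math.exp(random.gauss(mu, sd)) for _ in range(n_members)]

members = forecast(8.0)
print(len(members))  # → 51
```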

We evaluate the skill of sub-seasonal forecasts relative to a climatology reference. The results show that the month 1, 2 and 3 ahead forecasts are respectively positively skillful (have smaller errors than climatology forecasts) in 74%, 57% and 46% of the test cases (Figure 2). The variation of sub-seasonal forecast skill is associated with rainfall seasonality, streamflow variability and catchment geomorphology. As lead time increases, forecast skill reduces and the BJP-generated ensemble forecasts tend towards climatology. The sub-seasonal forecasts are overall reliable in ensemble spread at different lead times.
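"Positively skilful" can be made concrete with a generic skill score against the climatology reference, SS = 1 - score_forecast / score_climatology, which is positive exactly when the forecast errors are smaller than climatology's. (RMSE below is only a stand-in; the paper uses its own verification metrics.)

```python
# Generic skill score versus a climatology reference: positive values
# mean the forecast beats climatology, zero means no improvement.
def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

def skill_score(forecast, climatology, obs):
    return 1.0 - rmse(forecast, obs) / rmse(climatology, obs)

obs = [10.0, 12.0, 8.0, 15.0]
clim = [11.25] * 4               # climatology: the mean of the observations
fc = [10.5, 11.5, 8.5, 14.0]     # forecasts closer to the observations

print(round(skill_score(fc, clim, obs), 2))  # → 0.74, i.e. positively skilful
```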


Figure 2: Cumulative distributions of skill for month 1, 2 and 3 ahead forecasts

Seasonal forecasts are obtained by accumulating sub-seasonal forecasts of streamflow in months 1, 2 and 3 ahead. We find that the accumulated seasonal forecasts are reliable and more skilful than climatology forecasts. Further, the seasonal forecasts accumulated from monthly forecasts are in general similarly skillful to direct seasonal forecasts (Figure 3, below).
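Assuming each ensemble member is a coherent three-month trace, the accumulation is simply a member-by-member sum (values invented for illustration):

```python
# Accumulate monthly ensemble forecasts into a seasonal ensemble by
# summing months 1, 2 and 3 trace by trace (three members shown).
month1 = [10.0, 12.0, 11.0]
month2 = [8.0, 9.0, 10.0]
month3 = [6.0, 7.0, 5.0]

seasonal = [m1 + m2 + m3 for m1, m2, m3 in zip(month1, month2, month3)]
print(seasonal)  # → [24.0, 28.0, 26.0]
```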


Figure 3: Skill of accumulated and direct seasonal forecasts

The BJP modelling approach is an integral part of the Bureau’s forecasting system to generate seasonal streamflow forecasts. We have demonstrated the potential of BJP to produce reliable and skilful sub-seasonal forecasts. Our sub-seasonal forecasting work will be incorporated into the Bureau’s system to provide informative sub-seasonal forecasts for water management.

Do you want to know more about these results?

Please check the following paper for more details: ‘Ensemble forecasting of sub-seasonal to seasonal streamflow by a Bayesian joint probability modelling approach’, Tongtiegang Zhao, Andrew Schepen, Q.J. Wang, Journal of Hydrology, 541 (Part B): 839-849, doi: 10.1016/j.jhydrol.2016.07.040.

Posted in columnist, forecast techniques, seasonal prediction

Flood memory and historical marks of high waters

Contributed by Maria-Helena Ramos (Irstea, France)

A flood mark in Lyon (France) for the flood event on 21 Jan 1955 in the Saone River

Last year, the Hepex Portal published a blog post by Richard Davies (from Floodlist) about the UK and Ireland floods in December 2015 and January 2016. When navigating through the floodlist website, I found a page dedicated to flood and high water marks (here).

I have always found these marks, indicating the level reached by the waters of a river (or any other waterbody) after a flood event, fascinating. It is not only because of their hydrological importance or their contribution to the analysis of extreme events. I like the “memorial” role they play for nature. They remind us that, on those particular occasions, that river flooded or reached levels that were remarkable for the people living in the surroundings.

A recent paper (Sustainable flood memory: Remembering as resilience) has discussed the importance of keeping flood memory alive: “Flood marks, flood gauges, early warning systems (mediated by television, radio and online), public photographs, videos and news reports are the mnemonic practices that ensure that floods cannot be entirely erased from lived memory.” Interestingly, the authors mention an “unexpected materialization” of flood memory from a person that “kept a decanter on her table, which contained (after over 5 years) a volume of turbid water from when the flood had entered her property.” (check the nice photo illustrating it in the paper).

The authors’ study indicates that “Personal memory is a finite resource of potentially high-energy engagement. Forgetting how to live with flooding reveals a political economy of mismanaging memory (as much as water) that drives vulnerability.” (more about this topic can be seen here too).

The Zouave in Paris as a landmark for floods in the Seine River

Last year in France, we had severe flooding in the Seine and Loire river basins in late May-early June. A heavy rainfall event reached the northern part of France and was characterized by persistent and strong rain intensities. According to Météo-France, May 2016 was the rainiest month of May in northeast France since 1959. In some areas, rain fell over soils that were already wet due to previous rainfalls over the month, which contributed to severe flooding, mainly over the Upper and Middle Seine river basin and in several tributaries of the Middle Loire river basin.

In Paris, the rising levels of the Seine River were followed closely during the event, due to their impact on commercial activities on the banks and on the public underground transportation system. In addition, the potential catastrophic consequences of flooding of the Seine River include the risk of flooding buildings such as the Orsay, Louvre and Grand Palais museums or the National Museum of Natural History. If you walk along the Seine in Paris, you will see how close these and many other monuments are to the river.

“Zouave du Pont d’Alma” in Paris on 3 Jun 2016 (source: Le Monde)

But let’s come back to the flood marks. As you probably know, the most famous “landmark” for floods of the Seine River in Paris is the “Zouave du Pont d’Alma”. Situated at the Alma bridge (here), this statue by the artist Georges Diebolt, built in 1856, is an indicator (although not a very accurate one, as discussed in an article in Le Monde on 3 June 2016) of the severity of a flood event: in June 2016 waters went up to the hips (6.10 m), as in 1982 (6.18 m); in 1924, up to the waist (7.30 m); and in 1910, up to the shoulders (8.62 m). It is often said that “floods are occurring in the Seine river when the statue has its feet in the water” (which starts at about 1.5 m, and can mean the beginning of trouble for many of the city’s inhabitants).

The “Zouave du Pont d’Alma” as a landmark for floods in the Seine River in Paris

The Zouave is a famous mark in Paris and I’ve just found out that it even has a dedicated song, which you can listen to here or whose lyrics (in French) you can read here (thanks to the passionate French association of hydrologists and my colleague Vazken Andréassian, a collector of hydrologic poems, as mentioned here).

But how to mark a flood?

Anecdotes aside, I want to come back to the flood and high water marks. It seems that marking a flood is not as simple as one might think. I guess (or hope) no one was actually there in person making the mark while the flood occurred. This means that evidence for a flood mark needs to be searched for just after a flood, and hence the importance of a good technique to do it effectively.

The U.S. Geological Survey recently published a manual with techniques and methods to identify and preserve high-water mark data (here). They say that “searching for recent high-water marks requires an eye for detail that is best developed through field practice.” It is a well-illustrated guide, and tips are included to help you collect data (check out page 44): Safety first; Respond quickly; Look up; Stand back; Visualize the flood; Hunt for hidden clues; Think ahead; When in doubt, collect more data.

I personally like the “Visualize the flood” tip, where you are advised to “imagine the water at the peak stage”. I think it can be a good exercise to put into practice (using our imagination) many of the concepts we have learned about channel hydraulics and river flow velocities, but also about local vulnerability and flood exposure.

This is me in 2009 showing a flood mark in Paris for the 1910 flood in the Seine River

In France, we have a national database where you can add your photo and contribute to keeping the memory of floods (check here).

Do you have a similar one in your country? Would you like to tell us more about it?

Contact Hepex co-chairs and website administrators if you would like to propose a blog post telling a bit of the history of floods and flood marks in your city or country, or if you just want to post your best photo and share it with us!

Posted in data systems, floods, historical

HEPEX 2016 Year in Review

Contributed by Maria-Helena Ramos, QJ Wang, Andy Wood and Fredrik Wetterhall (Hepex co-chairs)

The Hepex Portal published 46 posts in 2016. Here is the year in review, with its highlights.

A hot topic for 2016?

Certainly, the winner is: Global, continental and countrywide forecasting. It was a recurrent topic in the posts published during the year. We have learned that:

  • The current state of large-scale (global and continental) operational flood forecasting is largely due to the integration of meteorological and hydrological modelling capabilities, improvements in data, satellite observations and land-surface hydrology modelling, and increased resources and computer power (by Rebecca Emerton)
  • Large-scale modelling of complex basins with floodplains, braided drainage networks, or flat relief is an ongoing research topic in Brazil (by Ayan Fleischmann and Fernando Fan)
  • One of the challenges of international forecasting systems is that they have to deal with inconsistent collations of data from different countries (by Chantal Donnelly)
  • A new global precipitation dataset, the MSWEP, is available and can be useful for a broad range of hydrologic applications (by Hylke Beck et al.)
  • Handling large amounts of data for ensemble forecasting can be a nightmare, and benefits are expected from using standardised, well supported and self-describing binary file formats to make data sharing easier (by James Bennett)
  • There is an active community to support the integration of GloFAS forecasts into existing national and local forecasting capabilities (by Rebecca Emerton, Liz Stephens and Hannah Cloke)
  • Solutions to improve global forecasting can arise from a joint community effort, such as the #FloodHack held on 16-17 January at ECMWF in Reading (by Fredrik Wetterhall)

And do you know any of those brave people who agreed to be interviewed for Hepex in 2016?

Check out Hepex interviews here:

A big ‘thank you’ to all the interviewees!

The ensemble of Hepex interview posts can be seen here. If you know someone who would also have something to tell us, just prepare your interview and send the post to us for online publication in our Portal.

A step into historical hydrology

A novelty in the Hepex posts this year was a post proposed by Andy Wood and colleagues tracing the origins of ESP, a widely applied technique for producing seasonal streamflow predictions. It will be great to see more posts on forecasting history. If you have an idea that would be of interest to Hepex readers, just write it down!

And many more activities: workshop, columnist teams, special issue, experiments, games…

  • The highest point of our community activities in 2016 was certainly the workshop in Quebec in June, with about 100 participants from all over the world. A summary post highlighted the three main aspects discussed during the workshop: science, operations and applications. It shows how Hepex is contributing to each of these aspects and, most importantly, how it is fostering the community to link them for a more integrative view of hydrological forecasting. The workshop presentations that were kindly made available can be retrieved here.
  • In 2016, we have introduced the Hepex guest columnist teams. CSIRO (Australia), SMHI (Sweden), LSH (Brazil), and Irstea (France) contributed a total of 16 posts over the year. You can see all posts here, and enjoy reading about their activities, views and opinions on hydrological forecasting and related topics.
  • Also, right at the beginning of the year, we launched the HESS special issue on sub-seasonal to seasonal hydrological forecasting. This was one of the outputs of the workshop held at SMHI in Norrköping in 2015. There are already 13 papers in this special issue, and it will remain open for submissions until 31 Mar 2017. You are welcome to propose your contribution.
  • Another output is the Seasonal streamflow forecast experiment. Data have been collected and formatted, and a protocol for inter-comparisons has been developed. We expect the first results to emerge in 2017. If you want to take part in it, check the dedicated webpage and contact the leaders of the experiment. You can also check the poster and the oral presentation we had at AGU 2016 in San Francisco.
  • Another HEPEX experiment that has been recently launched is the Data Assimilation inter-comparison experiment. It comes after the Hepex workshop in Quebec, when we had a successful “break-out session” on the topic. The DA experiment is being piloted by Dirk Schwanenberg (Kisters) and Albrecht Weerts (Deltares). Contact them if you want to participate.
  • We cannot forget that 2016 saw the first online Hepex game: check what Louise Arnal has proposed to the community here. And if you have ideas for new games, just go ahead and share them with us next year.

Most viewed post of the year?

Well, we think you can guess which one hit (again) the podium with over 600 views… if not, just check here and enjoy reading it!

So, what’s next for 2017?

  • Don’t miss the hydrological forecasting sessions at EGU on 23-28 April in Vienna: the deadline for abstract submission is 11 Jan 2017. Descriptions (and a quiz to entertain you during your holidays) can be found in this post.
  • Next year will also see the IAHS 2017 Scientific Assembly. This time, the International Association of Hydrological Sciences will be holding its workshops in Port Elizabeth, South Africa, from 10 to 14 July 2017. Hepex particularly encourages you to submit an abstract and participate in Session 12 – Probabilistic forecasts and land-atmosphere interactions to advance hydrological predictions. You have until 14 Feb 2017 for submissions.
  • And if you have not done it yet, submit your paper on seasonal forecasting to our special issue (see above) before 31 Mar 2017.
  • And certainly much more to come! Keep an eye on our Portal!

We invite you all to contribute your own blog posts (tips can be found here) and to support the organization of Hepex activities in 2017.

Happy holidays!

Posted in activities

Short-term optimization of a tropical hydropower reservoir operation using deterministic and ensemble forecasts

Contributed by Fernando Fan, member of the LSH Research Group Guest Columnist Team

As we have said in previous posts, hydropower is the most important source of electricity in Brazil, and it is subject to the natural variability of water yield. Extremes in water yield lead to risks of power production deficits during droughts, and to safety issues in the reservoir and downstream river reaches during flood events. One building block of the proper management of hydropower assets is the short-term forecasting of reservoir inflows as input to an online, event-based optimization of the release strategy.

While deterministic forecasts and optimization schemes are the established techniques for short-term reservoir management, the use of probabilistic ensemble forecasts and stochastic optimization techniques is receiving growing attention. In a recent work (Fan et al., 2016), we showed some hindcasting and closed-loop control experiments for a multi-purpose hydropower reservoir in a tropical region in Brazil.

Fig. 1: Location of the Três Marias basin

The case study is the hydropower plant of Três Marias, located in southeast Brazil (Fig. 1). The reservoir is operated with two main objectives: (i) hydroelectricity generation and (ii) flood control downstream of the dam (at the City of Pirapora).

In the experiments, precipitation forecasts based on observed data, as well as deterministic and probabilistic forecasts, are used in a hydrological model to generate streamflow forecasts over a period of two years (Fig. 2).

Fig. 2: Data used in the 2-year period experiment

The online optimization relies on deterministic and multi-stage stochastic versions of a model predictive control scheme, in combination with a novel scenario tree reduction technique.

Results (Figures 3 and 4) for a perfect forecast show the potential benefit of the online optimization and indicate a desired forecast lead time of 30 days (blue dots). In comparison, the use of actual forecasts with shorter lead times of up to 15 days shows the practical benefit of actual operational data (black dot).

Fig. 3: Peak flow at Pirapora (results from the optimization using different input data and multiple lead-times for the forecasts)

Fig. 4: Volume over the 2000m³/s flooding threshold at Pirapora (results from the optimization using different input data and multiple lead-times for the forecasts)

It also appears from our results that the use of stochastic optimization combined with ensemble forecasts leads to a significantly higher level of flood protection without compromising energy production (Figure 5).

Fig. 5: Energy generation (results from the optimization using different input data and multiple lead-times for the forecasts)

Do you want to know more about these results?

Posted in columnist, operational systems, water management

The family of the GR hydrological models – Interview with Charles Perrin and Vazken Andréassian

Contributed by Guillaume Thirel and Maria-Helena Ramos, members of the Irstea Guest Columnist Team

You may have already heard of one of the GR models developed by the Catchment Hydrology research group at Irstea in the Centre of Antony (France). Or you may have already run one of these models in your study catchments or as an exercise with your students.

These models started to be developed in the 1980s, with the support of data from the Orgeval experimental basin, a 104-km² catchment within the River Seine basin, managed by the GIS-ORACLE Group and monitored by Irstea since 1962.

Historically, the development of the models was dependent on the time step, and different lumped, reservoir-based structures were specifically proposed for the simulation of river flows at the hourly, daily, monthly and annual time steps.

Today, a time-step-independent structure is being developed, and progress has also been made through the incorporation of new modelling components, e.g. snow modelling, model uncertainty quantification, and data assimilation for real-time forecasting. Semi-distributed versions of the GR models were also implemented, as well as dedicated calibration strategies. The main objective of these developments is to robustly model flows in gauged and ungauged catchments (check the team’s publications here).

Claude Michel (who retired in 2007) is the father of these developments and showed an original path to hydrological modelling in the 1980s. Some of his students, among whom are Vazken Andréassian and Charles Perrin (who fortunately are not planning their retirement yet!), have tried to follow this direction.

We asked Charles and Vazken some questions about hydrological modelling, the history and future of the GR models, as well as their applications to real-time flood forecasting in France:

GT: The most well-known of the GR models is certainly GR4J, a daily model with 4 parameters, which has been applied in several parts of the world and has had quite some success in Australia. Why do you think GR4J is so successful in simulating streamflows? What are its main strengths?

CP: Probably this question should be asked of the users themselves! I can see three main aspects that may have encouraged the use of this model in various contexts. Firstly, the model is simple and parsimonious. These two qualities are the result of the development approach adopted by Claude Michel, which prevented us from proposing overly complex or overparameterized models. This makes the model very easy to implement.

Secondly, partly because of data availability issues, the daily model was the starting point of the GR model developments, and hence received much attention. Since the daily time step is well adapted for a wide range of practical applications, the daily model was widely tested. Last, but not least, the model proved to be quite robust and general enough to be applied in various hydro-climatic contexts, which was also a quality sought during its development. This was shown in various intercomparisons (led, for example, by Australian colleagues, among others), which may have increased the confidence of end-users.

However, the term “success” is probably excessive. Feedback from end-users has also shown that the “Court of Miracles” of the model remains large [see Andréassian et al., 2010, for a discussion], and that it can obviously not be considered a panacea in all circumstances.

MHR: The GRP model is widely used for real-time flood forecasting in France. Looking back in time, what were the most difficult steps for its implementation in real-time and its use by the operational services?

CP: In the 1990s, there were no actual flood forecasting services in France, and the centres in charge of flood monitoring were mostly doing flood warning based on observations and simple propagation methods on main streams. However, several catastrophic events, especially in southern France, stressed the urgent need to develop appropriate rainfall-based forecasting systems to increase anticipation.

The research developed by Claude Michel at that time produced forecasting tools based on the GR models and simple assimilation techniques. From theory to practice, one often says, there is a gap. Developing operational tools based on the GRP model (our flood forecasting model) was something of a trial-and-error process. The main difficulty was to find a way to develop a system that could (1) meet users’ expectations (in terms of results and ease of use), (2) be flexible enough to evolve with the results of research, and (3) be compatible with the limited software development means available.

Today, thanks to the work coordinated by Carina Furusho-Percot in our team, the model is quite widely used operationally in France. This provides very interesting feedback that shows the advantages… but also the limits of the model, which is very useful for identifying the most promising paths to improve it. The next step is probably to widen the use of ensemble forecasts through the GRP model by operational services in France; an activity that in our team is led by you, and which should benefit from the extensive work done by the HEPEX community.

GT: If you could go back in time and start developing a hydrological model again, what would be your first concern, or where would you first put the focus of your development?


Two different ‘models’ of the Spitfire. Which one is the most faithful-looking model? Which one would fly? (source: VA, after Sten Bergström)

CP: Sten Bergström often used this metaphor of the model of the Spitfire, distinguishing the issues of model fidelity and model prediction capacity. Our team has given priority to prediction capacity, trying afterwards to find links with catchment-scale representation. Other teams may prefer the other way round. Probably (or even fortunately) there is no ideal or single path to developing models, and both approaches have their advantages and drawbacks. However, as applied modellers, starting by seeking models that can “fly” is probably a nice (and already large) endeavour.

One pleasant feature of the approach we developed at Irstea is that our simple models are white boxes, whose internal behaviour is probably easier to understand than that of much more complex models. To go a step further, there are certainly ways to develop models that will be more efficient because their fidelity is improved. Probably interesting avenues to explore in the future!

GT: Vazken, in your opinion, what is a good hydrological model?

VA: This is a tough question. But to answer it briefly, I would start by saying that there is certainly no unique answer. We must recognize that we now live in a world of large model diversity, and the real questions are: which hypothesis can we test with which model, and which decision can we base on which model?

I like to draw an analogy between hydrological models and maps. I believe there are no ‘good’ maps, only maps that are better than others depending on what you would like to do with them. Sometimes a map of imaginary places can have a pedagogical interest [see below].


Pedagogical map summarizing geographical features (source: VA, taken from an old manual of geography)

Like a map, a model represents reality but is not reality. Just as we often say that “the map is not the territory”, I would like to hear modellers acknowledge that the model is not the reality, and that it unavoidably results from a simplification, a conceptualization. For those who like literature, I would recommend reading Lewis Carroll, Borges and Umberto Eco, who all wrote on the absurdity of the 1:1 scale map. We could say similar things about the all-mighty ‘physical’ hydrological model.

Like the map, the model is wrong. But the map is very useful to the traveller, and the model to the hydrologist. Like the map, the model is a decision support tool, which can take different shapes depending on the objective of the user.


Three different maps but useful at a different level for a traveller wanting to reach the charming city of St Julien-Molin-Molette (source: VA, taken from geoportail IGN, France)


Incomplete Australia by M. Thevenot in 1663 (source: VA)

And the amazing thing with maps and models is that they can be extremely useful even when they are not exhaustive. See the early map of Australia by M. Thevenot from 1663: it is incomplete… but it was the best that geographers could do at that point in history. We should certainly look at our models in a similar way.

GT: If you were to cite one essential property of the ‘good’ model?

VA: Nowadays, I tend to give more and more importance to the extrapolation capacity of models. In hydrology, we need to build maps of the future, of a future we have not yet seen, of extreme events that will exceed all those we have observed. For this, I would say that it is essential to evaluate the extrapolation capacity of our models. Faith in the ‘physical’ nature of the equations is not enough. We need objective evaluations, which can take the form of “crash tests”.

MHR: Vazken, we all know that one of your hobbies is collecting hydrological poems. Have you ever run into a poem dedicated to hydrological modelling?

VA: Hm… there are many interesting hydrologic poems on floods and droughts (see here some of the poems I have collected), but I have never run into a poem dedicated to hydrologic modelling. It is, however, not an impossible task: after all, James Clerk Maxwell himself put his own equations into verse (just google “A Problem in Dynamics”), and more recently Tom Pagano wrote a poem on flow forecasting (see Tom’s blog or look here). Jean-Claude Olivry wrote a very interesting poem on African hydrology and on hydrological processes in general (here), and Woody Guthrie wrote countless songs about the Grand Coulee Dam. Nothing specific to modelling, but hundreds of poems on hydrological phenomena.

Thank you, Charles and Vazken, for your time and insights!

For more info:

  • A brief history of the GR models can be found here.
  • If you are interested in using the GR models, you can download the airGR R-package recently developed by our research team. It allows the calibration and running of several models, from the hourly to the annual time step. All information can be found here.

Acknowledgements: This is Irstea’s last contribution as guest columnist team to the HEPEX blog for 2016. We greatly enjoyed this opportunity to communicate and share our viewpoints, and we hope you have all enjoyed reading our posts.

Posted in columnist, hydrologic models, interviews | 2 Comments

Flood forecasting in the UK: what should we learn from the winter 2015 floods? Interview with Hannah Cloke and David Lavers

Contributed by Louise Arnal

In November 2014, a HEPEX post entitled “Flood forecasting in the UK: what should we learn from the Winter 2013/14 floods?” was written by Liz Stephens and Hannah Cloke (it can be found here). This post presented the forecast lessons learnt after the winter 2013/14 floods in the UK. Two years later, progress has been made in flood forecasting, early warning and emergency response, but some challenges remain.

In September 2016, the UK government Department for Environment, Food & Rural Affairs (Defra) published a National Flood Resilience Review (NFRR). This review aimed at:

  1. understanding the risks of river and coastal flooding from extreme weather over the next 10 years in the UK;
  2. assessing and finding ways to improve the resilience of key local infrastructure in the UK (i.e., energy, water, transport and communications);
  3. improving the response to flood incidents in the UK, focusing on the installation of new temporary flood defences.

Hannah Cloke (University of Reading) and David Lavers (ECMWF) were both sitting on the Scientific Advisory Group, whose role was to review and validate the science in the NFRR. They have kindly agreed to answer some questions about this review.

Louise Arnal: Why was the NFRR written?

Hannah Cloke: The NFRR was written in response to the Cumbria floods of winter 2015. This event showed the need to reassess the current risk of flooding in the country and find ways for the country to be better prepared for future flooding and extreme weather events. For this review, the Met Office and the Environment Agency (EA) stress tested the EA’s models and flood risk maps (for river and coastal flooding), using extreme rainfall and tidal scenarios produced by the Met Office. This was really important because the EA have so far based their assessments of the risk of fluvial floods on historical records of river levels during previous floods, rather than on Met Office extreme rainfall projections.

Louise Arnal: How were those extreme rainfall scenarios generated?

David Lavers: Possible extreme rainfall scenarios were developed by considering “ensemble forecasts and projections”, which give a range of counter-factual worlds. The difference between these scenarios and the rainfall that has already been observed was taken as an indicator of how much worse the rainfall could be. The Met Office ran 11,000 months of weather simulations, from which they estimated that a ‘plausible’ uplift for winter monthly rainfall totals would be between 20 and 30%, although this figure differs for each region across England and Wales. These possible rainfall differences are similar to those calculated by ECMWF, which ran a similar experiment. The Met Office then applied these uplifts to simulations of recent extreme rainfall events (2 km resolution, every fifteen minutes). These new simulations were then used as input to the EA hydrological models.
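As a rough illustration of this uplift step (a sketch only, not the Met Office's actual code; the event data and the 25% figure below are invented for illustration within the 20-30% range cited):

```python
# Sketch: apply a regional percentage "uplift" to a recorded extreme
# rainfall event to create a plausible worst-case scenario.

def uplift_rainfall(event_mm, uplift_pct):
    """Scale each rainfall value of an event by (1 + uplift_pct/100)."""
    factor = 1.0 + uplift_pct / 100.0
    return [r * factor for r in event_mm]

# Fifteen-minute rainfall totals (mm) from a hypothetical recorded event
observed = [2.0, 5.5, 8.0, 6.5, 3.0]

# 25% uplift is illustrative; the NFRR cites 20-30% depending on region
scenario = uplift_rainfall(observed, 25)
print(scenario)
```

The uplifted series would then drive the hydrological models in place of the original event.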

Note from the review: ‘Plausible’ extreme tidal scenarios were produced by combining a recent storm surge with the highest recorded astronomical tide.

Louise Arnal: What hydrological models does the EA use for flood forecasting?

Hannah Cloke: The EA have around 2000 local detailed models, with a spatial resolution ranging from a few km to a whole catchment. These models combine three main components:

  • Survey information: the river channel shape and surrounding landscape.
  • A hydrological model.
  • A hydraulic model, which enables flood extent and depth to be mapped.

The EA combines outputs from their local detailed models with less detailed broad scale modelling outputs and observed flood data to create the national Extreme Flood Outlines (EFO) maps. These maps show the extent of extreme floods (from rivers and sea) at any specific location in the UK, taking into account flood defences. On these maps, the outer boundary of an area is called the Extreme Flood Outline and shows the extent of a fluvial or tidal flood with a 0.1% chance of happening in any year at this location.
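To put the 0.1% annual chance in perspective, the probability of at least one such flood occurring over a longer horizon can be computed directly (assuming, as a simplification, that years are independent):

```python
# Probability of at least one flood with annual exceedance probability p
# occurring within n years, assuming independence between years.

def prob_at_least_one(p_annual, n_years):
    return 1.0 - (1.0 - p_annual) ** n_years

# A 0.1% annual-chance flood (the Extreme Flood Outline threshold)
p = 0.001
for horizon in (1, 30, 100):
    print(f"{horizon:4d} years: {prob_at_least_one(p, horizon):.1%}")
```

Over a 30-year mortgage, for example, the chance of experiencing such a flood is roughly 3%, not 0.1%.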

Louise Arnal: So, how were these maps used in this review?

David Lavers: A stress test was performed for the review by forcing several of the EA’s local detailed models with the extreme scenarios produced by the Met Office. This was done to see whether these floods would extend beyond the areas shown in the EFO maps. In modelling the predicted floods, a ‘worst case’ approach was adopted for other model parameters (e.g., prior soil saturation). For the review, the EA compared their current EFO maps with the extreme-scenario EFO maps for six case studies, four inland and two coastal areas. They observed that the flood extents and depths lay within, or very close to, the current EFOs. Figure 1 is an example of the EFO map for Oxford.


Figure 1 EFO map for Oxford, from the NFRR (2016).

Louise Arnal: How confident can we be in those results?

David Lavers: The assessment of flood risk depends on observed records of river flows, which typically go back only 30-40 years; the longer the time series, the more accurate the results. We advised the Met Office and the EA to extend their flood records using information from historical sources, for example old newspapers or photographs.

Hannah Cloke: Although we judged the EFO maps to have passed a reasonable stress test for this review, the extent of flooding is impossible to forecast precisely, and the possibility of floods extending beyond the EFO cannot be excluded. One source of uncertainty in this review is changes in the catchment response over time, such as a change in the capacity of the catchment to absorb water, which were not accounted for. We concluded that the results were indicative and that the statistical methods to reduce uncertainties in flood estimation should be developed further. This work should also be extended to surface water and groundwater flood risk.

Louise Arnal: One major theme of this review was the communication of flood risk. What do you think are the current challenges in the UK and what advice was given to Defra?

Hannah Cloke: According to a survey from the EA, “although nearly half the population surveyed in recent research reported being aware of a local flood risk, only 7% felt this risk applied to their own property.” There is clearly an awful lot of work left to do to make sure that everyone is aware of their flood risk and knows how to prepare for flooding (see EA’s #floodaware campaign). One of the most important conclusions about flood risk communication was that widespread scientific descriptions of a ‘1 in x year’ flood risk are confusing to the public and alternative descriptions must be developed.

Louise Arnal: In November 2016, just two months after the NFRR was published, the UK House of Commons Environment, Food and Rural Affairs Committee (EFRA; the committee appointed by the House of Commons to examine the expenditure, administration and policy of Defra) published a Future Flood Prevention Report. This report was written as a response to the NFRR, which according to EFRA provided only limited solutions, insufficient to tackle “fundamental structural problems”. What do they mean by that?

David Lavers: One main criticism they make is that the EA currently relies too much on preventing floods by constructing defences at the point of impact, for example in town centres. Although the NFRR mentions that engineered hard defences are not the only part of the solution, alternative methods were not addressed in the review. According to the EFRA committee’s report, more emphasis should be placed on catchment measures, which can help manage flood waters from the source and along the river path to the point of impact.

Louise Arnal: What exactly are catchment measures? And how effective are they?

Hannah Cloke: Catchment measures are flood management measures that can be used to reduce the risk of flooding across a river catchment. They include natural flood storage measures like afforestation to increase water infiltration upstream close to the source of runoff, a combination of natural flood storage measures and ‘soft engineering’ defences like dykes along the river path, and measures to increase the resilience of communities in settlements at the point of impact (see Figure 2). These measures were set up for many smaller catchments in response to the Pitt Review of the 2007 summer floods (an example is the Pontbren project). So far these measures have been trialled only on small catchments, where they have shown a positive impact on flood risk alleviation. But there is limited evidence of their effectiveness on larger, more lowland catchments and for extreme events such as the Cumbria floods. The report therefore recommends further tests of these methods on a wider set of catchments.


Figure 2 Catchment measures, from the Future Flood Prevention Report (November 2016).

Louise Arnal: What are the next steps towards improving flood forecasting in the UK?

David Lavers: The report mentions the need for more accurate severe weather forecasts, in both time and space. Furthermore, although the NFRR has reduced some uncertainties in projecting the near-term impact on rainfall and flooding, more remain to be tackled, for example by extending the flood event time series. And last but not least, modelling and forecasting practices should move towards fully integrated flood risk modelling, from weather forecasting to impact assessment.

Louise Arnal: Thank you both for your time!

Where you can find the reports on which this interview was based:

Posted in floods, interviews, risk management | Leave a comment

Uncertainty in operational hydrological forecasting: Insights from SMHI’s services

Contributed by Ilias Pechlivanidis  (SMHI), member of the SMHI Guest Columnist Team


The production of hydrological forecasts generally involves model selection and setup, calibration and initialization, verification and updating, and the generation and evaluation of forecasts. However, the precision of hydrological forecasts is often subject to both epistemic and aleatory uncertainties, with the former related to the various components of the production chain and the data used.

Aleatory uncertainty refers to quantities or natural phenomena that are inherently variable over time and space, and hence characterised as random or stochastic. Epistemic uncertainty is related to our lack of understanding of a hydrological system (e.g. model structure and parameters), and is further propagated into the description of the system.

In operational systems, we commonly use field observations to calibrate and initialize hydrological models; however, recent technological advancements have allowed us to use additional information, i.e., remote sensing data and meteorological ensemble forecasts, to improve hydrological forecasts (Olsson and Lindström, 2008). For instance, the Ensemble Prediction System (EPS) approach is used to acknowledge the uncertainty in the meteorological initial conditions and to generate probabilistic forecasts.
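As a minimal sketch of how an EPS turns an ensemble into a probabilistic statement (the member values below are invented for illustration):

```python
# Sketch: derive a probabilistic forecast from an ensemble of discharge
# forecasts by counting the members exceeding a threshold of interest.

def exceedance_probability(members, threshold):
    """Fraction of ensemble members exceeding a discharge threshold."""
    return sum(m > threshold for m in members) / len(members)

# Hypothetical 10-member discharge forecast (m3/s) for one lead time
ensemble = [85, 92, 110, 78, 130, 95, 101, 88, 120, 99]

# Threshold chosen for illustration, e.g. a bankfull discharge
p_flood = exceedance_probability(ensemble, 100)
print(f"P(discharge > 100 m3/s) = {p_flood:.0%}")
```

Operational systems such as the one described below work with 51 members rather than 10, but the principle is the same.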

SMHI operationally produces hydrological short-term forecasts (10 days; deterministic and probabilistic) in Sweden based on the HBV (Lindström et al., 1997) and the S-HYPE (Lindström et al., 2010) hydrological models (Table 1). Although some techniques and methods are shared between the two models, uncertainties can affect the performance of the two services differently.

My objective in this blog post is to present some of these uncertainties in the production chain and some ways to reduce them.

Table 1. Datasets used for hydrological forecasting at SMHI. 

 Period                         HBV               S-HYPE
 1961 – 1 month prior to today  Archive pthbv*    Archive pthbv*
 1 month prior – today 06:00    Realtime pthbv*   Realtime pthbv*
 Today 06:00 – +10 days 06:00   51 ensemble EPS   51 ensemble EPS**

*pthbv: gridded 4 km data set of temperature and precipitation over Sweden

**Current operational service includes EPS for S-HYPE

Model setup, structure and parameters

S-HYPE (its 2012 version, here named SH2012) simulates the same processes as HBV, but includes more water pathways, and its parameters are more closely linked to physiographic landscape characteristics (e.g. HRUs). S-HYPE is also set up at a resolution of some 37,000 subbasins, while HBV uses some 1,000 subbasins. S-HYPE also explicitly models routing through a large number of lakes.

Both systems operate at a daily time step; however, this can limit the representation of sub-daily processes, which can occasionally result in low forecast performance for some events. This is particularly observed during days/periods with temperatures around 0°C: small temperature deviations within the day would in reality produce a mix of snow melting and accumulation, while at the aggregated daily time step the model yields either melting or accumulation (this is partially compensated by using a temperature interval over which the snowfall fraction decreases from 100% to 0%).
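The temperature-interval compensation mentioned above can be sketched as a linear snow/rain partition; the threshold and interval width below are illustrative, not SMHI's calibrated values:

```python
# Sketch: instead of a hard 0°C rain/snow switch, the snowfall fraction
# ramps linearly from 100% to 0% across an interval around the threshold.

def snowfall_fraction(temp_c, t_threshold=0.0, half_interval=1.0):
    """Linear ramp: 1.0 below (t_threshold - half_interval),
    0.0 above (t_threshold + half_interval)."""
    lower = t_threshold - half_interval
    upper = t_threshold + half_interval
    if temp_c <= lower:
        return 1.0
    if temp_c >= upper:
        return 0.0
    return (upper - temp_c) / (upper - lower)

for t in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"{t:+.1f} C -> snow fraction {snowfall_fraction(t):.2f}")
```

A daily mean of exactly 0°C thus yields half snow and half rain, rather than an all-or-nothing outcome.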

Nevertheless, Fig. 1 shows that both models (HBV Non-AR and SH2012 Non-AR) achieve a good performance (Kling-Gupta Efficiency, KGE > 0.6) at all lead times. S-HYPE outperforms HBV since the former includes real-time updating of the water discharge downstream of gauges for some of the stations, a feature that is lacking in the HBV forecasting system.
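For reference, the KGE (Gupta et al., 2009) combines correlation, variability bias and mean bias in a single score; a minimal implementation, with invented discharge series for the usage example:

```python
# Kling-Gupta Efficiency:
#   KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)
# where r is the correlation between simulated and observed flows,
# alpha the ratio of their standard deviations, beta the ratio of means.
import math

def kge(sim, obs):
    n = len(obs)
    mean_s, mean_o = sum(sim) / n, sum(obs) / n
    std_s = math.sqrt(sum((s - mean_s) ** 2 for s in sim) / n)
    std_o = math.sqrt(sum((o - mean_o) ** 2 for o in obs) / n)
    cov = sum((s - mean_s) * (o - mean_o) for s, o in zip(sim, obs)) / n
    r = cov / (std_s * std_o)
    alpha = std_s / std_o
    beta = mean_s / mean_o
    return 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = [10.0, 12.0, 15.0, 20.0, 18.0, 14.0]  # observed discharge (invented)
sim = [11.0, 12.5, 14.0, 19.0, 17.5, 14.5]  # forecast discharge (invented)
print(f"KGE = {kge(sim, obs):.3f}")
```

A perfect forecast gives KGE = 1; the KGE > 0.6 quoted above therefore indicates a reasonably good forecast.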

Updating methods

The operational flood forecasts at SMHI are primarily updated using an autoregressive (AR) forecast of the error. State updating and/or corrections of the input data are also sometimes used. Measured discharge and lake water levels are used to correct modelled values by replacing calculated values with observed ones.
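A minimal sketch of such AR error updating (the AR(1) coefficient and discharge values are illustrative, not SMHI's operational settings):

```python
# Sketch: the last known forecast error (simulated minus observed) is
# propagated into the lead times with an AR(1) decay and subtracted
# from the raw forecast, so the correction is strongest at short leads.

def ar_update(raw_forecast, last_error, phi=0.8):
    """Correct each lead time by the decayed last observed error.
    phi is the AR(1) coefficient (0 < phi < 1, illustrative value)."""
    corrected = []
    error = last_error
    for q in raw_forecast:
        error *= phi                 # predicted error decays with lead time
        corrected.append(q - error)  # remove the predicted error
    return corrected

raw = [100.0, 110.0, 120.0]  # raw model discharge forecast, m3/s (invented)
err_now = 10.0               # model was 10 m3/s too high at issue time
print(ar_update(raw, err_now))
```

As the decay term shrinks with lead time, the corrected forecast converges back to the raw model output, consistent with the AR updating benefit fading after the first lead days.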

Fig. 1 shows the effect of AR updating on the forecasted discharge (without versus with AR). Introducing AR updating can significantly improve forecast accuracy (KGE close to 0.9 for lead days 1 and 2), particularly due to the contribution of the volume and peak correction (Pechlivanidis et al., 2014). S-HYPE seems to achieve slightly better performance than HBV after lead day 4, again due to the updating of lake water levels and upstream discharge, which has a longer-term impact on performance than the AR updating.

Fig. 1. Forecast KGE performance for a number of Swedish stations (crosses) and reference-alternative systems as a function of lead time. Thick lines show performance of the station median for every lead day.

Ensemble forecasts

Fig. 1 shows that, overall, the deterministic and ensemble HBV systems perform fairly similarly. Digging deeper by decomposing the performance into terms describing different attributes of the flow signal, we show that this is due to the ensemble median not adequately capturing the distribution of observed discharge (Fig. 2).

Note that the deterministic HBV forecasts used here are based on a high-resolution model, whilst the ensemble median is used here to represent the ensemble spread. However, Arheimer et al. (2011) showed that ensemble forecasts can add value, particularly when the difference between the probabilistic and deterministic forecasts is large.

Fig. 2. Forecast improvement (%) due to AR updating and use of median EPS in the hydrological production systems.


Overall, both the HBV and S-HYPE services are capable of producing adequate forecasts, with performance steadily decreasing with lead time.

  1. In the systems without AR updating, S-HYPE outperforms its HBV counterpart at all lead times, highlighting the importance of updating lake water levels and upstream discharge.
  2. The AR updating method can reduce the epistemic uncertainty and improve the performance of the two systems, mainly because it significantly improves the discharge volume. S-HYPE seems to perform slightly better than HBV at longer lead times, probably because the S-HYPE system updates the lake water level, which has an impact at longer lead times.
  3. Moreover, the deterministic and ensemble HBV systems with AR updating perform fairly similarly at all lead times. This could be due to the high-quality national archive dataset (pthbv) used to drive the deterministic model; however, ensemble forecasts can add value when the difference between the probabilistic and deterministic forecasts is large.

Acknowledgements: This was SMHI’s last contribution as guest columnist team to the HEPEX blog for 2016. I am very grateful to the HEPEX co-chairs and community for giving us the opportunity to share insights and challenges in operational forecasting. I would also like to thank my colleagues Göran Lindström, David Gustafsson, Chantal Donnelly and Jonas Olsson for acting as guest authors. I hope the community enjoyed reading our contributions.

Posted in columnist, hydrologic models, operational systems | Leave a comment