Why are meteorologists apprehensive about ensemble forecasts?

Contributed by Anders Persson, Uppsala, Sweden

A colleague in my worldwide meteorological network made me aware of a CALMet conference in Melbourne, dealing with meteorological education and training. Through the website you can access the programme with more or less extensive abstracts. I have no doubt that most presentations were relevant and interesting, but what surprised me was that a search for the keywords “probability” or “ensemble” gave no hits. “Uncertainty” came up in only one (1) presentation, No. 36, “To communicate forecast uncertainty by visualized product” by Jen-Wei Liu and Kuo-Chen Lu from the Central Weather Bureau in Taiwan.

This made me ponder once again why meteorologists are still apprehensive about ensemble systems (ENS) and probability forecasting.

1. Ensemble forecasting brings statistics into weather forecasting

Since the start of weather forecasting as we know it (in the 1860s), there has always been a rivalry between physical-dynamic-synoptic and statistical methods. Edward Lorenz’s famous 1959 experiment, in which he discovered the “butterfly effect”, was part of a project in the late 1950s to find out whether statistical methods could be as effective in weather forecasting as numerical techniques. The answer was not clear-cut at the time, but during the 1960s numerical weather prediction (NWP) made much larger advances than the statistical approaches. Statistical methods were thereafter used only to calibrate NWP output, in what became known as MOS (model output statistics).

Over lunch at ECMWF, on one of his annual visits in the 1990s, Edward Lorenz told us a parable he had heard from the renowned Norwegian meteorologist Arnt Eliassen:

All the world’s birds wanted to compete over who could fly the highest. They all set off ascending, but one after the other they had to drop out. Finally, only the great golden eagle was left. But as he too had to stop and turn back, a little sparrow who had been hiding in his feathers came out and managed to beat the eagle by a metre or two. The eagle, Eliassen had told Lorenz (who told us), is the dynamic NWP; the sparrow is the statistical MOS.

To some extent MOS can deal with uncertainties, but only in a limited way, since it is based on a deterministic forecast. It can estimate the general uncertainty at a certain forecast range, but it cannot distinguish between more and less predictable flow patterns. That ability is the strength, the core value, of the ENS.
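As an aside, the MOS idea is easy to sketch in code: a linear correction is fitted between past model output and observations, and the spread of the regression residuals gives a single, flow-independent uncertainty estimate for that forecast range. The sketch below uses invented synthetic data and variable names, not any actual operational MOS:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: raw NWP 2m-temperature forecasts and the
# temperatures actually observed (the model has a bias and random error).
nwp_t2m = rng.uniform(-5, 25, size=500)
observed = 0.9 * nwp_t2m - 1.5 + rng.normal(0, 2.0, size=500)

# Fit the MOS correction: obs ~ a * nwp + b (ordinary least squares).
A = np.column_stack([nwp_t2m, np.ones_like(nwp_t2m)])
(a, b), *_ = np.linalg.lstsq(A, observed, rcond=None)

# The residual spread gives ONE uncertainty estimate for this lead time:
# MOS cannot tell a predictable flow pattern from an unpredictable one.
residual_sd = np.std(observed - (a * nwp_t2m + b))

new_forecast = 18.0                      # today's raw NWP value
corrected = a * new_forecast + b
print(f"corrected: {corrected:.1f} +/- {residual_sd:.1f} C")
```

The single `residual_sd` is exactly the limitation discussed above: it varies with forecast range, but not from day to day with the predictability of the flow.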

But ensemble forecasts are essentially statistical, probabilistic, and meteorological education has always avoided venturing into this domain, except for those who wanted to become climatologists, a path which in the old days was looked down upon. The ideal has been a physical-dynamic “Newtonian” approach, where perfect or almost perfect forecasts were seen as possible, if only the meteorological community got enough money to purchase better computers.

Indeed, it has paid off; the predictability range has increased by about one day per decade. Our five-day deterministic forecasts today are as good and detailed as the two-day forecasts of the 1980s. But the demands and expectations of the public have also increased. Even if, a few decades from now, we can make more accurate and detailed seven-day forecasts, there will still be questions about their reliability. The problem of uncertainty estimation will always be with us.

2. The ensemble system is a Bayesian system

But there is another problem, even among those meteorologists who are used to statistics. I became aware of it when I travelled on behalf of ECMWF to different Member States. A frequent question was: “How can you compute probabilities from those 50 members when you are not sure that they are equally likely?”

My answer then was that we did not know! We did not know the likelihood of each member, and we did not even know whether they were all equally likely (probably they were not). But the verification statistics were good, and they would not have been so good if our assumption had been utterly wrong.

A typical “postage stamp map” from the ECMWF system. These 50 forecasts are not a priori equally likely, but since we do not know the probability of each of them we have to apply Laplace’s “principle of insufficient reason” and assume that they are equally likely – an assumption which makes the system Bayesian. Image courtesy of ECMWF.

Only later was I made aware that my answer was the same as Pierre-Simon de Laplace had given two centuries earlier, when he was developing what is today known as “Bayesian statistics”: we do not know, but we make a qualified guess and see how it works out. Bayesian statistics, in contrast to traditional “frequentist” statistics, acknowledges the usefulness of subjective probabilities, degrees of belief. Laplace’s answer, which I unknowingly resorted to during my ECMWF days, is known as “Laplace’s principle of indifference”.
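Applied to an ensemble, Laplace’s principle is disarmingly simple: lacking any reason to prefer one member over another, give each of the 50 members the same weight and read off probabilities as relative frequencies. A minimal sketch, with invented synthetic members and an arbitrary threshold:

```python
import numpy as np

rng = np.random.default_rng(42)

# 50 synthetic ensemble members: forecast 24h precipitation in mm
# (a gamma distribution is just a plausible-looking stand-in).
members = rng.gamma(shape=2.0, scale=3.0, size=50)

# Principle of indifference: every member gets the same weight 1/50,
# so P(event) is simply the fraction of members showing the event.
threshold_mm = 10.0
p_heavy_rain = np.mean(members > threshold_mm)

print(f"P(precip > {threshold_mm} mm) = {p_heavy_rain:.2f}")
```

The equal 1/50 weighting is the qualified guess; the verification statistics mentioned above are what tell us, after the fact, whether the guess works.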

So part of the apprehension about ensemble forecasting cannot be attributed to ignorance, conservatism or “Newtonianism”, but has its basis in a long-standing feud between “Bayesian” and “frequentist” statisticians. A “Bayesian” can look at the sky and say “there is a 20% risk of rain”, whereas a frequentist would not dare to say that unless he had a diary showing that rain had occurred in 34 cases out of 170 with similar sky, wind and pressure.

In recent years the gulf between “frequentists” and “Bayesians” has narrowed. Also, the calibration of ENS data “à la MOS” has “washed away” much of the Bayesian character and provided a more “frequentist” forecast product.

3. What is left for the forecaster?

Bayesian methods should not be alien to experienced weather forecasters. Since weather forecasting started in the 1860s there has been a strong Bayesian element in the routines – perhaps not described as such, but nevertheless this is how forecasters worked before NWP. Who else but an experienced forecaster could look at the sky and give a probability estimate of rain? If the forecaster had a weather map to look at, the estimate would be even more accurate. Verification studies in the pre-NWP days of the 1950s showed that forecasters had a good “intuitive” grasp of probabilities.

But with the advent of deterministic NWP, the “unconscious” Bayesianism among weather forecasters gradually evaporated. The NWP could state very confidently that in 72 hours’ time it would be +20.7 °C, WSW 8.3 m/s, with 12.4 mm of rain within the coming six hours.

Anybody could read that information; you didn’t need to be a meteorologist. But you needed to be a meteorologist to have an opinion about the quality of the forecast: Would it perhaps be cooler? The wind weaker? How likely is the rain?

There are currently more weather forecasters around than at any time before, in particular in the private sector, where advising customers on their decision making is an important task. (Photo from a training course at Meteo Group, Wageningen. Used with permission from Robert Muerau.)

The risk was always that this forecast, even against the odds, would verify. So wasn’t it most tactical simply to accept the NWP? After all, if the forecast was wrong, the meteorologist had something to put the blame on. Some meteorologists took this easy road, but most tried to use their experience, knowledge of the models and meteorological know-how to make a sensible modification of the NWP, including an assessment of the reliability of the forecast. If the last NWP runs had been “jumpy”, and/or there were large divergences among the available models, this was taken as a sign of unreliability.

The “problem” for weather forecasters was that with the arrival of the ENS they were deprived of even this chance to show their skill. The “problem” with a meteogram from the ENS, compared to a more traditional deterministic meteogram from an NWP model, was that “anybody” could read the ENS meteogram! You didn’t need to be a meteorologist, not even a mathematically educated scientist. Einstein’s famous “grandmother” could read the weather forecast and understand its reliability!

“You do not really understand something unless you can explain it to your grandmother.” – Albert Einstein

So what is left for the meteorologist?

I will stop here, because this text is already long enough. But the question above is really what educational and training seminars, conferences and workshops should be more focused on. I am personally convinced that the meteorologists have a role to play.

My conviction is based on my experiences from the hydrological forecast community, in particular the existence of this site. Is there any corresponding “Mepex”?

My conviction is also based on my own experience as a forecaster: the general public (and not a few scientists) need help to relate uncertainty information to their decision making.

My conviction is finally based on the lesson from history that new tools always make traditional craftsmen more effective and prosperous – provided they are clever enough to see the new opportunities. Otherwise they will miss the bus . . .

PS. To their credit it must be mentioned that EuMetCal is developing training resources for probabilistic forecasting. DS.
All images from Thinkstock unless otherwise stated.
Posted in ensemble techniques, forecast communication, opinion

Ensemble prediction: past, present and future

Contributed by Fredrik Wetterhall and Roberto Buizza, ECMWF

The work of producing meteorological ensemble forecasts started 25 years ago at ECMWF and NCEP, and it sparked a revolution in both weather forecasting and its many applications. To celebrate this occasion, more than 100 people from across the world joined the 28 speakers at ECMWF’s Annual Seminar, held 11–14 September in Reading, UK. The theme was “Ensemble prediction: past, present and future”, and the four days were filled with presentations and discussions on what has been done, where we are, and how we can further improve the accuracy and reliability of ensemble-based forecasts in the future.

Thanks to advances in models, data assimilation schemes and the methods used to simulate initial-condition and model uncertainties, ensembles are today widely used to provide a reliable estimate of possible future scenarios. This is expressed, for example, in terms of probabilities of weather events or of risk indices. Increasingly, ensembles are routinely used to provide forecasters and users with the range of weather scenarios that could happen in the future. An example is the ECMWF ensemble-based strike probability for hurricane Irma, issued by ECMWF on 5 September.

The ECMWF ensemble-based strike probability that hurricane Irma would pass within a 120 km radius during the next 10 days, issued on the 5th of September (left panel).

Using ensemble forecasts

Different aspects of ensemble forecasting were discussed during the seminar, including the history and theory of ensemble forecasting, initial conditions, model uncertainties, error growth, predictability across scales, verification and diagnostics, and the future outlook. The full programme, including recordings of the talks, can be found here. The theme that may be of most interest to the HEPEX community was the session devoted to applications of ensemble forecasts. It discussed the various ensemble products that now exist to help decision making (David Richardson, ECMWF), hydrological ensembles including the HEPEX experience (Hannah Cloke, Reading University), and observing and supporting the growing use of ensemble products (Renate Hagedorn, DWD). The session was a testament to how mainstream ensemble forecasts have become, not only in science but also in institutions and authorities that use probabilistic information in decision making. There is still a lot to do to overcome some of the existing barriers, but the acceptance of ensemble forecasts is truly a success story.

Panel discussions and looking forward

The seminar also included a panel discussion which provided an opportunity to explore and discuss in more detail some of the fundamental questions that are currently being tackled by the community, such as:

  • Should we be moving to small ensembles at high resolution, or large ensembles at more moderate resolution?
  • If the most cost-effective ensemble structure changes with lead time, should our ensemble be built so as to give a resolution and ensemble size that changes with lead time?
  • If an ideal ensemble consists of a set of equally likely members, is there a role for an unperturbed/central forecast?
  • What do we expect from the future in terms of our ability to represent model error in ensemble systems, and the representation of perturbations more generally?

It is interesting to report some of the comments raised during the lively panel discussion:

  • Some users would act even on small probabilities: they would be the ones benefiting most from an increase in ensemble size;
  • Ensemble size is very important both for the extended/long ranges and for high-resolution ensembles, to be able to capture the fine-scale details;
  • Considering the range of users of the ECMWF ensembles, overall a size of 50 seems about right; although ECMWF’s principal aim should be to provide the best raw ensemble forecasts, it should work with the users to develop calibration methods, and understand whether the balance between ensemble size and resolution should be revisited once calibration methods are more widely used;
  • ECMWF should aim to provide the national meteorological services and its users with ensemble-based probabilistic forecasts that can be used by a wide range of users; it will then be up to the national meteorological services and/or third parties to design ‘tailored’ ensemble configurations that address the needs of specific users;
  • We need more observation-based diagnostics to understand model error and design better schemes;

Participants of the ECMWF Annual Seminar 2017. Photo: Simon Witter, ECMWF

The HEPEX community was an early advocate of using ensemble forecasts, and it is important that we continue to push the boundaries of how ensembles should be used in research and applications. A good way of doing just that is to come to the HEPEX workshop in Melbourne next year!

Posted in activities, data assimilation, ensemble techniques, forecast techniques, forecast users, historical, meetings, operational systems, verification

Final call for abstracts: 2018 HEPEX workshop in Melbourne, Australia

As you may have heard, the 2018 HEPEX workshop in Melbourne is coming up soon (Feb 6–8, 2018). Abstracts are due for submission by Sep 30, 2017. The workshop will feature both oral and poster presentations. The theme for the workshop is ‘breaking the barriers’, to highlight current challenges facing ensemble forecasting researchers and practitioners and how they can be (and have been!) overcome. We wish to highlight the following barriers:

  • using ensemble forecasts to improve decisions in practice
  • extending forecasts in space (including to ungauged areas) and across lead-times, from short-term to sub-seasonal to seasonal forecast horizons
  • using ensemble forecasts to maximize economic returns from existing water infrastructure (e.g. reservoirs), even as inflows and demand for water change
  • using ensemble forecasts to improve environmental management of rivers
  • applying ensemble forecasts for agriculture
  • searching for better/new sources of forecast skill
  • balancing the use of dynamical climate and hydrological models with the need for reliable ensembles
  • communicating forecast quality and uncertainty to end users

More generally, we welcome contributions on new and improved ensemble hydrological prediction methods, as well as the application of existing methods in practical and operational settings.

Keynote speakers for the workshop have been finalised – you can check out this and other information on the workshop website.

The HEPEX workshop is a highly effective forum for exchanging ideas and experiences on all things hydrological forecasting, and registration is free. So get to submitting those abstracts!

Any questions? Please contact us!

Posted in activities, announcements-events, meetings

End-To-End Probabilistic Impact Based Early Warning Systems for Community Resilience

Contributed by Dr. Bapon Fakhruddin, New Zealand

Recently I attended the Fourth Pacific Meteorological Council (PMC) and Second Pacific Meteorological Ministers Meeting (PMMM), which was held in Honiara, Solomon Islands, from 14 to 17 August.

Figure 1. Dr Bapon Fakhruddin at the meeting. Photo credit: Jenny Davson-Galle

Reaching communities and ensuring that those most in need are provided with effective communications and technologies are top priorities for the Pacific Meteorological Council (PMC). Science has in-built uncertainty and is highly probabilistic. The probabilistic ensemble forecasting approach exposes the range of uncertainty associated with different predictions. Ensembles also allow the adoption of a risk-based approach to decision making and help build the confidence of operational forecasters. Effective early warning systems (EWS) require a complete understanding of the populations and assets exposed to the threats identified in probabilistic ensemble forecasts.

With present extreme weather events, risk-based early warning systems are essential. Practice shows that people and communities at risk need to be involved in the understanding of their exposure and the vulnerabilities of different groups, including the disabled, the elderly, children and pregnant women. An effective system also relies on expert risk assessment, interpretation and communication. Currently, in many places, we do not use probabilistic risk assessment. Since science information is probabilistic, risk assessment needs to follow the same path.

Figure 2. Participants at the meeting. Photo credit: Jenny Davson-Galle

Understanding the forecast

Research has shown that, before deciding to take a disruptive – and often expensive – action such as evacuation, people must understand the forecast; they must believe that it actually applies to their situation and, most importantly, they must feel that they need to act because people, including their loved ones, are at risk. However, in many cases, common practice has been to prepare and release forecast messages without adequate concern on how they are received, understood and/or interpreted. Accurate, appropriate information that translates early warnings into early actions at community level is essential.

“We’ve always been talking about reaching the last mile, and that means getting to the people who haven’t got the message we are relaying. We’re talking about people with disabilities as well. They need to be included in our conversations and awareness efforts too,” said the Secretariat of the Pacific Regional Environment Programme’s (SPREP) Climate Change Advisor, Espen Ronneberg. “We do this to prepare those who are vulnerable to disasters as well, and that includes people with disabilities,” he added.

Acting on the risks

Disabled and elderly people are particularly at risk from natural disasters as, even with strong family and community support systems, it takes longer for them to reach designated safety zones. Likewise, extra forward planning is required for the evacuation of hospital patients and other health care facilities. Ronneberg believes that the way forward lies in encouraging disabled people to join in on EWS (Early Warning Systems) discussions. “I think the best way to include them would be through the People with Disabilities’ Forum, and it will be great if we can get them to take an interest in meteorology as well,” he said.

The PMC session included a discussion on new forms of risk assessment, such as the shift from deterministic to probabilistic risk estimation. Deterministic approaches are used to assess the impacts of a specific natural hazard scenario, whereas probabilistic methods are used to produce more refined estimates of how often a hazard is likely to happen, and the potential damage it will deliver, with the help of modelling tools. Probabilistic assessments work with uncertainties, partly due to the random nature of natural hazards, and partly because of our incomplete understanding of natural hazards and the limited ways of measuring hazards, exposure and vulnerability (OECD, 2012).

Communicating probability information

As hazard information is always probabilistic, risk information and risk communication also need to be probabilistic. When any new probabilistic forecast product is introduced, it can be mis-communicated to the affected people. For people to make good decisions, the capacity to generate an early warning with an acceptable lead time is essential. For example, advances in tropical cyclone (TC) forecasting using ensemble methods have been widely adopted for operational TC tracking. Using simple, weighted, or selective consensus methods, TC track forecasts tend to have smaller positional errors than single-model forecasts.
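The “simple” and “weighted” consensus methods mentioned above can be sketched in a few lines: the member track positions are averaged, either with equal weights or with weights reflecting, say, recent member accuracy. The positions and weights below are invented placeholders, not a real cyclone:

```python
import numpy as np

# Invented (lat, lon) forecast positions of a cyclone centre from five
# ensemble members at one lead time.
tracks = np.array([
    [18.2, 123.5],
    [18.6, 123.1],
    [17.9, 123.9],
    [18.4, 123.3],
    [18.1, 123.7],
])

# Simple consensus: unweighted mean of the member positions
# (naive lon averaging is fine away from the dateline).
mean_pos = tracks.mean(axis=0)

# Weighted consensus: weight members, e.g. by recent track accuracy
# (these weights are arbitrary placeholders).
weights = np.array([0.3, 0.25, 0.15, 0.2, 0.1])
weighted_pos = weights @ tracks / weights.sum()

print("mean position:", mean_pos, "weighted:", weighted_pos)
```

The spread of the member positions around the consensus also gives a natural measure of track uncertainty, which is what makes these products useful for risk-based warnings.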

Figure 3. End-to-end early warning systems.

The impacts of climate variability and change were also recognised at the meeting as major challenges to island nations. Of particular concern to the Pacific region were sea level rise, salt water intrusion, drought, flooding, coastal inundation, ocean conditions (tides, swells, waves, acidification) and impacts on health (e.g. malaria and dengue), water resources, agriculture and fisheries (invasive species, etc.).

WMO’s Climate Risk and Early Warning Systems (CREWS) initiative and the requirements for disaster loss data standardization are crucial for impact-based early warning systems, which offer more accurate risk assessment. See also a summary of the outcomes of both the Multi-Hazard Early Warning System and the Disaster Risk Reduction Global Platform meetings (http://www.wmo.int/earlywarnings2017/) held in May in Cancun, Mexico.

The next Pacific Meteorological Council (PMC) meeting will be held in Samoa in 2019. The PMC consists of members of the Pacific National Meteorological and Hydrological Services supported by its technical partners, regional organisations, non-government organisations and private sectors.


The 14-17 August meeting was co-hosted by the Government of Solomon Islands, the Secretariat of the Pacific Regional Environment Programme (SPREP) and World Meteorological Organisation (WMO).

 

Posted in meetings

Quiz: Can you guess the city by looking at its river from space?

Contributed by Calum Baugh, Maria-Helena Ramos and Florian Pappenberger

Here are eight cities (and their rivers) seen from Google Earth. Can you recognize them?

Since nobody seems to have guessed the quiz we had in a previous post, we provide some clues for each city/river. Additionally, keep in mind the general clue for all of them: at least one Hepex member lives in (or very close to) each of these cities (you don’t need to guess who they are…)

River 1: ‘Flooding often coincides with high tides in this city, its barrier can protect against storm surges…but for how long?’ Check the answer here

River 2: ‘The river drains the second largest lake in this country, industry thrived along its banks leading to the nickname Little Manchester.’ Check the answer here

River 3: ‘This river drains the lake that shares its name with this city; flowing northwest, its steep gradient means there are 10 hydroelectric stations along its route prior to its confluence.’ Check the answer here

River 4: ‘Apparently, here, (non-hydrologists) tourists are often confused about the terms “right bank” and “left bank” and may spend hours trying to figure out which side of the river they are standing on.’ Check the answer here

River 5: ‘Heavy rain and catastrophic flooding was particularly observed in September 2013. They say the event went quickly from bad to worse. It was eight days, 1,000-year rain, 100-year flood.’ Check the answer here

River 6: ‘The canals somewhat dwarf the river here; perhaps hepexers will be more familiar with the beer to which the river lends its name.’ Check the answer here

River 7: ‘Flooding occurred along this river in May 2017 when nearly double the April rainfall average fell; however this city was spared any damage, with the worst affected being the cities upstream.’ Check the answer here

River 8: ‘Heavy flood protection prevents many floods in this city, but in 2014 the flooding resulted primarily from moderate rainfall combined with 111 km/h winds ‘pushing’ water upriver.’ Check the answer here

So, how many cities have you guessed right?

Posted in activities

Hydrologic similarity: Bridging the gap between hyper-resolution and hydrologic ensemble prediction

Contributed by:  Nate Chaney (Princeton University) and Andy Newman (NCAR)

The ever-increasing volume of global environmental data and the continual increase in computational power continue to drive a push towards fully distributed modeling of the hydrologic cycle at hyper-resolutions (10-100 meters) [Wood et al., 2011]. In principle, this has the potential to increase model fidelity and lead to more locally-relevant hydrologic predictions (e.g., soil moisture at the farm level).

However, for the foreseeable future, due to computational constraints, this modeling approach will not be suitable for large ensemble frameworks—a prerequisite for reliable operational applications, given the unavoidable uncertainties in model structure, model parameters, and meteorological forcing.

That said, hydrologic ensemble prediction need not continue to rely on over-simplistic hydrologic models simply to maintain computational efficiency. Providing field-scale hydrologic predictions has the potential to significantly advance the use of hydrologic models (e.g., in precision agriculture).

Furthermore, the important role of the physical environment and human management in hydrologic response necessitates a more explicit representation of the spatial drivers of heterogeneity in hydrologic models. Therefore, there is a need for a modeling approach that can provide field-scale predictions while approximating the computational efficiency of existing hydrologic models used in ensemble frameworks. Contemporary applications of hydrologic similarity can satisfy both objectives.

Hydrologic similarity

Hydrologic similarity aims to harness the observed covariance between a system’s physical environment (topography, soil, land cover, and climate) and its hydrologic response to assemble robust reduced-order models.

Although originally limited to one-dimensional binning of over-simplistic metrics of hydrologic response (e.g., the topographic index), recent advances have taken hydrologic similarity a step further: a system’s most representative hydrologic response units (HRUs) are defined by clustering the high-dimensional environmental data space [Newman et al., 2014]—the petabytes of readily available high-resolution global environmental data make this feasible over the globe. Semi-distributed models can then be built to simulate these HRUs and their spatial interactions (e.g., HydroBlocks; Chaney et al., 2016).

Within the clustered spatial domain, each fine-scale grid cell (~30 meters) is associated with a specific HRU through its environmental characteristics. This then makes it possible to map out the HRU simulations onto the fine-scale grid to approximate the fully distributed simulation (see Figure 1 for an example).
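As an illustration of this clustering step, the toy sketch below groups the grid cells of a synthetic domain into a handful of HRUs using a hand-rolled k-means on their environmental covariates, and keeps the cell-to-HRU map needed to project HRU simulations back onto the fine-scale grid. This is only a sketch with invented data; the actual clustering in HydroBlocks is more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 100x100 grid with 3 standardised environmental covariates
# per cell (stand-ins for, e.g., elevation, slope, soil properties);
# in reality these come from high-resolution global datasets.
ny, nx, nfeat = 100, 100, 3
features = rng.normal(size=(ny * nx, nfeat))

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means: cluster the rows of X into k groups."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each cell to its nearest cluster centre.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned cells.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# 10,000 grid cells reduced to 10 HRUs (~1/1000, as in the text).
n_hru = 10
labels = kmeans(features, n_hru)

# Each cell remembers its HRU, so one simulation per HRU can later be
# mapped back onto the fine-scale grid.
hru_map = labels.reshape(ny, nx)
print("cells:", ny * nx, "HRUs:", np.unique(labels).size)
```

Running one land-surface simulation per HRU and broadcasting the result through `hru_map` is what approximates the fully distributed solution at a fraction of the cost.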

Using this approach, ongoing work continues to show that the fully distributed simulation can be closely reproduced using around 1/1000 of the number of HRUs (in the fully distributed simulation, each grid cell is effectively its own HRU). In other words, the semi-distributed model can provide effectively the same hydrologic information as the hyper-resolution fully distributed model with only a fraction of the computation.

This equates to being able to run roughly 1000 ensemble members of the semi-distributed model in the time it would take to run the fully distributed model once; all while being assured that each ensemble member closely approximates its corresponding fully distributed solution.

In summary, contemporary implementations of hydrologic similarity provide a unique opportunity to bridge the gap between physically based hyper-resolution modeling efforts and hydrologic ensemble prediction. They enable robust ensemble frameworks to provide locally relevant information while still characterizing the unavoidable uncertainties due to model structure, model parameters, and meteorological forcing.

References

  • Chaney, N., P. Metcalfe, and E. F. Wood (2016), HydroBlocks: A Field-scale Resolving Land Surface Model for Application Over Continental Extents, Hydrol. Process., doi:10.1002/hyp.10891.
  • Newman, A. J., M. P. Clark, A. Winstral, D. Marks, and M. Seyfried (2014), The Use of Similarity Concepts to Represent Subgrid Variability in Land Surface Models: Case Study in a Snowmelt-Dominated Watershed, J. Hydrometeorol., 15, 1717–1738.
  • Wood, E. F. et al. (2011), Hyperresolution global land surface modeling: Meeting a grand challenge for monitoring Earth’s terrestrial water, Water Resour. Res., 47(5).
Posted in ensemble techniques, forecast techniques, hydrologic models

Quiz: Can you guess the river from space?

Contributed by Calum Baugh, Maria-Helena Ramos and Florian Pappenberger

Here are four rivers seen from Google Earth. Can you recognize them?

River 1:

Check the answer here

River 2:

Check the answer here


River 3:

Check the answer here

River 4:

Check the answer here

 

Posted in activities

Which scales matter for water resources management?

Contributed by Andreas Hartmann, Axel Bronstert, Bettina Schaefli

The discussion about which scale is the most relevant for water resources management has become an increasingly important debate in hydrological modelling over the last decade.

The session “(Ir‑)relevant scales in hydrology: Which scales matter for water resources management?”, convened at this year’s EGU General Assembly 2017, tried to shed some new light (and fire) on this ongoing debate.

Photo taken during the session at EGU 2017 by Axel Bronstert

Solicited speakers representing the plot scale, the hillslope and catchment scale, and the large scale provided valuable insights from their research experience and offered opinions on the session topic.

  • Hans-Joerg Vogel provided a list of questions that may be answered at the plot scale. For instance, to what detail do we need to know the basic soil hydraulic properties? And how useful are sophisticated lab measurements for predicting what is happening in the field? Also, methods to handle the highly non-linear change in flow paths as a function of the hydraulic state and its history may be developed at the plot scale. However, how soil water dynamics integrate over the scales of hillslopes or catchments is still an open question that cannot be answered at the plot scale alone. How far can approaches like the Richards equation, which were developed at the plot scale, be transferred to those larger scales?
  • This question was picked up by the following presenter, Theresa Blume, who advocated more research into the link between plot and catchment scales. Nested monitoring programmes, with monitored catchments that envelop different monitoring plots, as applied within the Catchments As Organised Systems (CAOS) project, may provide promising advances in our understanding of how plot-scale dynamics integrate to hillslopes and catchments. When such a measurement design is applied in a comparative approach, generalized knowledge about hydrological processes across different types of hydrological landscapes could be obtained. But how can the derived understanding be integrated into models that are applied at test sites with less information?
  • The modeller’s point of view was picked up by Jens Christian Refsgaard. It is recognised that there is a mismatch of spatial scales between our process knowledge, the modelling grids of our distributed catchment models (50 – 500 m) and the water management problems, where the relevant scale is often claimed to be the catchment (e.g. 10 – 5000 km²). Groundwater pollution may even occur on larger scales (several 10,000 km²) while mitigation measures have to be applied locally. Hence, relevant scales vary from one issue to another (plot to large scale) and for some issues (e.g. mitigation measures) are in the order of 100 m when for instance the European Water Framework Directive (EWFD) is applied. Although distributed models are technically able to simulate the hydrological behaviour on such small scales, how can we evaluate their realism if no observations are available at those scales?
  • The challenge of evaluating hydrological models that operate on scales even larger than the catchment scale was also picked up by Thorsten Wagener. We increasingly build and apply hydrologic models that simulate systems beyond the catchment scale. Such models can provide opportunities for new scientific insights, for instance, the consideration of inter-catchment flow. Also, large-scale models can help us understand changes to water resources caused by larger-scale activities like agriculture or by hazards such as droughts. However, these models also require us to rethink how we build and evaluate them, given that some of the unsolved problems from the catchment scale have not gone away. So what opportunities for solving these problems are there? Are there possibilities that have not yet been utilized?
  • Global archives of hydrological observations are an increasingly important source of information for large-scale model applications, as stated by Lena Tallaksen. These may allow a more detailed development and evaluation of hydrological models for the purpose of water resource assessments and climate change impact studies at the global and continental scales. Recent research has provided improved knowledge of the present state of global water resources and their variability across large spatial domains, the role of terrestrial hydrology in earth system models, the influence of climate variability and change on continental hydrology (including extremes), and the representation of subsurface hydrology and land-surface atmosphere feedback processes. Large-scale models are increasingly adapted to include multiple types of input data, such as remote sensing products. However, a lack of ground truthing (especially in less developed regions), the representation of hydrological variability below the modelling scale, and uncertainties in downscaling large-scale climate forcings to the model scale still limit the applicability of large-scale models. Despite these challenges, large-scale models may represent a useful source of information for continental-scale hydrological assessments and evidence-based policy making. To increase their reliability, transfer of knowledge across scales is essential to improve hydrologic predictions at different spatial scales in an ever-changing world.

Overall, through the excellent presentations and discussions during the session, we found that each scale has its own relevance for water management. Experimental research at the plot and catchment scales brings advances in hydrological process understanding that can improve our simulation tools, while comparative hydrology and large-scale modelling can provide quantitative information at larger scales to support water governance and policy making.

The key question in the near future is certainly how to further improve collaboration and foster discussions across all hydrological scales, especially in the context of ever more complex models at all scales. One option would certainly be to organize regular cross-cutting conference sessions. Another interesting initiative that has been presented at EGU2017 is the data and model sharing platform https://www.hydroshare.org/.

Posted in activities, announcements-events, water management

The role of Early Career Scientists in community research

Contributed by Florian Pappenberger and Maria-Helena Ramos (both considerably beyond the early career stages, they admit)

(this post can also be seen in the Young Hydrologic Society Portal)

Science and forecasting practice are the foundations of the HEPEX community. They are certainly the routine of many of us during office hours spent in front of our computers.

But this community is also based on individuals, and this is often what really makes it fun to go to meetings, workshops and conferences. Face-to-face interactions often bring new ideas into form (see also this previous post from CSIRO team), while also helping us to further develop interpersonal skills.

It is thus not really a surprise when members of the community get together after meetings and write (sometimes successful) research proposals together. One example is the IMPREX project, funded under the EU H2020 programme, where many partners already knew each other from HEPEX before participating in IMPREX.

The ‘EX’ that HEPEX and IMPREX have in common does not stand for the same thing in each acronym, but several HEPEX challenges are part of the research tasks of IMPREX: for instance, improving hydrological models and data assimilation for forecasting extremes, or estimating the economic value of forecasts in the water sector.

IMPREX stands for IMproving PRedictions and management of hydrological EXtremes. The project targets improving the quality of short-to-medium hydro-meteorological predictions, enhancing the reliability of future climate projections, applying this information to strategic sectoral and pan-European surveys at different scales, and evaluating and adapting current risk management strategies.

But this post is not about IMPREX. What has (gladly) attracted our attention in this project, and what we would like to talk about here, is the active participation of early career scientists (ECS).

Intergenerational engagement in research projects

Many research programmes call for the multiple benefits of stakeholder and public engagement, but what about “intergenerational engagement” between the early career scientists and the well-established ones?

If we look closely, the age group distribution in IMPREX, and in HEPEX as well, is extremely diverse. It ranges from early career scientists (usually MSc or PhD candidates) to well-established researchers, both sides with particular skills (from analytical to computer programming skills) and viewpoints (for carrying out science experiments, but also for succeeding in multi-cultural team leadership, for instance). This diversity is not always fully exploited in research projects, but the case seems to be different in IMPREX.

ECSs in IMPREX have tasks and are in charge of presentations in all project meetings. They have their place in the agenda and are encouraged to get involved in the consortium. They also have their place in the project’s online blog to communicate anything they want: their science achievements, activities, new discoveries, participation in meetings or just general reflections.

They produce posts about topics which are interesting for them and describe them from their perspective. New perspectives are always exciting to read. And new perspectives which are untainted by ‘old’ ideas (sometimes disguised under what is called ‘experience’) are even more interesting.

So far, there have been a number of IMPREX ECS blog posts that are closely related to HEPEX topics and certainly worth reading:

  1. An ecologist’s viewpoint of hydrological forecasting (as far as we can remember, we never had a blog post in HEPEX from that angle – read the discussion!)
  2. An interesting post presenting an insider’s point of view of four forecasting services active in Germany, The Netherlands, Spain and at the pan-European scale.
  3. A flood decision-making experiment, where we can step into the boots of a flood manager with the help of IMPREX ECSs.
  4. And a friendly report of their participation in the 2017 EGU General Assembly (notably, check the very nice way they present their photos at the end of the post!)

We are eagerly awaiting the next ones!

Posted in activities

Risk aversion and decision making using ensemble forecasts

by Marie-Amélie Boucher and Vincent Boucher

Assessing the value of forecasts is a very popular topic among the HEPEX community. The assessment of forecast value is highly dependent on the purpose served by the forecasts. For the specific problem of decision-making related to flood mitigation, Murphy (1976, 1977) proposed the use of the cost-loss ratio framework. The vast majority of papers related to the assessment of forecast value for flood mitigation adopt this framework, so one could think that everything is pretty much solved… except that the cost-loss ratio has some very important flaws!
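
For readers unfamiliar with it, the cost-loss framework can be sketched in a few lines: a decision maker either pays a cost C to protect, or risks a loss L that occurs with probability p, and a risk-neutral decision maker protects whenever p > C/L. The numbers below are invented for illustration:

```python
# Sketch of the classic cost-loss decision rule (illustrative values, not from the post).

def expected_expense(p, C, L, protect):
    """Expected expense given event probability p, protection cost C, potential loss L."""
    return C if protect else p * L

def optimal_action(p, C, L):
    """A risk-neutral decision maker protects iff p > C/L."""
    return p > C / L

C, L = 10.0, 100.0  # protect for 10, or risk losing 100
for p in (0.05, 0.10, 0.30):
    act = optimal_action(p, C, L)
    print(p, "protect" if act else "wait", expected_expense(p, C, L, act))
```

The value of a forecast system is then typically measured by comparing the expected expense of decisions taken with and without the forecast information.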

One such flaw is that the cost-loss ratio assumes that the decision maker is risk neutral. Risk-neutral individuals only care about the expected outcome and disregard the spread of the distribution of outcomes. Risk-neutral individuals are very rarely encountered in real life. Indeed, most of us are risk averse. That is, for the same expected outcome, we prefer less risky distributions. This is (among other things) why we buy insurance. Informally, most people dislike risk and would be willing to spend resources (e.g. money) in order to reduce the amount of risk faced.
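
A minimal numerical sketch of this idea (our illustration, with invented values): with a concave utility function such as the logarithm, a risky lottery and a certain outcome with the same expected value are not equivalent — the sure thing is preferred.

```python
# Illustrative sketch of risk aversion via a concave utility function.
import math

def expected_utility(outcomes, probs, u):
    """Expected utility of a discrete lottery under utility function u."""
    return sum(p * u(x) for x, p in zip(outcomes, probs))

u = math.log  # concave utility => risk aversion

# Lottery: 50/50 chance of wealth 50 or 150; the certain wealth 100 has the same mean.
eu_lottery = expected_utility([50.0, 150.0], [0.5, 0.5], u)
eu_certain = u(100.0)
print(eu_certain > eu_lottery)  # True: the sure thing is preferred
```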

Utility theory

Economists (and statisticians and mathematicians) have been studying those issues for a long time, and came up with many models and concepts that could be used in hydrology as an alternative to the cost-loss ratio. The central framework is based on “utility theory”. To put its development in context, here is a (very) pseudo-historical adaptation of a conversation between Nicolaus I Bernoulli and his cousin Daniel [1]:

It didn’t happen exactly like that, but the general idea is preserved. The game that Nicolaus is referring to is now known as the St-Petersburg paradox and was first described in a letter. The first person to present it more formally was Daniel Bernoulli in 1738. He was also the first to suggest the idea of risk aversion, and he argued (without any mathematics) that different people faced with the same decision problem and the same information could make different decisions because of different preferences.
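
As a side illustration (our sketch, not part of the original anecdote): the St-Petersburg game pays 2^k coins if the first head appears on toss k of a fair coin. Every term of the expected payoff contributes 1, so the sum diverges, while Bernoulli’s expected log-utility converges to a finite value.

```python
# Sketch of the St-Petersburg paradox and Bernoulli's log-utility resolution.
import math

def partial_expected_payoff(n):
    """Sum of p_k * payoff_k over the first n tosses: each term equals 1, so it grows without bound."""
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n + 1))

def expected_log_utility(n=200):
    """Expected log-payoff converges (to 2*ln 2 for payoffs 2**k)."""
    return sum((0.5 ** k) * math.log(2 ** k) for k in range(1, n + 1))

print(partial_expected_payoff(10))   # 10.0 -- keeps growing linearly with n
print(expected_log_utility())        # ~1.386, i.e. 2*ln(2)
```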

It was only much later, in 1944, that the concept of risk-based individual preferences was formalized as a mathematical theory by John von Neumann and Oskar Morgenstern [2]:

Utility theory is not perfect (and indeed, many generalizations and extensions exist), but it has at least two advantages over the cost-loss ratio:

  1. It allows for a finite number or a continuum of possible decisions (e.g. “protect” vs “don’t protect” for floods, or alternatively, the amount spent on protection).
  2. It accounts explicitly for the decision maker’s level of risk aversion in the assessment of forecast value.
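
To make the second point concrete, here is a hypothetical sketch (all numbers invented) of how the same ensemble-based flood probability can lead to different decisions under different levels of risk aversion, using an exponential (CARA) utility; as the risk-aversion coefficient a tends to 0, the rule reduces to the risk-neutral cost-loss threshold p > C/L.

```python
# Hypothetical sketch: the same ensemble forecast, different decisions under
# different risk aversion. Utility u(w) = -exp(-a*w) (CARA), a > 0 = risk-averse.
import math

def eu(wealth_outcomes_probs, a):
    """Expected CARA utility over (wealth, probability) pairs."""
    return sum(p * (-math.exp(-a * w)) for w, p in wealth_outcomes_probs)

def decide(p_flood, wealth, C, L, a):
    """Protect (pay C for sure) vs wait (lose L with probability p_flood)."""
    eu_protect = eu([(wealth - C, 1.0)], a)
    eu_wait = eu([(wealth - L, p_flood), (wealth, 1.0 - p_flood)], a)
    return "protect" if eu_protect > eu_wait else "wait"

p = 0.08                      # ensemble fraction forecasting a flood; C/L = 0.10
wealth, C, L = 100.0, 10.0, 100.0
print(decide(p, wealth, C, L, a=0.001))  # nearly risk-neutral: p < C/L, so "wait"
print(decide(p, wealth, C, L, a=0.05))   # risk-averse: protects despite p < C/L
```

The cost-loss ratio would prescribe “wait” for both decision makers; only the utility framework captures why the more risk-averse one protects anyway.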

In hydrology

Krzysztofowicz suggested using utility theory in hydrology as early as 1986. It is not a miraculous solution to the problem of assessing the value of forecasts for flood mitigation, but it is an improvement over the cost-loss ratio: it can explain real-life behaviours that the cost-loss ratio cannot. This was the case in our recent application assessing forecast value on the Montmorency River in Canada (Matte et al. 2017). Other examples of accounting for risk aversion in hydrological and meteorological decision-making include Shorr (1966) and Cerdá Tena and Quiroga Gómez (2008).

References:

  • Bernoulli D. (1738) Specimen Theoriae Novae de Mensura Sortis, Commentarii academiae scientiarum imperialis Petropolitanae, 5, 175-192.
  • Cerdá Tena E. and Quiroga Gómez S. (2008) Cost-Loss Decision Models with Risk Aversion, Working paper no. 01, Instituto Complutense de Estudios Internacionales, 28 pages.
  • Krzysztofowicz R. (1986) Expected utility, benefit, and loss criteria for seasonal water supply planning, Water Resources Research, 22(3), 303-312.
  • Matte S., Boucher M-A, Boucher V. and Fortier-Filion T-C (2017) Moving beyond the cost-loss ratio: economic assessment of streamflow forecasts for a risk-averse decision maker, Hydrology and Earth System Sciences (Accepted)
  • Murphy A.H. (1977) Value of climatological, categorical and probabilistic forecasts in the cost-loss ratio situation, Monthly Weather Review, 105(7), 803-816
  • Murphy A.H. (1976) Decision-making models in cost-loss ratio situation and measures of values of probability forecasts, Monthly Weather Review, 104(8), 1058-1065
  • Shorr B. (1966) The cost/loss utility ratio, Journal of Applied Meteorology, 5(6), 801-803.
  • von Neumann J. and Morgenstern O. (1944) Theory of games and economic behavior, vol. 60, Princeton University Press Princeton, 625 pages.

Figure captions:

[1] The original pictures were taken from these websites: http://www.famous-mathematicians.com/daniel-bernoulli/ (right) and http://www2.stetson.edu/~efriedma/periodictable/html/Bi.html (left). The schematic drawing of D. Bernoulli’s blood pressure experiment apparatus was taken from https://plus.maths.org/content/daniel-bernoulli-and-making-fluid-equation

[2] The original pictures were taken from these websites: http://www.tinbergen.nl/oskar-morgenstern-and-john-von-neumann-shelby-white-and-leon-levy-archives-center/ (left) and http://www.karbosguide.com/books/pcarchitecture/chapter02.htm (right)

Posted in decision making, economic value