Add capability for recalibrating output of probability blending #2078
Conversation
This reverts commit 3ea6cd6.
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff             @@
##           master    #2078      +/-   ##
==========================================
+ Coverage   98.39%   98.41%   +0.01%
==========================================
  Files         124      135      +11
  Lines       12212    13298    +1086
==========================================
+ Hits        12016    13087    +1071
- Misses        196      211      +15

☔ View full report in Codecov by Sentry.
Great work implementing this plugin, Belinda. The description in the documentation was very clear and I like the straightforward implementation. The comments from me are mostly minor suggestions and a couple of questions to help get my head around the code. Feel free to ignore any that you think don't apply.
Looks great, Belinda 👏 You will have to show me how to run the unit tests soon too.
Thanks for this application code. Do you imagine there will be specific code for estimating the coefficients at some point, or are you intending to document your approach to this estimation somewhere?
I've asked one potentially daft question, and added a few comments that are hopefully useful.
Co-authored-by: bayliffe <benjamin.ayliffe@metoffice.gov.uk>
Thanks for reviewing @bayliffe. I don't intend to add code in IMPROVER for estimating the coefficients. As with choosing the blending weights, we'd do this via ad-hoc analysis, e.g. using scipy.optimize.
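For context, here is a hypothetical sketch of that kind of offline estimation. It is not code from this PR; the data and variable names are made up for illustration. It fits alpha and beta by minimising the Brier score of the recalibrated probabilities with scipy.optimize:

```python
# Hypothetical offline estimation of recalibration coefficients (not part of
# this PR): choose alpha and beta that minimise the Brier score of the
# beta-CDF-recalibrated probabilities. Data here are synthetic.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

rng = np.random.default_rng(0)
blended_probs = rng.uniform(size=1000)  # blended probability forecasts
# synthetic 0/1 outcomes, deliberately mis-calibrated relative to the forecasts
outcomes = (rng.uniform(size=1000) < blended_probs ** 1.3).astype(float)

def brier_score(params):
    a, b = np.exp(params)  # exponentiate so alpha, beta stay positive
    recalibrated = beta.cdf(blended_probs, a, b)
    return np.mean((recalibrated - outcomes) ** 2)

result = minimize(brier_score, x0=[0.0, 0.0], method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(result.x)
print(f"alpha={alpha_hat:.3f}, beta={beta_hat:.3f}")
```

The estimated coefficients could then be supplied to the new CLI in whatever configuration format it expects.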
Thanks for the changes @btrotta-bom, and for the answers to my questions. Happy to approve. It would be great to see how you approach the alpha/beta estimation; ideally that work could make it into the IMPROVER docs at some point, or be referenced in a paper, so that other users of IMPROVER can create the necessary inputs to benefit from this new functionality.
Description
Ranjan and Gneiting (https://stat.uw.edu/sites/default/files/files/reports/2008/tr543.pdf) have shown that blending well-calibrated probability forecasts does not in general produce a well-calibrated output. They demonstrate that recalibrating the blend using the beta distribution CDF improves reliability, sharpness, and scores on proper metrics.
This PR adds a CLI to apply the beta CDF transformation to probability forecasts. It allows the parameters to vary with forecast period.
Testing:
An experiment recalibrating the blended output of rainfall forecasts (where each forecast is already individually calibrated with Rainforests) showed clear improvement in reliability and sharpness.
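As an aside for readers unfamiliar with the transform, here is a minimal sketch of the beta-CDF recalibration idea. The coefficient values are illustrative only, and this is not the new CLI's interface:

```python
# Minimal sketch of beta-CDF recalibration, assuming pre-estimated
# alpha/beta coefficients for a single forecast period (values illustrative).
import numpy as np
from scipy.stats import beta

blended_probs = np.array([0.05, 0.2, 0.5, 0.8, 0.95])  # blended probability forecast
alpha, beta_param = 1.2, 0.9                            # assumed coefficients
recalibrated = beta.cdf(blended_probs, alpha, beta_param)
print(recalibrated)  # recalibrated probabilities, still in [0, 1] and monotone in the input
```

Because the beta CDF is a monotone map from [0, 1] to [0, 1], the recalibration reshapes the probabilities without changing their ordering, which is what allows it to improve reliability while preserving the forecast's ranking of events.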