Major update #166

Merged: 251 commits into master, Mar 18, 2022

Conversation

nikosbosse (Contributor) commented Nov 25, 2021

Feature updates

  • new and updated README and vignette
  • the proposed scoring workflow was reworked. Functions were changed so they
    can easily be piped and have simplified arguments and outputs.

new functions and function changes

  • the function eval_forecasts() was replaced by a function [score()] with a
    much reduced set of function arguments (see the workflow sketch after this
    list).
  • functionality to summarise scores and to add relative skill scores was moved
    to a function [summarise_scores()]
  • new function [check_forecasts()] to analyse input data before scoring
  • new function [correlation()] to compute correlations between different metrics
  • new function [add_coverage()] to add coverage for specific central prediction
    intervals
  • new function [avail_forecasts()] to visualise the number of available
    forecasts
  • all plotting functions were renamed to begin with plot_
  • the function [pit()] now operates on data.frames. The old pit function
    was renamed to [pit_sample()]. PIT p-values were removed entirely.
  • the function [plot_pit()] now works directly with input as produced by [pit()]
  • many data-handling functions were removed and input types for [score()] were
    restricted to sample-based, quantile-based or binary forecasts.
  • the function [brier_score()] now returns all Brier scores, rather than taking
    the mean before returning an output.
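
To make the reworked, pipe-friendly workflow concrete, here is a minimal
sketch. It is an illustration based on the descriptions above, not code taken
from the PR: the data set name example_quantile follows the example_ prefix
described below, and the argument names (by, ranges) and the column names
(model, target_type) are assumptions drawn from the package's example data.

```r
library(scoringutils)

# validate the input data before scoring
check_forecasts(example_quantile)

# score forecasts, add interval coverage, then summarise by model and target
# (the |> pipe requires R >= 4.1)
scores <- score(example_quantile) |>
  add_coverage(by = c("model", "target_type"), ranges = c(50, 90)) |>
  summarise_scores(by = c("model", "target_type"))

# PIT now operates on data.frames and pipes straight into plot_pit()
pit(example_quantile, by = "model") |>
  plot_pit()
```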

package data updated

  • package data is now based on forecasts submitted to the European Forecast Hub
    (https://covid19forecasthub.eu/).
  • all example data files were renamed to begin with example_
  • a new data set, summary_metrics, was included that contains a summary of the
    metrics implemented in scoringutils
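
A hypothetical quick look at the renamed package data, assuming the objects
are exported under the names given above:

```r
library(scoringutils)

# quantile forecasts submitted to the European Forecast Hub
head(example_quantile)

# summary of the metrics implemented in scoringutils
summary_metrics
```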

Other breaking changes

  • The 'sharpness' component of the weighted interval score was renamed to
    dispersion. This makes it clearer what the component represents and keeps
    the terminology consistent with how it is used elsewhere.
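
As background for the renaming (not part of the PR itself): the interval score
underlying the weighted interval score decomposes into three components, in
the notation of Bracher et al. (2021), where [l, u] is the central
(1 - alpha) prediction interval and y the observation:

```latex
IS_\alpha(F, y) =
    \underbrace{(u - l)}_{\text{dispersion}}
  + \underbrace{\tfrac{2}{\alpha}\,(l - y)\,\mathbf{1}(y < l)}_{\text{underprediction}}
  + \underbrace{\tfrac{2}{\alpha}\,(y - u)\,\mathbf{1}(y > u)}_{\text{overprediction}}
```

The dispersion (formerly 'sharpness') component is simply the width of the
prediction interval.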

nikosbosse and others added 22 commits November 23, 2021 21:50
Update branch with newest version of check_forecasts function
Update branch with latest version of check function... again...
Merge branch 'simplify-eval-forecasts' of https://github.com/epiforecasts/scoringutils into simplify-eval-forecasts

# Conflicts:
#	R/pit.R
#	man/pit.Rd
#	man/pit_df.Rd
reduce the number of arguments to eval_forecasts()
  - get rid of verbose argument in most instances
  - eliminate by argument by requiring users to only include relevant columns
  - replace list of arguments for interval_score by ...
  - remove PIT plots from eval_forecasts()
  - remove summarised argument and always summarise (users can pass every
    column to summarise_by to avoid any summary taking place)
simplify code for PIT values by
  - returning PIT values rather than p-values from an Anderson-Darling test
  - creating a version that works on a data.frame and returns a list that can
    then be passed to a plotting function
codecov bot commented Nov 25, 2021

Codecov Report

Merging #166 (d44649a) into master (4f583a5) will increase coverage by 2.11%.
The diff coverage is 52.09%.

❗ Current head d44649a differs from pull request most recent head 92ecf09. Consider uploading reports for the commit 92ecf09 to get more accurate results.

@@            Coverage Diff             @@
##           master     #166      +/-   ##
==========================================
+ Coverage   54.56%   56.68%   +2.11%     
==========================================
  Files          19       22       +3     
  Lines        1468     1362     -106     
==========================================
- Hits          801      772      -29     
+ Misses        667      590      -77     
Impacted Files               Coverage Δ
R/avail_forecasts.R           0.00% <0.00%>  (ø)
R/correlations.R              0.00% <0.00%>  (ø)
R/plot.R                     17.19% <9.87%>  (-3.11%) ⬇️
R/bias.R                     75.34% <57.50%> (-12.59%) ⬇️
R/summarise_scores.R         69.44% <69.44%> (ø)
R/input-check-helpers.R      70.12% <70.12%> (ø)
R/check_forecasts.R          76.23% <74.11%> (+19.09%) ⬆️
R/metrics_point_forecasts.R  75.00% <75.00%> (+75.00%) ⬆️
R/pit.R                      77.35% <76.92%> (+21.26%) ⬆️
R/utils.R                    88.23% <85.36%> (+16.80%) ⬆️
... and 14 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

nikosbosse merged commit c0d4459 into master Mar 18, 2022
seabbs (Contributor) commented Mar 21, 2022

🥳

Successfully merging this pull request may close these issues:
  • Typo in the documentation of weigh argument to interval_score()
  • rename sharpness as dispersion?