Meta v0 SummRt #8

Open · 12 tasks

kaitejohnson opened this issue Sep 25, 2024 · 0 comments

Outlining tasks for a v0 of an R(t) evaluation package

I'm thinking each of these points can be a separate issue that individuals can create and assign to themselves or their small group:

  • Create and document/describe canonical dataset(s) as package data, starting with the data used in the RtEval repo (see the package-data sketch after this list)
  • Write an S3 method that takes in the package-specific function arguments needed to fit a dataset and generates a standardized output format across multiple packages (see the S3 sketch after this list).
    To discuss: which methods do we want (e.g. fit, plot, score), and are they specific to a dataset or general across datasets? What standardized output format is conducive to evaluation across all the packages? E.g. Chad has created plot data for all packages (just R(t) by day: Rt_median, lb, ub), but should we have additional outputs such as predicted observations?
    • EpiNow2
    • EpiEstim
    • rtestim
    • EpiLPS
  • Function to evaluate standardized outputs for each dataset (the onus is on the dataset team to tell the eval team how to evaluate; e.g. for real data this will be forecast/nowcast-based evaluation of generated observations, while for simulated data it could be that plus direct R(t) evaluation; see the scoring sketch at the end of this comment)
  • For each canonical dataset, a vignette combining fits and standardized outputs, running the evaluation, and plotting outputs for comparison
  • README.md with sections describing the purpose of the repo; what package developers can do to contribute (e.g. check their implementation, create a vignette in their package that fits to the canonical dataset and link to it, add a dataset that their package is intended for, open a PR to add their package, etc.); and what this is intended to provide to users/evaluators (a roadmap for which R(t) packages to use when)
  • Automate updating of package versions in CI at an agreed-upon cadence and rerun vignettes (vignettes could also be rerun on merge to main, but this might be prohibitively slow). See epinowcast for an example of automated vignette updating.
  • Set up GitHub Actions for building the pkgdown site, running tests, and checking pre-commit (and set up pre-commit locally)
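
A minimal sketch of the package-data step, assuming we start from a CSV exported from the RtEval repo; the file path and the `canonical_cases` object name are placeholders:

```r
# Hypothetical: register a canonical dataset as package data.
# "data-raw/rteval_cases.csv" and `canonical_cases` are placeholder names.
canonical_cases <- read.csv("data-raw/rteval_cases.csv")
usethis::use_data(canonical_cases, overwrite = TRUE)
# Then document the dataset in R/data.R with a roxygen block
# describing columns, source, and which packages it is meant to test.
```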
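And a hedged sketch of the S3 interface question: everything here (the `rt_fit()` generic, the `rt_estimate` class, and the column set) is a placeholder for discussion, not an agreed-on API. The `EpiEstim::estimate_R()` call and its output columns are real, but the wrapper around it is illustrative:

```r
# A standardized "rt_estimate" constructor so downstream plot/score
# methods can dispatch on one class regardless of the fitting package.
new_rt_estimate <- function(date, Rt_median, Rt_lb, Rt_ub, package) {
  out <- tibble::tibble(date = date, Rt_median = Rt_median,
                        Rt_lb = Rt_lb, Rt_ub = Rt_ub, package = package)
  class(out) <- c("rt_estimate", class(out))
  out
}

# Generic: dispatch on a per-package spec object, e.g.
# structure(list(config = ...), class = "epiestim_spec").
rt_fit <- function(spec, data, ...) UseMethod("rt_fit")

# Example method for EpiEstim; the spec/data shapes are assumptions.
rt_fit.epiestim_spec <- function(spec, data, ...) {
  fit <- EpiEstim::estimate_R(
    incid  = data$cases,
    method = "parametric_si",
    config = spec$config
  )
  # Coerce to the standardized output: one row per day with a median
  # and central interval, as in Chad's RtEval plot data.
  new_rt_estimate(
    date      = data$date[fit$R$t_end],
    Rt_median = fit$R$`Median(R)`,
    Rt_lb     = fit$R$`Quantile.0.025(R)`,
    Rt_ub     = fit$R$`Quantile.0.975(R)`,
    package   = "EpiEstim"
  )
}
```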

A lot of this functionality is already in https://github.com/cmilando/RtEval, so I think the main goal should be to reuse as much of that infrastructure as possible.
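
For the simulated-data case, where the true R(t) is known, the evaluation function could be as simple as joining the standardized outputs to the truth. `score_rt()` and the metric choices below are illustrative assumptions, not a settled design:

```r
library(dplyr)

# estimates: standardized output (date, Rt_median, Rt_lb, Rt_ub, package)
# truth:     tibble with (date, Rt_true) from the simulation
score_rt <- function(estimates, truth) {
  estimates |>
    inner_join(truth, by = "date") |>
    group_by(package) |>
    summarise(
      mae       = mean(abs(Rt_median - Rt_true)),             # point-estimate error
      coverage  = mean(Rt_true >= Rt_lb & Rt_true <= Rt_ub),  # interval coverage
      sharpness = mean(Rt_ub - Rt_lb),                        # mean interval width
      .groups   = "drop"
    )
}
```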
