{riskassessment} could rely on {riskmetric} more #268
AARON-CLARK asked this question in Ideas
Replies: 1 comment
I have opened several issues on the riskmetric repo to address these points (linked below).
👋 Purpose
This discussion space exists to define & display where {riskassessment} is re-inventing wheels that {riskmetric} has already built. In other words, it would be better practice if {riskmetric} could export some additional info so that, as things change in the future, {riskassessment} could just slurp them up. For reference, here is the link to the beta version (v0.0.1) of the app, which loosely corresponds to the dev branch.
💡 Needs / Ideas
1. Assessment Criteria
This is pretty low hanging fruit. The Assessment Criteria tab (right now) has taken a bunch of content from the {riskmetric} documentation, such as description text about the risk calculation, maintenance metrics, community usage metrics, and testing. Could these descriptions live within {riskmetric} and be exported? Then, as they are updated over time, they would port over to the app really well.
In addition, we manually created csv files compiling the tables you see for each metric type, which record each metric's name, how it's measured, and the reason for its inclusion. It would be excellent if this metadata could also live within {riskmetric} and be exported, for the same reason.
Also, can you remind me if {riskmetric} is exporting any testing metrics right now?
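To make the ask concrete, here is a minimal sketch of what such an export could look like. Note that `metric_metadata()` is hypothetical, not part of the current {riskmetric} API, and the metric rows are just illustrative:

```r
# Hypothetical export from {riskmetric} -- this function does not exist today.
# One row per metric, so {riskassessment} could drop its hand-maintained csvs.
metric_metadata <- function() {
  data.frame(
    name    = c("has_vignettes", "has_news", "downloads_1yr"),
    type    = c("maintenance", "maintenance", "community usage"),
    measure = c("Vignettes available", "NEWS file present",
                "CRAN downloads over the past year"),
    reason  = c("Signals documentation quality", "Signals change communication",
                "Signals adoption"),
    stringsAsFactors = FALSE
  )
}

# The app could then build each Assessment Criteria table directly:
meta <- metric_metadata()
split(meta[, c("name", "measure", "reason")], meta$type)
```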
2. get_latest_pkg_info()
For whatever reason, we didn't feel we could get sufficient package info from {riskmetric}, so we wrote a function to perform some light web scraping of 'https://cran.r-project.org/web/packages/{pkg_name}'. See code here. At the end of the day, it just retrieves the package title, version, description, maintainer, author, license, and published date. This info is used in various places throughout the app, but you can see it all in any of the downloadable reports.
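For context, here is a rough sketch of that kind of scrape. It assumes the current layout of CRAN package pages (an h2 title, a leading description paragraph, and a two-column summary table); the real get_latest_pkg_info() in the app differs in the details:

```r
library(rvest)

# Rough sketch of scraping a CRAN package page; assumes the page layout
# described above and skips the error handling the real function would need.
get_latest_pkg_info_sketch <- function(pkg_name) {
  url  <- sprintf("https://cran.r-project.org/web/packages/%s", pkg_name)
  page <- read_html(url)

  # The first table holds Version, Maintainer, License, Published, etc.
  tbl <- html_table(html_element(page, "table"), header = FALSE)
  names(tbl) <- c("field", "value")
  tbl$field  <- sub(":$", "", tbl$field)
  grab <- function(f) tbl$value[match(f, tbl$field)]

  list(
    title       = html_text2(html_element(page, "h2")),
    description = html_text2(html_element(page, "p")),
    version     = grab("Version"),
    maintainer  = grab("Maintainer"),
    author      = grab("Author"),
    license     = grab("License"),
    published   = grab("Published")
  )
}

get_latest_pkg_info_sketch("riskmetric")
```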
3. insert_community_metrics_to_db()
This function performs two tasks, the second of which you can ignore. So again, we felt we needed more data around package versions & release history for the plot we display on the Community Usage tab. At the end of the day, we come away with a data.frame of download counts and release history, using cranlogs::cran_downloads() to grab the download data. We use that data to produce the downloads-over-time plot via {plotly}. We also manually calculate all of the stats/metrics reported on the summary cards.
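As a rough illustration of that data pull and plot (the date range, monthly aggregation, and styling here are assumptions, not the app's exact code):

```r
library(cranlogs)
library(plotly)

pkg <- "riskmetric"

# Daily downloads since 2020; cran_downloads() returns date/count/package.
dl <- cran_downloads(pkg, from = "2020-01-01", to = Sys.Date() - 1)

# Aggregate to months for a smoother trend, like the Community Usage plot.
dl$month <- format(dl$date, "%Y-%m")
monthly  <- aggregate(count ~ month, data = dl, FUN = sum)

plot_ly(monthly, x = ~month, y = ~count,
        type = "scatter", mode = "lines+markers") |>
  layout(title = paste("Monthly CRAN downloads:", pkg),
         xaxis = list(title = "Month"),
         yaxis = list(title = "Downloads"))
```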
4. Unknown items
There may be some other items that I haven't dug deep enough to find, so I'll leave this space open for when I do.
Items not yet implemented
This is just an initial list to get the discussion started. I think we also talked about some things not yet incorporated, but certainly on the horizon, that will require coordination between the two packages:
1. Dependency tree
Some folks have requested to see what packages a package was built with (its dependencies). It would be nice to get that list from {riskmetric}.
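In the meantime, a recursive dependency list is obtainable from base R, which is roughly the shape of output we'd want {riskmetric} to hand us:

```r
# Roughly the output shape we'd want {riskmetric} to expose; this uses base
# R's tools and needs internet access to query the CRAN package database.
deps <- tools::package_dependencies(
  "riskmetric",
  db        = available.packages(),
  which     = c("Depends", "Imports", "LinkingTo"),
  recursive = TRUE
)
deps[["riskmetric"]]
```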
2. Both tools need an R pkg hex!
I'm thinking we need to collaborate on this so that they both look aesthetically similar. Is there an R Validation Hub aesthetic we should adhere to? Or should we make something from scratch for these tools? Or is there money in the budget to hire a graphic designer to sketch up something for both tools? Maybe we could propose some ideas to the rest of the exec committee?