
submission: eia #342

Closed · leonawicz opened this issue Sep 16, 2019 · 21 comments

@leonawicz (Member) commented Sep 16, 2019

Submitting Author: Matthew Leonawicz (@leonawicz)
Repository: https://github.com/leonawicz/eia
Version submitted: 0.3.3
Editor: @melvidoni
Reviewer 1: @daranzolin
Reviewer 2: @ottlngr
Archive: TBD
Version accepted: TBD


  • Paste the full DESCRIPTION file inside a code block below:
Package: eia
Title: API Wrapper for 'US Energy Information Administration' Open Data
Version: 0.3.3
Authors@R: c(
    person("Matthew", "Leonawicz", email = "[email protected]", role = c("aut", "cre"), comment = c(ORCID = "0000-0001-9452-2771")),
    person(given = "E Source", role = c("cph", "fnd"))
    )
Description: Provides API access to data from the 'US Energy Information Administration' ('EIA') <https://www.eia.gov/>. 
    Use of the API requires a free API key obtainable at <https://www.eia.gov/opendata/register.php>.
    The package includes functions for searching 'EIA' data categories and importing time series and geoset time series datasets. 
    Datasets returned by these functions are provided in a tidy format or alternatively in more raw form. 
    It also offers helper functions for working with 'EIA' date strings and time formats and for inspecting different summaries of series metadata.
    The package also provides control over API key storage and caching of API request results.
License: MIT + file LICENSE
Encoding: UTF-8
LazyData: true
URL: https://github.com/leonawicz/eia
BugReports: https://github.com/leonawicz/eia/issues
Date: 2019-09-16
Imports: 
    tibble,
    magrittr,
    httr,
    jsonlite,
    dplyr,
    purrr,
    memoise,
    lubridate
Suggests: 
    testthat,
    knitr,
    rmarkdown,
    covr,
    lintr,
    tidyr
VignetteBuilder: knitr
Language: en-US
RoxygenNote: 6.1.1

Scope

  • Please indicate which category or categories from our package fit policies this package falls under: (Please check an appropriate box below. If you are unsure, we suggest you make a pre-submission inquiry.):

    • data retrieval
    • data extraction
    • database access
    • data munging
    • data deposition
    • reproducibility
    • geospatial data
    • text analysis
  • Explain how and why the package falls under these categories (briefly, 1-2 sentences):

The package is specifically for retrieving energy-related datasets from the US Energy Information Administration (EIA) open data API.

  • Who is the target audience and what are scientific applications of this package?

Researchers, data analysts, etc. in academia, government, the energy industry, and related industries. While the agency is the US EIA, international energy datasets are also available through the API.

There are two other overlapping R packages. (1) EIAdata, also on CRAN: it provides basic data access, but its data formats and other options are more limited, and it does not cover as many API endpoints or offer as much functionality. (2) A different, GitHub-only eia: https://github.com/krose/eia. It is similar to (1) in its offerings and level of development. Both are fairly minimalist packages that have been around for about five years (around when the EIA API was first launched, I believe) but have not been developed much further since their initial releases.

This submission of eia (CRAN) is for a package with greater coverage in terms of functionality, API endpoint access, documentation, robust test coverage, and error handling. In comparison to the others, it also offers three data output format options for users working with the API data in different contexts (for example, working strictly in R, or wanting pure JSON results to pipe into other software applications); you can control the level of "tidy"-ness. It also includes a user agent in calls and takes other steps to play well with the API.

  • If you made a pre-submission enquiry, please paste the link to the corresponding issue, forum post, or other discussion, or @tag the editor you contacted.

Technical checks

Confirm each of the following by checking the box. This package:

Note added by author: This particular API requires users to use their own API key. I cannot run function examples or unit tests on CRAN, but all examples and unit tests run successfully in multiple other environments, on local and remote systems. The full test suite also runs on Travis-CI, where I am able to import an encrypted key. For CRAN purposes, a no-key copy of the main vignette (API call examples not executed) was previously included in the package, while the main and all additional vignettes showing executed API calls and results are part of the pkgdown web documentation. Also for CRAN purposes, function examples in the R package source code are not run if they require an API key to be present.
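A minimal sketch of the kind of guard used to skip key-dependent tests when no key is present; the test, helper, and expectation shown are illustrative only (they assume eia_cats() returns its tidy result as a list), not code copied from the package:

```r
# Sketch only: skip API-dependent tests when no key is available (e.g., on CRAN).
library(testthat)

test_that("top-level categories can be retrieved", {
  skip_if(Sys.getenv("EIA_KEY") == "", "No EIA API key available")
  x <- eia::eia_cats()   # assumes an API key is set in EIA_KEY
  expect_type(x, "list") # assumed shape of the tidy output
})
```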

Publication options

JOSS Options
  • The package has an obvious research application according to JOSS's definition.
    • The package contains a paper.md matching JOSS's requirements with a high-level description in the package root or in inst/.
    • The package is deposited in a long-term repository with the DOI:
    • (Do not submit your package separately to JOSS)
MEE Options
  • The package is novel and will be of interest to the broad readership of the journal.
  • The manuscript describing the package is no longer than 3000 words.
  • You intend to archive the code for the package in a long-term repository which meets the requirements of the journal (see MEE's Policy on Publishing Code)
  • (Scope: Do consider MEE's Aims and Scope for your manuscript. We make no guarantee that your manuscript will be within MEE scope.)
  • (Although not required, we strongly recommend having a full manuscript prepared when you submit here.)
  • (Please do not submit your package separately to Methods in Ecology and Evolution)

Code of conduct

@melvidoni (Contributor) commented Sep 18, 2019

Editor checks:

  • Fit: The package meets criteria for fit and overlap
  • Automated tests: Package has a testing suite and is tested via Travis-CI or another CI service.
  • License: The package has a CRAN or OSI accepted license
  • Repository: The repository link resolves correctly

Editor comments

Hello @leonawicz, thanks for submitting to rOpenSci. I'll be your handling editor. The following is the output of goodpractice::gp(). Please take a look at all of it and let me know once it is done, so I can check again and start looking for reviewers once everything is fixed.

── GP eia ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

It is good practice to

  ✖ write unit tests for all functions, and all package code in general. 23% of code lines are covered by test cases.

    R/categories.R:34:NA
    R/categories.R:35:NA
    R/categories.R:36:NA
    R/categories.R:40:NA
    R/categories.R:41:NA
    ... and 227 more lines

  ✖ omit "Date" in DESCRIPTION. It is not required and it gets invalid quite often. A build date will be added to the package when you perform `R CMD build`
    on it.
  ✖ avoid long code lines, it is bad for readability. Also, many people prefer editor windows that are about 80 characters wide. Try make your lines shorter
    than 80 characters

    R\categories.R:111:1
    R\categories.R:114:1
    R\geoset.R:56:1
    R\key.R:72:1
    R\key.R:85:1
    ... and 3 more lines

  ✖ avoid sapply(), it is not type safe. It might return a vector, or a list, depending on the input data. Consider using vapply() instead.

    R\categories.R:46:18
    R\categories.R:48:19
    R\geoset.R:86:18
    R\series.R:58:10
    R\series.R:84:26
    ... and 3 more lines

  ✖ avoid 1:length(...), 1:nrow(...), 1:ncol(...), 1:NROW(...) and 1:NCOL(...) expressions. They are error prone and result 1:0 if the expression on the
    right hand side is zero. Use seq_len() or seq_along() instead.

    R\series.R:71:20
    R\series.R:168:17
    R\series.R:218:18

  ✖ fix this R CMD check WARNING: LaTeX errors when creating PDF version. This typically indicates Rd problems.
  ✖ fix this R CMD check ERROR: Re-running with no redirection of stdout/stderr. Hmm ... looks like a package Error in texi2dvi(file = file, pdf = TRUE,
    clean = clean, quiet = quiet, : pdflatex is not available Error in texi2dvi(file = file, pdf = TRUE, clean = clean, quiet = quiet, : pdflatex is not available
    Error in running tools::texi2pdf() You may want to clean up by 'rm -Rf C:/Users/e95207/AppData/Local/Temp/Rtmp2hHotb/Rd2pdff30c3f634cfa'

Reviewers: @ottlngr and @daranzolin
Due date: October 28th, 2019

@leonawicz (Member, Author) commented:

Hi @melvidoni, thank you for considering this submission. I have addressed your requested changes; details below, along with a couple of questions.

  • Date field now omitted from DESCRIPTION.
  • Expressions like 1:nrow(...) now replaced (per the seq_len()/seq_along() recommendation).
  • Lines greater than 80 characters fixed and re-linted.
  • Almost all uses of sapply() switched to vapply(). There are two cases remaining where this does not work well, because vapply() requires a fixed-length return value and those cases produce variable-length output. Wherever sapply() occurred in the package, I used it only in contexts where it was known to be type-safe. Let me know if you have another suggestion; a generic illustration follows this list.
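To illustrate the distinction with a generic example (not code from the package):

```r
# vapply() is a drop-in replacement when each element yields a result of
# fixed type and length:
x <- list(a = 1:3, b = 4:9)
vapply(x, length, integer(1))      # type-safe version of sapply(x, length)

# It cannot be used directly when result lengths vary; lapply() (or purrr::map())
# is the usual fallback in that case:
lapply(x, function(v) v[v > 2])    # variable-length output per element

# Likewise, seq_along(x) replaces 1:length(x), returning integer(0) instead of
# c(1, 0) when x is empty.
```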

Unit tests

The package has 100% testthat unit test coverage, see here. The complete set of tests is executed on Travis-CI by including my encrypted API key for Travis here. This config also shows there are just a few lines in the package that are explicitly excluded from coverage because they are not practically testable.

If the tests are run without an API key present, the vast majority of tests must be skipped. To run, build, or check locally, an EIA API key must be placed somewhere such as your .Renviron file as EIA_KEY=MYKEY. I could provide you (and the reviewers) with my key offline, or you could request one from EIA here (it only requires an email address, and they send you a key a moment later).
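For example (MYKEY is a placeholder; the package's own key-storage helpers are not shown here):

```r
# Persistent option: add this line to ~/.Renviron and restart R:
# EIA_KEY=MYKEY

# Session-only option:
Sys.setenv(EIA_KEY = "MYKEY")
Sys.getenv("EIA_KEY")   # confirm the key is visible
```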

R CMD check errors and warnings

I am unable to reproduce these on various Windows or Linux platforms. Is there more information about the environment in which you are building or checking the package? Even without a key present, the package skips the affected functions and tests and so should still build successfully.

Thank you! :)

@melvidoni (Contributor) commented:

Hello @leonawicz, thanks for taking care of this.

Don't worry about the remaining sapply(), as long as it is tested and controlled. goodpractice::gp() now gave me:

── GP eia ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

It is good practice to


  ✖ avoid sapply(), it is not type safe. It might return a vector, or a list, depending on the input data. Consider using vapply() instead.

    R\series.R:84:26
    R\series.R:85:25

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── 

So I'm going to start searching for reviewers!

@melvidoni (Contributor) commented:

Hello @leonawicz. The first reviewer is @daranzolin. I will establish a deadline for the review as soon as I get a second reviewer in.

@melvidoni (Contributor) commented:

Just a heads up to both @leonawicz and @daranzolin. I am having issues finding a second reviewer. Please bear with me while I email more people.

@melvidoni (Contributor) commented:

We got reviewers assigned, @leonawicz!

Thanks @ottlngr and @daranzolin for reviewing this package. You have until October 28th to submit the review through the GitHub issue.

@daranzolin (Member) commented Oct 18, 2019

Hi all, review is below.

Package Review

  • As the reviewer I confirm that there are no conflicts of interest for me to review this work (If you are unsure whether you are in conflict, please speak to your editor before starting your review).

Documentation

The package includes all the following forms of documentation:

  • A statement of need clearly stating problems the software is designed to solve and its target audience in README
  • Installation instructions: for the development version of package and any non-standard dependencies in README
  • Vignette(s) demonstrating major functionality that runs successfully locally
  • Function Documentation: for all exported functions in R help
  • Examples for all exported functions in R Help that run successfully locally
  • Community guidelines including contribution guidelines in the README or CONTRIBUTING, and DESCRIPTION with URL, BugReports and Maintainer (which may be autogenerated via Authors@R).

Functionality

  • Installation: Installation succeeds as documented.
  • Functionality: Any functional claims of the software have been confirmed.
  • Performance: Any performance claims of the software have been confirmed.
  • Automated tests: Unit tests cover essential functions of the package
    and a reasonable range of inputs and conditions. All tests pass on the local machine.
  • Packaging guidelines: The package conforms to the rOpenSci packaging guidelines

Final approval (post-review)

  • The author has responded to my review and made changes to my satisfaction. I recommend approving this package.

Estimated hours spent reviewing: 2

  • Should the author(s) deem it appropriate, I agree to be acknowledged as a package reviewer ("rev" role) in the package DESCRIPTION file.

Review Comments

Terrific package that is already mostly polished. I learned how to get my API key, peruse the vignettes, search through various child categories, and subset relevant data in less than 30 minutes. I felt free to explore aggressively knowing about the built-in memoization and rate-limiting. Great features. The pattern of searching for parent/child ids and using eia_series or eia_geoset would soon become familiar and comfortable after more use. The ability to search for regions like "New England" or "USA" was nifty.

Some questions/observations:

  • Is there perhaps an easier way to explore/traverse the parent-child relationships? An interactive shiny gadget that cascades select inputs might be cool. Or maybe a recursive = TRUE argument in eia_cats, similar to dir or list.files? It might also be useful if I could do something similar to here::here with eia_cats, e.g. eia_cats("Electricity", "Total consumption").

  • You have an example for unnest-ing data in the Time series vignette, but not in the Geosets vignette. A similar example there (since that is the one I clicked first) would be useful.

  • Inconsequential, but there is some inconsistency in the vignettes ending sentences with periods vs. colons.

  • I think the README and vignettes would benefit from a visualization. I've used a lot of API wrappers, and seeing what visualizations I can produce quickly helps me understand what data is available.

Some alerts from goodpractice::gp():

✖ avoid long code lines, it is bad for readability. Also, many people prefer
editor windows that are about 80 characters wide. Try make your lines shorter than 80
characters

R\cache.R:5:1
R\categories.R:5:1
R\categories.R:6:1
R\categories.R:10:1
R\categories.R:11:1
... and 80 more lines

Looks like most pertain to roxygen help text and decorators.

And these last two are possibly the result of my machine, not yours, but I'll flag them anyway:

✖ fix this R CMD check WARNING: LaTeX errors when creating PDF version. This typically indicates Rd problems.

✖ fix this R CMD check ERROR: Re-running with no redirection of
stdout/stderr. Hmm ... looks like a package Error in texi2dvi(file = file, pdf =
TRUE, clean = clean, quiet = quiet, : pdflatex is not available Error in
texi2dvi(file = file, pdf = TRUE, clean = clean, quiet = quiet, : pdflatex is not
available Error in running tools::texi2pdf() You may want to clean up by 'rm -Rf
C:/Users/918831~1/AppData/Local/Temp/RtmpSmuM4O/Rd2pdf934192f4b19'

@melvidoni (Contributor) commented:

Hello @ottlngr! How is your review going?

@leonawicz (Member, Author) commented:

Hi all, I plan to mostly get to work on this once both reviews are in but I can chime in on some initial things.

@daranzolin Thank you for your review. :) Regarding your questions about exploring the category tree, I will continue to give this some thought, but the API is vast and I'd want to be extra careful with recursion, possibly recursing up (eia_parent_cats already does this) but not down. I think a Shiny app would be a nice touch, but users would probably be much quicker to just browse the existing API where they obtained their key, using the EIA's existing API explorer. It would perhaps be low return on effort to semi-duplicate what it offers.
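For reference, the tree can already be walked manually by chaining eia_cats() calls on the child IDs it returns; a rough sketch (column names are assumed from the tidy output described in the vignettes, and an API key is required):

```r
# Sketch: manual drill-down through the category tree.
library(eia)

top <- eia_cats()                  # top-level categories
top$childcategories                # assumed tibble of child ids and names

child_id <- top$childcategories$category_id[1]
eia_cats(child_id)                 # one level down

# Recursing upward is already supported via eia_parent_cats(child_id).
```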

I like the idea of allowing names like "Electricity" in place of their opaque category ID numbers. However, these numbers are actually better, in part because no one would want to type out many of the category names, which can get really long. Please let me know if you envision something different or I'm misunderstanding. But yes, I agree, the most frustrating part of such a massive collection of datasets behind an API is getting acquainted with the IDs you need for your project. My hunch is most users do this through the API explorer on the website before ever encountering this particular API wrapper package.

I will address the other inconsistencies in the code and documentation you mentioned soon.

I think I fixed all the code lines longer than 80 characters, but is it necessary to apply this rule to all of the roxygen2 help text? It has no impact on how the help docs render, and from an author/maintainer standpoint it's a lot easier to write, read, and edit the text if the sentences aren't so broken up. I'm happy to do so if required; just let me know.

I agree on the value of visualizations and will be adding some. The console-heavy feel is too much without having any. I'll plan to add some basic visualizations as part of this review process.

Yes, I think the errors relate to not having the PDF-creating utility (pdflatex) available when checking/building the package locally on a Windows machine.

Thanks so much for your feedback!

@ottlngr commented Oct 24, 2019

@melvidoni @leonawicz I'm planning to finish my review on Sunday :)

@daranzolin (Member) commented Oct 25, 2019

@leonawicz We recently developed an internal shiny gadget with cascading select filters from user input, and it struck me as a possible application here. Probably superfluous here, though. I agree with you on both points: the API explorer is sufficient and also shows the category ids (obviating the need to type long strings).

@leonawicz (Member, Author) commented Oct 25, 2019

@ottlngr Sounds good, I will check back next week. Thank you

@melvidoni @daranzolin I have had a chance today to make revisions. The latest version on GitHub cleans up the vignette formatting. I also added basic example ggplot graphs to the README, the getting-started vignette, and the most applicable other vignettes (series and geoset), per your suggestion.

Out of general curiosity, can you point me to a resource about this cascading select filter Shiny gadget you mentioned?

Also, after merging your vignette PR, I realized I brought in a new feature with this recent push. I don't want that to confuse anyone: there is a new function, eia_report, an initial version of a basic wrapper for downloading data from popular canned reports released on the EIA website; data that is not directly available through the API itself. It only offers one report at this time and has unit tests and documentation. This is a very minor feature and I don't want to draw attention to it in the longer-form vignette documentation at this time, but it may grow in the future with more reports as I continue to field user requests and feedback. I mention it just in case you wonder why a new function has appeared.

Thanks! :)

@ottlngr commented Oct 27, 2019

Package Review

Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide

  • As the reviewer I confirm that there are no conflicts of interest for me to review this work (If you are unsure whether you are in conflict, please speak to your editor before starting your review).

Documentation

The package includes all the following forms of documentation:

  • A statement of need clearly stating problems the software is designed to solve and its target audience in README
  • Installation instructions: for the development version of package and any non-standard dependencies in README
  • Vignette(s) demonstrating major functionality that runs successfully locally
  • Function Documentation: for all exported functions in R help
  • Examples for all exported functions in R Help that run successfully locally
  • Community guidelines including contribution guidelines in the README or CONTRIBUTING, and DESCRIPTION with URL, BugReports and Maintainer (which may be autogenerated via Authors@R).

Functionality

  • Installation: Installation succeeds as documented.
  • Functionality: Any functional claims of the software have been confirmed.
  • Performance: Any performance claims of the software have been confirmed.
  • Automated tests: Unit tests cover essential functions of the package
    and a reasonable range of inputs and conditions. All tests pass on the local machine.
  • Packaging guidelines: The package conforms to the rOpenSci packaging guidelines

Final approval (post-review)

  • The author has responded to my review and made changes to my satisfaction. I recommend approving this package.

Estimated hours spent reviewing: 2.5

  • Should the author(s) deem it appropriate, I agree to be acknowledged as a package reviewer ("rev" role) in the package DESCRIPTION file.

Review Comments

eia is in many ways an exemplary package: great vignettes and documentation, clean and consistent code, as well as state-of-the-art dependencies and 'infrastructure' (tests, CI, pkgdown, ...). Using eia is pretty straightforward, and I like the way eia supplies additional functionality (compared to the raw API) where useful, but generally keeps things simple.

A few questions/remarks:

  • I think it's a useful feature to basically have three output options (tidy, fromJSON, raw), but I felt a bit uncomfortable using TRUE, FALSE and NA to switch between these choices. For me personally, a setup using tidy = TRUE|FALSE plus raw = FALSE|TRUE would be more compelling.

  • You are using httr in the background, but when creating the request URL in .eia_url() you rely on paste0() instead of httr::modify_url() - is there a reason for this?

  • Explicit compliment for setting the user agent globally!

Regarding @daranzolin's review, I agree with using some visualizations in the vignettes. Also, I wished the exploration of the category tree could be improved - but as you mentioned, @leonawicz, this would result in many additional API requests.

R CMD check does not show any problems on my machine.

Great package, definitely a good supplement to the rOpenSci universe!

@melvidoni (Contributor) commented:

All reviews are in, @leonawicz!

@leonawicz (Member, Author) commented:

@melvidoni, thank you, and @ottlngr, thank you for your review; responses below.

The tidy argument

Here was my thought process when developing this: I agree that TRUE, FALSE, NA may seem a little unusual at first. I had also considered alternatives, including raw = TRUE. I decided against any raw argument because it seemed to suggest getting actual raw-type data, which some APIs return, and that was confusing. I was also looking to avoid creating two separate arguments that in combination determine which of the three output formats the user receives, with one having to be ignored depending on the other.

So I stuck with three values for one argument, and these three values do capture the meaning. I also figure that very few users will ever want something other than tidy = TRUE, so it is the default and captures the essence of "anything else is not 'tidy'." The NA option captures the sense that a tidy or un-tidied R dataset structure just is not applicable, because R is doing nothing to post-process your API output; R is just the messenger.

I would like to keep it to a single three-valued argument, but I am open to suggestions along those lines. I considered things like output = c("data.frame", "list", "character") or similar, but found this frustrating because it portrays the options as being about nothing more than object class. Or output = c("tidy", "fromJSON", "string"), etc. Nothing felt quite right. But tidy = TRUE seems spot on. Turning that off with FALSE seems okay, and NA is really only going to matter to hardcore power users who probably don't even want to be accessing the API from R in the first place but just happen to be.
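In other words, roughly the following (using an example series ID; requires an API key):

```r
id <- "ELEC.GEN.ALL-AK-99.A"        # example series id

x1 <- eia_series(id)                # tidy = TRUE (default): tidied tibble output
x2 <- eia_series(id, tidy = FALSE)  # the list returned by jsonlite::fromJSON()
x3 <- eia_series(id, tidy = NA)     # the raw JSON string, untouched by R
```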

httr functions

Honestly, I didn't even know about httr::modify_url(). For package development I not only import as few packages as I can, but when I do import one, I also use as few of its functions as possible. I use paste(), paste0(), and sprintf() often in packaged code, and I don't add dependencies like glue inside other packages (it wouldn't make a difference here since httr is already required). The only places I relied on httr were the API communications that I didn't otherwise know how to put together. Outside of necessities like that, I tend to use base R for things like string manipulation.
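For comparison, both approaches can build the same request URL; a generic sketch, not the package's actual .eia_url() internals (the endpoint and parameters shown are illustrative):

```r
library(httr)

key <- "MYKEY"
id  <- 711280   # example category id, for illustration only

# paste0() approach:
url1 <- paste0("https://api.eia.gov/category/?api_key=", key, "&category_id=", id)

# httr::modify_url() approach; query parameters are encoded automatically:
url2 <- modify_url("https://api.eia.gov/category/",
                   query = list(api_key = key, category_id = id))

identical(url1, url2)   # TRUE in this simple case
```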

User agent

Thanks! That was my first time doing that. Just for context, the user agent is one thing that would have to be updated following any repository transfer, and it would be up to rOpenSci whether the new user agent should be the new repo URL or something else. I have no opinion on that, but it would have to change from what it is now. On that note, the other breaking effect of a transfer is that the encrypted API key in .travis.yml would stop working, because it is user/repo specific. But these are details for later.
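For anyone reading along, a common httr pattern for attaching a user agent, shown generically (the agent string is just an example, and this is not necessarily how eia sets it internally):

```r
library(httr)

ua <- user_agent("https://github.com/ropensci/eia")   # example agent string

# Per-request:
# resp <- GET("https://api.eia.gov/category/", ua, query = list(api_key = "MYKEY"))

# Or globally, for all subsequent httr requests:
set_config(ua)
```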

Next steps

@melvidoni @daranzolin @ottlngr Okay, I think I have addressed all reviewers' and editor's comments and questions, provided clarifications, etc. I've made updates that you all called for such as the addition of graphs in the README and vignettes, fixes to documentation, etc.

There wasn't a lot of explicit "change this, fix that" in the reviews, so I'd like to turn it over to you all to review the changes I did make. For example, are the basic demo graphs acceptable?

I did use ggplot2, so they are a bit sleeker out of the box than base graphics (the package was added only to the DESCRIPTION Suggests field). But I kept them very minimal, just like any use of dplyr or tidyr in the vignettes, because I don't want to draw attention away from the tutorial message with too much extraneous code.
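The demo graphs are roughly of this flavor; a sketch only, assuming (per the vignettes' unnest examples) that the tidy eia_series() result nests its observations in a data list-column with date and value fields:

```r
# Minimal demo plot (requires an API key). Column names here are assumptions,
# not verified code from the package.
library(eia)
library(ggplot2)

d <- eia_series("ELEC.GEN.ALL-AK-99.A")   # example series id
x <- tidyr::unnest(d, cols = data)        # flatten the nested observations

ggplot(x, aes(date, value)) + geom_line()
```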

Is there anything you all firmly want changed that I missed? Should we continue discussion on certain topics?

Thank you!

@daranzolin (Member) commented:

@leonawicz The graphs look good. No further comments or suggestions from me. But as for our previous discussion on cascading selectInputs in Shiny, here's a good example.

@melvidoni I updated my original review comment confirming that the package author has responded to my review.

@ottlngr commented Oct 28, 2019

@leonawicz Thanks for your detailed answers and sharing your reasoning, especially on the tidy argument. I can totally relate to it and it was not meant to be a "change this, fix that" comment, anyway.

No further questions, suggestions or comments.

  • The author has responded to my review and made changes to my satisfaction. I recommend approving this package.

@melvidoni I updated my review.

@melvidoni (Contributor) commented:

Approved! Thanks @leonawicz for submitting and @ottlngr and @daranzolin for your reviews!

To-dos:

  • Transfer the repo to rOpenSci's "ropensci" GitHub organization under "Settings" in your repo. I have invited you to a team that should allow you to do so. You'll be made admin once you do.
  • Add the rOpenSci footer to the bottom of your README
    " [![ropensci_footer](https://ropensci.org/public_images/ropensci_footer.png)](https://ropensci.org)"
  • Fix all links to the GitHub repo to point to the repo under the ropensci organization.
  • If you already had a pkgdown website, fix its URL to point to https://docs.ropensci.org/package_name and deactivate the automatic deployment you might have set up, since the site will now be built centrally like for all rOpenSci packages; see http://devdevguide.netlify.com/#docsropensci. In addition, in your DESCRIPTION file, include the docs link in the URL field alongside the link to the GitHub repository, e.g.: URL: https://docs.ropensci.org/foobar (website) https://github.com/ropensci/foobar
  • Add a mention of the review in DESCRIPTION via rodev::add_ro_desc().
  • Fix any links in badges for CI and coverage to point to the ropensci URL. We no longer transfer AppVeyor projects to the rOpenSci AppVeyor account, so after transferring your repo to rOpenSci's "ropensci" GitHub organization the badge should be [![AppVeyor Build Status](https://ci.appveyor.com/api/projects/status/github/ropensci/pkgname?branch=master&svg=true)](https://ci.appveyor.com/project/individualaccount/pkgname).
  • We're starting to roll out software metadata files to all rOpenSci packages via the Codemeta initiative; see https://github.com/ropensci/codemetar/#codemetar for how to include it in your package. After installing that package, it should be as easy as running codemetar::write_codemeta() in the root of your package.

Should you want to acknowledge your reviewers in your package DESCRIPTION, you can do so by making them "rev"-type contributors in the Authors@R field (with their consent). More info on this here.

Welcome aboard! We'd love to host a blog post about your package - either a short introduction with one example, or a longer post with some narrative about its development or something you learned, plus an example of its use. If you are interested, review the instructions and tag @stefaniebutland in your reply. She will get in touch about timing and can answer any questions.

We've put together an online book with our best practices and tips; this chapter starts the third section, which provides guidance for after onboarding. Please tell us what could be improved; the corresponding repo is here.

@leonawicz (Member, Author) commented:

Hi all, @ottlngr @daranzolin, thanks so much for your help reviewing the package! And thanks @melvidoni for your guidance. I have transferred the repo and made the package changes affected by the transfer. It appears my encrypted API key in .travis.yml may still work (?), but I'll only know for sure once I get codecov.io integration working again. Please let me know when the admin status is active. Thanks! :)

@melvidoni (Contributor) commented:

Hello @leonawicz! I have already given you admin rights on your repo and invited you to an rOpenSci team. Can I publish this on Twitter? If so, please let me know your handle.

@leonawicz (Member, Author) commented:

Hi @melvidoni, I see the team and settings tab now, thanks! Yes, we're all set to move forward. I just pushed a cran-comments.md update for the next CRAN release, and it looks like Travis-CI and CodeCov are still working as intended after the transfer. My Twitter handle is the same. I plan to mention it on Twitter as well. Thanks!
