
[REVIEW]: AdOpT-NET0: A technology-focused Python package for the optimization of multi-energy systems #7402

Open
editorialbot opened this issue Oct 23, 2024 · 56 comments
Labels: Python, recommend-accept (Papers recommended for acceptance in JOSS), review, TeX, Track: 3 (PE) Physics and Engineering

Comments

@editorialbot

editorialbot commented Oct 23, 2024

Submitting author: @JeanWi (Jan F. Wiegner)
Repository: https://github.com/UU-ER/AdOpT-NET0
Branch with paper.md (empty if default branch): joss-submission
Version: v0.1.7
Editor: @AdamRJensen
Reviewers: @trevorb1, @datejada
Archive: 10.5281/zenodo.14361112

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/12578885161d419241e50c5e745b7a11"><img src="https://joss.theoj.org/papers/12578885161d419241e50c5e745b7a11/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/12578885161d419241e50c5e745b7a11/status.svg)](https://joss.theoj.org/papers/12578885161d419241e50c5e745b7a11)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@trevorb1 & @datejada, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @AdamRJensen know.

Please start on your review when you are able, and be sure to complete your review within the next six weeks, at the very latest.

Checklists

📝 Checklist for @datejada

📝 Checklist for @trevorb1

@editorialbot

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.1016/j.apenergy.2017.07.142 is OK
- 10.1021/acs.iecr.2c00681 is OK
- 10.1016/j.apenergy.2017.01.089 is OK
- 10.1016/j.compchemeng.2022.107816 is OK
- 10.46855/energy-proceedings-5280 is OK
- 10.1038/s41597-019-0199-y is OK
- 10.1016/j.ijrefrig.2021.10.002 is OK
- 10.1021/ACS.IECR.3C02226 is OK
- 10.1016/j.apenergy.2023.120738 is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: Energy system integration options for the North Se...

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@editorialbot

Software report:

github.com/AlDanial/cloc v 1.90  T=0.16 s (1335.3 files/s, 194235.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          55           3411           3658          12216
JSON                            90              2              0           4937
SVG                              2             64              0           1642
CSV                             13              0              0           1103
reStructuredText                30            393            310            793
Jupyter Notebook                 3              0            855            245
YAML                             9             38             20            178
Markdown                         2             26              0            139
TeX                              1             10              0            100
TOML                             1              7              0             53
DOS Batch                        1              8              1             26
make                             1              4              7              9
-------------------------------------------------------------------------------
SUM:                           208           3963           4851          21441
-------------------------------------------------------------------------------

Commit count by author:

   331	6574114
   156	Jean
   141	Tiggeloven
    92	julia1071
    64	Jan Wiegner
    25	Luca
    24	Wiegner
    15	IngeOssentjuk
     7	6145795
     7	lucabert01
     2	Bertoni

@editorialbot

Paper file info:

📄 Wordcount for paper.md is 895

✅ The paper includes a Statement of need section

@editorialbot

License info:

✅ License found: MIT License (Valid open source OSI approved license)

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@AdamRJensen

Hi @trevorb1, @datejada

Getting started on a review may seem daunting, but even spending just 20 minutes will allow you to check off a few of the first check boxes of the review.

Don't hesitate to reach out if you have questions on how to get started.

@datejada

datejada commented Nov 6, 2024

Review checklist for @datejada

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/UU-ER/AdOpT-NET0?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@JeanWi) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data/research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1. Contribute to the software 2. Report issues or problems with the software 3. Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@datejada

datejada commented Nov 6, 2024

@AdamRJensen I have reviewed the checklist and the PDF. Overall, everything is in order regarding the checklist, except for the comparison with other similar packages. As for the PDF, I recommend that the authors clarify what they mean by the term "Flexible." For example, Table 1 states that the model offers flexibility in temporal resolution from 15 minutes to hours. However, does this flexibility apply to the entire model, or can different components of the model operate at varying resolutions (e.g., electricity every 15 minutes and gas every 6 hours)? More details on the term "Flexible" in the PDF would be appreciated.

Please let me know what the next steps are in the review process.

Thank you!

@AdamRJensen

Please let me know what the next steps are in the review process.

@datejada Thank you very much for your swift review! The authors now have to address your comments, after which you'll hopefully be able to tick off the last boxes.

@JeanWi

JeanWi commented Nov 8, 2024

@datejada: Thanks for the review and your comments! We have implemented changes to the paper based on your comments, also referring to a recently published review of energy system models.
We hope the changes aid understanding; please let us know if you have any further comments!

@AdamRJensen: Is there a way to reproduce the article PDF for the article proof?

@AdamRJensen

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@datejada

@JeanWi Thank you for the changes. I like that you use the definitions in Hoffmann's paper, which makes it easier to relate to other available models.

@AdamRJensen all my checklist is complete, I don't have further comments. Please let me know if there is something else I need to do. BR, Diego.

@JeanWi

JeanWi commented Nov 11, 2024

Thanks, Diego, for the review, we really appreciate it!

@AdamRJensen

@AdamRJensen all my checklist is complete, I don't have further comments. Please let me know if there is something else I need to do. BR, Diego.

If you could just tick that last box it'd be great. Again thank you very much for your thorough review!

@trevorb1

trevorb1 commented Nov 15, 2024

Review checklist for @trevorb1

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/UU-ER/AdOpT-NET0?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@JeanWi) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data/research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1. Contribute to the software 2. Report issues or problems with the software 3. Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@trevorb1

trevorb1 commented Nov 16, 2024

Comments

Hi @JeanWi, thanks so much for submitting AdOpT-NET0 and suggesting me as a reviewer! This is a super interesting project that has been fun to test out, and it seems like a great addition to the ESOM space! Please find below my comments and questions. If any of my comments are unclear, or you feel they are not valid, please let me know and I can clarify and/or discuss further!

General Checks

All items complete!

Functionality

  • It would be good to include the link to the documentation site in the upper right hand side of the GitHub repository; under the "About" section.
  • Installation on Python 3.13 does not work for me. Please see this issue.
  • Functionality I have not been able to confirm yet, as I am experiencing a bug working through the documentation that I can't solve. Once this is solved, I will come back and assess functionality.

Documentation

General Comments

I appreciate the effort that has gone into documenting this project! This is an often overlooked step that can easily kill a project, so thank you for the effort! However, it feels like the Getting Started portion of the docs has not been updated to reflect recent changes, as I am experiencing several issues that are described below. One of these issues is not letting me move forward with the review.

The current state of the documentation is not clear enough to guide the user through the configuration process. There is significant file management required to configure a model (which I fully understand, as this framework lends itself to user configuration); however, instructions on how to manage these configuration files need to be improved. I would suggest either simplifying the configuration steps (i.e., not having to copy files after running the template command), or being more explicit about which files the user is modifying/copying in each step (i.e., using graphics such as a tree diagram or something similar). Furthermore, directing users to locations that describe some of the core configuration options is needed (see specific issues below for examples).

With that said, I think a more structured tutorial page may help answer many of my concerns. For example, if users are given a graphic showing a simple energy system to build (2 nodes, 2 networks, and just a couple of techs), with an accompanying simple question to answer, and are walked through how to set up this model, this will help. The generalized documentation can then point to a specific example, allowing instructions like the one below (taken from Define Input Data Step 2)

Then, for each of the networks that you specify, an input data folder with that network name should be added in the corresponding folder (“existing” or “new”) in the network_topology folder.

to be replaced with something along the lines of:

  1. Create the folder data/period1/network_topology/existing/existingSimple
  2. Create the folder data/period1/network_topology/new/existingSimple
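The two steps above could be sketched in Python as follows (a minimal sketch; the folder names period1 and existingSimple are the illustrative examples from this suggestion, not names prescribed by AdOpT-NET0):

```python
from pathlib import Path

# Create the suggested network_topology folders; "period1" and
# "existingSimple" are illustrative names taken from the suggestion above.
for kind in ("existing", "new"):
    folder = Path("data/period1/network_topology") / kind / "existingSimple"
    folder.mkdir(parents=True, exist_ok=True)
```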

Case study data and results are provided; however, I have not been able to generate this data myself by configuring a model following the documentation site.

Specific Issues

  • In Defining System Topology it is unclear where the .\data\technology_data folder is to reference for naming conventions. I see that in a later step this folder is created, but in the instructions it's unclear what this folder is at this point.

  • In Defining System Topology it states that investment periods are "a future feature, for now only single-period analysis is possible." This contradicts the paper, which states that multi-period studies are possible. Please clarify.

  • In Define Input Data the users are told to specify the networks in the model in step 2. While I can see the different network lists, it's unclear what the differences between the networks are. For example, are electricityOffshore networks only for underwater interconnectors that may have different constraints/parameters applied to them? Then how do electricityOnshore, electricityOffshore, and electricitySimple interact? Are all the constraints under the Network class available to all network types? I would guess so based on the documentation note that All networks are modelled using this class, and there are no subclasses of specific network models, but then why are users only allowed to pick these networks?

  • In Define Input Data Step 2, the users are told to copy a size_max_arcs.csv to a folder. This CSV does not exist in my templates.

  • When defining my input data in Step 2, it's unclear why I need distance, since my node locations are already geolocated from step 1. Is this to represent scenarios where my node may be at a central location in an arbitrary region, but I want my connection to be at a demand centre? But if that is the case, we don't define these 'arbitrary shapes'? Can you please clarify?

  • In Define Input Data Step 2, the second example for creating a connection (Connection: Example - allow_only_one_direction) seems to be a bidirectional link, but it's labelled as only allowing one direction. Please clarify.

  • In Define Input Data Step 2, the docs state "In order to read in the required data for our "electricityOffshore" network". It is still unclear what the difference between these networks is. Does the electricityOffshore network contain data on sea depth, tides, etc., that assist in assessing technology deployment decisions? Please clarify.

  • In Define Input Data Step 3, the example json data gives an error, and I have not been able to move past this point. Please see this and this issue.

Software Paper

Overview

This paper describes the Python package AdOpT-NET0, a modelling framework for performing energy system optimization modelling (ESOM) studies with a focus on technology representation. Motivation for the need for multi-sector ESOM frameworks is first presented, followed by an overview of the features of AdOpT-NET0. Furthermore, a note on how AdOpT-NET0 builds on an existing MATLAB model to be open source with feature improvements is given. Finally, a statement of need is presented highlighting how AdOpT-NET0 fits into the expanding field of ESOM frameworks.

Overall, the paper read well, was clear and concise, and stated clearly how AdOpT-NET0 fits into the literature (with the main emphasis placed on technology representation). Please find below minor clarification comments.

Comments

  • Line 21-22: You state that multi-energy systems "are highly complex but also offer synergies to reduce costs and environmental impact." This reads like a definitive statement that deserves a reference. (Not that I disagree with the statement at all! Just a single reference here would be good to defend the need for multi-energy modelling.)

  • Table 1: Thanks for the comprehensive table! Can you please be more specific about the Pareto front methods implemented? Similar to how you have given stochastic scenarios through "Monte Carlo" sampling, listing the Pareto front methods would be good (i.e., MGA, ε-constraint, etc., following definitions from the Hoffmann paper).

  • Line 30-31: (Apologies in advance for this rather pedantic wording comment!) You note that AdOpT-NET0 "allows for a highly realistic assessment of individual technologies and their integration into an energy system". I have trouble with this wording when no evidence is provided (i.e., a reference to a research paper / study) to show that improved technology representation correlates to more realistic technology investment/integration decisions. AdOpT-NET0 can describe technology-specific operational constraints well (and closer represents reality), but this does not guarantee highly realistic integration answers over multiple time horizons. Can you please clarify how AdOpT-NET0's detailed technology representation translates into realistic assessments of energy integration questions? Or can you please soften the wording to align with a prescriptive analytics tool (i.e., guide decision making) rather than a predictive analytics tool (i.e., predicting what will happen in the future)? To clarify, I have no issue with the statement of representing technologies more realistically (which you give references for later), but rather how this translates to more realistic future energy integration questions (which is likely difficult to measure?).

  • Figure 1: The figure is clear, I just have a question about the reference. I see it says adapted from Tiggeloven et al.; and it seems to be a cropped version of Figure 2 from that paper (specifically, box 2d)? I am unsure if this requires the license of the figure to also be included with caption? @JeanWi, if this figure is cropped from the Tiggeloven et al. reference, maybe @AdamRJensen can confirm if stating "adaptation" alone is okay, or if the figure license is also required?

  • Line 32-33: (A manuscript suggestion that you are free to take or leave) You note that "several complexity reduction algorithms can be adopted to deal with infeasible computation times". I believe this is a super valuable point, and explicitly listing them for the reader would be useful, e.g., something along the lines of "Complexity reduction algorithms for modelling seasonal storage (Gabrielli et al., 2018) and high VRE systems (Weimann & Gazzani, 2022) are implemented." Having these algorithms already integrated is a big plus of AdOpT-NET0 and would be good to highlight!

  • Line 36: It's great that the closed MATLAB version has been ported to Python and made openly available. Are there any relevant publications or similar that use or describe the MATLAB version that require referencing here? Or has it been used solely for in-house business applications?

  • Statement of need paragraph: Listing the specific technologies you represent in greater detail compared to existing models is great, as it helps the reader understand the novelty of this framework! Where further clarification would help is with the target user. In the opening sentence you break down the field into complex NLP frameworks and simplified LP frameworks. The users of these different frameworks may differ, though (i.e., policy-focused research questions will use LP frameworks, while power flow research questions (as you mention) may want the NLP detail). It is a little unclear to me whether AdOpT-NET0 can be used both as a policy support tool and a power flow analysis tool, or just one or the other (or a category I am missing!).

  • Has AdOpT-NET0 been used in any publications (other than the forthcoming one by Wiegner et al.)? If so, please list them, else please just disregard this comment.

@JeanWi

JeanWi commented Nov 18, 2024

Hi @trevorb1, thanks a lot for taking the time to write this review, we really appreciate the time you put into this! Sorry for the bug in the documentation, that's indeed plain wrong. For the existing technologies it should have read (note the curly brackets and the double quotes):

"existing": {"WindTurbine_Onshore_1500": 2, "Photovoltaic": 2.4},

We will start tackling your points now, but I thought I'd let you know already so you can continue in case you find the time.
Looking forward to your comments on trying out the tool!
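As a quick sanity check, the corrected line above parses as valid JSON once wrapped in braces (a minimal sketch; the enclosing topology file structure is not shown here):

```python
import json

# The corrected fragment from the comment above, wrapped in braces so it
# forms a complete JSON document for parsing.
snippet = '{"existing": {"WindTurbine_Onshore_1500": 2, "Photovoltaic": 2.4}}'
parsed = json.loads(snippet)
print(parsed["existing"]["Photovoltaic"])  # -> 2.4
```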

@JeanWi

JeanWi commented Nov 18, 2024

@AdamRJensen Regarding Figure 1 (and actually also Figure 2): This is an adaptation of a figure from a work by Julia Tiggeloven (referenced in the caption; she also coauthored this paper, licensed under CC-BY-4.0). Also Figure 2 is an adaptation of a figure in one of my papers (only available as a pre-print so far). So I think it shouldn't be a problem. Let me know if I missed something.

@AdamRJensen

AdamRJensen commented Nov 18, 2024

@AdamRJensen Regarding Figure 1 (and actually also Figure 2): This is an adaptation of a figure from a work by Julia Tiggeloven (referenced in the caption; she also coauthored this paper, licensed under CC-BY-4.0). Also Figure 2 is an adaptation of a figure in one of my papers (only available as a pre-print so far). So I think it shouldn't be a problem. Let me know if I missed something.

As long as the figures are sufficiently different to actually be considered "adapted" and not copied then I think it's fine.

@trevorb1

Hi @JeanWi; thanks so much for providing the fix here so I can move on with the review! I appreciate it. Regarding the figures, thank you for clarifying the license on them. If you and @AdamRJensen are okay with the current citing of the figures, that works great for me :)

Also, apologies for this note of mine from the previous comment:

With that said, I think a more structured tutorial page may help answer many of my concerns.

I completely missed seeing that three different case studies have already been set up and documented here. I only saw the case_study_results folder in the repo during my first look through. Totally my fault for missing that. My comments below now include walking through these case studies.

Documentation

Please find below the remainder of my items. This list can be seen as an extension to Documentation - Specific Issues list from my previous comment:

Specific Issues

  • In Define Input Data Step 4, it notes "For the carriers, whether or not curtailment of generic production is possible in EnergybalanceOptions.JSON". I found it difficult to locate this file and would recommend putting the exact location (<data_path>/<investment_period>/node_data/<node>/carrier_data/EnergybalanceOptions.json) or some other way to guide the user there.
  • In Define Input Data, for specifying time-dependent data, I could not get the adopt.load_climate_data_from_api(...) function to run. Please see this issue. (Once running, I'm excited to try this function, though! Having AdOpT-NET0 handle climate data for users is a nice feature!)
  • In Define Input Data, the instruction numbers do not carry over to time-dependent data (i.e., it goes from 1., 2., ... to just a paragraph). It's unclear if these steps are required.
  • In Define Model Configuration, there is a broken table link. See this issue.
  • Please see some general comments on Case Study 1 in this issue
  • Case study 2 worked great for me!
  • Please see a minor issue on case study 3 here. (Sidenote; this case study was very well presented and descriptive!)
  • In the Advanced Topics: Model Configuration section, it would be good to add what the different objective options are (i.e., what the differences are between [‘costs’, ‘emissions_pos’, ‘emissions_net’, ‘emissions_minC’, ‘costs_emissionlimit’, ‘pareto’]). I do see that under some sections they are mentioned (i.e., for Pareto studies it says to change the option to pareto), but it's not necessarily clear what the difference between the emission scenarios is and how they interact with the emission_limit option (unless I just missed seeing where this is in the docs!)
  • In the In Short section, there is an issue when specifying the path of the input data. Please see this issue
  • In the In Short section, there is an issue with the arguments in the fill_carrier_data(...) function. Please see this issue.
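On the exact-path recommendation above, the suggested pattern could be sketched with pathlib (the placeholder values "period1" and "node1" are hypothetical, not from the AdOpT-NET0 docs):

```python
from pathlib import Path

# Build the suggested location of EnergybalanceOptions.json; the placeholder
# values ("period1", "node1") are hypothetical examples.
data_path = Path("data")
investment_period = "period1"
node = "node1"
options_file = (
    data_path / investment_period / "node_data" / node
    / "carrier_data" / "EnergybalanceOptions.json"
)
print(options_file)
```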

General Notes

  • The case studies are great! Exactly what I was looking for to learn AdOpT-NET0. Just a suggestion (i.e., not a required change), but highlighting these better in the docs would be beneficial. My first instinct was to go to the Getting Started section, so having these examples accompany the instructions in that section would have been great.
  • The visualization dashboard is quite nice!
  • Again, apologies for missing the case studies originally. I hope this didn't cause too much additional work on your side.

Automated Tests

  • Tests exist with automated CI workflow!
  • One test fails for me; please see this issue.
  • Please add that users must pip install coverage to run tests; please see this issue
  • The tests create a whole bunch of new files in the directory I run from? Do you experience this as well? If so, I would suggest either using temporary files, or deleting these files at the end of the tests. Please see this issue.
  • If a significant portion of the code base is tested, I would suggest adding a code coverage badge to showcase that! However, I do understand that testing both optimization models and file management functions can be tricky, so coverage may not necessarily be reflective of the work that has gone into this project. Therefore, this is just a suggestion (not a requirement), as I do see there are tests for main functionality, and the claimed functionality works for me.
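The temporary-files suggestion above could look like this sketch, using Python's standard tempfile module (the file name is illustrative, not from the AdOpT-NET0 test suite):

```python
import tempfile
from pathlib import Path

# Write test artifacts into a self-cleaning temporary directory instead of
# the current working directory; the directory is removed on exit.
with tempfile.TemporaryDirectory() as tmp:
    out_file = Path(tmp) / "results.h5"  # illustrative file name
    out_file.write_text("placeholder results")
    existed_inside = out_file.exists()
print(existed_inside)  # -> True; the directory itself is gone afterwards
```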

Community Guidelines

  • A contributing guide exists that includes detailed instructions on how to contribute, code style, where to report bugs, etc.
  • The only thing I would add is including at least one issue template. Issue templates are quite easy to set up and can prompt contributors to give certain information you are looking for (i.e. OS, package version, etc.). This way users will know exactly what you mean by a detailed description. Please see this issue.
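
One possible starting point for such a template, using GitHub's issue forms syntax; the file name, labels, and fields below are illustrative, not prescriptive:

```yaml
# .github/ISSUE_TEMPLATE/bug_report.yml (illustrative example)
name: Bug report
description: Report a problem with AdOpT-NET0
labels: [bug]
body:
  - type: textarea
    attributes:
      label: Description
      description: A clear description of the bug and steps to reproduce it.
    validations:
      required: true
  - type: input
    attributes:
      label: AdOpT-NET0 version
    validations:
      required: true
  - type: input
    attributes:
      label: Operating system
```

Making the version and OS fields required is what ensures the "detailed description" actually arrives with every report.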

Final Notes

  • Well done to all the authors! This is a super cool project!
  • If any of my comments require further explanation, please just let me know and I will elaborate.
  • If you do not agree with any of my comments, please just let me know your concerns and we can discuss further.

@JeanWi

JeanWi commented Dec 3, 2024

Hi @trevorb1, thanks for checking everything! We really appreciate all the effort and for spotting the unclarities and inconsistencies that come with being a bit lazy with the documentation sometimes...

All changes have been implemented in this PR and are part of v0.1.6 of AdOpT. Find below the answers to your remaining comments.

Please see UU-ER/AdOpT-NET0#379 with the gurobipy install

This is a very annoying problem, we answered in the issue itself. Thanks for highlighting! We hope it was solved to your satisfaction.

I still think clarifying what data is used by the geolocated points vs. manual distances is required. I see the note of "the distance needs to be added manually, even if the node locations were specified previously", but this does not clarify what data is being pulled from where. Are geolocations only used for renewable potentials?

To tackle this, we have (1) made the specification of node locations a requirement and (2) added the following description to the documentation: "Climate data is loaded for these locations. Additionally, they are used to calculate the position of the sun for PV modelling (using pvlib). Note that the distance between nodes is not based on the provided locations."

Clarification on the differences between different networks still needs to be improved, I feel.

We also added the following information to the documentation: "The technologies and networks shipped with AdOpT-NET0 can be seen as templates, and the performance and cost parameters as well as technology-specific options can be further modified by the user. Therefore, you can modify the json files in the input data folder of your case study after they have been copied. It is also possible to specify technologies or networks that are not provided in AdOpT-NET0 by using the models defined in the network class or the technology classes."

Additionally we now provide more information on bi-directional networks in the network documentation.

Software Paper

We have fixed the issues in the latest commit to the joss_submission branch. The two papers mentioned are forthcoming, so we did not specify a year. @AdamRJensen: could you produce a new article proof? Also let us know if you would like to see the MATLAB papers cited in the respective section.

@trevorb1

trevorb1 commented Dec 3, 2024

Thanks for the quick fixes, @JeanWi!

  • gurobipy issue is solved.
  • Node location issue solved.
  • The documentation for networks has improved. However, the purpose of the pre-defined networks still remains a little vague to me, given that parameters are the same between them and users are free to add new networks. With that said, I don't want to hold up the review on this single point, which may just be a wording issue and would sort itself out if using AdOpT-NET0 for a large model (plus the other reviewer did not have an issue with the wording). So I am fine to mark this point as complete :)

@trevorb1

trevorb1 commented Dec 3, 2024

@AdamRJensen; all items in my checklist are now complete, and I recommend accepting AdOpT-NET0!

Well done to @JeanWi and the other co-authors! Thank you for your detailed feedback to all of my comments!

Thank you to @AdamRJensen for facilitating the review process!

@JeanWi

JeanWi commented Dec 4, 2024

  • The documentation for networks has improved. However, the purpose of the pre-defined networks still remains a little vague to me, given that parameters are the same between them and users are free to add new networks. With that said, I don't want to hold up the review on this single point, which may just be a wording issue and would sort itself out if using AdOpT-NET0 for a large model (plus the other reviewer did not have an issue with the wording). So I am fine to mark this point as complete :)

I agree with you that this can be a bit confusing for new users and I have to admit that we also did not update the data very well. When we use the model, typically we specify the data per case study. But our long-term vision is to have a database of technology data connected to AdOpT-NET0 with respective financial and technical data.
We discussed this issue internally and came up with two options: (1) deleting the network json files, keeping just one to use as a template, or (2) keeping it as it is, allowing for a wider range of network templates but requiring the user to specify the data for each case study individually. Both options have their pros and cons (of course we have a preference for leaving it as it is, though :D).

With this, I would also like to thank you for the in-depth review! As mentioned before, you really helped to make AdOpT-NET0 a better package!

@JeanWi

JeanWi commented Dec 9, 2024

@AdamRJensen: Do you still need anything from our side? To our knowledge we have addressed all comments; the only open point is how you would prefer previous work to be cited. Maybe you are on vacation, so I do not want to pressure you, but I just wanted to make sure that we didn't miss anything.

@AdamRJensen

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@AdamRJensen

@JeanWi once you address the following issues, we should be ready for publishing 🥳
UU-ER/AdOpT-NET0#389
UU-ER/AdOpT-NET0#388
UU-ER/AdOpT-NET0#387
UU-ER/AdOpT-NET0#386
UU-ER/AdOpT-NET0#385
UU-ER/AdOpT-NET0#384

@JeanWi

JeanWi commented Dec 10, 2024

@AdamRJensen: Thanks for checking 👍. We have processed the issues and the PR. Could you close the issues if you feel we have addressed them? Let us know if you need anything else!

@AdamRJensen

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@AdamRJensen

AdamRJensen commented Dec 16, 2024

@JeanWi At this point could you:

  • Make a tagged release of your software, and list the version tag of the archived version here.
  • Archive the reviewed software in Zenodo or a similar service (e.g., figshare, an institutional repository)
  • Check the archival deposit (e.g., in Zenodo) has the correct metadata. This includes the title (should match the paper title) and author list (make sure the list is correct and people who only made a small fix are not on it). You may also add the authors' ORCID.
  • Please list the DOI of the archived version here.

I can then move forward with recommending acceptance of the submission.

@AdamRJensen

AdamRJensen commented Dec 16, 2024

Post-Review Checklist for Editor and Authors

Additional Author Tasks After Review is Complete

  • Double check authors and affiliations (including ORCIDs)
  • Make a release of the software with the latest changes from the review and post the version number here. This is the version that will be used in the JOSS paper.
  • Archive the release on Zenodo/figshare/etc and post the DOI here.
  • Make sure that the title and author list (including ORCIDs) in the archive match those in the JOSS paper.
  • Make sure that the license listed for the archive is the same as the software license.

Editor Tasks Prior to Acceptance

  • Read the text of the paper and offer comments/corrections (as either a list or a pull request)
  • Check that the archive title, author list, version tag, and the license are correct
  • Set archive DOI with @editorialbot set <DOI here> as archive
  • Set version with @editorialbot set <version here> as version
  • Double check rendering of paper with @editorialbot generate pdf
  • Specifically check the references with @editorialbot check references and ask author(s) to update as needed
  • Recommend acceptance with @editorialbot recommend-accept

@AdamRJensen

@editorialbot check references

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.1016/j.apenergy.2017.07.142 is OK
- 10.1021/acs.iecr.2c00681 is OK
- 10.48550/arXiv.2411.00540 is OK
- 10.1016/j.apenergy.2017.01.089 is OK
- 10.1016/j.compchemeng.2022.107816 is OK
- 10.46855/energy-proceedings-5280 is OK
- 10.1038/s41597-019-0199-y is OK
- 10.1016/j.ijrefrig.2021.10.002 is OK
- 10.1021/ACS.IECR.3C02226 is OK
- 10.1016/j.apenergy.2023.120738 is OK
- 10.1016/j.adapen.2024.100190 is OK
- 10.1016/j.energy.2013.10.041 is OK
- 10.1016/j.rser.2019.109629 is OK
- 10.1007/978-3-030-68928-5 is OK
- 10.21105/joss.05994 is OK
- 10.1016/j.apenergy.2022.119029 is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: Optimizing emissions reduction in ammonia-ethylene...

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@JeanWi

JeanWi commented Dec 16, 2024

@JeanWi At this point could you:

  • Make a tagged release of your software, and list the version tag of the archived version here.
  • Archive the reviewed software in Zenodo or a similar service (e.g., figshare, an institutional repository)
  • Check the archival deposit (e.g., in Zenodo) has the correct metadata. This includes the title (should match the paper title) and author list (make sure the list is correct and people who only made a small fix are not on it). You may also add the authors' ORCID.
  • Please list the DOI of the archived version here.

I can then move forward with recommending acceptance of the submission.

@AdamRJensen: Thanks!
I did not make a new tag, as there were no new changes. The version is 0.1.7 and is archived here, including the respective metadata (authors and title). The DOI of that version is: 10.5281/zenodo.14361112.

Is that ok, or would you like to have a new release/tag?

Currently the branch with the JOSS paper is out of date with the main branch; is this ok? We can also merge the main branch into the joss_submission branch so it is up to date.

@AdamRJensen

Is that ok, or would you like to have a new release/tag?

Currently the branch with the JOSS paper is out of date with the main branch; is this ok? We can also merge the main branch into the joss_submission branch so it is up to date.

I think all those things are fine.

@JeanWi

JeanWi commented Dec 17, 2024

Perfect, thanks for checking! I cannot tick off the tasks in the checklist, but then we have them all covered.

@AdamRJensen

@editorialbot set 10.5281/zenodo.14361112 as archive

@editorialbot

Done! archive is now 10.5281/zenodo.14361112

@AdamRJensen

@editorialbot set 0.1.7 as version

@AdamRJensen

@editorialbot set v0.1.7 as version

@editorialbot

Done! version is now v0.1.7

@AdamRJensen

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@AdamRJensen

@editorialbot recommend-accept

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.1016/j.apenergy.2017.07.142 is OK
- 10.1021/acs.iecr.2c00681 is OK
- 10.48550/arXiv.2411.00540 is OK
- 10.1016/j.apenergy.2017.01.089 is OK
- 10.1016/j.compchemeng.2022.107816 is OK
- 10.46855/energy-proceedings-5280 is OK
- 10.1038/s41597-019-0199-y is OK
- 10.1016/j.ijrefrig.2021.10.002 is OK
- 10.1021/ACS.IECR.3C02226 is OK
- 10.1016/j.apenergy.2023.120738 is OK
- 10.1016/j.adapen.2024.100190 is OK
- 10.1016/j.energy.2013.10.041 is OK
- 10.1016/j.rser.2019.109629 is OK
- 10.1007/978-3-030-68928-5 is OK
- 10.21105/joss.05994 is OK
- 10.1016/j.apenergy.2022.119029 is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: Optimizing emissions reduction in ammonia-ethylene...

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@editorialbot

👋 @openjournals/pe-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#6260, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot editorialbot added the recommend-accept Papers recommended for acceptance in JOSS. label Dec 17, 2024