Set up resource to allow users to quickly decide on what simulator/tool they want to use #61
Comments
Hi folks, any ideas on this? We discussed it briefly at the meeting, and it turns out that this sort of list is not trivial to set up. On one hand, there's no objective way to say that simulator A does something better than simulator B under all conditions, and on the other, just listing each simulator's features won't really be helpful because then new users will not be able to decide if they should use NEURON or a brand new simulator that's not so well established. So, if you have any ideas on how we can implement this sort of thing, please let us know. Let's time-box this and close it in 2 weeks (on the 25th of May) if we don't get any responses.
I think this would require quite a bit of discussion (as you mentioned, it's not a very straightforward decision). That said, some criteria/choices would include:
Then there are more subjective ones:
I think the main problem, as @sanjayankur31 mentioned, is that such a list will either be very opinionated, say "for multicompartmental modeling use NEURON, for point-neuron modeling use NEST", or it will end up being a huge table like https://en.wikipedia.org/wiki/Comparison_of_wiki_software which has a lot of information but is certainly not a resource to "quickly decide".

Side remark: the list by Jim Perlewitz is probably the most complete curated list for comp neurosci: https://compneuroweb.com/sftwr.html

But maybe there is some middle ground. If we can come up with some reasonably objective criteria to include software in the list (e.g., first release more than X ago, last release not longer than Y ago, more than Z citations, or something along these lines), then we could list them all, have tags/features like those that @appukuttan-shailesh suggested before, and allow the user to filter the list according to these criteria. They would still end up with something like NEURON + Moose + Arbor + … if they ask for "multi-compartmental modeling on Linux clusters", but all would be generally reasonable suggestions.
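As a rough illustration of the tag-based filtering idea, here is a minimal sketch in Python; the entries, tags, and example query below are made-up placeholders, not agreed criteria:

```python
# Illustrative sketch: filter a tagged list of simulators by requested features.
# The entries and tags are placeholders, not agreed-upon criteria.

simulators = [
    {"name": "NEURON", "tags": {"multi-compartmental", "linux-clusters", "python-api"}},
    {"name": "Arbor",  "tags": {"multi-compartmental", "linux-clusters", "gpu"}},
    {"name": "NEST",   "tags": {"point-neuron", "linux-clusters", "python-api"}},
]

def filter_by_tags(entries, required_tags):
    """Return all entries whose tags include every requested tag."""
    required = set(required_tags)
    return [entry for entry in entries if required <= entry["tags"]]

# A user asking for "multi-compartmental modeling on Linux clusters" would get
# NEURON and Arbor back -- both generally reasonable suggestions.
print([e["name"] for e in filter_by_tags(simulators, {"multi-compartmental", "linux-clusters"})])
```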
I agree with both Ankur and Marcel! The availability of tutorials, courses, etc. could also be a good thing to factor in (though I assume most simulators have some of that). Personally, I also think it is good if recommendations that come with some sort of organizational backing (in this case both INCF and OCNS) are seen as objective. Having clearly stated, reasonably objective criteria is a good start; I think we should make at least a partial list of criteria available with the guide.
In addition to a list/table, perhaps something useful would be a collection of small "benchmark" models with implementations in each simulator, to the extent of their capabilities: let developers propose models/tasks that highlight their tool's advantages, and let them implement their own as well as the competitors' benchmarks, if possible. Ideally, the models will cover a wide range of use cases. Potential users can then look at models that resemble their own use cases and decide which tool to use based on the code, performance, and trade-offs in terms of what can be done.
Recently we merged an overview into our docs: https://docs.arbor-sim.org/en/latest/ecosystem/index.html#wider-ecosystem, where some commonly used simulators are listed by level of detail (we're probably a bit over-focused on the column of morphologically detailed cells ;)). In our experience, this 'hierarchy of detail' usually works well in bringing some clarity to people new to the field.

@mstimberg Thanks for the compneuroweb link, I'm happy to see some similarity in categorization! They are mixing frameworks in with simulators, though. To me a clear separation between simulators and modelling frameworks makes sense; does it to others too?

The level in that hierarchy of detail is probably the most important decision a researcher needs to make: what level do I focus on? I'm happy to put our diagram forward as a starting point; as a first pass we could make sure everyone agrees on the categories and that all simulators are mentioned. I should probably add some links in there, and no doubt some people would like to discuss the order ;)
Would folks be up for a half-day (2–3 hour) sprint/hackathon where we can get together to work on this doc? I think we may make a lot more progress there than we'll make if we work on this asynchronously. It should at least give us an initial draft that can then be tweaked later. If folks are up for this, I can set up a whenisgood etc.
I will unfortunately have to sit out of most activities till around the end of August. |
How about something in September/October when everyone is back from holidays (after Bernstein conf etc.)? |
Perhaps even at the Bernstein conf? Things to agree on:
Meanwhile, I realized that the Ebrains Knowledge Graph can be/is used for this purpose, or at least in part: https://search.kg.ebrains.eu/?category=Software |
I'm happy to meet at the Bernstein conf (but I don't think there will be much time to actually do something) or at a later date. IMO, it would be good to start with something simple (e.g. I'd leave @kernfel's benchmark suggestion for a later stage or even a separate project, since benchmarks are non-trivial to do right).

Regarding @brenthuisman's questions, I believe (as I commented earlier) that we might find a reasonable middle ground: I'd have some very basic cut-off criteria (at least one paper published by someone who is not a developer using the simulator/framework? at least one year since the first release?). This is a bit tricky, since we don't want to punish very recent but promising projects, but we also do not want to include every tool that someone hacked together over a weekend. Maybe it could be a Wikipedia-ish two-tiered system: if a piece of software fulfills some basic criteria, it is automatically "noteworthy", but it can also be included if it does not fulfill the criteria yet, e.g. if several members of the working group vote for its inclusion. And then I wouldn't make any recommendations. If we have a number of quantitative values for each software (GitHub stars, PyPI downloads, year of first release, ...), we could allow sorting by these, which would implicitly rank things. We might update some metadata automatically using the GitHub API (or something like the EBRAINS Knowledge Graph), e.g. with a GitHub Action cron job every week?

In any case, I hope we can come up with something friendly and accessible instead of a long boring table :) To practice my barely existing JavaScript skills I actually hacked together a little prototype of what I have in mind: https://spiky-acidic-throne.glitch.me. This is of course lacking all kinds of styling (and the actual content ;-)), but I hope it conveys the basic idea.
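A rough sketch of what such an automatic metadata refresh could look like, assuming unauthenticated calls to the public GitHub REST API; the repository list and output file below are illustrative placeholders, and a real setup would presumably run something like this from a weekly-scheduled GitHub Action:

```python
# Rough sketch: refresh a few quantitative fields (stars, last push date)
# from the public GitHub REST API. Repo list and output path are hypothetical.
import json
import urllib.request

REPOS = ["neuronsimulator/nrn", "arbor-sim/arbor", "brian-team/brian2"]  # example repos

def fetch_json(url):
    with urllib.request.urlopen(url) as response:
        return json.load(response)

def collect_metadata(repos):
    metadata = {}
    for repo in repos:
        info = fetch_json(f"https://api.github.com/repos/{repo}")
        metadata[repo] = {
            "stars": info["stargazers_count"],
            "last_push": info["pushed_at"],
        }
    return metadata

if __name__ == "__main__":
    # A GitHub Action on a weekly cron schedule could run this script and
    # commit the resulting file back to the repository.
    with open("simulator_metadata.json", "w") as f:
        json.dump(collect_metadata(REPOS), f, indent=2)
```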
Hi all
Here is that curated list of open software resources for computational neuroscience that I mentioned at the last meeting. It may be of use.
https://github.com/asoplata/open-computational-neuroscience-resources
I won't be at the Bernstein conference but I will be at OCNS in Melbourne next week. So hopefully I will see some of you there!
All the best!
Stewart Heitmann (Brain Dynamics Toolbox)
@mstimberg Nice "wizard" :) Yeah, I agree that this can be nice, but, as more people have their say, the list of qualifiers/options might explode. Also, statements like 'NMODL support' are highly qualified :) We could reduce the wizard to just helping select simulators/frameworks based on level of abstraction, and have an ecosystem list (broken down by categories) separately.

Personally I'd argue for minimizing the use of any criteria (because that leads to discussion, and ultimately it won't be terribly useful for anyone). Sorting by GitHub stars/forks or some such is also hairy; not all tools are hosted on GitHub, nor on PyPI. Making it a matter of voting might also slow down the process. It's boring, but a plain list, at least for starters, seems most tenable to me. But, as I said, we should agree on whatever we choose.

@stewart-heitmann Nice list! Maybe we actually don't have to do any work :)
I've seen the GitHub list before. There's also this: https://open-neuroscience.com/. But while they both provide lists, I'm not sure they answer the question we're trying to address here: "what simulator should I use?" I think the next step is to come up with a set of criteria for inclusion. What do folks think about that? I've got a few to propose:
Thoughts?
In the meantime, I've added a new page for "resources" to the website. Please feel free to add more to the list via PRs: |
@sanjayankur31 Sounds good to me. We can always refine later.
Note that on the list mentioned above, the text link [Jim Perlewitz's "Computational Neuroscience on the Web"] points to an out-of-date URL. It should point to https://compneuroweb.com/
Best,
Jim
Thanks @jimperlewitz: I've updated our resources page too.
@all: Notes here: https://hackmd.io/M5PCRZgyQNO36qoaXixJqQ?edit
This was brought up at the recent meeting. The issue is that we seem to be missing a place where users can quickly decide on which simulator they should use for their task. An idea was to set up a page, perhaps with a table listing the features of the various simulators.
@OCNS/software-wg: since we have lots of simulator developers here, what are your thoughts?