
Road-map and milestones #9

Open
GilesStrong opened this issue May 19, 2020 · 0 comments

Scope

From the discussions (particularly library versus framework in #7), I'm starting to grasp the scope of what we mean by "differentiable analysis". I had thought the aim was to package Neos/Inferno into a loss function that could be used easily in HEP analyses. It seems, though, that the broader aim is to make the full analysis chain, from reco. ntuples to result, fully differentiable.

Milestones

Given the scale of this work, I'm wondering if it might be best to break the effort down into milestones which gradually work backwards from result to ntuple skimming. This would offer a clearer scope for each stage of development and allow us to constantly monitor performance on a realistic benchmark analysis (rather than differentiating ALL TEH THINGS and finding out that it doesn't beat a more traditional approach).

An example could be:

  1. Differentiably optimise a 1D cut on a summary stat (e.g. @alexander-held 's example).
  2. Differentiably optimise the binning of the whole summary stat.
  3. Differentiably optimise the summary stat itself (e.g. Inferno/Neos) on a fixed set of training samples.
  4. Differentiably optimise the skimming of the training samples.

This would allow us to continually evaluate the gain in sensitivity at every step, helping convince ourselves and others of the advantage of DA.
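To make milestone 1 concrete, here is a minimal toy sketch of what "differentiably optimising a 1D cut" could look like: the hard step function of a cut is relaxed into a sigmoid, so the expected yields (and hence a simple significance proxy) become smooth functions of the cut position. The distributions, temperature, and learning rate here are all illustrative assumptions, not anything from Neos/Inferno, and an autodiff library would replace the finite-difference gradient in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(2.0, 1.0, 1000)       # toy signal distribution (assumed)
background = rng.normal(0.0, 1.0, 10000)  # toy background distribution (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_counts(cut, temperature=0.5):
    # Replace the hard selection (x > cut) with a sigmoid, so the
    # yields are differentiable with respect to the cut position.
    s = sigmoid((signal - cut) / temperature).sum()
    b = sigmoid((background - cut) / temperature).sum()
    return s, b

def significance(cut):
    # Simple s / sqrt(s + b) proxy for sensitivity.
    s, b = soft_counts(cut)
    return s / np.sqrt(s + b)

def grad_significance(cut, eps=1e-4):
    # Central finite difference on the now-smooth objective; in a real
    # implementation an autodiff library (e.g. jax.grad) would do this.
    return (significance(cut + eps) - significance(cut - eps)) / (2 * eps)

cut = 0.0
for _ in range(500):
    cut += 0.02 * grad_significance(cut)  # gradient ascent on the cut

print(f"optimised cut: {cut:.2f}, significance: {significance(cut):.1f}")
```

With the signal peaked above the background, the cut drifts towards the region that maximises the significance proxy, which is exactly the behaviour we would want to benchmark against a traditional scanned cut.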

Framework versus library

At the end of the day, we want other researchers to use DA for official analyses. These are already time-consuming affairs, and on top of that we are expecting the people involved to completely change their approach to something with which they are most likely unfamiliar. Whilst every analysis will be slightly different, I think it would make the transition much smoother (pun mildly intended) if we were to offer a framework with a set series of steps to walk new researchers through the process of making their analysis differentiable.

This framework could of course call our own internal libraries, but something with an intuitive API that abstracts away the technicalities and provides a recommended workflow would (I think) be much more appealing than being handed a large library of new methods and classes and expected to piece everything together from a few limited examples.
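As a purely hypothetical illustration of the "prescribed workflow" idea, a framework could expose analysis construction as a fixed sequence of named steps, with the user only supplying the step-specific pieces. None of the names below exist in any library; this is just a sketch of the shape such an API might take.

```python
class DifferentiableAnalysis:
    """Hypothetical framework object that walks a user through
    composing an analysis as an ordered chain of steps."""

    def __init__(self):
        self.steps = []

    def add_step(self, name, fn):
        # Each step is a named transformation of the data; in the real
        # thing these would be differentiable (skim, summary stat,
        # binning, cut, ...), here they are plain callables.
        self.steps.append((name, fn))
        return self  # allow chaining, keeping the workflow readable

    def run(self, data):
        # Apply the steps in the prescribed order.
        for name, fn in self.steps:
            data = fn(data)
        return data

# Intended usage: the framework prescribes the structure,
# the user fills in the analysis-specific pieces.
analysis = (
    DifferentiableAnalysis()
    .add_step("cut", lambda xs: [x for x in xs if x > 1.0])
    .add_step("count", len)
)
print(analysis.run([0.5, 1.5, 2.5]))  # -> 2
```

The point is not this particular design, but that a guided, step-based interface gives newcomers a recommended path, while the underlying libraries remain available for power users.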

As the community becomes more au fait with DA, the importance of our particular framework may diminish as various groups either build their own frameworks or write their own libraries in place of ours. Reaching this point would be good, but it requires a critical mass of experience within the community, which an introductory framework could help accelerate. An example might be how Keras made DL vastly more accessible to new practitioners; as community knowledge has grown, people are now moving to the more low-level libraries that Keras previously abstracted.
