Scope
From the discussions (particularly the library versus framework discussion in #7) I'm starting to grasp the scope of what we mean by "differentiable analysis". I had thought the aim was to package Neos/Inferno into a loss function so it could be used easily in HEP analyses. It seems, though, that the aim is more broadly to make the full analysis chain, from reco-level ntuples to final result, fully differentiable.
Milestones
Given the scale of this work, I'm wondering if it might be best to break the effort down into milestones which gradually work backwards from result to ntuple skimming. This would offer a clearer scope for each stage of development and allow us to constantly monitor performance on a realistic benchmark analysis (rather than differentiating ALL TEH THINGS and finding out that it doesn't beat a more traditional approach). An example could be:
1. Differentiably optimise the binning of the whole summary stat.
2. Differentiably optimise the summary stat itself (e.g. Inferno/Neos) on a fixed set of training samples.
3. Differentiably optimise the skimming of the training samples.
This would allow us to continually evaluate the gain in sensitivity at every step, and help convince ourselves and others of the advantage of DA.
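To make milestone 1 concrete: binning is the first place the chain goes non-differentiable, since hard bin edges give zero gradient. A common relaxation, in the same spirit as the kernel-based histograms used by neos, replaces each event's delta-function contribution with a Gaussian kernel, so the per-bin yields (and any figure of merit built from them) carry gradients back to the bin edges. The sketch below is a toy only; all function and variable names are invented for illustration:

```python
# Toy sketch: a "soft" histogram whose bin yields are differentiable
# with respect to the bin edges. All names here are invented.
import jax
import jax.numpy as jnp

def soft_hist(x, edges, bandwidth=0.05):
    """Each event contributes to each bin the mass of a Gaussian kernel
    (centred on the event) that falls between that bin's edges."""
    cdf = jax.scipy.stats.norm.cdf
    lo, hi = edges[:-1], edges[1:]
    # (n_events, n_bins) array of per-event, per-bin weights
    w = cdf(hi[None, :], loc=x[:, None], scale=bandwidth) \
        - cdf(lo[None, :], loc=x[:, None], scale=bandwidth)
    return w.sum(axis=0)

def neg_significance(interior_edges, sig, bkg):
    """Toy figure of merit: per-bin s/sqrt(b) summed in quadrature,
    negated so that gradient *descent* improves sensitivity."""
    edges = jnp.concatenate(
        [jnp.array([0.0]), jnp.sort(interior_edges), jnp.array([1.0])]
    )
    s, b = soft_hist(sig, edges), soft_hist(bkg, edges)
    return -jnp.sqrt(jnp.sum(s**2 / (b + 1e-3)))

# Gradient descent on the interior bin edges of a [0, 1] summary stat
key_s, key_b = jax.random.split(jax.random.PRNGKey(0))
sig = jax.random.uniform(key_s, (1000,)) ** 0.5  # toy signal, peaked near 1
bkg = jax.random.uniform(key_b, (1000,))         # toy flat background
edges = jnp.array([0.25, 0.50, 0.75])
grad_fn = jax.jit(jax.grad(neg_significance))
for _ in range(200):
    edges = jnp.clip(edges - 1e-3 * grad_fn(edges, sig, bkg), 0.01, 0.99)
```

Swapping the toy s/sqrt(b) for a proper profile-likelihood-based figure of merit is then exactly where milestones 2 and 3 pick up.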
Framework versus library
At the end of the day, we want other researchers to use DA for official analyses. These are already time-consuming affairs, and on top of that we would be expecting the people involved to completely change their approach to something with which they are most likely unfamiliar. Whilst every analysis will be slightly different, I think it would make the transition much smoother (pun mildly intended) if we were to offer a framework with a set series of steps to walk new researchers through the process of making an analysis differentiable.
This framework could of course call our own internal libraries, but something with an intuitive API, which abstracts away the technicalities and provides a recommended workflow, would (I think) be much more appealing than a large library of new methods and classes with only a few limited examples and the expectation that users piece everything together themselves.
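For illustration only, such a guided workflow might read something like the sketch below. Every package, class, and method name here is hypothetical, made up purely to show the flavour of a "fixed series of steps" API as opposed to a bag of loose library functions:

```python
# Entirely hypothetical API sketch -- nothing here exists; the names
# are invented to illustrate a guided, end-to-end differentiable workflow.
from dipa import DifferentiableAnalysis  # hypothetical package

ana = DifferentiableAnalysis(samples={"sig": "sig.root", "bkg": "bkg.root"})
ana.add_selection("n_jets >= 2", relaxation="sigmoid")  # soft, differentiable cut
ana.set_summary_stat("nn", inputs=["m_jj", "pt_j1"], hidden=[32, 32])
ana.set_binning(n_bins=10, learnable=True)
ana.set_inference("profile_likelihood", poi="mu")

result = ana.optimise(epochs=50)  # gradient descent through the full chain
print(result.expected_significance)
```

The point is not the specific calls, but that a newcomer can follow the steps in order and get a working differentiable analysis, dropping down to the underlying libraries only when needed.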
As the community becomes more au fait with DA, the importance of our particular framework may diminish as various groups either build their own frameworks or write their own libraries in place of ours. Reaching that point would be good, but it requires a critical mass of experience within the community, which an introductory framework could help accelerate. An analogy is how Keras made DL vastly more accessible to new practitioners; as community knowledge has grown, people have moved towards the lower-level libraries that Keras previously abstracted.