Making pyhf differentiable #882
Comments
Rather than overriding the backend: we generally want to support users switching backends transparently without a huge amount of re-computation.
@phinate I just went ahead and did the rebase. Try it now. To be honest, there's not a ton of changes now...
I'm reactivating this based on the recent #1625 PR. The goal is to make this function diffable (and to remove the `.tolist()` calls):

```python
import pyhf

# use the JAX backend so tensor operations are autograd-friendly
pyhf.set_backend("jax")
tb = pyhf.tensorlib  # shorthand for the active tensor backend


def makeit(
    pars,
    nominal,
    uncorr_data,
    corrup_data,
    corrdn_data,
    stater_data,
    normsys_up,
    normsys_dn,
    lumi_sigma,
):
    # single-channel, single-sample spec exercising most modifier types
    spec = {
        "channels": [
            {
                "name": "secondchannel",
                "samples": [
                    {
                        "name": "background",
                        "data": nominal,
                        "modifiers": [
                            {"name": "mu", "type": "normfactor", "data": None},
                            {"name": "lumi", "type": "lumi", "data": None},
                            {"name": "mod_name", "type": "shapefactor", "data": None},
                            {"name": "uncorr_bkguncrt2", "type": "shapesys", "data": uncorr_data},
                            {"name": "corr_bkguncrt2", "type": "histosys", "data": {"hi_data": corrup_data, "lo_data": corrdn_data}},
                            {"name": "staterror", "type": "staterror", "data": stater_data},
                            {"name": "norm", "type": "normsys", "data": {"hi": normsys_up, "lo": normsys_dn}},
                        ],
                    }
                ],
            }
        ],
    }
    # the lumi modifier's constraint is configured via the "parameters" block
    model = pyhf.Model(
        {
            "channels": spec["channels"],
            "parameters": [
                {"name": "lumi", "auxdata": [1.0], "bounds": [[0.5, 1.5]], "inits": [1.0], "sigmas": [lumi_sigma]}
            ],
        }
    )
    exp_data = model.expected_data(pars)
    return model.logpdf(pars, exp_data)
```
```python
makeit(
    pars=tb.astensor([0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]).tolist(),
    nominal=tb.astensor([60.0, 62.0]).tolist(),
    uncorr_data=tb.astensor([5.0, 9.0]).tolist(),
    corrup_data=tb.astensor([65.0, 65.0]).tolist(),
    corrdn_data=tb.astensor([55.0, 55.0]).tolist(),
    stater_data=tb.astensor([10.0, 10.0]).tolist(),
    normsys_up=tb.astensor(1.05).tolist(),
    normsys_dn=tb.astensor(0.95).tolist(),
    lumi_sigma=tb.astensor(0.017).tolist(),
)
```
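For orientation, here is a minimal sketch of what making this function diffable would enable. This is hypothetical usage, not working code today: it assumes the `.tolist()` calls above are dropped so that all inputs stay JAX arrays, and that model construction itself is traceable, which is exactly what this issue asks for. The wrapper name `logpdf_wrt_nominal` is illustrative.

```python
import jax
import jax.numpy as jnp

def logpdf_wrt_nominal(nominal):
    # makeit returns a length-1 tensor from model.logpdf; index to a scalar for jax.grad
    return makeit(
        pars=jnp.array([0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]),
        nominal=nominal,
        uncorr_data=jnp.array([5.0, 9.0]),
        corrup_data=jnp.array([65.0, 65.0]),
        corrdn_data=jnp.array([55.0, 55.0]),
        stater_data=jnp.array([10.0, 10.0]),
        normsys_up=jnp.array(1.05),
        normsys_dn=jnp.array(0.95),
        lumi_sigma=jnp.array(0.017),
    )[0]

# gradient of the model log-likelihood with respect to the nominal yields
grad_nominal = jax.grad(logpdf_wrt_nominal)(jnp.array([60.0, 62.0]))
```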
On my to-do list. I know how to make it happen.
Great to see efforts on this again, super exciting for me in particular :)
Edit: ah, this is the issue! I saw a message about a rebase and assumed otherwise... Of course, this wouldn't interfere with #742, my mistake :P
Description
It's become clear to me through the discussions that have taken place in IRIS-HEP and gradhep that it would be awesome to have pyhf as part of the modelling step in a differentiable analysis workflow.
Having backends that support autograd is most of the work, but @lukasheinrich and I have found through experiments in the development of neos that there are still operations that don't play nice with differentiability: `pyhf.Model` construction, for instance. This is also just in the situation of using `pyhf.tensor.jax_backend()`; I haven't tried this with other backends.
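For contrast, differentiating the likelihood of a *fixed* model with respect to its parameters already works with the JAX backend; it's the construction step that breaks. A minimal sketch of the working case (assuming a recent pyhf that provides `pyhf.simplemodels.uncorrelated_background`):

```python
import jax
import pyhf

pyhf.set_backend("jax")

# a fixed two-bin counting model; the spec here contains no traced tensors
model = pyhf.simplemodels.uncorrelated_background(
    signal=[12.0, 11.0], bkg=[50.0, 52.0], bkg_uncertainty=[3.0, 7.0]
)
data = pyhf.tensorlib.astensor([51.0, 48.0] + model.config.auxdata)

def nll(pars):
    # logpdf returns a length-1 tensor; index to a scalar for jax.grad
    return -model.logpdf(pars, data)[0]

# gradient of the negative log-likelihood at the suggested initial parameters
grad = jax.grad(nll)(pyhf.tensorlib.astensor(model.config.suggested_init()))
```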
Pull requests/issues

There's already been an attempt to make part of this differentiable in #742, which I made small modifications to in my fork of that branch. (I make an explicit call to override the backend at one point, which isn't pretty...)
Scope
So far, this idea has only been used for the model construction and likelihood evaluation steps, but I wonder if this could also extend to inference in `pyhf.infer`? This could influence design decisions made for a library providing differentiable HEP operations, which could be imported into `pyhf.infer` down the line if the actual underlying functionality doesn't change, e.g. wrapping a `scipy` optimizer with the two-phase method in the `fax` library (the sketch below illustrates the idea).
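To make that concrete, here is a generic sketch of the idea in plain JAX. This is not `fax`'s actual API, and all names (`argmin`, `objective`, etc.) are illustrative: the forward phase runs an arbitrary inner solver, and the backward phase obtains gradients from the implicit function theorem rather than by unrolling the solver.

```python
import jax
import jax.numpy as jnp
from functools import partial

@partial(jax.custom_vjp, nondiff_argnums=(0,))
def argmin(objective, theta, x0):
    # forward phase: any inner solver can go here; plain gradient
    # descent on x -> objective(x, theta) is a stand-in for a scipy call
    x = x0
    for _ in range(100):
        x = x - 0.1 * jax.grad(objective)(x, theta)
    return x

def argmin_fwd(objective, theta, x0):
    x_star = argmin(objective, theta, x0)
    return x_star, (theta, x_star)

def argmin_bwd(objective, res, g):
    theta, x_star = res
    # backward phase, via the implicit function theorem:
    # dx*/dtheta = -H^{-1} @ d(grad_x f)/dtheta at the optimum
    hess = jax.hessian(objective, argnums=0)(x_star, theta)
    cross = jax.jacobian(jax.grad(objective, argnums=0), argnums=1)(x_star, theta)
    theta_bar = -g @ jnp.linalg.solve(hess, cross)
    return theta_bar, jnp.zeros_like(x_star)

argmin.defvjp(argmin_fwd, argmin_bwd)

# sanity check on a quadratic: x*(theta) = theta, so the Jacobian is the identity
theta = jnp.array([1.0, 2.0])
jac = jax.jacobian(argmin, argnums=1)(
    lambda x, t: jnp.sum((x - t) ** 2), theta, jnp.zeros(2)
)
```

A black-box `scipy` optimizer could slot into the forward phase unchanged, since the backward phase never differentiates through the solver itself.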
All this seems a big enough task that I felt it warranted an issue as a documented way forward :)