-
Hi @TommyDESY, you can have a look at the likelihood scan for your POI:

```python
scan_results = cabinetry.fit.scan(model, data, "your_poi_name")
cabinetry.visualize.scan(scan_results)
```

which presumably is non-parabolic as well. I don't think there is anything wrong here and I do not see a need to symmetrize uncertainties. This asymmetric behavior can arise naturally (e.g. due to asymmetric systematic uncertainties or large systematic uncertainties acting on your signal).
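In case it helps, here is a self-contained sketch of that snippet; the workspace file name is a placeholder for however you serialize your pyhf workspace:

```python
import json

import cabinetry
import pyhf

# "workspace.json" is a placeholder for your serialized pyhf workspace
with open("workspace.json") as f:
    ws = pyhf.Workspace(json.load(f))

model = ws.model()
data = ws.data(model)  # observed data including auxiliary data

# profile likelihood scan of the POI and its visualization
scan_results = cabinetry.fit.scan(model, data, "your_poi_name")
cabinetry.visualize.scan(scan_results)
```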
-
Dear all,
In my current analysis (B --> Xu l nu at Belle II) we observe biases in the pull distributions obtained from toys. A minimal example of the `pyhf_spec` we're using has 1 channel, 3 samples, 1 POI and 1 NP: we have 3 templates and for each we add a `normfactor` modifier.
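For illustration, a spec with that structure looks roughly like the following; the sample names, bin contents and variation sizes are placeholders rather than our actual templates:

```python
spec = {
    "channels": [
        {
            "name": "signal_region",
            "samples": [
                {
                    "name": "signal",
                    "data": [5.0, 10.0, 25.0],
                    "modifiers": [
                        {"name": "mu", "type": "normfactor", "data": None}
                    ],
                },
                {
                    "name": "background_1",
                    "data": [60.0, 50.0, 40.0],
                    "modifiers": [
                        {"name": "norm_bkg1", "type": "normfactor", "data": None},
                        {
                            "name": "shape_sys",
                            "type": "histosys",
                            # hi_data and lo_data are deliberately NOT mirrored
                            # around the nominal (asymmetric in shape and norm)
                            "data": {
                                "hi_data": [66.0, 57.0, 43.0],
                                "lo_data": [57.0, 45.0, 38.0],
                            },
                        },
                    ],
                },
                {
                    "name": "background_2",
                    "data": [20.0, 30.0, 20.0],
                    "modifiers": [
                        {"name": "norm_bkg2", "type": "normfactor", "data": None}
                    ],
                },
            ],
        }
    ]
}
```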
We then produce a certain number of toys and compute the pulls from a fit to each toy.
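A sketch of these two steps, assuming the placeholder `spec` above and a POI called `mu` (the number of toys and the generation point are illustrative):

```python
import numpy as np
import pyhf

pyhf.set_backend("numpy", "minuit")  # Minuit is needed for parameter uncertainties

model = pyhf.Model(spec, poi_name="mu")  # placeholder spec from above
poi_index = model.config.poi_index

# generate toys from the model at its suggested initial parameter values
true_pars = pyhf.tensorlib.astensor(model.config.suggested_init())
n_toys = 1000
toys = model.make_pdf(true_pars).sample((n_toys,))

# fit each toy and compute the pull of the POI
pulls = []
for toy in toys:
    fit_result = pyhf.infer.mle.fit(toy, model, return_uncertainties=True)
    mu_hat, mu_unc = fit_result[poi_index]
    pulls.append((mu_hat - true_pars[poi_index]) / mu_unc)

# a bias shows up as a pull mean significantly different from zero
print(np.mean(pulls), np.std(pulls))
```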
When no nuisance parameters are added, the resulting pull distribution corresponds to a normal distribution, as expected. However, we need to add `histosys` nuisance parameters to the fit, and for some of those the `hi_data` and `lo_data` yields are not symmetric around the nominal yield (both in their shape and in their total normalisation, as in the spec above). You can see below that the Gaussian distribution is slightly shifted towards higher values of my POI. My questions are therefore the following:
1. Is pyhf supposed to be able to deal with this kind of asymmetric error?
2. If yes, then what am I missing in the setup shown here?
3. If no, then I suppose one needs to symmetrise these errors. We tried different ways, which indeed eliminate the bias but force us to overestimate our systematic uncertainties (an example of the kind of symmetrisation we mean is sketched below). What would be the best way to do that in your opinion?
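To make question 3 concrete, this is the kind of per-bin symmetrisation we have in mind; it is just one possible scheme (averaging the up and down deviations), and the function is our own helper:

```python
import numpy as np

def symmetrise(nominal, hi_data, lo_data):
    """Return symmetrised (hi_data, lo_data), replacing the asymmetric
    variation by the per-bin average of the up and down deviations."""
    nominal = np.asarray(nominal)
    delta = 0.5 * (
        np.abs(np.asarray(hi_data) - nominal) + np.abs(np.asarray(lo_data) - nominal)
    )
    return (nominal + delta).tolist(), (nominal - delta).tolist()
```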
Thank you in advance!