uncertainties computation for non-minuit backends #1237
-
I am leaning towards 2, mostly because I think that is the implementation most users will expect. I will have to go through the notebook in more detail to understand the cases where this distinction matters. @nsmith- might also have some input based on earlier discussions.
-
These are my thoughts as well, @lukasheinrich. I think that between the very nice learn notebook you have going in PR #1233 and additional clarification and notes in the docs, this should be reasonable to achieve.
-
Minos is not more correct than Hesse in theory, but the Minos interval is invariant to monotonic transformations, which means in practice that you may get a better error estimate when you set limits on your parameter. Both Minos and Hesse are theoretically wrong when evaluated near a boundary, since this violates the assumptions under which these error estimates were derived.
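To make the invariance point concrete, here is a small self-contained sketch (plain numpy/scipy rather than iminuit; the Poisson toy model and helper names are just illustrative). The Minos-style interval computed in log(lambda) maps back exactly onto the one computed in lambda, while the symmetric Hesse-style intervals do not:

```python
import numpy as np
from scipy.optimize import brentq

n_obs = 5.0  # single observed Poisson count

def nll_lam(lam):
    """Negative log-likelihood in the natural parameter lambda (constants dropped)."""
    return lam - n_obs * np.log(lam)

def nll_loglam(u):
    """Same likelihood, re-expressed in the monotonically transformed parameter u = log(lambda)."""
    return nll_lam(np.exp(u))

lam_hat = n_obs          # MLE in lambda
u_hat = np.log(lam_hat)  # the MLE maps exactly under the transformation

# Hesse-style errors: 1 / sqrt(d^2 NLL / d theta^2) at the minimum (analytic here)
hesse_lam = np.sqrt(n_obs)      # d2NLL/dlam2 = n/lam^2 = 1/n at lam = n
hesse_u = 1.0 / np.sqrt(n_obs)  # d2NLL/du2 = exp(u) = n at u = log(n)

# Minos-style interval: where the NLL rises by 0.5 above its minimum
def minos_interval(nll, hat, lo, hi):
    target = nll(hat) + 0.5
    return (brentq(lambda x: nll(x) - target, lo, hat),
            brentq(lambda x: nll(x) - target, hat, hi))

minos_lam = minos_interval(nll_lam, lam_hat, 1e-3, 50.0)
minos_u = minos_interval(nll_loglam, u_hat, np.log(1e-3), np.log(50.0))

# Hesse intervals are NOT invariant: the symmetric interval in lambda differs from
# the image of the symmetric interval built in log(lambda).
print("Hesse, in lambda:       ", (lam_hat - hesse_lam, lam_hat + hesse_lam))
print("Hesse, via log(lambda): ", tuple(np.exp([u_hat - hesse_u, u_hat + hesse_u])))

# Minos intervals ARE invariant: mapping the log-space interval back reproduces
# the lambda-space interval (up to solver tolerance).
print("Minos, in lambda:       ", minos_lam)
print("Minos, via log(lambda): ", tuple(np.exp(minos_u)))
```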
-
We need to decide how we should report uncertainties in backends that are not minuit. There are two options:

1. Mimic what minuit does: we can do this, see the notebooks in #1233. This makes it easy to compare to minuit because we're computing the same thing, but it depends on internals of minuit, like its map to "internal" unbounded parameters, etc.
2. Compute the uncertainties directly from the likelihood: e.g. the parameter covariance is computed from the Hessian (via its inverse) and the parameter uncertainties from the diagonal elements of the covariance; a sketch follows below this list. This seems more principled but ignores boundary effects (though it's unclear to me that what MINUIT is doing is "more" correct). There might be an argument that neither is hyper-correct anyway and that the "real" error comes from a MINOS-like line search. In that case we can just do the more principled version and support an API for likelihood scans (see the rough sketch at the end of this post).
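For concreteness, here is a minimal sketch of option 2, assuming a toy Gaussian NLL and a hand-rolled finite-difference Hessian (both are placeholders; in practice the Hessian could come from the backend's automatic differentiation):

```python
import numpy as np

def hessian_fd(fn, x, eps=1e-4):
    """Numerical Hessian of a scalar function via central finite differences
    (illustrative only; an autodiff backend could supply this directly)."""
    n = len(x)
    hess = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            xpp = x.copy(); xpp[i] += eps; xpp[j] += eps
            xpm = x.copy(); xpm[i] += eps; xpm[j] -= eps
            xmp = x.copy(); xmp[i] -= eps; xmp[j] += eps
            xmm = x.copy(); xmm[i] -= eps; xmm[j] -= eps
            hess[i, j] = (fn(xpp) - fn(xpm) - fn(xmp) + fn(xmm)) / (4.0 * eps**2)
    return hess

def nll(pars):
    """Toy negative log-likelihood: a correlated Gaussian in two parameters."""
    mu = np.array([1.0, 0.5])
    cov_true = np.array([[0.04, 0.01], [0.01, 0.09]])
    d = pars - mu
    return 0.5 * d @ np.linalg.inv(cov_true) @ d

best_fit = np.array([1.0, 0.5])  # minimum of the toy NLL

# covariance = inverse of the Hessian of the NLL at the minimum
covariance = np.linalg.inv(hessian_fd(nll, best_fit))
# 1-sigma uncertainties = square roots of the diagonal of the covariance
uncertainties = np.sqrt(np.diag(covariance))
print(uncertainties)  # ~ [0.2, 0.3] for this toy model
```

Note this uses the NLL itself; if the objective is twice the NLL (as with `twice_nll`), the inverse Hessian picks up an extra factor of 2.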
The two approaches differ numerically whenever a minimum is close to a parameter boundary. In the "bulk" they should be quite similar.
I personally lean towards 2), with the knowledge that we can in principle reproduce 1) if necessary. However, this will require more communication when comparing results.
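As a rough idea of what a likelihood-scan API could return, here is a sketch with scipy and a toy NLL (none of the names below are the actual pyhf API): the parameter of interest is fixed on a grid, the remaining parameters are profiled, and the ±1σ interval is read off where 2·ΔNLL crosses 1.

```python
import numpy as np
from scipy.optimize import minimize

def nll(pars):
    """Toy two-parameter NLL; in practice this would be the model's objective."""
    x, y = pars
    return 0.5 * ((x - 1.0) ** 2 / 0.04 + (y - 0.5) ** 2 / 0.09) + 0.5 * (x - 1.0) * (y - 0.5)

best_fit = minimize(nll, x0=[0.0, 0.0]).x
nll_min = nll(best_fit)

def profile_scan(poi_index, values):
    """For each fixed value of the parameter of interest, minimize the NLL over
    the remaining parameters and return 2 * (NLL_profiled - NLL_min)."""
    curve = []
    for value in values:
        def fixed_nll(free):
            return nll(np.insert(free, poi_index, value))
        free_start = np.delete(best_fit, poi_index)
        curve.append(2.0 * (minimize(fixed_nll, x0=free_start).fun - nll_min))
    return np.asarray(curve)

grid = np.linspace(best_fit[0] - 0.6, best_fit[0] + 0.6, 25)
curve = profile_scan(0, grid)

# crude read-off of the 68% interval: where the profile curve stays below 1
inside = grid[curve <= 1.0]
print("approx +/- 1 sigma interval:", inside.min(), inside.max())
```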
Thoughts @alexander-held @matthewfeickert @kratsg @cranmer @HDembinski (for possible comment on minuit philosophy)