32b vs 64b backend #1921
Replies: 4 comments 5 replies
-
The issue seems to appear for simple maximum likelihood fits as well, and doesn't exist with …

```python
import pyhf

model = pyhf.simplemodels.uncorrelated_background(
    signal=[5.0], bkg=[50.0], bkg_uncertainty=[5.0]
)
for backend in ["numpy", "jax"]:
    for precision in ["64b", "32b"]:
        pyhf.set_backend(backend, precision=precision)
        print(
            f"{backend} {precision}: {pyhf.infer.mle.fit([53.0] + model.config.auxdata, model)}"
        )
```

This outputs … with …
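To put a number on the disagreement, the same loop can be extended to collect both fits and print their difference per backend (a sketch; the `results` dict and the subtraction are my additions, not part of the original snippet):

```python
# Sketch: quantify the 32b vs 64b best-fit difference per backend.
import numpy as np
import pyhf

model = pyhf.simplemodels.uncorrelated_background(
    signal=[5.0], bkg=[50.0], bkg_uncertainty=[5.0]
)
data = [53.0] + model.config.auxdata

for backend in ["numpy", "jax"]:
    results = {}
    for precision in ["64b", "32b"]:
        pyhf.set_backend(backend, precision=precision)
        results[precision] = np.asarray(
            pyhf.tensorlib.tolist(pyhf.infer.mle.fit(data, model))
        )
    print(f"{backend}: 32b - 64b = {results['32b'] - results['64b']}")
```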
-
It also appeared for my …
-
So this is expected. We don't promise good support for 32b backends (partly because it is very hard to keep everything perfectly consistent across backends and precisions, but we've done our best for now). You can see this in the test we have here (lines 48 to 94 in 8e242c9).
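For context, a minimal sketch of what such a precision-aware test can look like (this is my illustration, not the actual pyhf test at those lines; the test name and the tolerance are assumptions):

```python
# Illustrative sketch, not the actual pyhf test: fit under 64b to get a
# reference, refit under 32b, and only require loose agreement.
import pytest
import pyhf


@pytest.mark.parametrize("backend", ["numpy", "jax"])
def test_32b_fit_roughly_matches_64b(backend):
    model = pyhf.simplemodels.uncorrelated_background(
        signal=[5.0], bkg=[50.0], bkg_uncertainty=[5.0]
    )
    data = [53.0] + model.config.auxdata

    pyhf.set_backend(backend, precision="64b")
    reference = pyhf.tensorlib.tolist(pyhf.infer.mle.fit(data, model))

    pyhf.set_backend(backend, precision="32b")
    result = pyhf.tensorlib.tolist(pyhf.infer.mle.fit(data, model))

    # Assumed tolerance: 32b is only expected to agree loosely with 64b.
    assert result == pytest.approx(reference, rel=1e-2)
```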
-
This is of course not for me to decide, but I would rather disable 32b than have results that are wrong by factors relative to 64b. The documentation is not clear about this, and I can imagine cases where people would not realise that something is going wrong.
-
Hi all,

I was trying out different backends for runtime optimisation when I noticed that my computed limits differ by a factor of 2, which goes beyond floating point precision for "normal" computations. Below is an example where I would not expect the `32b` backend to fail as it does.
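A minimal sketch of this kind of comparison (the model, observed count, and tested signal strength here are illustrative assumptions, not necessarily the original example), using `pyhf.infer.hypotest`:

```python
# Illustrative sketch: compare observed CLs between 64b and 32b precision
# for a single signal-strength hypothesis mu = 1.
import pyhf

model = pyhf.simplemodels.uncorrelated_background(
    signal=[5.0], bkg=[50.0], bkg_uncertainty=[5.0]
)
data = [53.0] + model.config.auxdata

for precision in ["64b", "32b"]:
    pyhf.set_backend("numpy", precision=precision)
    cls_obs = pyhf.infer.hypotest(1.0, data, model, test_stat="qtilde")
    print(f"numpy {precision}: CLs = {cls_obs}")
```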
The difference between the `numpy-64b` backend and the `numpy-32b` backend also occurs when I only work on `np.float32` arrays, which should be safe to convert to `np.float64`, so I suspect I either misunderstand what the backend does or there is something very sensitive to floating point precision. There is also this GitHub issue, asking something related.
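For what it's worth, widening `np.float32` to `np.float64` is indeed lossless, which can be checked directly; that suggests any discrepancy comes from intermediate arithmetic carried out in 32b rather than from the conversion of the inputs:

```python
import numpy as np

# Every float32 value is exactly representable as a float64, so widening
# the inputs loses no information; any discrepancy must arise in the
# 32b intermediate arithmetic, not in the conversion.
x32 = np.random.default_rng(0).random(1000, dtype=np.float32)
x64 = x32.astype(np.float64)
assert np.all(x64 == x32)                      # widening is exact
assert np.all(x64.astype(np.float32) == x32)   # and round-trips exactly
```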