The main con of this would be that ONNX inference would then be required for conversion. Maybe it could just be an optional step if onnx is installed.
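For reference, a minimal sketch of how that optional gating could look, assuming the comparison only runs when onnxruntime can actually be imported (the flag name here is illustrative):

```python
# Skip the PyTorch-vs-ONNX output comparison entirely when
# onnxruntime is not installed, so it stays an optional extra.
try:
    import onnxruntime  # noqa: F401  (optional dependency)
    CAN_VERIFY = True
except ImportError:
    CAN_VERIFY = False
```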
Why is this necessary? According to musl:
Check whether the converted onnx output is relatively close to the output of pytorch inference. This is not strictly necessary, but the documentation recommends it (np.testing.assert_allclose). It obviously won't change the model, but I found situations where the converted onnx gave around a 20% difference between pytorch and onnx, which should not be acceptable. So if this is not tested with assert_allclose, the outputs could be bad while still exporting just fine.
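As a rough sketch of what that check could look like, assuming the model has already been exported with torch.onnx.export and that onnxruntime is available (the function name, input shape, and tolerances below are illustrative, not taken from either project):

```python
import numpy as np
import torch
import onnxruntime as ort

def verify_onnx_export(model: torch.nn.Module, onnx_path: str, input_shape=(1, 3, 64, 64)):
    """Compare PyTorch output with ONNX Runtime output on a random input."""
    model.eval()
    dummy = torch.randn(*input_shape)

    # Reference output from the original PyTorch model
    with torch.no_grad():
        torch_out = model(dummy).cpu().numpy()

    # Output from the exported ONNX model via ONNX Runtime
    sess = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])
    input_name = sess.get_inputs()[0].name
    onnx_out = sess.run(None, {input_name: dummy.numpy()})[0]

    # Raises AssertionError if the outputs diverge beyond the tolerances,
    # which is exactly the failure mode described above (the export
    # succeeds but the outputs are wrong).
    np.testing.assert_allclose(torch_out, onnx_out, rtol=1e-3, atol=1e-5)
```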
Is there any update on this? It'd be great to have. Some users exclusively use neosr's onnx conversion script now, as it verifies the conversion whereas chaiNNer does not.