
possible bug: investigate std/crypto's vs old std/hash's digest() for large streams #3774

Open
iuioiua opened this issue Nov 7, 2023 · 1 comment
Labels
bug Something isn't working

Comments

@iuioiua
Contributor

iuioiua commented Nov 7, 2023

A while back, IIRC, @lino-levan discovered that digest() in the old std/hash sub-module was faster and consumed less memory than the current digest() in std/crypto for large streams. This scenario should be proactively investigated, validated, and possibly fixed.

Note: Neither of us recalls the specifics or can find the conversations surrounding this issue.

@iuioiua iuioiua added the bug Something isn't working label Nov 7, 2023
@jeremyBanks
Contributor

I don't know if this is related, but I'll throw out one possible improvement in case you're investigating (I haven't confirmed whether it actually makes a difference): in the past, large objects were chunked on the JavaScript side:

https://github.com/denoland/deno_std/blob/52e42d61098ebad1deafc138530b3f182870ec7c/hash/_wasm/hash.ts#L44-L57

while more recently the chunking is done on the Rust/WASM side, through JavaScript bindings:

https://github.com/denoland/deno_std/blob/16db0d5e6f7a0fb558ae1845c892b5dc810b53c8/crypto/_wasm/src/lib.rs#L54-L82

My stated rationale at the time was that the WASM code was being invoked from multiple places, so it was simpler to only have this logic in one place. However, that is no longer the case, since std/node and std/hash have been removed, and the WASM code is only being invoked from std/crypto.
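To make the difference concrete, here is a minimal sketch of the two call patterns, using a hypothetical `DigestContext` interface (the names `updateChunkedJs` and `updateWholeBuffer` are illustrative, not the actual std APIs):

```typescript
// Hypothetical interface standing in for the WASM digest context.
interface DigestContext {
  update(data: Uint8Array): void;
}

// Old std/hash pattern: chunk on the JavaScript side, so the WASM boundary
// is crossed once per fixed-size chunk (here 64 KiB).
function updateChunkedJs(ctx: DigestContext, bytes: Uint8Array): void {
  const CHUNK = 64 * 1024;
  for (let i = 0; i < bytes.length; i += CHUNK) {
    // subarray() yields a view, not a copy, so no extra allocation occurs.
    ctx.update(bytes.subarray(i, i + CHUNK));
  }
}

// Current std/crypto pattern: pass the whole buffer in one call and let the
// Rust/WASM side do the chunking internally through the JS bindings.
function updateWholeBuffer(ctx: DigestContext, bytes: Uint8Array): void {
  ctx.update(bytes);
}
```

The performance question is which side of the boundary the loop should live on, not what the loop does.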

I'm not sure whether this would actually be significant, but it's possible that these additional calls through the JavaScript bindings could have an impact on performance. Maybe they're unfriendly for the JIT? 🤷 Maybe not.

Since the rationale no longer applies, we might want to move the chunking back to JavaScript. Maybe the bufferSourceBytes function could be replaced/modified to return an iterable (maybe a generator?) of Uint8Array chunks instead of a single Uint8Array, if the input is larger than 64KiB. This change might be soft-blocked by #3811.
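A minimal sketch of what that generator could look like (the name `chunkBytes` is hypothetical, and this assumes the input has already been normalized to a `Uint8Array` the way bufferSourceBytes does today):

```typescript
// 64 KiB threshold, matching the cutoff discussed above.
const CHUNK_SIZE = 64 * 1024;

// Lazily yield fixed-size views over `bytes`. Inputs at or below the
// threshold are yielded as a single chunk, preserving the current
// one-call behavior for small buffers.
function* chunkBytes(
  bytes: Uint8Array,
  chunkSize: number = CHUNK_SIZE,
): IterableIterator<Uint8Array> {
  if (bytes.length <= chunkSize) {
    yield bytes;
    return;
  }
  for (let offset = 0; offset < bytes.length; offset += chunkSize) {
    // Views, not copies: memory usage stays flat regardless of input size.
    yield bytes.subarray(offset, offset + chunkSize);
  }
}
```

The caller would then loop over the chunks and feed each one to the WASM digest context, restoring the old std/hash call pattern without duplicating the logic outside std/crypto.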
